http://mathoverflow.net/questions/118636?sort=newest
## Topological conditions of Kolmogorov Extension Theorem
The Kolmogorov extension theorem (KET) is often used to construct stochastic processes in continuous time when the state space is $\Bbb R^d$. As far as I understand its proof, it uses standard monotone-class arguments together with the Caratheodory extension theorem. Neither of those two theorems requires any topological conditions. Moreover, I have not been able to find where KET uses that the space is $\Bbb R^d$ rather than just some measurable space.
Q: is it true that KET holds for general measurable spaces, and if not, where is the topology of $\Bbb R^d$ crucial in the proof? Also, is any counterexample (to existence or uniqueness) known?
I also asked a related question here but didn't get any answer.
Check Theorems 14.35 and 14.36 in A. Klenke's book "Probability Theory. A Comprehensive Course." The state space can be quite general (for example a Polish space), but there are some restrictions. – Liviu Nicolaescu Jan 11 at 16:16
## 1 Answer
The KET fails for general measurable spaces; the classical example can be found in a paper by Andersen and Jessen. Topological assumptions are necessary so that the resulting measure is not only finitely additive but countably additive. There exists a quasi-topological condition on measure spaces, perfectness, that is sufficient. A probability space $(\Omega,\sigma,\mu)$ is perfect if for every random variable $f:\Omega\to\mathbb{R}$ there exists a Borel set $B\subseteq f(\Omega)$ with measure one under the distribution $\mu\circ f^{-1}$. A proof of KET under the assumption that the marginal measures are perfect, due to Lamb, is given here. The strategy of the proof is to employ an existence result for regular conditional probability spaces and then construct the process from them using the Ionescu-Tulcea theorem.
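For reference, here is the Polish-space form of KET that the comment above alludes to (my paraphrase of the standard statement, e.g. Klenke's Theorem 14.36, not a quotation): let $E$ be a Polish space with its Borel $\sigma$-algebra, let $T$ be an arbitrary index set, and suppose that for each finite $J\subseteq T$ we are given a probability measure $\mu_J$ on $E^J$ such that the family is consistent, i.e.

$$\mu_{J'}\circ\bigl(\pi^{J'}_J\bigr)^{-1}=\mu_J \quad\text{whenever } J\subseteq J' \text{ are finite},$$

where $\pi^{J'}_J:E^{J'}\to E^J$ is the coordinate projection. Then there exists a unique probability measure $\mu$ on $E^T$ with $\mu\circ\pi_J^{-1}=\mu_J$ for every finite $J\subseteq T$. The counterexamples show that consistency alone is not enough once $E$ is an arbitrary measurable space.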
Thanks a lot, Michael! Will you consider also putting this answer on a linked MSE question? – Ilya Jan 11 at 16:32
http://mathoverflow.net/questions/115004/the-integers-as-a-sequential-but-non-first-countable-topological-group/116385
## The integers as a sequential but non-first countable topological group
Completely unaware of the Bohr topology, I recently asked whether or not there was a Hausdorff group topology on the integers $\mathbb{Z}$ which made the group fail to be first countable. For me, this topological group is a bit extreme since there are no non-trivial convergent sequences. I'm very interested to know if there is a sequential example.
If $\mathbb{Z}$ is given a Hausdorff group topology which makes it a sequential space, must it be first countable?
I believe there are sequential non-Fréchet (hence non-first-countable) Hausdorff group topologies on any abelian group. Unfortunately I don't remember where I read this. – Ramiro de la Vega Nov 30 at 18:16
## 2 Answers
The answer is no.
It is proved in Topologies on Abelian Groups (E.G. Zelenyuk and I.V. Protasov, Math. USSR Izvestiya, 1991) that on every infinite abelian group there exists a sequential Hausdorff group topology which is not first-countable.
Thanks very much Ramiro. – Jeremy Brazas Dec 14 at 19:22
Consistently, you can get even more, as noted by Hrusak and Ramos-Garcia in this paper: http://www.matmor.unam.mx/~michael/reprints_files/precompact-groups.pdf
There are consistent examples of Fréchet-Urysohn non-first-countable Hausdorff group topologies on $\mathbb{Z}$.
Fréchet-Urysohn means that for every non-closed set $A$ and every point $x \in \overline{A} \setminus A$ there is a sequence inside $A$ converging to $x$. Fréchet-Urysohn is the same as saying that every subspace is sequential. Malykhin's problem asks for an example of a countable non-metrizable Fréchet-Urysohn topological group. The consistency of a positive answer has been known for some time, and recently Hrusak and Ramos-Garcia came up with a proof of the consistency of a negative answer, thus establishing its independence from ZFC.
Take a family of $\omega_1$ many distinct characters on $\mathbb{Z}$ separating the points of $\mathbb{Z}$ and consider the coarsest topology making each of those characters continuous. This is a Hausdorff group topology on $\mathbb{Z}$ with no countable local base and a base of cardinality $\omega_1$. The latter implies that it is Fréchet-Urysohn in any model of ZFC+$\mathfrak{p}>\omega_1$ (see below).
Call a family $\mathcal{F}$ of infinite subsets of a countable set strongly centered if every finite subfamily of $\mathcal{F}$ has infinite intersection. We say that a set $S$ is a pseudointersection of the family $\mathcal{F}$ if $S \setminus F$ is finite for every $F \in \mathcal{F}$. Now `$$\mathfrak{p}:=\min \{|\mathcal{F}|: \mathcal{F} \mbox{ is a strongly centered family without an infinite pseudointersection} \}$$`
Since $\mathcal{F}$ is a family of subsets of a countable set we have $\mathfrak{p} \leq \mathfrak{c}$. Moreover, you can cook up an infinite pseudointersection of a given countable family of infinite subsets of a countable set by an easy diagonalization, so $\mathfrak{p} \geq \omega_1$. It is known that $\mathfrak{p}>\omega_1$ is consistent. For example, under Martin's Axiom we have $\mathfrak{p}=\mathfrak{c}$, and hence it suffices to take a model of Martin's Axiom plus the negation of the Continuum Hypothesis.
Every countable topological space with a local base of cardinality $<\mathfrak{p}$ at every point is Fréchet-Urysohn.
Proof: Let $A \subset X$ be a non-closed set and $x \in \overline{A} \setminus A$. Let `$\{U_\alpha: \alpha < \kappa \}$` enumerate a local base at $x$, where $\kappa < \mathfrak{p}$. Then `$\mathcal{F}=\{U_\alpha \cap A: \alpha < \kappa \}$` is a strongly centered family of subsets of the countable set $A$. Since $\mathcal{F}$ has cardinality smaller than $\mathfrak{p}$ we can fix an infinite pseudointersection $S \subset A$ of $\mathcal{F}$. Now $S$ is a sequence inside $A$ which converges to $x$.
http://mathhelpforum.com/calculus/142916-please-help-ap-calc-test-tomorrow.html
1. ## please help ap calc test tomorrow
Is there a formula for integrating fractions? Sorry if this is hard to understand, but I need to find the INTEGRAL of x/(2+e^x).
2. ## integrating fractions
Hello,
When facing such problems you should realize that:
$\int \frac{x}{2+e^{x}}dx=\int \left(\frac{1}{2+e^{x}}\cdot x \right)dx$
If you define $f(x)=\frac{1}{2+e^{x}}$ and $g(x)=x$ then you have:
$\int f(x)g(x) dx$
From here you can simply apply the rule of integration by parts.
3. what is the rule of integration by parts??
4. Here ya go
Integration by Parts - HMC Calculus Tutorial
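In symbols, the rule that tutorial teaches is

$\int u\, dv = uv - \int v\, du$

i.e. you pick one factor $u$ to differentiate and one factor $dv$ to integrate. (As later replies point out, though, it does not produce a closed form for this particular integral.)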
5. That made no sense to me.
Please show me with my problem
6. I don't think integration by parts should be used here.
Are you sure this was the question? This integral is rather difficult to evaluate. Cf. Wolfram Alpha
7. Originally Posted by lovek323
I don't think integration by parts should be used here.
Are you sure this was the question? This integral is rather difficult to evaluate. Cf. Wolfram Alpha
Yes, state the exact question you are having problems with.
8. Yeah, because I was trying to do this by parts. It is not easy by any means.
9. There's always an integral involving Li(x) on the AP exam, right you guys? The AP board loves those almost as much as Si(x).
10. Originally Posted by maddas
There's always an integral involving Li(x) on the AP exam, right you guys? The AP board loves those almost as much as Si(x).
I never took AP and we didn't learn that in calc 1 or 2. What is Li(x) and Si(x)?
11. Originally Posted by helpmeee
Is there a formula for integrating fractions? Sorry if this is hard to understand, but I need to find the INTEGRAL of x/(2+e^x).
Are there integral terminals, that is, is it a definite integral?
12. Originally Posted by helpmeee
Is there a formula for integrating fractions? Sorry if this is hard to understand, but I need to find the INTEGRAL of x/(2+e^x).
no such integration is required on either the AB or BC exam unless it is a definite integral on the calculator part of the exam.
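Since the point of the last reply is that on the calculator portion such an integral would be evaluated numerically, here is a minimal sketch of how (my example: the limits 0 to 1 are made up, and composite Simpson's rule stands in for whatever algorithm a real calculator uses):

```python
from math import exp

def f(x):
    return x / (2 + exp(x))

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

print(simpson(f, 0.0, 1.0))  # ≈ 0.127
```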
http://naml.us/blog/2008/11/singular-values-vs-eigenvalues
# Notes
Geoffrey Irving
## Singular values are not the magnitudes of eigenvalues
A week ago someone asked whether the singular values of a general (real) matrix are the magnitudes of its eigenvalues. There are various ways to see that the answer is no, but here’s an amusingly nonconstructive proof:
Consider a random matrix $A$ taken from $\mathrm{GL}\left(n\right)$ with some smooth distribution. With probability 1 all singular values of $A$ will be unique. However, with nonzero probability $A$ will be near a rotation matrix and will have a complex conjugate pair of eigenvalues with the same magnitude. Therefore, the singular values of $A$ are not always the magnitudes of the eigenvalues.
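A constructive companion check (my illustration, not the author's, using a shear matrix rather than a near-rotation): the shear matrix [[1, 2], [0, 1]] has both eigenvalues equal to 1, yet its singular values are $\sqrt{2}+1$ and $\sqrt{2}-1$.

```python
import math

# Shear matrix A = [[1, 2], [0, 1]]: both eigenvalues equal 1,
# but the singular values are sqrt(2)+1 and sqrt(2)-1.
a, b, c, d = 1.0, 2.0, 0.0, 1.0

# Eigenvalues of A: roots of t^2 - tr(A) t + det(A).
tr, det = a + d, a * d - b * c
disc = tr * tr - 4 * det      # discriminant; 0 here, so a repeated eigenvalue
eig = tr / 2                  # both eigenvalues are 1

# Singular values: square roots of the eigenvalues of A^T A (symmetric 2x2).
p = a * a + c * c             # (A^T A)_11
q = a * b + c * d             # (A^T A)_12 = (A^T A)_21
r = b * b + d * d             # (A^T A)_22
mean = (p + r) / 2
rad = math.sqrt(((p - r) / 2) ** 2 + q * q)
s1, s2 = math.sqrt(mean + rad), math.sqrt(mean - rad)

print(eig, s1, s2)            # eig = 1.0, s1 ≈ 2.4142, s2 ≈ 0.4142
```

So the singular values are not the magnitudes of the eigenvalues even for this very tame matrix.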
Tags: algebra, math
This entry was posted on Sunday, November 23rd, 2008 and is filed under math.
http://math.stackexchange.com/questions/188357/continuity-of-f-circ-f-circ-cdots
# Continuity of $f\circ f \circ \cdots$
Let $f:D\to D$, where $D\subseteq \mathbb{R}^n$, be a continuous function. Under what conditions is $f\circ f \circ \cdots$ continuous? Here, $\circ$ stands for the composition operator and sometimes the notation $f^2=f\circ f$ is used. So in this notation, when is $\lim_n f^n$ continuous?
There is a counterexample I can think of: $f(t)=t^\alpha$ over $D=[0,1]$ for $\alpha>1$. Then $f^n(t)=t^{\alpha^n}$ and the point-wise limit of $f^n$ is $\left(\lim_n f^n\right)(t)=0$ for $t\in[0,1)$ and $\left(\lim_n f^n\right)(1)=1$ which is not continuous.
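The counterexample is easy to check numerically; a quick sketch with $\alpha=2$ (my code, not part of the original question):

```python
def f(t):
    return t ** 2          # f(t) = t^alpha with alpha = 2

def f_n(t, n):
    """n-fold composition f(f(...f(t)...))."""
    for _ in range(n):
        t = f(t)
    return t

# After n iterations f^n(t) = t^(2^n): points below 1 collapse towards 0,
# while the fixed point t = 1 stays at 1, so the pointwise limit jumps at 1.
print(f_n(0.99, 20), f_n(1.0, 20))
```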
One thing one could suggest is that $f^n$ be a uniformly convergent sequence of functions. But what should this imply for $f$?
If $D$ is connected, then a necessary condition is that $\{x\mid f(x)=x\}$ must be connected, too. That disqualifies $t^\alpha$ for every $\alpha\notin\{0,1\}$. – Henning Makholm Aug 29 '12 at 13:41
@HenningMakholm Thanks a lot! Indeed, sounds intuitively correct. But, is there a proof for that? – Pantelis Sopasakis Aug 29 '12 at 13:46
If $f$ is continuous, then every value of $\lim_n f^n(x)$ must be a fixpoint, so the limit (if it exists) maps $D$ surjectively to the set of fixpoints. (It's surjective because the limit function clearly maps every fixpoint to itself). Therefore $(\lim f)(D)=\{x\in D\mid f(x)=x\}$. But continuous functions map connected sets to connected sets. – Henning Makholm Aug 29 '12 at 13:50
@HenningMakholm I only know that continuous functions map connected sets to connected ones. But does the converse hold as well? And if yes, is it sufficient to prove that $\lim f$ maps $D$ to a connected set? What if there is some other connected set $Z$ so that $(\lim f)(Z)$ is not connected? However, your suggestion serves as a criterion to test whether $\lim f$ is not continuous. – Pantelis Sopasakis Aug 29 '12 at 14:04
I'm suggesting only a necessary condition, not a sufficient one. There are functions whose fixpoint sets are connected, and whose limits exist but are not continuous, such as (for $D=\mathbb R^2$, but in polar coordinates) $$(r,\theta)\mapsto (r,\theta+\sin \theta)$$ – Henning Makholm Aug 29 '12 at 14:10
## 1 Answer
In complex analysis such things are studied, going back to Fatou and Julia. Say $f$ is analytic on a domain in the complex plane. The most important condition is that the set $\{f^n : n \ge 1 \}$ of iterates should be a normal family; this is what you need for the limit to be an analytic function.
On $\mathbb{R}^n$, do we have a similar result for functions that are merely continuous? That is, if $f$ is continuous and the family $\{f^n\}$ is uniformly bounded, is it also normal? – Pantelis Sopasakis Aug 29 '12 at 16:25
http://freelance-quantum-gravity.blogspot.com/2009/02/nvitation-to-algebraic-geometry.html?showComment=1258767869398
# DiY quantum gravity
An independent viewpoint about quantum gravity.
## Monday, February 16, 2009
### An invitation to algebraic geometry
I have mentioned previously in this blog that I was studying algebraic geometry. O.K. I have concluded my first phase of study and it is time to tell something about it.
Algebraic geometry is a very broad topic with different possible approaches and epochs of development. Most traditional introductory courses and books on the subject take the algebraic approach. As the name suggests, there is a lot of algebra involved. The reader is expected to be familiar with basic abstract algebra (groups, rings and the basics of ideals of rings), possibly field theory (in the mathematical sense, nothing to do with a physical field, of course) and the closely related subject of Galois theory. This is the standard content of undergraduate courses in algebra. In addition one needs what goes under the name of "commutative algebra", which may or may not be covered in undergraduate courses. Commutative algebra consists mainly of a deeper analysis of rings and ideals: in particular Noetherian rings, the Hilbert Nullstellensatz and things like that.
A bibliography for all this could be Dorronsoro, J.: Números, grupos y anillos (in Spanish; surely there are lots of books covering the same topics in the English literature) for basic abstract algebra. For field/Galois theory a very canonical reference could be Ian Stewart: Galois Theory.
For commutative algebra a classical reference is Atiyah's book with precisely that name. Actually a friend of mine recommended against that book and considered Miles Reid: Undergraduate Commutative Algebra a much better choice.
Well, those are the expected prerequisites. Quite a lot if you are a physicist and not a mathematician. Later I'll suggest other paths, but for now I'll follow the traditional way.
With that algebraic background you could try to tackle one of the hard books, but it would be a better option to begin with an introductory one. One of the traditional introductory books is Fulton's Algebraic Curves, available online for free (info posted by Peter Woit in his blog, Not Even Wrong). The book begins with an introductory chapter which serves as a reminder of the necessary algebra, but if you don't already know it, that chapter will not be too helpful. I must say that this summer I followed a course on the subject. The teacher was a physicist turned mathematician; he has a paper, in collaboration, about string theory. It was that background of the teacher which prompted me to follow the course, and it was a great course. To begin with, not all the audience were mathematicians, so the teacher took a lot of time to review, and exemplify, the algebraic machinery. He also explained the geometric aspects very clearly. Later in this post I'll actually say something about algebraic geometry and not only about the bibliography; for now I'll just say that a (regular) algebraic curve can be identified with a Riemann surface. Every string theorist is well aware of the fact that the theory of Riemann surfaces is the basis of the Polyakov path integral. But, in fact, a traditional course on algebraic curves doesn't include the material specifically necessary for that purpose. Fortunately it includes some concepts, mainly the theory of divisors, which have become important for the subject of compactifications. More about that later. It also covers blow-ups, which are likewise of interest for compactifications.
After such a book, or similar ones (for example Undergraduate Algebraic Geometry, also by Miles Reid), one could try more serious books. For example Shafarevich: Basic Algebraic Geometry, which covers more general algebraic spaces, aspects of the analytic approach as well as abstract geometry (schemes and all that, more on this later), or the book by Robin Hartshorne, which is still harder (and covers more topics), or, much better, both books.
Certainly that is a lot for a physicist, who will not reach the goals he most needs until the last of those books. It is time to offer alternatives.
Recently a totally recommendable book has appeared, the one that gives its name to this post: An Invitation to Algebraic Geometry by various authors, mainly Karen Smith.
The algebraic prerequisite of the book is only linear algebra. It gives a concise, worked introduction to the necessary extra algebra. It does a very good job of explaining the flavour of traditional algebraic geometry (which is not present in analytic geometry). The geometric aspects, and their relation to the algebra, are very well explained. It covers most of the topics necessary for string theory, the previously mentioned ones (divisors, blow-ups), it gives some hints at abstract algebraic geometry (schemes, which are related to sheaves and to the analytic side of algebraic geometry) and also, and this is important, families of algebraic varieties. Under that name this is probably not familiar to a string theorist; the important fact is that it is the concept behind the idea of a moduli space. Loosely speaking, a moduli space is an algebraic space with the property that each of its points can be identified with another algebraic space. The most basic example is the projective plane: each of its points can be identified with a line through the origin.
The concept of moduli space is a very recent one. The book of Joe Harris, Algebraic Geometry: A First Course, covers more details about it than the book I was discussing previously. The precise definition of the concept is very hard. The problem is that a moduli space will usually have singular points. To work properly with those points you must go to abstract algebraic varieties (schemes) and, further, orbifolded schemes and things like that. The first introduction of the idea of a moduli space dates back to the work of Grothendieck, which goes beyond the previous work of Teichmüller. There they are talking about families of algebraic curves (Riemann surfaces) and their complex structures. The Teichmüller space of a Riemann surface S of a given genus (intuitively, the genus is the number of holes) is the quotient Conf(S)/Diff(S). Here Conf(S) is Met(S)/C^inf(S), where Met(S) is the set of possible metrics on S and C^inf(S) is the group of infinitely differentiable functions on S, which acts on the set of metrics generating conformal transformations. This is well-known material for string theorists; it is covered in the chapters about the Polyakov integral. There one learns how to get the moduli space from the Teichmüller space, that the Beltrami differentials form a basis of the tangent space to the moduli space, and many other things. If one is familiar with differential geometry on manifolds, some basic group theory and complex analysis (and who isn't nowadays?) one can follow the ideas, accept those spaces as "abstract" spaces whose dimension can be determined (with the help of the Riemann-Roch theorem), and work with them to get the desired results, expressions for the cross sections. But if one reads books on algebraic geometry one becomes aware of the fact that those topics have a more geometric, and not so abstract, nature.
Of course, as I said, the precise formulation of the idea requires heavier machinery than what is covered in string theory books. The precise concept of the moduli space of Riemann surfaces is due to David Mumford, and he got a Fields medal for it. And it is very recent work, dating from the late sixties and early seventies. It is somewhat incredible how soon such an abstract and difficult concept, combined with the then also very recent theory of conformal field theory, was built into a baby string theory, at a time when the mathematical background of most physicists was essentially the book of Arfken (or Mathews-Walker) plus some tensor analysis and pedestrian Lie group theory. I guess there is a story in that achievement that deserves to be told; a pity I have no idea of it.
Well, we have a recommended book for the algebraic part. Let's go for a book on the analytic part. My choice (freely available online) is U. Bruzzo: Introduction to Algebraic Topology and Algebraic Geometry.
It is a short book. The first part covers algebraic topology, almost from the beginning. It later introduces the very important concepts of presheaves and sheaves, and then Čech cohomology. It treats fibered spaces, de Rham theory, characteristic classes, the very hard topic of spectral sequences and, in general, a good part of the material covered in canonical books on algebraic topology such as the one by Spanier, or the famous Differential Forms in Algebraic Topology by Bott and Tu (see this entry in the U-duality blog for more info on that book and on the general subject of math for string theory).
The second part of the book covers, from the analytic viewpoint, the same material on divisors (and the related concepts of line bundles, ample and very ample bundles) that is mentioned in the book by Karen Smith and others. It also covers algebraic curves and the Riemann-Roch theorem. The last chapter has a somewhat misleading title, "nodal curves". The "nodal" refers to a type of singularity of an algebraic curve. An algebraic curve (it is time to say at last something about them xD) is, roughly speaking, the set of zeros of polynomials (enough polynomials to get a one-dimensional space over the field of definition of the polynomials; note that a complex one-dimensional space is a two-dimensional real space). A singularity is a point where the tangent space is not well defined. That can be due to a self-intersection of the curve (so we have "two" tangent spaces) or to a point where the curve has no derivative. The former is a nodal point, and it can be resolved by the technique of blowing up. This consists of cutting out the singular point and replacing it by the projective space over it (see the books for the details). Well, the point is that under the name "nodal curves" hides the better-known concept of blowing up. I should mention that I followed this semester a course at the UCM which covered this analytic viewpoint of algebraic geometry. The official teacher was a mathematician who also has a publication on string theory; actually, a good amount of the course was in the end given by a different teacher. I previously knew sheaf theory, but certainly my knowledge of the subject has greatly grown ;-).
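As a concrete illustration of resolving a node by blowing up (my own worked example, not taken from the books above), take the nodal cubic y^2 = x^2(x+1), which crosses itself at the origin. Blowing up the plane at the origin and working in the chart y = t*x, the equation pulls back as

y^2 = x^2(x+1)  ->  t^2 x^2 = x^2(x+1)  ->  t^2 = x + 1

Dividing out the factor x^2 removes the exceptional divisor, and what is left, the strict transform t^2 = x + 1, is a smooth curve meeting the exceptional divisor x = 0 at t = 1 and t = -1: one point for each of the two branches of the curve through the node.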
Well, with these two books one has a decent basis in algebraic geometry. From there one could go directly to specific reviews relating this to string theory; soon I'll comment on some. First, more suggestions. In the U-duality blog entry Nakahara's book is recommended. I would add another book which I have already mentioned here: Topology and Quantum Field Theory, by Charles Nash. It covers Riemann surfaces and their relation to string theory. It also covers many other subjects, for example the theory of elliptic operators and its relation to topology, which is central to many results in the analytic part of algebraic geometry. In fact the Nash book covers those aspects (I suspect it follows closely Weil's book on analytic manifolds). Certainly a great book.
On the pure math side one could go with the previously mentioned books of Shafarevich and Hartshorne (I have read some chapters of the former and have been taught the first chapter of the latter). An additional recommendation is the encyclopedic Principles of Algebraic Geometry by Griffiths & Harris. Of those books the only one that covers moduli spaces is Hartshorne's.
OK, what about books that are not pure math? Where is the string theory? Let's get to it.
The canonical reference would be the book Mirror Symmetry, freely available online from the Clay Mathematics Institute (yes, the same Clay institute of the millennium prizes). Really, one could begin with that book and forget about all the above ones. It is a very extensive one, and covers almost all the mentioned subjects, together with new ones that I'll mention now, such as toric varieties. The first part is reasonably accessible to a physicist with a decent basis in modern math (without need of abstract algebra). I guess the book somewhat fails in providing the truly geometric viewpoint of the subject, something that, I insist, is very well covered in the Karen Smith book in a very accessible way.
OK, last references: arXiv reviews of the application of all this to string theory.
I would begin with TASI Lectures on Compactification and Duality by David R. Morrison. It dates from November 2004 and the arXiv reference is hep-th/0411120.
The third and fourth parts are specifically related to the subject of algebraic geometry in string theory. They explain how Calabi-Yau manifolds are good compactifications of string theory, how the theory of divisors and ample line bundles provides good tools to get Kähler classes of Calabi-Yau manifolds, and why that is a good thing.
A different use of algebraic geometry, and especially of blowing-up techniques, is the resolution of singularities of orbifolds and also of conifolds. Resolving singularities of conifolds one can describe processes where, by varying the moduli of a Calabi-Yau, one goes through transitions in which some topological aspects of the Calabi-Yau change. You can describe those changing aspects in terms of intersection theory, and you can describe intersection theory in terms of divisors. The first papers on topology change are due to Brian Greene and Aspinwall (who, by the way, also has a book named Mirror Symmetry covering those topics) on one side, and Witten on the other. Perhaps the paper closest to the algebraic-geometric techniques is this one, from 1993. Later the topic got a different, more complete treatment by placing D-branes at the conifold singular point (sorry, I have no time now to search the arXiv for the reference).
For papers explaining toric geometry, a kind of generalization of projective spaces, very useful in F-theory constructions, you can read the recommended papers of the string wiki:
Toric Geometry and Calabi-Yau Compactifications by Maximilian Kreuzer (arXiv:hep-th/0612307)
Lectures on Complex Geometry, Calabi-Yau Manifolds and Toric Geometry by Vincent Bouchard (arXiv:hep-th/0702063) or, also:
The Geometer's Toolkit to String Compactifications (arXiv:0706.1310)
Certainly I don't find them the most fun papers in math, and I still have not read them completely.
Once you have all this math background you can go without fear to study string compactifications, specifically F-theory ones. If you want a complete and detailed physical guide to the field of application you can use this paper:
LES HOUCHES LECTURES ON CONSTRUCTING STRING VACUA by Frederik Denef (arXiv:0803.1194v1)
I apologize for not saying anything about the subject of elliptic curves (a very special kind of algebraic curves) and the related topics of elliptic integrals and elliptic functions and their generalizations (modular forms); maybe in another post. In fact by then I'll probably have a better knowledge of those subjects and it will be a better post anyway ;-).
Publicado por Javier
Etiquetas: mathemathical physic, string theory
http://mathhelpforum.com/differential-equations/200521-reduction-single-higher-order-linear-ode.html
Thread:
1. Reduction to single higher-order linear ODE
I need to reduce the following system to a single higher-order linear ODE for y
x'(t) - y'(t) + z(t) = 0
x(t) + y'(t) - z'(t) = 1
y(t) + z'(t) = t^3 - 1
I've tried setting ( z = y' - x' ) & (x = 1 - y' - z')
However, on substituting these it ends up as a big mess and I can't get anywhere. Could someone tell me what substitution or technique I should use to work this problem?
Thanks.
2. Re: Reduction to single higher-order linear ODE
I can get you to a third-order equation; I don't know if it's possible to do better.
Differentiate eq(1), $x'' - y''+z'=0$, and substitute from eq(3): $x''-y''+(t^{3}-1-y)=0. \qquad (4)$
Substitute for $z'$ in eq(2) also: $x+y'-(t^{3}-1-y)=1. \qquad (5)$
Now differentiate $(5)$ twice and then subtract $(4)$ to eliminate the $x''$ term.
Note that you lose information during this process, though: the 1 on the RHS of the second equation differentiates out, and so could have been any number at all.
3. Re: Reduction to single higher-order linear ODE
Yes that works, thanks a lot. I can't see a way of keeping the "1" unless I integrate the first equation and sub x(t) in the second. But I don't think that's possible. I believe this answer is sufficient. Thanks again
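The elimination in reply #2 can be checked symbolically; here is a quick sketch using Python's sympy library (an outside tool, not part of the thread):

```python
import sympy as sp

t = sp.symbols('t')
x, y = sp.Function('x'), sp.Function('y')

# eq(3) gives z' = t^3 - 1 - y
zp = t**3 - 1 - y(t)

# (4): differentiate eq(1) and substitute z':  x'' - y'' + (t^3 - 1 - y) = 0
eq4 = x(t).diff(t, 2) - y(t).diff(t, 2) + zp

# (5): substitute z' into eq(2):  x + y' - (t^3 - 1 - y) - 1 = 0
eq5 = x(t) + y(t).diff(t) - zp - 1

# differentiate (5) twice and subtract (4); the x'' terms cancel
ode_y = sp.expand(eq5.diff(t, 2) - eq4)
print(sp.Eq(ode_y, 0))  # third-order ODE: y''' + 2 y'' + y = t^3 + 6 t - 1
```

As the reply warns, the constant 1 from eq(2) has differentiated away, so the resulting ODE carries slightly less information than the original system.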
http://mathematica.stackexchange.com/questions/tagged/linear-algebra
Tagged Questions
Questions on the linear algebra functionality of Mathematica.
0answers
43 views
Nontrivial solutions of equation
Here I have one problem how to find nontrivial solution of a system of equation. I want to choose one variable, for example X1 and to get solutions of X2(X1) and X3(X1). It is not difficult when I ...
0answers
5 views
Finding Vectors in cartesian form [migrated]
I am stuck on this question could you please help me. Find,in Cartesian form, the equations of the straight line through the point with position vector (-1,2,-3) parallel to the direction given by ...
2answers
140 views
A matrix-vector cross product
I want to do a cross product involving a vector of Pauli matrices $\vec \sigma = \left( {{\sigma _1},{\sigma _2},{\sigma _3}} \right)$; for example, $\vec \sigma \times \left( {1,2,3} \right)$. ...
1answer
85 views
Exploiting self-adjointness when changing basis
I am using Mathematica to analyze a real, self-adjoint matrix $H$ of the size $32 \times 32$, which comes from a physics problem. In the picture there is also a matrix $Q$ which commutes with $H$. I ...
1answer
39 views
Partial row reduction of a matrix
I have an $m\times n$ matrix (presumably of full rank) with $m>n$, and I would like to row reduce it, but leave the last column unreduced; that is, I want to get output on the form \$\pmatrix{ 1 ...
4answers
249 views
How to find the index of a square matrix in Mathematica quickly?
Let $A$ be an $n\times n$ complex matrix. The smallest nonnegative integer $k$ such that $\mathrm{rank}(A^{k+1})=\mathrm{rank}(A^{k})$ is the index of $A$, denoted by $\mathrm{Ind}(A)$. I would ...
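For what it's worth, the definition in this question translates directly into a naive numerical procedure (a Python/NumPy sketch; the function name and the rank tolerance are my own choices, not from the question):

```python
import numpy as np

def matrix_index(A, tol=1e-9):
    """Smallest k >= 0 with rank(A^(k+1)) == rank(A^k), computed naively.

    Ranks of successive powers are non-increasing and stabilize once they
    repeat, so the loop terminates after at most n steps.
    """
    n = A.shape[0]
    P = np.eye(n, dtype=complex)          # P holds A^k, starting at A^0 = I
    k = 0
    r_prev = np.linalg.matrix_rank(P, tol=tol)
    while True:
        P = P @ A                         # advance to A^(k+1)
        r = np.linalg.matrix_rank(P, tol=tol)
        if r == r_prev:
            return k
        r_prev, k = r, k + 1

A = np.array([[0.0, 1.0], [0.0, 0.0]])    # nilpotent Jordan block
print(matrix_index(A))  # -> 2
```

For an invertible matrix the rank never drops, so the index is 0; for the nilpotent block above the ranks go 2, 1, 0, 0, giving index 2.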
1answer
82 views
large matrix eigenvalue problem
I need solve a very large complex matrix (not sparse and not symmetry) eigenvalue problem, e.g., 1e4*1e4 or even 1e6*1e6. How large dimensions of the matrix can Mathematica support? And, how about ...
0answers
53 views
Fast calculation of commute distances on large graphs (i.e. fast computation of the pseudo-inverse of a large Laplacian / Kirchhoff matrix)
I have a large, locally connected and undirected graph $G$ with $\approx 10^4$ vertices and $\approx 10^5$ to $\approx 10^6$ edges. Moreover I can bound the maximum vertex degree as $Q_{max}$. I ...
3answers
102 views
Compute the rank of a matrix with variable entries
Say I have a matrix like $$M=\left( \begin{array}{c c c} x & xz & w-2x \\ wz^3 & xy & z \\ y^2-z^3 & x+w & z+x^5 \end{array} \right)$$ is it possible to ask Mathematica ...
2answers
72 views
Matrix echelon/upper diagonal form
Is there a way to find the echelon form of a matrix in Mathematica? I see there is a function to find the reduced echelon form, RowReduce[], but I can't see ...
2answers
99 views
Computing distance matrix for a list
Using functional programming in Mathematica, how can I compute a distance matrix for every element in a list of matrices... The distance would be computed between the item in the list and a "target ...
0answers
101 views
Calculating the rank of a huge sparse array
By virtue of the suggestion in my previous question, I constructed the sparse matrix whose size is $518400 \times 86400$, mostly filled with $0$ and $\pm 1$. Now I want to calculate its rank. Since ...
2answers
120 views
Efficient ways to create matrices and solve matrix equations
I am attempting, for the first time, to use Mathematica to do some serious linear algebra. I would like to solve systems of equations of the form $$U_{n n'} f_{n'} = b_n.$$ I have an expression for ...
0answers
105 views
6x6 matrix NullSpace
I'm working with a 6x6 matrix. Whenever I try to find the NullSpace and FullSimplify it, I get the error No more memory ...
2answers
116 views
Defining a non-commutative operator algebra in Mathematica
I am trying to develop a function(s) to do some commutator algebra to compute the enveloping algebra and ideals of a Lie algebra. For example if I have $SU(2)$ algebra generated by $L_i$ ($i=1,2,3$), ...
1answer
98 views
Computing the sign of the real part of eigenvalues in a 3D linearized system with 3 parameters
So I have this dynamical system given by: $$\left\{\begin{aligned} x' &= a(y-\phi(x))\\ y' &= x-y+z\\ z' &= -by \end{aligned}\right.$$ where $\phi(x) = \mu x^3 - \nu x$ and $a,b,\mu,\nu$ ...
2answers
231 views
Gram Schmidt Process for Polynomials
I want to implement the Gram Schimdt procedure to the vector space of polynomials of degree up to 5, i.e. I want to find an orthogonal basis from the set of vectors $v=(1,x,x^2,x^3,x^4,x^5)$. The ...
1answer
217 views
Efficient method for inverting a block tridiagonal matrix
Is there a better method to invert a large block tridiagonal Hermitian block matrix, other than treating it as a ordinary matrix? For example: ...
2answers
154 views
Why does my matrix lose rank?
I want to check the rank of a matrix for observability, but Mathematica loses a rank if the matrix contains very large numbers. Let's say my matrix is ...
1answer
95 views
Why is EigenValues returning Root expressions?
This is the code I have: ...
1answer
62 views
Why does Eigenvalues[matrix I defined] not work? [duplicate]
This is the code I have in my mathematica notebook. I want to find the eigenvalues of the matrix I created called Hmatrix as defined below. However when I type Eigenvalues[Hmatrix] I get the Hmatrix ...
1answer
132 views
How to get the determinant and inverse of a large sparse symmetric matrix?
For example, the following is a $12\times 12$ symmetric matrix. Det and Inverse take too much time and don't even work on my ...
1answer
118 views
Interpolating a Bivariate Polynomial over a Finite Field
Let $F=GF(p)$ be a finite field, $p$ prime and write $F^\times=\{x_1,\ldots,x_n\}$. I'm trying to implement an earlier version of Sudan's list-decoding algorithm for Reed Solomon Codes ...
2answers
604 views
Solving a tridiagonal system of linear equations using the Thomas algorithm
I'm trying to write a function that can solve a tridiagonal system of linear equations using the Thomas algorithm. It basically solves the following equation. (Details can be found at the Wiki page ...
1answer
330 views
Octonions in Mathematica
Is there a package or Notebook for Mathematica that can enable me to do some numerical calculations with octonions? Maybe a way to plug-in the octonion multiplication table?
1answer
425 views
Eigenvalues and Determinant of a large matrix
Can anybody kindly explain to me what is going wrong regarding a simple problem I have? I can find the eigenvalues of a large matrix using Eigenvalues[], but when I ...
1answer
66 views
Confirming the existence of a function related to a matrix
Is it possible to get an answer to the following question in Mathematica? Let $M$ be a $n$ by $n$ matrix, is there a function $m:\mathbb{N}\times \mathbb{N}\rightarrow \mathbb{Z}$ such that ...
1answer
301 views
TensorContract of inverse matrix
Matrix inverse in mathematica If $A$ is an invertible $n \times n$ matrix, then $A\cdot A^{-1} = I$. To get this statement in Mathematica, you need the assumption ...
2answers
201 views
Speed up 4D matrix/array generation
I have to fill a 4D array, whose entries are $\mathrm{sinc}\left[j(a-b)^2+j(c-d)^2-\phi\right]$ for a fixed value of $\phi$ (normally -15) and a fixed value of $j$ (normally about 0.00005). The way ...
2answers
193 views
badly conditioned matrix (General::luc)
With some matrices I am receiving the following message Inverse::luc Result for Inverse of badly conditioned matrix (M) may contain significant numerical errors. How can I tell to Mathematica to ...
1answer
127 views
Can I reduce a matrix inequality such as $\mathbf x^\prime\mathbf A\mathbf x > \mathbf x^\prime\mathbf x$?
I'm new to Mathematica. When I do linear algebra, I wonder if I can have an inequality such as $\mathbf x^\prime\mathbf A\mathbf x > \mathbf x^\prime\mathbf x$, where $\mathbf x$ is a column vector ...
0answers
94 views
How to express this output in the form $X=A.x$?
This problem arose in my stereo vision project. I have two matrices: A = \left( \begin{array}{ccc} \text{x1}*\text{p131}-\text{p111} & \text{x1}*\text{p132}-\text{p112} & ...
0answers
62 views
Inverse problem of Eigenvalues in DSolve
For my graduation exam I must prepare system of equations to satisfy some specific conditions. I have solutions, output 2, but I need equations eq11 and eq22. So here is an example. ...
1answer
252 views
Decoupling system of differential equations
Here I have one task and it is preparation for small exam. I solved it by hand for first case 1), but I need to check it in $Mathematica$ and to try to implement it for both cases 1) and 2) ...
3answers
85 views
Selecting terms containing some expression
Imagine I have an expression like a*k + (a^2)*b*c + b*e and I would like to obtain the term containing, for example, some power of a. In that case I would ...
1answer
102 views
How to create a large sparse block matrix
I need to generate a very large sparse block matrix, with blocks consisting only of ones along the diagonal. I have tried several ways of doing this, but I seem to always run out of memory. The ...
0answers
178 views
More efficient matrix-vector product
Dear mathematica users, In my present research I am faced with a real dense $n\times n$ matrix $A$ where $n \geq 3000$ (hopefully even more). The coefficients of this matrix are fixed, but I will ...
2answers
181 views
Compiling LinearSolve[] or creating a compilable procedural version of it
Earlier today I had a discussion with a representative at Premier Support about the 2 questions I've asked here over the past couple of days: Seeking strategies to deploy a function securely ...
3answers
233 views
Correct way to populate a DiagonalMatrix?
I would like to create a series of correlation matrices that starts with : sensMat[[1]] = DiagonalMatrix[ { 1,1,1,1,1 } ]) // MatrixForm and iterates in 0.1 ...
0answers
82 views
Not getting the required eigenvalues [closed]
I'm trying to use Mathematica to show that the eigenvalues of $U$ are $\pm\dfrac{1-i}{\sqrt{2}}$, where $U = (I + T + iS)(I - T- iS)^{-1}$ where \$ S = \left( \begin{matrix} 1 & 1 \\ 1 ...
0answers
76 views
Evaluating a function on permutations of its arguments
Say I have a function "temp" of $n+1$ variables, $y,z1,z2,z3,...,zn$. I want to test if my function has certain symmetries like swapping $y$ with square of any $z$, swapping any two of the zs, ...
1answer
107 views
RowReduce Problem
Here are two examples: RowReduce[{{3, 1, a}, {2, 1, b}}] evaluates to {{1, 0, a - b}, {0, 1, -2 a + 3 b}} but ...
2answers
437 views
Linear equation with complex numbers
I have to solve an equation of the type $$a z+b \overline{z}=c$$ with $a,b,c\in\mathbb{C}$. My approach is to set $F(z)=a z+b \overline{z}-c$ transform $z$ to $x+i y$ and then get a real linear ...
1answer
123 views
Functions that operate on symbolic matrices?
I'd like to write functions that operate on symbolic matrices, and do nothing when fed anything else. ...
1answer
157 views
Verifying and deriving basic (block) matrix identities
How can I use the new symbolic matrix/tensor capabilities to verify matrix identities, such as (1) or (2) Even better, how can I ask Mathematica to derive expressions for X, Y, Z, and U like ...
0answers
165 views
Matrix algebra vs. PrincipalComponents and Varimax/Oblimin
Using matrix algebra I can calculate loadings and scores from the covariance matrix (data matrix is column centered): ...
0answers
130 views
Parallel linear algebra with arbitrary precision
Is it possible to do parallel linear algebra with arbitrary precision within Mathematica (in a simple manner, as is done for the machine precision)?
1answer
196 views
Find an inverse matrix, regarding a parameter as real
I have the matrix ...
2answers
302 views
Eigensystem, Eigenvalue doesn't output nonreal eigenvalues
Basically I have a matrix and when I used either Eigenvalue or Eigensystem, it doesn't output nonreal eigenvalues, instead it ...
4answers
348 views
Dual-Grid Graph Paper With Mathematica?
Is there a slick way to generate the dual-grid graphs such as you can see on pages 7, 9, and 10 of this article, or this one? I've searched, but found nothing. Thanks in advance...
http://mathoverflow.net/questions/107933?sort=votes
## Automorphisms of a specific type of weighted projective space
A question very close to this one was already asked: http://mathoverflow.net/questions/67363/automorphisms-of-a-weighted-projective-space
But the answer given there does not satisfy my needs. So, to avoid having two identical questions: I am interested in a specific type of weighted projective space, namely $\mathbb{P}(1, 1, \cdots, 1, k)$ for some natural number $k$. The case $k = 1$ gives rise to the usual projective space. To be more precise, I am considering a real weighted projective space with weight vector $(1, 1, \cdots, 1, k)$.
So the question is: can one characterize, in general, the automorphism group of such a space? If $k = 1$ then we get the projective linear group, namely the group $\textbf{GL}(\mathbb{R}^n)/\mathbb{R}^\ast \cong \textbf{PGL}(\mathbb{R}^n)$. In general, can one characterize the automorphism group of weighted projective spaces of the type $\mathbb{P}(1, 1, \cdots, 1, k)$?
-
## 2 Answers
I'm not sure how one deals with the general case, but your example is the cone in $\mathbb P^{N+1}$ over the $k$-th Veronese embedding of $\mathbb P^{n-1}(\mathbb K)$ in $\mathbb P^N$ ($n=$ number of 1's, $N=$ the appropriate dimension for the Veronese embedding).
So for $k>1$ the automorphism group $G$ is an extension $$1\to \mathbb K^*\times {\mathbb K}^{N+1}\to G\to \mathrm{PGL}(n)\to 1.$$ EDIT: as Jérémy points out below and in his answer, the kernel in the sequence above is not a direct product, but a more complicated group.
This is easily seen by taking homogeneous coordinates in $\mathbb P^{N+1}$ such that the vertex of the cone is, say, $[1,0\dots 0]$ and representing an automorphism by a matrix such that the first column is $^t(1,0\dots 0)$.
-
Pay attention that the kernel of the map $G\to \mathrm{PGL}(n,\mathbb{K})$ is not equal to $\mathbb{K}^*\times \mathbb{K}^{N+1}$. In fact it is not abelian: for example, the elements $(x_1:\dots:x_n:y)\mapsto(x_1:\dots:x_n:\alpha y)$ and $(x_1:\dots:x_n:y)\mapsto (x_1:\dots:x_n:y+(x_1)^k)$ do not commute in general. See below. – Jérémy Blanc Sep 28 at 12:25
You are right, I had been too hasty. Thanks for pointing this out. I'll edit my answer. – rita Sep 28 at 14:00
Thanks for editing, anyway the good point was to describe the isomorphism and action on $\mathbb{P}^{n-1}$. But it is funny that the kernel is quite strange and depends in fact on the roots of unity in $\mathbb{K}^*$. – Jérémy Blanc Sep 28 at 16:58
As rita said, $\mathbb{P}(1,\dots,1,k)$ is naturally isomorphic to the cone in $\mathbb{P}^{N+1}$ over the $k$-th Veronese embedding (take the map which sends $(x_1:\dots:x_n:y)$ to $((x_1)^k:(x_1)^{k-1}x_2:\dots:(x_n)^k:y)$, where the first $N+1$ coordinates are the monomials of degree $k$ in $x_1,\dots,x_n$), so there is a natural morphism from the group $G=\mathrm{Aut}(\mathbb{P}(1,\dots,1,k))$ to $\mathrm{PGL}(n,\mathbb{K})$. However, the kernel is not the one described in the above answer.
We can in fact give $G$ more explicitly (because there are a priori many extensions given two groups):
We choose $k>1$ (otherwise the description is different and obvious). We identify $\mathbb{K}^{N+1}$ with the set of homogeneous polynomials of degree $k$ in $n$ variables. The group $\mathrm{GL}(n,\mathbb{K})$ naturally acts on $\mathbb{K}^{N+1}$.
Let $H$ be the semi-direct product $\mathbb{K}^{N+1}\rtimes \mathrm{GL}(n,\mathbb{K})$. There is a natural surjective map $H\to G$, that we describe now:
The action of $\mathbb{K}^{N+1}$ on $\mathbb{P}(1,\dots,1,k)$ is given by $(x_1:\dots:x_n:y)\mapsto (x_1:\dots:x_n:y+P(x_1,\dots,x_n))$ where $P\in\mathbb{K}^{N+1}$ is the corresponding polynomial.
The action of $\mathrm{GL}(n,\mathbb{K})$ on $\mathbb{P}(1,\dots,1,k)$ is given by the action on $x_1,\dots,x_n$.
It thus yields a morphism $H\to G$, whose kernel is the subgroup $L$ of $\mathrm{GL}(n,\mathbb{K})$ consisting of diagonal matrices of the form $\{\lambda I \mid \lambda^k=1\}$.
The group $G=\mathrm{Aut}(\mathbb{P}(1,\dots,1,k))$ is thus equal to the quotient of $\mathbb{K}^{N+1}\rtimes \mathrm{GL}(n,\mathbb{K})$ by the subgroup $L$.
The surjective morphism $G\to \mathrm{PGL}(n,\mathbb{K})$ corresponds to the projection onto $\mathrm{GL}(n,\mathbb{K})/L$ followed by the quotient by the image of all diagonal matrices (we have first killed only finitely many and then kill all the others). The kernel of this map is thus equal to $\mathbb{K}^{N+1}\rtimes(\mathbb{K}^{*}/L)$.
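The non-commutativity of the two explicit elements mentioned in the comments above can be confirmed by a small symbolic computation (a Python/sympy sketch; the value k = 3 is an arbitrary choice, and only one $x$-coordinate and $y$ are tracked, since the maps fix $x_1,\dots,x_n$):

```python
import sympy as sp

xv, yv, a = sp.symbols('x y alpha')
k = 3  # arbitrary k > 1 for the illustration

f = lambda X, Y: (X, a * Y)       # (x_1:...:x_n:y) -> (x_1:...:x_n:alpha*y)
g = lambda X, Y: (X, Y + X**k)    # (x_1:...:x_n:y) -> (x_1:...:x_n:y + x_1^k)

fg, gf = f(*g(xv, yv)), g(*f(xv, yv))
diff = sp.factor(sp.expand(fg[1] - gf[1]))
print(diff)  # (alpha - 1) * x**k, so the two maps commute only when alpha = 1
```

Since the difference of the last coordinates is $(\alpha-1)x_1^k$, the two maps fail to commute for $\alpha\neq 1$, confirming that the kernel is non-abelian.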
-
http://www.physicsforums.com/showpost.php?p=3432646&postcount=3
Hi, thank you for your reply. I am only interested in the case with B=0. The expression for the magnetization is $M=[1-\sinh^{-4}(2/k_B T)]^{1/8}$. The susceptibility is obtained from $\chi = \frac{\langle M^2\rangle - \langle M\rangle^2}{k_B T}$. But I don't know the analytical expression for $\chi$...
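For reference, the quoted magnetization formula (Onsager's exact result) is easy to evaluate numerically; here is a plain-Python sketch in units where $J = k_B = 1$ (it covers $M$ only, not the susceptibility $\chi$ the poster asks about):

```python
import math

def magnetization(T):
    """Onsager spontaneous magnetization of the 2D Ising model,
    M = [1 - sinh^{-4}(2/T)]^{1/8}, in units where J = k_B = 1."""
    s = math.sinh(2.0 / T)
    inside = 1.0 - s**-4
    return inside**0.125 if inside > 0 else 0.0  # M vanishes above T_c

T_c = 2.0 / math.asinh(1.0)  # sinh(2/T_c) = 1, i.e. T_c = 2/ln(1 + sqrt(2)) ~ 2.269
print(T_c, magnetization(2.0), magnetization(3.0))
```

Below $T_c$ the magnetization is close to 1; above $T_c$ the bracket turns negative and $M$ is identically zero.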
http://mathoverflow.net/questions/5798?sort=oldest
## Is there a natural way to give a bisimplicial structure on a 2-category?
I mean by the nerve functor.
Given a 2-category $\mathcal{C}$, if we forget the 2-category structure (just view $\mathcal{C}$ as a category), the nerve functor gives us a simplicial set $N\mathcal{C}$. However, $\mathcal{C}$ is a 2-category, so for any two objects $x,y\in\mathcal{C}$, $Hom_{\mathcal{C}}(x,y)$ is a category, and applying the nerve functor gives us a simplicial set $N(Hom(x,y))$.
My question is: can these two simplicial set structures be made compatible in some way, giving us a bisimplicial set $N_{p,q}(\mathcal{C})$, say? Or is there another way to put a bisimplicial structure on a 2-category?
-
## 1 Answer
Yes. This is called the double nerve of a 2-category.
See in particular the first reference cited at that link.
-
Thanks a lot! But I can't open the .pdf file of the first reference. Am I the only one suffering from this problem? – Fei Nov 17 2009 at 13:40
I have added the arXiv link now: arxiv.org/abs/math.CT/0406615 – Urs Schreiber Nov 17 2009 at 13:51
Yeah, it works, thank you very much – Fei Nov 17 2009 at 16:56
http://mathoverflow.net/questions/25374?sort=votes
## duplicate detection problem
Restated from stackoverflow:
Given:
• array a[1:N] consisting of integer elements in the range 1:N
Is there a way to detect whether the array is a permutation (no duplicates) or whether it has any duplicate elements, in O(N) steps and O(1) space without modifying the original array?
Clarification: the array takes space of size N but is a given input; you are allowed only a fixed amount of additional space to use.
The stackoverflow crowd has dithered around enough to make me think this is nontrivial. I did find a few papers on it citing a problem originally stated by Berlekamp and Buhler (see "The Duplicate Detection Problem", S. Kamal Abdali, 2003)
-
So you want this to be faster than sorting (which can be done in $O(N\log N)$ steps if I am not mistaken)? – Roland Bacher May 20 2010 at 15:18
yes. the sort-and-look-for-duplicates approach is the easy O(N log N) solution. – Jason S May 20 2010 at 17:17
Sorting is not an O(n log n) solution, because it uses much more than O(1) space and/or modifies the array. And this input could be sorted in O(n) time anyway because it's all small integers. – David Eppstein May 20 2010 at 17:26
By small I meant polynomially bounded in the input size. Integers in the range 1..n can be sorted by bucket sort in linear time and integers in the range 1..polynomial can be sorted by radix sort in linear time. It's not a question of what's realistically large, it's a question of whether you allow your inputs to be used as array indexes or you artificially pretend your computer can only access them via pairwise comparisons. – David Eppstein May 20 2010 at 20:50
Stupid observation: with only O(1) space, you can't actually address the whole array. So you probably want something like "O(1) space, but pointers count as constant space." – David Speyer Jun 2 2010 at 18:27
## 6 Answers
It's at least possible to test whether the input is a permutation with a randomized algorithm that uses O(1) space, always answers "yes" when it is a permutation, and answers "yes" incorrectly when it is not a permutation only with very small probability.
Simply pick a hash function $h(x)$, compute $\sum_{i=1}^n h(i)$, compute $\sum_{i=1}^n h(a[i])$, and compare the two sums.
Ok, some care needs to be used in defining and choosing among an appropriate family of hash functions if you want a rigorous solution (and I suppose we do want one, since we're on mathoverflow not stackoverflow). Probably the simplest way is just to fill another array $H$ with random numbers and let $h(x)=H[x]$, but that is unacceptable because it uses too much space. I'll leave this part as unsolved and state this as a partial answer rather than claiming full rigor at this point.
See also my paper Space-Efficient Straggler Identification in Round-Trip Data Streams via Newton's Identities and Invertible Bloom Filters which solves a more general problem (if there are O(1) duplicates, say which ones are duplicated, using only O(1) space) with the same lacuna in how the hash functions are defined. It also contains a proof that an algorithm that makes only a single pass over the data cannot solve the problem exactly and deterministically, but of course that doesn't apply to algorithms with random access to the input array.
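A minimal sketch of the randomized test described above, with a cryptographic hash standing in as a placeholder for the rigorously chosen hash family that the answer leaves unresolved (the function names are mine):

```python
import hashlib

def looks_like_permutation(a):
    """Compare sum of h(i) for i = 1..n with sum of h(a[i]), as in the
    answer above.  Always answers True for a genuine permutation; a
    non-permutation fools it only on a hash collision.  blake2b is a
    stand-in for the (unspecified) rigorous hash family.
    """
    def h(x):
        digest = hashlib.blake2b(x.to_bytes(8, "big"), digest_size=8)
        return int.from_bytes(digest.digest(), "big")
    n = len(a)
    return sum(h(i) for i in range(1, n + 1)) == sum(h(x) for x in a)
```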
-
Why use a hash function when you can use the identity and have correct output every time? Just compute $\sum a[i]$ and compare with $n(n+1)/2$. – Dror Speiser May 20 2010 at 18:12
@Dror: Because sum and compare with n(n+1)/2 does not correctly check for duplicates, e.g. a valid permutation 1,2,3,4,5,6...N vs. 2,2,2,4,5,6,...,N – Jason S May 20 2010 at 18:19
David: +1. I assume this is like primality testing where you can use multiple passes to increase probability? – Jason S May 20 2010 at 20:52
Yes, or just use a hash function with a bigger range. – David Eppstein May 20 2010 at 21:01
One hash function that takes $O(1)$ storage and obviously works is $h(x)=2^x\bmod p$ where $p$ is a large random prime. But that leads to an $O(n\log n)$ time algorithm because it takes $O(\log n)$ time to evaluate $h(x)$ (for integer arguments in the range 1..n) in any reasonable model of machine arithmetic. I'm pretty sure that taking polynomials modulo a prime doesn't work (there are specific inputs that will trick all low-degree polynomials), but that doesn't exhaust the possibilities. – David Eppstein May 23 2010 at 0:01
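The hash from this comment can be sketched directly; the fixed Mersenne prime below is only an illustrative stand-in for the "large random prime" $p$, and the function names are mine:

```python
def h_factory(p):
    """The O(1)-storage hash from the comment above: h(x) = 2^x mod p.
    Each evaluation costs O(log x) multiplications via fast exponentiation."""
    def h(x):
        return pow(2, x, p)  # built-in three-argument pow: repeated squaring
    return h

def permutation_test(a, p=2**61 - 1):
    """Sum-comparison test using h(x) = 2^x mod p.  In practice p should
    be a large *random* prime; a fixed one is used here for illustration."""
    h = h_factory(p)
    n = len(a)
    return sum(h(i) for i in range(1, n + 1)) % p == sum(h(x) for x in a) % p
```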
In the complexity theory literature there is a related problem known as the element distinctness problem: given a list of $n$ numbers, determine if they are all distinct.
Of course this problem isn't quite the same; one might expect that if you assume all numbers are in the range $\{1,\ldots,n\}$ then you can solve the problem more efficiently.
The wikipedia article http://en.wikipedia.org/wiki/Element_distinctness_problem mentions the linear time bucket sort solution for the special case of $\{1,\ldots,n\}$. The purpose of my answer is to let you know of a common name for the problem so that maybe your web searches will fare better. Much is known about element distinctness and I am sure that your special case has been studied to death.
-
thank you -- seems like many of the problems in math / CS have already been solved so much of the problem is just figuring out what they are called – Jason S May 21 2010 at 0:38
This is still an open and interesting problem. The best deterministic algorithm that I know of takes $O(n \log n)$ time and $O(\log n)$ words of space by Munro, Fich and Poblete in Permuting in place. This paper doesn't explicitly mention the problem of detecting if there is a duplicate but the method they develop for permuting in place is directly applicable. It is still possible that there is a true linear time and $O(1)$ words of space solution (either randomised or deterministic).
If you simply increase the alphabet size from $n$ the situation changes drastically. Even if you change it to $2n$ the complexity of finding if there is a duplicate is unknown and in particular no near linear time solution is known for small space. The most obvious randomised approach is to hash the elements down to the range $[1,\dots,n]$. You are then left with the problem of trying to distinguish real duplicates from ones created by hash collisions. With full independence it seems you can most likely do this in something like $O(n^{3/2})$ time but I am not sure if this has ever been formally analyzed in published work. However, we can't actually use a hash function with full independence without also using linear space so the problem as before is to show that a hash function family whose members can be represented in small space and which has the desired properties actually exists.
For even larger alphabets of size $n^2$ there is an existing lower bound for small space algorithms given in Time-space trade-off lower bounds for randomized computation of decision problems. With space $O(\log n)$ bits (or $O(1)$ words) it simplifies to approximately $\Omega(n \sqrt{\log n/\log{\log{n}}})$. This means that no linear time solution is possible in this case.
COMMENT: This should be a comment to David Eppstein's answer but I don't have the points for that. The function $h(x) = 2^x \bmod p$ with $p$ a prime with $O(\log n)$ bits is very interesting. Although it is clear that it takes $\Theta(\log n)$ time to evaluate the hash function once (by repeated squaring, assuming constant time operations on words), is it obvious that it can't be done faster on average when evaluating at $n$ points by some clever method? Consider, for example, an array with the elements in increasing order. In this case it takes only $O(n)$ time to compute all the hash values.
-
This should be a comment but I don't have the reputation for that.
What is your model of computation? Such an array (and in particular such a permutation) takes $\mathcal{O}(N\log N)$ space to store, so it would take that much time just to read it.
-
It is quite often assumed that if you have an input of size $N$ on a RAM then, you are allowed standard operations on registers of length $O(\log N)$ in unit time. Apparently, this is what was meant here. – fedja May 20 2010 at 15:58
Reasonable assumptions are probably that elements of arrays/memory of size N are reachable in O(1) time, and that addition/multiplication/subtraction/division of quantities of size N or $N^2$ are operations in O(1) time. The stackoverflow discussion talked about computing the product of the array's elements, but arbitrary-precision computation of quantities in the range of N! is probably unreasonable without accounting for using large numbers. – Jason S May 20 2010 at 17:24
Basically David's approach: we fix $M$ = number of bits of storage, and compute the indicator $h = \mathrm{XOR}_i(\mathrm{hash}_M(a[i]))$,
where $\mathrm{hash}_M$ is a hash function to $M$ bits (e.g. MD5 masked to $M$ bits). We decide that it is a permutation without repetitions by comparing with the same indicator for the ordered array $(1,\ldots,N)$. This is order $N$. And there is a probability of error which should be around $1/2^M$... if I'm not mistaken.
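A sketch of this XOR-indicator, using MD5 masked to $M$ bits as the answer suggests (the function names are mine):

```python
import hashlib

def xor_indicator(values, m_bits=64):
    """XOR of an m-bit hash of each value, as described in the answer
    above.  MD5, masked to m bits, stands in for hash_M."""
    mask = (1 << m_bits) - 1
    acc = 0
    for v in values:
        digest = hashlib.md5(str(v).encode()).digest()
        acc ^= int.from_bytes(digest, "big") & mask
    return acc

def is_permutation_xor(a):
    """Compare the indicator of a with that of the ordered array 1..N."""
    n = len(a)
    return xor_indicator(a) == xor_indicator(range(1, n + 1))
```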
-
EDIT: This turns out not to be an answer for practical N either. Given two numbers and the right N, one can play games with the modulo pattern of the two numbers and create two other numbers such that their contribution to the modulo counts replicates that of the two given numbers. Thus a practical solution based on modular arithmetic may need space O(ln N) and a multiplicative time factor of O(ln N). Oops. END EDIT
For arbitrary N, this is not an answer, but for practical N, say N < 2^64, one approach is to consider the residues mod p of the array entries for primes p from 2 up to a sufficient limit, say 60.
If the counts match the expected distribution, then (I think) the list is a permutation if no element lies outside the range [1,P], where P > 2^64 and is the product of the primes from 2 up to 60. In general, the algorithm uses space Q * B and time O( Pi(Q)*N ), where Q is the largest prime used, B is the size of N (or of an array element), and Pi(Q) is the number of primes less than or equal to Q. Additionally, pi(Q) is significantly less than ln(N) and Q is not much larger (with respect to N) than pi(Q). For practical N, this approach should suffice.
Gerhard "Ask Me About System Design" Paseman, 2010.05.20
-
http://mathhelpforum.com/discrete-math/53369-formal-modulus-proof-how-close-am-i-print.html
# Formal Modulus Proof: How close am I?
• October 12th 2008, 09:45 PM
aaronrj
Formal Modulus Proof: How close am I?
Show that if n is an odd positive integer then n^2 = 1(mod 8).
I see that any odd square has 1 as a remainder when calculated. Example: 49 = 7 * 7 = 1(mod 4), and any odd number squared equals an odd number. Let 2k represent all positive even integers. So n^2 = 1(mod 2k) for all odd positive integers.
Is this an acceptable proof?
• October 12th 2008, 09:53 PM
Jhevon
Quote:
Originally Posted by aaronrj
Show that if n is an odd positive integer then n^2 = 1(mod 8).
I see that any odd square has 1 as a remainder when calculated. Example: 49 = 7 * 7 = 1(mod 4), and any odd number squared equals an odd number. Let 2k represent all positive even integers. So n^2 = 1(mod 2k) for all odd positive integers.
Is this an acceptable proof?
no, this proof is not acceptable. especially since you are trying to prove it with examples.
Assume $n$ is odd. then we can write $n = 2k + 1$ for some integer $k$.
but that means $n^2 = (2k + 1)^2 = 4k^2 + 4k + 1$
now all you need to show is that you can write that expression in the form $8m + 1$ for some integer $m$. since all numbers equivalent to 1 mod 8 have that form
• October 13th 2008, 07:10 AM
aaronrj
need more help
Sorry, I see where you're going but I can't fill in the steps.
We have 4k^2 + 4k + 1
How does 8m + 1 fit into that?
Sub in 8m+1 for k?
• October 13th 2008, 11:27 AM
Jhevon
Quote:
Originally Posted by aaronrj
Sorry, I see where you're going but I can't fill in the steps.
We have 4k^2 + 4k + 1
How does 8m + 1 fit into that?
Sub in 8m+1 for k?
no, here's a further hint: leave the +1 alone and factor an 8 out of the first two terms. you get
8[k(k + 1)/2] + 1
now, obviously, your task is to find out whether k(k + 1)/2 is an integer
• October 13th 2008, 07:41 PM
aaronrj
Correct Proof
Take 4k^2 + 4k + 1 = n^2
Factor: 4(k^2 + k) + 1 - 1 = n^2 - 1
Therefore: 4(k^2 + k) = n^2 - 1
This is how the book proves it.
I don't see where you were going with:
8k(k + 1)/2 + 1
Perhaps a bit more clarity is needed next time you attempt to offer aid.
• October 13th 2008, 08:25 PM
Jhevon
Quote:
Originally Posted by aaronrj
Take 4k^2 + 4k + 1 = n^2
Factor: 4(k^2 + k) + 1 - 1 = n^2 - 1
Therefore: 4(k^2 + k) = n^2 - 1
This is how the book proves it.
I don't see where you were going with:
8k(k + 1)/2 + 1
Perhaps a bit more clarity is needed next time you attempt to offer aid.
that proof is not correct. look up the definition for $a \equiv b \mod n$. you will see that it means $n \mid (a - b) \Longleftrightarrow a - b = nk$ for some $k \in \mathbb{Z}$.
thus, saying $4(k^2 + k) = n^2 - 1$ is saying $n^2 \equiv 1\mod {\color{red}4}$ not $\mod 8$
to show that $n^2 \equiv 1\mod 8$ you must show that $8 \mid (n^2 - 1)$, or in other words, $n^2 - 1 = 8k$ for some integer $k$. that is what i was telling you to do. we need $8m = n^2 - 1$ for some integer $m$, provided $n$ is odd.
we got to the point $n^2 - 1 = 8 \frac {k(k + 1)}2$
now, we complete the proof if we can show $\frac {k(k + 1)}2$ is an integer. that is what i leave it to you to do
• October 14th 2008, 10:21 AM
aaronrj
Thanks.
Well, I guess I need to double check all of the proofs given in the book. Perhaps the author got lazy when writing the solutions. Thanks for clarifying the proof.
• October 14th 2008, 10:27 AM
Jhevon
Quote:
Originally Posted by aaronrj
Well, I guess I need to double check all of the proofs given in the book. Perhaps the author got lazy when writing the solutions. Thanks for clarifying the proof.
well, we're not done. how would you finish up? is that expression an integer or not?
• October 14th 2008, 12:05 PM
aaronrj
Did you not answer your own question?
Doesn't the theorem explicitly state that the expression must be an integer?
Theorem:
Let m be a positive integer. The integers a and b are congruent modulo m if and only if there is an integer k such that a = b + km.
I definitely should have referred to the definition first before trying to solve the problem; I would have had a much easier time. Lesson learned.
• October 14th 2008, 04:15 PM
Jhevon
Quote:
Originally Posted by aaronrj
Doesn't the theorem explicitly state that the expression must be an integer?
Theorem:
Let m be a positive integer. The integers a and b are congruent modulo m if and only if there is an integer k such that a = b + km.
I definitely should have referred to the definition first before trying to solve the problem; I would have had a much easier time. Lesson learned.
we are to prove that it is an integer. assuming that it is would be begging the question. of course, if they ask you to prove something, they already know it is true. you are required to show why.
• October 14th 2008, 04:54 PM
aaronrj
Predicate calculus
Let p represent the statement "The integers a and b are congruent modulo m."
Let q represent the statement "there is an integer k such that a = b + km."
If p $\Longleftrightarrow$ q
The domain is the set of all integers Z.
I don't think this is what you are looking for, but I thought I'd at least give it a shot.
• October 14th 2008, 05:07 PM
Jhevon
Quote:
Originally Posted by aaronrj
Let p represent the statement "The integers a and b are congruent modulo m."
Let q represent the statement "there is an integer k such that a = b + km."
If p $\Longleftrightarrow$ q
The domain is the set of all integers Z.
I don't think this is what you are looking for, but I thought I'd at least give it a shot.
aaronrj, pay attention. i left off exactly where you should pick up. i did everything for you except the last step. all i want you to tell me, is whether k(k + 1)/2 is an integer or not (and why). that is all. do that and you are done. stop beating around the bush. i told you these definitions already, what point is there bringing them up?
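For the record, the step the thread leaves open can be written out in full; the key observation is that of the consecutive integers $k$ and $k+1$, one is always even:

```latex
n = 2k+1 \;\Longrightarrow\; n^2 - 1 = 4k^2 + 4k = 8\cdot\frac{k(k+1)}{2}.
% k(k+1) is a product of two consecutive integers, hence even:
% k(k+1) = 2m for some integer m, and therefore
n^2 - 1 = 8m \;\Longrightarrow\; n^2 \equiv 1 \pmod{8}.
```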
http://crypto.stackexchange.com/questions/5114/matrix-multiplication-of-bits/5115
# Matrix Multiplication of Bits
How to multiply the matrix of bits
example: I have two 4x4 matrices of bits (the left one $U$ and the right one $V$, in the notation of the answer below):

    U = 1 1 1 1        V = 1 1 1 0
        0 0 0 0            0 1 1 1
        1 0 0 1            1 0 1 1
        0 1 1 1            1 0 1 0
I will apply this matrix multiplication in a modified IDEA (International Data Encryption Algorithm)
-
## 1 Answer
Multiplication of bits matrices works just like multiplication of number matrices, except the rule of addition is modified to: $1+1\mapsto 0$.
Let $U$ (resp. $V$) be a square matrix of $n\times n$ elements noted $u_{l,c}$ (resp. $v_{l,c}$) with $1\le l\le n$ and $1\le c\le n$. The product $U\cdot V$ is a square matrix $W$ of $n\times n$ elements noted $w_{l,c}$, with $w_{l,c}=\sum_{j=1}^n u_{l,j}\cdot v_{j,c}$
In the problem at hand, $n=4$. To compute the bit at (say) the third line and first column in the result, we'll use the above formula with $l=3$ and $c=1$, giving $w_{3,1}=\sum_{j=1}^4 u_{3,j}\cdot v_{j,1}$, that is $w_{3,1}=(u_{3,1}\cdot v_{1,1})+(u_{3,2}\cdot v_{2,1})+(u_{3,3}\cdot v_{3,1})+(u_{3,4}\cdot v_{4,1})$. The third line of the left matrix gives $u_{3,1}=1$, $u_{3,2}=0$, $u_{3,3}=0$, $u_{3,4}=1$. The first column of the right matrix gives $v_{1,1}=1$, $v_{2,1}=0$, $v_{3,1}=1$, $v_{4,1}=1$. Thus $w_{3,1}=(1\cdot1)+(0\cdot0)+(0\cdot1)+(1\cdot1)$. That simplifies to $w_{3,1}=1+0+0+1$, then $w_{3,1}=0$.
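A short sketch of this rule in Python, where $U$ and $V$ are the two 4x4 bit matrices from the question (the function name is mine):

```python
def gf2_matmul(u, v):
    """Multiply two n-by-n bit matrices as described in the answer above:
    the usual matrix product, with addition replaced by XOR (1+1 -> 0),
    i.e. arithmetic over GF(2)."""
    n = len(u)
    return [[sum(u[l][j] & v[j][c] for j in range(n)) % 2
             for c in range(n)]
            for l in range(n)]

# The two 4x4 matrices from the question (left = U, right = V):
U = [[1, 1, 1, 1], [0, 0, 0, 0], [1, 0, 0, 1], [0, 1, 1, 1]]
V = [[1, 1, 1, 0], [0, 1, 1, 1], [1, 0, 1, 1], [1, 0, 1, 0]]
W = gf2_matmul(U, V)  # W[2][0] is the w_{3,1} worked out in the answer
```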
-
http://math.stackexchange.com/questions/60311/infinite-series-of-the-form-sum-limits-k-1-infty-frac1ak1
# infinite series of the form $\sum\limits_{k=1}^{\infty}\frac{1}{a^{k}+1}$
Is there a method for evaluating infinite series of the form:
$\displaystyle\sum_{k=1}^{\infty}\frac{1}{a^{k}+1}, \;\ a\in \mathbb{N}$?. For instance, say $a=2$ and we have
$\displaystyle\sum_{k=1}^{\infty}\frac{1}{2^{k}+1}$.
I know this converges to $\approx .7645$, but is it possible to find its sum using some clever method?.
It would seem the Psi function is involved. I used the sum on Wolfram and it gave me
$$-1+\frac{{\psi}_{1/2}^{(0)}\left(1-\frac{\pi\cdot i}{\ln(2)}\right)}{\ln(2)}$$
I am familiar with the Psi function, but I am unfamiliar with that notation for Psi. What does the 1/2 subscript represent?
-
The integer a should be assumed to be at least 2 (and not simply in N). – Did Aug 28 '11 at 13:07
@Cody: I suggest that you don't call this function the "psi function" as there are many many functions for which we use the symbol $\psi$. Instead, call it by its name, the q-Polygamma function. (This makes it easier to look up too) Or in this case, the q-digamma function. – Eric♦ Aug 28 '11 at 14:10
Not a proof but I've tried "Plot[NSum[1/(a^k + 1), {k, 1, [Infinity]}] - (-Log[a/(a - 1)] + QPolyGamma[0, 1 - (I [Pi])/Log[a], 1/a])/Log[a], {a, 2, 10}]" in Mathematica and it turns out to be a zero function. – ziyuang Aug 28 '11 at 15:00
$\frac12$ in your expression for the q-polygamma function would be the value of the "quantum parameter" q. – J. M. Aug 28 '11 at 16:48
## 3 Answers
What you want to look at is Lambert Series. Notice that in the expression Wolfram Alpha spat out, the $\frac{-\pi i}{\log 2}$ is simply there to turn the positive sign into a negative one so that we are dealing with Lambert Series.
These series are power series whose coefficients are given by Dirichlet convolution, so they are often related to multiplicative functions.
Let $$L(q)=\sum_{n=1}^\infty a_n \frac{q^n}{1-q^n}=\sum_{n=1}^\infty \left(\sum_{d|n} a_d \right) q^n.$$
Then we can rewrite a series related to yours above: $$\sum_{n=1}^\infty \frac{1}{2^n-1}=\sum_{n=1}^\infty \frac{d(n)}{2^n}.$$
However, if the coefficients $a_n$ are functions which become nice when convolved with $1$ we can get something different. For example $$\sum_{n=1}^\infty \mu(n)\frac{q^n}{1-q^n}=q.$$
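A quick numeric sanity check (the function names are mine, and 200 terms are assumed sufficient for double precision): the partial sums reproduce the $\approx 0.7645$ quoted in the question, and the truncated Lambert identity $\sum 1/(2^n-1)=\sum d(n)/2^n$ balances to double precision.

```python
def partial_sum(a, terms=200):
    """Partial sum of the series in the question: sum_{k>=1} 1/(a^k + 1)."""
    return sum(1.0 / (a**k + 1) for k in range(1, terms + 1))

def divisor_count(n):
    """d(n): number of divisors of n (naive count)."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

# Left and right sides of  sum 1/(2^n - 1) = sum d(n)/2^n,  truncated;
# the tails beyond n = 50 are far below double precision here.
lhs = sum(1.0 / (2**n - 1) for n in range(1, 51))
rhs = sum(divisor_count(n) / 2.0**n for n in range(1, 51))
```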
-
This is not precisely the question asked, but still very interesting: A sum evaluated in terms of Jacobi theta functions... $$\sum_{n={-\infty}}^\infty \frac{x^n}{1-a q^{2n}} = \frac{\theta_1(i\ln(ax)/2,q)\theta_2(0,q)\theta_3(0,q)\theta_4(0,q))}{2i\theta_1(i\ln(x)/2,q)\theta_1(i\ln(a)/2,q)}$$ for $|q|^2 < |x| < 1$.
Also: $$\sum_{n=-\infty}^\infty \frac{1}{\alpha^n-a\beta^n} = \frac{\theta_1(i\ln(\alpha/a)/2,\sqrt{\beta/\alpha}) \theta_2(0,\sqrt{\beta/\alpha})\theta_3(0,\sqrt{\beta/\alpha}) \theta_4(0,\sqrt{\beta/\alpha})}{2i \theta_1(i\ln(\alpha)/2,\sqrt{\beta/\alpha}) \theta_1(i\ln(a)/2,\sqrt{\beta/\alpha})}$$ for $0 < |\beta| < 1 < |\alpha|$.
(Provided I copied it correctly.)
-
The two formulas are the same, using $x=1/\alpha$ and $q^2=\beta/\alpha$. – Did Aug 28 '11 at 14:43
Well, theta functions and $q$-functions are rather intimately related, so I suppose this is not so surprising... – J. M. Aug 28 '11 at 16:49
Where did you copy it from? – anon Aug 28 '11 at 19:24
Not entirely relevant, but it's known that these numbers are irrational.
Peter B. Borwein, On the irrationality of $\sum_{n\gt0}(1/(q^n+r))$, J. Number Theory 37 (1991), no. 3, 253–259, MR1096442 (92b:11046) proves the sum is irrational when $r$ is rational and $q\ge2$ is an integer. Could be worth a look.
EDIT: As Dan notes, $r$ can't be zero. Also, it can't be of the form $-q^m$, lest we find ourselves dividing by zero.
-
r cannot be 0 if the sum is irrational. – Dan Brumleve Aug 29 '11 at 5:41
http://mathoverflow.net/questions/55913?sort=newest
## Contact manifolds that are not cooriented
One always sees the definition of a contact manifold $(X,\xi)$ as an odd dimensional manifold with a hyperplane distribution $\xi$ which can locally be expressed as $\xi = \ker \alpha$ for a 1-form $\alpha$. But in fact, it seems that in every example I know of, one always assumes that $\xi$ is cooriented, and hence we can write $\xi = \ker \alpha$ globally.
Is there a reason (other than historical) as to why coorientation wasn't built in automatically in the definition of a contact manifold? It seems strange that this isn't required in the definition.
-
Isn't the non-integrability condition "$\alpha\wedge (\mathrm{d}\alpha)^k \neq 0$ everywhere" part of the definition of contact manifold? – Qfwfq Feb 18 2011 at 22:44
## 2 Answers
As you say, the primary focus of research is on cooriented contact structures, but there is still some interest in non-coorientable contact structures (and it would become even more frustrating for those of us who are looking for information on the general case if the definition ruled out such).
For instance, David Crombecque's 2006 thesis showed that there can be some subtleties when considering the tightness of such contact structures. I quote:
In most studies, contact structures are always considered oriented. (Recall that a contact 3-manifold is always orientable but its contact structure does not have to be). It is often thought that if one has to deal with nonorientable contact structures, one may work with its orientation double cover. Although it is true that the tightness of the double cover implies the tightness of the corresponding nonorientable contact structure, our motivation is to realize that one cannot merely switch to the orientation double cover without loss of information when studying tightness. In this thesis, we systematically study the tightness of nonorientable contact structures and produce examples of 3-manifolds equipped with nonorientable tight contact structures for which the orientation double cover is overtwisted.
-
This is not an answer to your question, but here is a natural example where a non-cooriented contact structure arises:
the space of contact elements (the Grassmannian of hyperplanes) on a manifold $M$ is $P(T^*M)=(T^*M\setminus M_0)/\mathbb{R}$. The local contact forms are locally given by the Liouville form on a local section of the quotient map by the action of $\mathbb{R}$. However you cannot define a global form (you can on a double cover, which corresponds to the space of cooriented contact elements).
This space is of interest: for instance, submanifolds of $M$ give Legendrian submanifolds of $P(T^*M)$, and isotopies give Legendrian isotopies. Sections of $P(T^*M)$ which are contact submanifolds are exactly the contact structures on $M$, etc.
-
http://mathhelpforum.com/calculus/20681-parametric-equation-normal.html
# Thread:
1. ## Parametric Equation of a Normal
I need help finding the parametric equation for the normal to the following function.
Is there a formula or general form of the parametric equation for a normal line?
I can only find info on circles in 2 dimensions. Any help would be greatly appreciated!
Thanks
- Simon
2. Originally Posted by Spimon
I need help finding the parametric equation for the normal to the following function.
Is there a formula or general form of the parametric equation for a normal line?
I can only find info on circles in 2 dimensions. Any help would be greatly appreciated!
Thanks
- Simon
the normal line to a level surface $F(x,y,z)=0$ at the point $(x_0,y_0,z_0)$ is given by:
$\boxed{ x = x_0 + tF_x(x_0,y_0,z_0) \mbox { , }y = y_0 + tF_y(x_0,y_0,z_0) \mbox { , } z = z_0 + tF_z(x_0,y_0,z_0) }$ ----> Parametric equation of normal line
where $t$ is a parameter, and $F_x(x_0,y_0,z_0), F_y(x_0,y_0,z_0), \mbox { and } F_z(x_0,y_0,z_0)$ are the partial derivatives of $F$ with respect to $x,y, \mbox { and }z$ respectively, evaluated at the point $(x_0,y_0,z_0)$
or
$\boxed { \frac {x - x_0}{F_x(x_0,y_0,z_0)} = \frac {y - y_0}{F_y(x_0,y_0,z_0)} = \frac {z - z_0}{F_z(x_0,y_0,z_0)}}$ -----> Symmetric equation of normal line
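A small sketch of the boxed parametric formula; the sphere and the point are an illustrative example of my own, not from the thread:

```python
def normal_line(grad_at_p0, p0):
    """Parametric normal line from the boxed formula above:
    (x, y, z)(t) = p0 + t * (F_x, F_y, F_z), both evaluated at p0."""
    def point(t):
        return tuple(x0 + t * g for x0, g in zip(p0, grad_at_p0))
    return point

# Example: the sphere F(x, y, z) = x^2 + y^2 + z^2 - 1 at (1, 0, 0),
# where grad F = (2x, 2y, 2z) = (2, 0, 0).
line = normal_line((2.0, 0.0, 0.0), (1.0, 0.0, 0.0))
```

As a sanity check, the normal to a sphere at any point is the radial line, so this one passes through the origin at $t=-1/2$.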
http://mathoverflow.net/questions/37584/can-we-patch-up-line-bundles
## Can we patch up line bundles?
Let X be a complex manifold. Suppose we have holomorphic line bundles $L_i$ over $U_i$ where ${U_i}$ is an open covering of X. Suppose that $L_i$ and $L_j$ restrict to the same line bundle over the intersection of $U_i$ and $U_j$.
Can we patch these local line bundles into a global holomorphic line bundle L over X? That is, the restriction of L to $U_i$ is $L_i$.
-
Of course; what causes problems? – Martin Brandenburg Sep 3 2010 at 8:36
As far as I know that is pretty much the definition of line bundle. – Jesus Martinez Garcia Sep 3 2010 at 8:56
If you really mean that the restrictions are equal, then yes, nothing could possibly go wrong. But that situation rarely (ever?) arises in practice. Normally what you have is that the bundles are isomorphic on the intersections, and then indeed an additional condition is needed, namely compatibility on the triple intersections (cocycle condition). Still, this is explained everywhere line bundles are peddled, so I find the question somewhat mysterious. – Pete L. Clark Sep 3 2010 at 9:46
As Pete points out, assuming you only mean "isomorphic" on overlaps then the answer in general is "no". For instance in a neighbourhood of a smooth cubic curve in $\mathbb P^2$ there is a whole elliptic curve of line bundles. Taking the rest of your covering of $\mathbb P^2$ to be sufficiently fine, you can assume that on overlaps these line bundles are trivial. So you'd be asking if you could glue these line bundles to trivial line bundles on the rest of $\mathbb P^2$. You cannot, because there are very few line bundles on $\mathbb P^2$ -- discrete set $\mathbb Z$ rather than a continuum. – Richard Thomas Sep 3 2010 at 10:47
usually line bundles are defined by patching up of local trivializations. i was wondering if we can patch up non-trivial patches of line bundles. – Colin Tan Sep 4 2010 at 6:00
## 1 Answer
The cocycle condition for glueing applies to sheaves on any topological space, in particular to line bundles. See for instance Proposition 5.29 of
http://math.rice.edu/~hassett/teaching/465spring04/CCAGlec5.pdf.
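For reference, the glueing data behind the cocycle condition consists of isomorphisms on double overlaps that are compatible on triple overlaps:

```latex
\varphi_{ij}\colon L_j\big|_{U_i\cap U_j}\;\xrightarrow{\ \sim\ }\;L_i\big|_{U_i\cap U_j},
\qquad
\varphi_{ij}\circ\varphi_{jk}=\varphi_{ik}
\quad\text{on } U_i\cap U_j\cap U_k,
\qquad
\varphi_{ii}=\mathrm{id}.
```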
-
thank you for the reference. – Colin Tan Sep 4 2010 at 6:27
http://en.wikipedia.org/wiki/Surface
# Surface
This article discusses surfaces from the point of view of topology.
An open surface with X-, Y-, and Z-contours shown.
In mathematics, specifically in topology, a surface is a two-dimensional topological manifold. The most familiar examples are those that arise as the boundaries of solid objects in ordinary three-dimensional Euclidean space R³ — for example, the surface of a ball. On the other hand, there are surfaces, such as the Klein bottle, that cannot be embedded in three-dimensional Euclidean space without introducing singularities or self-intersections.
To say that a surface is "two-dimensional" means that, about each point, there is a coordinate patch on which a two-dimensional coordinate system is defined. For example, the surface of the Earth is (ideally) a two-dimensional sphere, and latitude and longitude provide two-dimensional coordinates on it (except at the poles and along the 180th meridian).
The concept of surface finds application in physics, engineering, computer graphics, and many other disciplines, primarily in representing the surfaces of physical objects. For example, in analyzing the aerodynamic properties of an airplane, the central consideration is the flow of air along its surface.
## Definitions and first examples
A (topological) surface is a nonempty second countable Hausdorff topological space in which every point has an open neighbourhood homeomorphic to some open subset of the Euclidean plane E². Such a neighborhood, together with the corresponding homeomorphism, is known as a (coordinate) chart. It is through this chart that the neighborhood inherits the standard coordinates on the Euclidean plane. These coordinates are known as local coordinates and these homeomorphisms lead us to describe surfaces as being locally Euclidean.
More generally, a (topological) surface with boundary is a Hausdorff topological space in which every point has an open neighbourhood homeomorphic to some open subset of the closure of the upper half-plane H² in C. These homeomorphisms are also known as (coordinate) charts. The boundary of the upper half-plane is the x-axis. A point on the surface mapped via a chart to the x-axis is termed a boundary point. The collection of such points is known as the boundary of the surface which is necessarily a one-manifold, that is, the union of closed curves. On the other hand, a point mapped to above the x-axis is an interior point. The collection of interior points is the interior of the surface which is always non-empty. The closed disk is a simple example of a surface with boundary. The boundary of the disc is a circle.
The term surface used without qualification refers to surfaces without boundary. In particular, a surface with empty boundary is a surface in the usual sense. A surface with empty boundary which is compact is known as a 'closed' surface. The two-dimensional sphere, the two-dimensional torus, and the real projective plane are examples of closed surfaces.
The Möbius strip is a surface with only one "side". In general, a surface is said to be orientable if it does not contain a homeomorphic copy of the Möbius strip; intuitively, it has two distinct "sides". For example, the sphere and torus are orientable, while the real projective plane is not (because deleting a point or disk from the real projective plane produces the Möbius strip).
In differential and algebraic geometry, extra structure is added to the topology of the surface. This added structure detects singularities, such as self-intersections and cusps, that cannot be described solely in terms of the underlying topology.
## Extrinsically defined surfaces and embeddings
A sphere can be defined parametrically (by x = r sin θ cos φ, y = r sin θ sin φ, z = r cos θ) or implicitly (by x² + y² + z² − r² = 0.)
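The two descriptions agree pointwise. Here is a quick numerical check (a Python sketch; the radius and sample angles are arbitrary choices) that every point produced by the parametrization satisfies the implicit equation:

```python
import math

def sphere_point(r, theta, phi):
    """Parametric point on a sphere of radius r."""
    x = r * math.sin(theta) * math.cos(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return x, y, z

# Every parametric point satisfies the implicit equation x^2 + y^2 + z^2 - r^2 = 0.
r = 2.0
for theta in (0.3, 1.1, 2.5):
    for phi in (0.0, 1.7, 4.2):
        x, y, z = sphere_point(r, theta, phi)
        assert abs(x * x + y * y + z * z - r * r) < 1e-12
```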
Historically, surfaces were initially defined as subspaces of Euclidean spaces. Often, these surfaces were the locus of zeros of certain functions, usually polynomial functions. Such a definition considered the surface as part of a larger (Euclidean) space, and as such was termed extrinsic.
In the previous section, a surface is defined as a topological space with certain properties, namely Hausdorff and locally Euclidean. This topological space is not considered a subspace of another space. In this sense, the definition given above, which is the definition that mathematicians use at present, is intrinsic.
A surface defined as intrinsic is not required to satisfy the added constraint of being a subspace of Euclidean space. It may seem possible for some surfaces defined intrinsically to not be surfaces in the extrinsic sense. However, the Whitney embedding theorem asserts every surface can in fact be embedded homeomorphically into Euclidean space, in fact into E⁴: The extrinsic and intrinsic approaches turn out to be equivalent.
In fact, any compact surface that is either orientable or has a boundary can be embedded in E³; on the other hand, the real projective plane, which is compact, non-orientable and without boundary, cannot be embedded into E³ (see Gramain). Steiner surfaces, including Boy's surface, the Roman surface and the cross-cap, are immersions of the real projective plane into E³. These surfaces are singular where the immersions intersect themselves.
The Alexander horned sphere is a well-known pathological embedding of the two-sphere into the three-sphere.
A knotted torus.
The chosen embedding (if any) of a surface into another space is regarded as extrinsic information; it is not essential to the surface itself. For example, a torus can be embedded into E³ in the "standard" manner (which looks like a bagel) or in a knotted manner (see figure). The two embedded tori are homeomorphic, but not isotopic: They are topologically equivalent, but their embeddings are not.
The image of a continuous, injective function from R² to higher-dimensional Rⁿ is said to be a parametric surface. Such an image is so-called because the x- and y- directions of the domain R² are 2 variables that parametrize the image. A parametric surface need not be a topological surface. A surface of revolution can be viewed as a special kind of parametric surface.
If f is a smooth function from R³ to R whose gradient is nowhere zero, then the locus of zeros of f does define a surface, known as an implicit surface. If the condition of non-vanishing gradient is dropped, then the zero locus may develop singularities.
## Construction from polygons
Each closed surface can be constructed from an oriented polygon with an even number of sides, called a fundamental polygon of the surface, by pairwise identification of its edges. For example, in each polygon below, attaching the sides with matching labels (A with A, B with B), so that the arrows point in the same direction, yields the indicated surface.
Any fundamental polygon can be written symbolically as follows. Begin at any vertex, and proceed around the perimeter of the polygon in either direction until returning to the starting vertex. During this traversal, record the label on each edge in order, with an exponent of -1 if the edge points opposite to the direction of traversal. The four standard models, traversed in this way, yield
• sphere: $A B B^{-1} A^{-1}$
• real projective plane: $A B A B$
• torus: $A B A^{-1} B^{-1}$
• Klein bottle: $A B A B^{-1}$.
Note that the sphere and the projective plane can both be realized as quotients of the 2-gon, while the torus and Klein bottle require a 4-gon (square).
The expression thus derived from a fundamental polygon of a surface turns out to be the sole relation in a presentation of the fundamental group of the surface with the polygon edge labels as generators. This is a consequence of the Seifert–van Kampen theorem.
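The edge-identification combinatorics can be checked mechanically. The sketch below (the function name and input format are my own conventions, not standard) computes the Euler characteristic V − E + F of the glued surface from an edge word, using a union–find structure over the polygon's corners; gluing two directed edges identifies tail with tail and head with head.

```python
def euler_char(word):
    """Euler characteristic of the closed surface given by an edge word.

    word: list of (label, exponent) pairs read around the polygon,
    e.g. [('A', 1), ('B', 1), ('B', -1), ('A', -1)] for the sphere.
    Each label must occur exactly twice.
    """
    n = len(word)                        # number of polygon sides
    parent = list(range(n))              # union-find over the n corners

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Side k runs from corner k to corner (k+1) % n; exponent -1 reverses the arrow.
    def ends(k):
        tail, head = k, (k + 1) % n
        return (tail, head) if word[k][1] == 1 else (head, tail)

    occurrences = {}
    for k, (label, _) in enumerate(word):
        occurrences.setdefault(label, []).append(k)

    for label, (i, j) in occurrences.items():
        (ti, hi), (tj, hj) = ends(i), ends(j)
        union(ti, tj)                    # glue tail to tail
        union(hi, hj)                    # glue head to head

    V = len({find(i) for i in range(n)})
    E = len(occurrences)                 # each label becomes one edge
    F = 1                                # the polygon itself
    return V - E + F

assert euler_char([('A', 1), ('B', 1), ('B', -1), ('A', -1)]) == 2  # sphere
assert euler_char([('A', 1), ('B', 1), ('A', 1), ('B', 1)]) == 1    # projective plane
assert euler_char([('A', 1), ('B', 1), ('A', -1), ('B', -1)]) == 0  # torus
assert euler_char([('A', 1), ('B', 1), ('A', 1), ('B', -1)]) == 0   # Klein bottle
```

The four words above give χ = 2, 1, 0, 0, matching the sphere, projective plane, torus, and Klein bottle respectively.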
Gluing edges of polygons is a special kind of quotient space process. The quotient concept can be applied in greater generality to produce new or alternative constructions of surfaces. For example, the real projective plane can be obtained as the quotient of the sphere by identifying all pairs of opposite points on the sphere. Another example of a quotient is the connected sum.
## Connected sums
The connected sum of two surfaces M and N, denoted M # N, is obtained by removing a disk from each of them and gluing them along the boundary components that result. The boundary of a disk is a circle, so these boundary components are circles. The Euler characteristic $\chi$ of M # N is the sum of the Euler characteristics of the summands, minus two:
$\chi(M \# N) = \chi(M) + \chi(N) - 2.\,$
The sphere S is an identity element for the connected sum, meaning that S # M = M. This is because deleting a disk from the sphere leaves a disk, which simply replaces the disk deleted from M upon gluing.
Connected summation with the torus T is also described as attaching a "handle" to the other summand M. If M is orientable, then so is T # M. The connected sum is associative, so the connected sum of a finite collection of surfaces is well-defined.
The connected sum of two real projective planes, P # P, is the Klein bottle K. The connected sum of the real projective plane and the Klein bottle is homeomorphic to the connected sum of the real projective plane with the torus; in a formula, P # K = P # T. Thus, the connected sum of three real projective planes is homeomorphic to the connected sum of the real projective plane with the torus. Any connected sum involving a real projective plane is nonorientable.
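Since a connected closed surface is determined by its orientability and Euler characteristic, the connected sum can be modelled on (orientable, χ) pairs; a minimal Python sketch (names are illustrative) checking the relations just quoted:

```python
def connect_sum(s1, s2):
    """Connected sum on (orientable, euler_char) pairs.

    The sum is orientable only if both summands are, and
    chi(M # N) = chi(M) + chi(N) - 2.
    """
    (o1, c1), (o2, c2) = s1, s2
    return (o1 and o2, c1 + c2 - 2)

SPHERE = (True, 2)    # identity element
TORUS = (True, 0)
RP2 = (False, 1)      # real projective plane

# The sphere is the identity: S # M = M.
assert connect_sum(SPHERE, TORUS) == TORUS

# Klein bottle K = P # P: nonorientable with chi = 0.
KLEIN = connect_sum(RP2, RP2)
assert KLEIN == (False, 0)

# Dyck's relation: P # K and P # T agree (both nonorientable, chi = -1).
assert connect_sum(RP2, KLEIN) == connect_sum(RP2, TORUS) == (False, -1)
```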
## Closed surfaces
A closed surface is a surface that is compact and without boundary. Examples are spaces like the sphere, the torus and the Klein bottle. Examples of non-closed surfaces are: an open disk, which is a sphere with a puncture; a cylinder, which is a sphere with two punctures; and the Möbius strip.
### Classification of closed surfaces
Some examples of orientable closed surfaces (left) and surfaces with boundary (right). Left: Some orientable closed surfaces are the surface of a sphere, the surface of a torus, and the surface of a cube. (The cube and the sphere are topologically equivalent to each other.) Right: Some surfaces with boundary are the disk surface, square surface, and hemisphere surface. The boundaries are shown in red. All three of these are topologically equivalent to each other.
The classification theorem of closed surfaces states that any connected closed surface is homeomorphic to some member of one of these three families:
1. the sphere;
2. the connected sum of g tori, for $g \geq 1$;
3. the connected sum of k real projective planes, for $k \geq 1$.
The surfaces in the first two families are orientable. It is convenient to combine the two families by regarding the sphere as the connected sum of 0 tori. The number g of tori involved is called the genus of the surface. The sphere and the torus have Euler characteristics 2 and 0, respectively, and in general the Euler characteristic of the connected sum of g tori is 2 − 2g.
The surfaces in the third family are nonorientable. The Euler characteristic of the real projective plane is 1, and in general the Euler characteristic of the connected sum of k of them is 2 − k.
It follows that a closed surface is determined, up to homeomorphism, by two pieces of information: its Euler characteristic, and whether it is orientable or not. In other words, Euler characteristic and orientability completely classify closed surfaces up to homeomorphism.
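The classification can therefore be phrased as a small lookup: given orientability and Euler characteristic, recover the family and the genus or the number of projective-plane summands. A sketch (the function name and return convention are mine):

```python
def classify(orientable, chi):
    """Return (family, count) for a connected closed surface:
    ('tori', g) for the connected sum of g tori (g = 0 is the sphere),
    ('projective planes', k) for the connected sum of k projective planes."""
    if orientable:
        if chi > 2 or chi % 2 != 0:
            raise ValueError("no orientable closed surface has this chi")
        return ('tori', (2 - chi) // 2)        # chi = 2 - 2g
    if chi > 1:
        raise ValueError("no nonorientable closed surface has this chi")
    return ('projective planes', 2 - chi)      # chi = 2 - k

assert classify(True, 2) == ('tori', 0)        # sphere
assert classify(True, 0) == ('tori', 1)        # torus
assert classify(True, -2) == ('tori', 2)       # genus-2 surface
assert classify(False, 1) == ('projective planes', 1)   # real projective plane
assert classify(False, 0) == ('projective planes', 2)   # Klein bottle
```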
Closed surfaces with multiple connected components are classified by the class of each of their connected components, and thus one generally assumes that the surface is connected.
### Monoid structure
Relating this classification to connected sums, the closed surfaces up to homeomorphism form a monoid with respect to the connected sum, as indeed do manifolds of any fixed dimension. The identity is the sphere, while the real projective plane and the torus generate this monoid, with a single relation P # P # P = P # T, which may also be written P # K = P # T, since K = P # P. This relation is sometimes known as Dyck's theorem after Walther von Dyck, who proved it in (Dyck 1888), and the triple cross surface P # P # P is accordingly called Dyck's surface.[1]
Geometrically, connect-sum with a torus (# T) adds a handle with both ends attached to the same side of the surface, while connect-sum with a Klein bottle (# K) adds a handle with the two ends attached to opposite sides of an orientable surface; in the presence of a projective plane (# P), the surface is not orientable (there is no notion of side), so there is no difference between attaching a torus and attaching a Klein bottle, which explains the relation.
### Surfaces with boundary
Compact surfaces, possibly with boundary, are simply closed surfaces with a number of holes (open discs that have been removed). Thus, a connected compact surface is classified by the number of boundary components and the genus of the corresponding closed surface – equivalently, by the number of boundary components, the orientability, and Euler characteristic. The genus of a compact surface is defined as the genus of the corresponding closed surface.
This classification follows almost immediately from the classification of closed surfaces: removing an open disc from a closed surface yields a compact surface with a circle for boundary component, and removing k open discs yields a compact surface with k disjoint circles for boundary components. The precise locations of the holes are irrelevant, because the homeomorphism group acts k-transitively on any connected manifold of dimension at least 2.
Conversely, the boundary of a compact surface is a closed 1-manifold, and is therefore the disjoint union of a finite number of circles; filling these circles with disks (formally, taking the cone) yields a closed surface.
The unique compact orientable surface of genus g and with k boundary components is often denoted $\Sigma_{g,k},$ for example in the study of the mapping class group.
### Riemann surfaces
A closely related example to the classification of compact 2-manifolds is the classification of compact Riemann surfaces, i.e., compact complex 1-manifolds. (Note that the 2-sphere and the torus are both complex manifolds, in fact algebraic varieties.) Since every complex manifold is orientable, the connected sums of projective planes are not complex manifolds. Thus, compact Riemann surfaces are characterized topologically simply by their genus. The genus counts the number of holes in the manifold: the sphere has genus 0, the one-holed torus genus 1, etc.
### Non-compact surfaces
Non-compact surfaces are more difficult to classify. As a simple example, a non-compact surface can be obtained by puncturing (removing a finite set of points from) a closed manifold. On the other hand, any open subset of a compact surface is itself a non-compact surface; consider, for example, the complement of a Cantor set in the sphere, otherwise known as the Cantor tree surface. However, not every non-compact surface is a subset of a compact surface; two canonical counterexamples are the Jacob's ladder and the Loch Ness monster, which are non-compact surfaces with infinite genus.
### Proof
The classification of closed surfaces has been known since the 1860s,[1] and today a number of proofs exist.
Topological and combinatorial proofs in general rely on the difficult result that every compact 2-manifold is homeomorphic to a simplicial complex, which is of interest in its own right. The most common proof of the classification is (Seifert & Threlfall 1934),[1] which brings every triangulated surface to a standard form. A simplified proof, which avoids a standard form, was discovered by John H. Conway circa 1992, which he called the "Zero Irrelevancy Proof" or "ZIP proof" and is presented in (Francis & Weeks 1999).
A geometric proof, which yields a stronger geometric result, is the uniformization theorem. This was originally proven only for Riemann surfaces in the 1880s and 1900s by Felix Klein, Paul Koebe, and Henri Poincaré.
## Surfaces in geometry
Main article: Differential geometry of surfaces
Polyhedra, such as the boundary of a cube, are among the first surfaces encountered in geometry. It is also possible to define smooth surfaces, in which each point has a neighborhood diffeomorphic to some open set in E². This elaboration allows calculus to be applied to surfaces to prove many results.
Two smooth surfaces are diffeomorphic if and only if they are homeomorphic. (The analogous result does not hold for higher-dimensional manifolds.) Thus closed surfaces are classified up to diffeomorphism by their Euler characteristic and orientability.
Smooth surfaces equipped with Riemannian metrics are of foundational importance in differential geometry. A Riemannian metric endows a surface with notions of geodesic, distance, angle, and area. It also gives rise to Gaussian curvature, which describes how curved or bent the surface is at each point. Curvature is a rigid, geometric property, in that it is not preserved by general diffeomorphisms of the surface. However, the famous Gauss-Bonnet theorem for closed surfaces states that the integral of the Gaussian curvature K over the entire surface S is determined by the Euler characteristic:
$\int_S K \; dA = 2 \pi \chi(S).$
This result exemplifies the deep relationship between the geometry and topology of surfaces (and, to a lesser extent, higher-dimensional manifolds).
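Gauss-Bonnet can be verified numerically for the two standard examples. For the round sphere of radius r, K = 1/r² and dA = r² sin θ dθ dφ, so K dA = sin θ dθ dφ; for the standard torus of radii R > r, the classical formulas K = cos θ / (r(R + r cos θ)) and dA = r(R + r cos θ) dθ dφ give K dA = cos θ dθ dφ. A midpoint Riemann sum (Python sketch) recovers 2πχ in both cases:

```python
import math

def gauss_bonnet_sum(K_dA, theta_range, phi_range, n=400):
    """Midpoint Riemann sum of K * dA over a (theta, phi) parameter grid."""
    t0, t1 = theta_range
    p0, p1 = phi_range
    dt, dp = (t1 - t0) / n, (p1 - p0) / n
    total = 0.0
    for i in range(n):
        theta = t0 + (i + 0.5) * dt
        for j in range(n):
            phi = p0 + (j + 0.5) * dp
            total += K_dA(theta, phi) * dt * dp
    return total

# Round sphere: K dA = sin(theta) dtheta dphi, theta in [0, pi], phi in [0, 2 pi].
sphere = gauss_bonnet_sum(lambda t, p: math.sin(t), (0, math.pi), (0, 2 * math.pi))
assert abs(sphere - 2 * math.pi * 2) < 1e-3     # 2 * pi * chi with chi = 2

# Standard torus: K dA = cos(theta) dtheta dphi, both angles in [0, 2 pi].
torus = gauss_bonnet_sum(lambda t, p: math.cos(t), (0, 2 * math.pi), (0, 2 * math.pi))
assert abs(torus) < 1e-9                         # 2 * pi * chi with chi = 0
```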
Another way in which surfaces arise in geometry is by passing into the complex domain. A complex one-manifold is a smooth oriented surface, also called a Riemann surface. Any complex nonsingular algebraic curve viewed as a complex manifold is a Riemann surface.
Every closed orientable surface admits a complex structure. Complex structures on a closed oriented surface correspond to conformal equivalence classes of Riemannian metrics on the surface. One version of the uniformization theorem (due to Poincaré) states that any Riemannian metric on an oriented, closed surface is conformally equivalent to an essentially unique metric of constant curvature. This provides a starting point for one of the approaches to Teichmüller theory, which provides a finer classification of Riemann surfaces than the topological one by Euler characteristic alone.
A complex surface is a complex two-manifold and thus a real four-manifold; it is not a surface in the sense of this article. Neither are algebraic curves defined over fields other than the complex numbers, nor are algebraic surfaces defined over fields other than the real numbers.
## See also
• Volume form, for volumes of surfaces in En
• Poincaré metric, for metric properties of Riemann surfaces
• Area element, the area of a differential element of a surface
• Roman surface
• Boy's surface
• Tetrahemihexahedron
## References
• Dyck, Walther (1888), "Beiträge zur Analysis situs I", Math. Ann. 32: 459–512
• Gramain, André (1984). Topology of Surfaces. BCS Associates. ISBN 0-914351-01-X. (Original 1969-70 Orsay course notes in French for "Topologie des Surfaces")
• Bredon, Glen E. (1993). Topology and Geometry. Springer-Verlag. ISBN 0-387-97926-3.
• Massey, William S. (1991). A Basic Course in Algebraic Topology. Springer-Verlag. ISBN 0-387-97430-X.
• Francis, George K.; Weeks, Jeffrey R. (May 1999), "Conway's ZIP Proof", American Mathematical Monthly 106 (5): 393–399
http://math.stackexchange.com/questions/234321/injective-function-from-set-of-all-functions-f-mathbbr-to-mathbbr-to?answertab=active
# Injective function from set of all functions $f: \mathbb{R} \to \mathbb{R}$ to $\mathcal{P}(\mathbb{R})$
I'm looking for an injective function from the set $A$ of all functions $f: \mathbb{R} \to \mathbb{R}$ to $\mathcal{P}(\mathbb{R})$. Any hints?
I think the other direction is easy: An injective function from $\mathcal{P}(\mathbb{R})$ to $A$ is just a functions that maps all $X \in \mathcal{P}(\mathbb{R})$ to a function $g$ with $g(\mathbb{R})=X$. Is that correct?
I am fairly certain that I posted an answer to this question at least twice on this site. One of them quite recently. Please try to search before asking. – Asaf Karagila Nov 10 '12 at 18:32
## 2 Answers
How are you going to pick that function $g$? You need to specify a definite rule (and you’ll have a hard time finding a $g:\Bbb R\to\Bbb R$ such that $g[\Bbb R]=\varnothing$, though your idea can be modified to avoid that problem). An easier approach: send $X$ to the indicator function of $X$.
For an injection in the other direction, note that a function from $\Bbb R$ to $\Bbb R$ is a subset of $\Bbb R^2$: it’s a set of ordered pairs of real numbers. Thus, all you really need is an injection from $\wp(\Bbb R^2)$ to $\wp(\Bbb R)$. From an earlier question you already have an injection from $\Bbb R^2$ to $\Bbb R$; use it to get an injection from $\wp(\Bbb R^2)$ to $\wp(\Bbb R)$.
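The indicator-function idea is easy to test in a finite analogue (a Python sketch; the finite case says nothing about $\Bbb R$ itself, it just illustrates why the map is injective):

```python
from itertools import combinations

def indicator(X, ground):
    """Indicator function of X, encoded as a tuple over a fixed ordering of the ground set."""
    return tuple(1 if x in X else 0 for x in ground)

ground = [0, 1, 2, 3]
subsets = [frozenset(c) for r in range(len(ground) + 1)
           for c in combinations(ground, r)]

images = {indicator(X, ground) for X in subsets}
# The map X -> indicator of X is injective: 16 subsets, 16 distinct images.
assert len(images) == len(subsets) == 2 ** len(ground)
```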
Hint. Note that $\mathbb R \cong \mathcal P(\mathbb N)$, and that every function $\mathbb R \to \mathcal P(\mathbb N)$ corresponds uniquely (in a natural way) to a subset of $\mathbb R \times \mathbb N$.
Now you only need $\mathcal P(\mathbb R\times \mathbb N)\cong \mathcal P(\mathbb R)$ ...
http://blogs.scienceforums.net/swansont/archives/date/2011/07/18
# Swans on Tea
Physics, tech and humor. Because science and learning are cool, and life's too short not to laugh.
## Archive for
July 18th, 2011
### The Butler’s Name is Emissivity
Published by swansont on July 18, 2011
We got a new toy in the lab recently — a thermal sensor, aka an IR thermometer, which uses the incoming radiation profile to sense the temperature of a remote object. It’s useful because it does not require physical contact. (I wasn’t there the day it arrived, and rumour has it that it “saw” some things that cannot be unseen as it was “tested” using quasi-calibrated biological standards of temperature at 37 ºC.)
So how does it work?
All objects will emit electromagnetic radiation; the ideal is a blackbody which emits according to
$P = A\epsilon{\sigma}T^4$
A is the area, σ is the Stefan–Boltzmann constant, T is the absolute temperature, and ε is the emissivity, which is 1 for a blackbody.
If we look at an object about the size of a 2L soda bottle (roughly 4 cm radius, 33 cm tall), we find that at 300K, it emits about 7.5 Watts of power. Because the blackbody spectrum is a continuum, this represents a wide spectrum of wavelengths, which will have a peak in the infrared, somewhere around 10 microns. In thermal equilibrium its temperature will not be changing, which means it’s also absorbing 7.5 Watts from its surroundings (ignoring conduction and convection losses, which should be small). If the object is not in thermal equilibrium, it either radiates more power and cools down, or absorbs extra radiative energy and heats up.
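Here is a sketch of the Stefan–Boltzmann computation, treating the bottle as a closed cylinder (the area formula and emissivity are my assumptions; the exact wattage depends on what area, temperature, and emissivity you plug in):

```python
import math

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiated_power(area_m2, emissivity, temp_K):
    """Total power radiated by a grey body: P = A * eps * sigma * T^4."""
    return area_m2 * emissivity * SIGMA * temp_K ** 4

# Cylinder roughly the size of a 2 L bottle: radius 4 cm, height 33 cm.
r, h = 0.04, 0.33
area = 2 * math.pi * r * (r + h)   # two end caps plus the side

P300 = radiated_power(area, 1.0, 300.0)
P350 = radiated_power(area, 1.0, 350.0)

# The T^4 law: raising T by a factor f multiplies P by f^4.
assert abs(P350 / P300 - (350 / 300) ** 4) < 1e-12
```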
But that’s if it’s a perfect blackbody. If it isn’t — the emissivity is less than 1 — some of the incoming radiation is reflected. If less radiative power is absorbed, it has to have a lower temperature to be in equilibrium (or, at the same temperature, it radiates a lower power). So at a given emissivity, one could measure the radiation profile, and values at a few wavelengths will indicate the shape of the blackbody curve and will tell you the temperature.
I did this, though with a 12 oz (355 ml) can instead of a 2L bottle. You can see the dot of the built-in laser pointer showing that the sensor is pointed at the can.
Whoa! This is a can right out of my fridge, and is obviously not at 61ºF. What’s going on? Well, the emissivity did it. An aluminum can, even with some coloring, is nowhere near a blackbody. Depending on the particulars of how much oxidation and level of polishing, the emissivity of aluminum can range from about 0.2 to below 0.1. The thermometer’s spec sheet says it assumes a value of 0.95, so there’s quite a discrepancy. This means the sensor is getting a mix of radiation from the can (below 40ºF) and from the surrounding room-temperature items (and a little from me, sitting less than a meter away), and this skews the results.
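A toy model of this skew (ignoring geometry factors and the sensor's actual spectral band, with illustrative numbers): the sensor receives εσT⁴ from the can plus reflected (1 − ε)σT_room⁴ from the surroundings, but back-solves for a temperature assuming ε = 0.95.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def apparent_temp(T_obj, T_room, eps_actual, eps_assumed=0.95):
    """Temperature an IR thermometer reports for a grey object that also
    reflects room-temperature radiation (toy model, no geometry factors)."""
    measured = eps_actual * SIGMA * T_obj ** 4 + (1 - eps_actual) * SIGMA * T_room ** 4
    return (measured / (eps_assumed * SIGMA)) ** 0.25

T_can, T_room = 277.0, 295.0          # ~4 C can in a ~22 C room
T_read = apparent_temp(T_can, T_room, eps_actual=0.1)

# A shiny can reads far closer to room temperature than to its own temperature.
assert T_read - T_can > T_room - T_read
```

With ε near 0.1, the model puts the reading within a couple of kelvin of room temperature, which is the kind of discrepancy the photo shows.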
What if we cut down on the reflections? I encased the can in a single sheet of paper.
OK, that’s significantly better. The paper is in good contact with the can and the can might be slightly warmer than in the first shot, as it’s been sitting at room temperature or been handled by a warm-blooded creature, so the surface temperature may have gone up a little. Cooling the paper down to the can’s surface temperature probably had a negligible effect.
What about a black surface?
That’s lower, though not dramatically (and probably not significant in terms of instrument sensitivity), though the same caveats apply — the can can only have warmed between shots. I didn’t return it to the fridge, and I was handling it.
There’s a cautionary tale here — you want to trust your instruments, but you have to know what is actually being measured to ensure you don’t let systematic errors into your results.
### This Might Be a Job for … Me!
Published by swansont on July 18, 2011
Label Puzzler: Original Recipe AND New Flavor?
[W]hile the box is closed, the ice cream inside exists in a quantum superposition of states in which it is both “original recipe” and “new flavor,” and only when the box is opened and the ice cream observed does the wave function collapse into either an “original” or “new” state. Alternatively, the two states would decohere and two new universes would form, in one of which you are eating original-recipe ice cream and in the other a parallel You is enjoying the new flavor. (I don’t think there’s any scenario in which a cat spontaneously forms. Please!)
…
Please let me know if you are interested in serving as an expert witness for a possible quantum-physics defense to consumer-law claims involving allegedly self-contradictory labels.
I’ll do this one pro bono. A new flavor has to be the original recipe. No quantum mechanics involved. People think of original recipe as meaning a throwback (especially since the New Coke debacle), but it doesn’t have to be that way. It’s only a contradiction if you know it’s not redundant. No label superposition. It would be neat to have entangled flavors, though, like one half-gallon of chocolate and one of vanilla, but you don’t know which is which until you open one of the containers.
http://mathoverflow.net/questions/102733/comparability-implies-well-orderability
## Comparability implies well-orderability?
I am trying to prove a small proposition that got me completely stumped, and I cannot find a single counterexample.
(ZF) Suppose that $E$ is such that for every $A\subseteq\mathcal P(E)$ either $|E|\leq|A|$ or $|A|\leq|E|$, then $E$ can be well-ordered.
It is not a biconditional statement, since we have models of ZF (e.g. Solovay's model) where $\omega$ serves as a counterexample to the converse, but I would still like to know whether the above statement is true or false.
Is this result known, or known to be false?
If the above is indeed false, how about a stronger requirement:
(ZF) Suppose $E$ is such that the cardinalities below $|\mathcal P(E)|$ are linearly ordered, then $E$ can be well-ordered.
"can it" must mean "cannot"? – Lee Mosher Jul 20 at 12:23
Yes. One of the problems of posting from a cellphone. I will edit that in a bit... Thanks! – Asaf Karagila Jul 20 at 12:34
## 1 Answer
Hi Asaf. It is open whether the continuum hypothesis for an infinite set $E$ implies the well-orderability of $E$. Of course, if $CH(E)$ holds, then the assumption in your (first) statement holds.
($CH(E)$ is the statement that any subset $A$ of $\mathcal P(E)$, either $A$ injects into $E$, or else $A$ is in bijection with $\mathcal P(E)$.)
This is a question that dates back to Ernst Specker, "Verallgemeinerte Kontinuumshypothese und Auswahlaxiom", Archiv der Mathematik 5 (1954), 332–337. There is a nice presentation in Akihiro Kanamori, David Pincus, "Does GCH imply AC locally?, in "Paul Erdős and his mathematics, II (Budapest, 1999)", Bolyai Soc. Math. Stud., 11, János Bolyai Math. Soc., Budapest, (2002), 413–426.
I do not know about the stronger statement you ask for. Part of the difficulty comes from the "bad" cardinal arithmetic we should have below $|{\mathcal P}(E)|$. For example, Specker proved that if CH holds for both $X$ and $\mathcal P(X)$, then $\mathcal P(X)$ is well-orderable.
Hi Andres, thank you for the answer. I indeed had in mind the case where $CH(E)$ holds, but I had a hunch that would be too hard to attack. I suppose this is as good as it gets... I will leave the question open for a few days and accept your answer if no one else posts anything substantial until then. – Asaf Karagila Jul 20 at 15:05
http://mathhelpforum.com/algebra/33002-applications-quadratic-equations.html
Thread:
1. Applications of Quadratic Equations
A square piece of cardboard is formed into a box by cutting 10-centimeter squares from each of the four corners and then folding up the sides, as shown in the figure. If the volume of the box is to be 49,000cm^3, what size square piece of cardboard is needed?
I did the first two but am not sure how to go about doing this one.
2. Originally Posted by Mad
A square piece of cardboard is formed into a box by cutting 10-centimeter squares from each of the four corners and then folding up the sides, as shown in the figure. If the volume of the box is to be 49,000cm^3, what size square piece of cardboard is needed?
I did the first two but am not sure how to go about doing this one.
Let $x$ be the side length of the original cardboard square. if we cut 10 cm squares from the corners (DRAW A DIAGRAM!) and fold it into a box, then we would have:
length = width = $x - 20$
and height = $10$
use these in the volume equation to find $x$
3. Originally Posted by Mad
A square piece of cardboard is formed into a box by cutting 10-centimeter squares from each of the four corners and then folding up the sides, as shown in the figure. If the volume of the box is to be 49,000cm^3, what size square piece of cardboard is needed?
I did the first two but am not sure how to go about doing this one.
Let's call the length of the paper L (just for fun).
since we are cutting 10cm squares from each corner the length of base will be $L-20$ (we are cutting two corners from each side) so the volume will be
$V=(L-20)(L-20)10$ and we know that the volume is 49,000
so we need to solve the equation
$49,000=10(L-20)^2 \iff 4,900 = (L-20)^2$
taking the square root we get
$\pm 70 =L-20 \iff L=20 \pm 70$
Since we are talking about a length we will only use the positive solution so we get...
$L=90$
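As a quick numeric sanity check of the algebra above (a sketch; `box_volume` is just an illustrative helper name):

```python
# Box from a square of side L with 10 cm squares cut from each corner:
# base (L - 20) x (L - 20), height 10.
def box_volume(L, cut=10):
    return (L - 2 * cut) ** 2 * cut

# Solve 10 * (L - 20)^2 = 49000: (L - 20)^2 = 4900, so L = 20 + 70.
L = 20 + (49000 / 10) ** 0.5
print(L)               # 90.0
print(box_volume(L))   # 49000.0
```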
4. Ha Ha you beat me this time.
There are too many cooks in the kitchen
5. Thanks guys.
An 18-wheeler left a grain depot with a load of wheat and traveled 550 mi to deliver the wheat before returning to the depot. Because of the lighter load on the return trip, the average speed of the truck returning was 5 mph faster than its average speed going. Find the rate returning if the entire trip, not counting unloading time or rest stops, was 21 h.
I don't know why I'm so bad at these.
6. Originally Posted by TheEmptySet
Ha Ha you beat me this time.
There are too many cooks in the kitchen
hehe, yeah. you're right. new plan: i'll be the head chef and order everyone around
7. Originally Posted by Mad
Thanks guys.
An 18-wheeler left a grain depot with a load of wheat and traveled 550 mi to deliver the wheat before returning to the depot. Because of the lighter load on the return trip, the average speed of the truck returning was 5 mph faster than its average speed going. Find the rate returning if the entire trip, not counting unloading time or rest stops, was 21 h.
I don't know why I'm so bad at these.
Please post new questions in a new thread.
so we know that $d=rt \iff t=\frac{d}{r}$
we know that his rate was "r" on the way there and "r+5" on the return trip
and that the total travel time was 21 hours.
so the time there from the above equation is
$t=\frac{550}{r}$
and the time on the way back is
$t=\frac{550}{r+5}$
so if we add the above times they should equal the total time. so we get
$\frac{550}{r}+\frac{550}{r+5}=21$
P.S. The answer should work out to be r = 50.
Good luck
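As a numeric check, clearing denominators in $\frac{550}{r}+\frac{550}{r+5}=21$ gives the quadratic $21r^2-995r-2750=0$, which a short sketch can solve directly (variable names are just illustrative):

```python
import math

# 550/r + 550/(r+5) = 21  ->  550(r+5) + 550r = 21 r (r+5)
#                          ->  21 r^2 - 995 r - 2750 = 0
a, b, c = 21.0, -995.0, -2750.0
r = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)  # keep the positive root
print(r)       # 50.0 mph going
print(r + 5)   # 55.0 mph returning
```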
8. Oh, sorry, I was just trying to save space rather than start a new thread every time I need help. We go through quite a bit in a day and the professor never seems to cover the more complicated stuff, she always does the problems that I already know how to do so I end up here and didn't want to clog up the boards.
Thankfully, others have complained about this, as well, and so she's said that she'll try to do harder examples so, hopefully, I won't have to ask for help as often.
And thank you for your patience, I can never seem to figure out what type of problem I should be setting up.
http://stats.stackexchange.com/questions/18991/solving-a-system-of-equation-involving-the-chi-square-distribution/18996
# Solving a system of equation involving the chi-square distribution
Consider the system of equations:
$$\begin{align} 2000 &= \frac{2T}{\chi^2_{2n+2}(1-\alpha)} \\ 2370&= \frac{2T}{\chi^2_{2n}(\alpha)} \\ \end{align}$$
where $\alpha = 0.90$, $n$ is a degrees-of-freedom parameter, and $T$ is a cumulative time of experience.
How do I calculate $n$ and $T$?
The difficulty is how to handle $\chi^2_{2n+2}(1-\alpha)$ and $\chi^2_{2n}(\alpha)$ so as to obtain a tractable system of equations.
I tried to approximate the chi-square distribution by the standard normal distribution, but I have not found the solution.
How to calculate $n$ and $T$ from this system?
-
Homework or "help-me-to-solve-my-problem-possibly-in-the-next-five-minutes" questions are accepted on this site provided you add some context and say what you know or what you tried, i.e. what's prevent you from finding the right solution. Please update your question, otherwise we'll have to close it. – chl♦ Nov 26 '11 at 12:19
Thanks for your update, @atamaths. (I've removed my downvote.) – chl♦ Nov 26 '11 at 17:46
I rolled back the latest edits (which changed the first two occurrences of $\alpha$ to $\alpha/2$) because they are inconsistent with the remainder of the question and they make the question inconsistent with the comments and the reply. – whuber♦ Nov 27 '11 at 16:43
## 1 Answer
You can eliminate $T$ from the two equations, leading to the single equation $$2000 \chi^2_{2n+2}(.1) = 2370 \chi^2_{2n}(.9) \, .$$ I would then solve this equation with a standard root-finding method, e.g. `uniroot` in R. Since one can't expect $n$ to be an integer, this problem should be framed in terms of gamma distributions.
-
thanks, but how ? the answer is not clear enough – atamaths Nov 27 '11 at 17:09
What is unclear, atamaths? The answer is correct and helpful (in steering you away from looking for integral solutions) and it even references freely available software (`uniroot`). If you need more information than that, then please specify what is lacking. – whuber♦ Nov 30 '11 at 16:29
http://physics.stackexchange.com/questions/54205/stability-of-nucleii-and-a-5
Stability of nucleii and $A=5$
Why are there no stable nuclei with $A=5$ in the nuclide chart, and hence in nature as we know it?
-
I don't think you're going to get a much more satisfying answer than "because helium-4 is in such a deep potential energy trough thanks to its special quantum numbers", but I'll let the people who know more about atomic physics than I do say that. – Jerry Schirmer Feb 17 at 15:25
Dear Jerry, you meant nuclear physics, not atomic physics, right? ;-) – Luboš Motl Feb 17 at 16:57
1 Answer
As Jerry Schirmer said, helium-4 is an extremely stable nucleus. What does this mean quantitatively? It means that its binding energy is very high, namely 28 MeV. In other words, helium-4 is 28 MeV/$c^2$ lighter than the sum of the masses of two free protons and two free neutrons.
The best candidate $A=5$ nuclei would have 2 protons and 2 neutrons in the lowest state – i.e. in the same state as they occupy in helium-4 – but the additional 1 proton or 1 neutron would have to be added to a higher shell. But because this higher shell is so much higher in energy than the ground levels, one can't find an $A=5$ nucleus that would be lighter than the sum of the helium mass and one proton (or one neutron). The binding energy would have to be even greater than 28 MeV, which means that the binding energy per nucleon would have to exceed 28/5=5.6 MeV. This is simply too much to ask; the binding energy you could get for 5 nucleons is simply smaller than 28 MeV, so any such object would quickly alpha-decay.
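The break-even arithmetic in this argument can be restated in a couple of lines (the 28 MeV figure is the approximate helium-4 binding energy quoted above):

```python
# For an A = 5 nucleus to be stable against alpha decay, its total binding
# energy would have to exceed helium-4's ~28 MeV, i.e. the binding energy
# per nucleon would have to exceed 28/5 MeV.
E_he4 = 28.0                       # MeV, binding energy of helium-4 (approx.)
required_per_nucleon = E_he4 / 5   # threshold for a bound A = 5 nucleus
print(required_per_nucleon)        # 5.6
```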
I should insert some calculation of the conceivable binding energy for 5 nucleons here except that there's clearly no "analytic" calculation. It's an extremely messy system one would have to describe by nuclear physics (ill-defined effective theory) or by QCD (calculable via lattice QCD, with big computers etc.). But let me mention that unlike atoms, where the new valence electrons may always be added and keep the stability, the nuclei are "more neutral" so the attractive force between the helium-4-like "core" of the $A=5$ object and the remaining nucleon is much weaker, sort of dipole-like, and isn't enough to produce a new stable bound state.
However, what I can say is that this fact about the absence of $A=5$ stable isotopes has important consequences. The Big Bang Nucleosynthesis – the first three minutes, when nuclei are created – essentially stalls once it reaches helium-4 nuclei. They can't absorb new protons/neutrons to become heavier, and instead the next reaction is the much rarer collision of two helium nuclei: either helium-3 plus helium-4 going to lithium-7 plus a positron plus a photon, or to beryllium-7 plus a photon on the right-hand side. Lithium-7 may absorb a proton to get back to 2 helium-4; beryllium-7 may absorb a neutron to become lithium-7.
These processes are "everything" one may have in empty space. Inside stars, one has pressure and temperature which helps to overcome the binding energy and stars may produce heavier elements, too.
-
http://math.stackexchange.com/questions/45847/can-an-element-other-than-the-neutral-element-be-its-own-inverse/45849
# Can an element other than the neutral element be its own inverse?
Take the following operation $*$ on the set $\{a, b\}$:
• $a * b = a$
• $b * a = a$
• $a * a = b$
• $b * b = b$
$b$ is the neutral element. Can $a$ also be its own inverse, even though it's not the neutral element? Or does the inverse property require that only the neutral element may be its own inverse, while every other element must have a different element as its inverse?
-
## 5 Answers
Yes, an element other than the identity can be its own inverse. A simple example is the numbers $0,1,2,3$ under addition modulo 4, where 0 is the identity, and 2 is its own inverse.
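A quick sketch confirming this example: squaring every element of $\{0,1,2,3\}$ under addition mod 4 shows that exactly $0$ and $2$ are their own inverses.

```python
# Z_4 = {0, 1, 2, 3} under addition modulo 4; the identity is 0.
def add4(x, y):
    return (x + y) % 4

self_inverse = [x for x in range(4) if add4(x, x) == 0]
print(self_inverse)   # [0, 2]: 2 is its own inverse but is not the identity
```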
-
Sorry for my lack of understanding, but how is 0 the identity? – Matt Ellen Jun 17 '11 at 9:48
@Matt: $0 + a = a$ for all $a$. – Qiaochu Yuan Jun 17 '11 at 10:01
@Qiaochu: Thanks for the help. – Matt Ellen Jun 17 '11 at 10:49
@MikeRand: Yes. Let's rename your $b$, call it $1$, and your $a$, call it $-1$, and think of your $\ast$ as ordinary multiplication. Then we have a group, and $-1$ (your $a$) is its own inverse, but $-1$ is not the identity element. – André Nicolas Jun 17 '11 at 12:24
Qiaochu means 0+a=a and a+0=a, he probably just didn't say it, since one can derive one from the other in the context of modulo addition (either via knowing modulo addition on a set as a group, or that modulo addition commutes), even if we didn't know 0 as the neutral. After all, one of the binary operations F on {0, 1} goes 00F=0, 01F=1, 10F=0, 11F=0, so 0+a=a in this case, but a+0 does not equal a. – Doug Spoonwood Jun 17 '11 at 15:39
Your set is isomorphic to the two-element group: $b=1$, $a=-1$, $*=$multiplication. So yes, $a$ can very well be its own inverse.
-
Denoting 1 for the identity element, we have for every group element $a$:
$a=a^{-1}\Leftrightarrow a^2=1$.
So a non-identity element is its own inverse iff it has order 2. This is perfectly possible, as Gerry Myerson showed. Or look at the group of automorphisms of $\mathbb{C}-\{0\}$ (invertible ring homomorphisms), the identity element being the identity map, and group multiplication being composition of automorphisms. Then complex conjugation has order 2: it sends $x+iy$ to $x-iy$, and the latter is sent back to $x+iy$.
-
Technical: the OP didn't say he was talking about groups. Just a binary operation. – GEdgar Jun 17 '11 at 13:51
Yes, you are right. – wildildildlife Jun 17 '11 at 14:02
All the elements that are their own inverses constitute a special example of the so-called elements of finite order; in finitely generated commutative groups, these elements constitute a subgroup, called the torsion subgroup, which is itself of finite order. Of course, these elements need not be neutral elements.
-
First off, let's get some definitions straight.
Definition 1: A neutral element n of a set S with a binary operation * is an element n of S such that for all s belonging to S, (s*n)=s and (n*s)=s.
Definition 2: An inverse i of an element j of S, where S has neutral element n and binary operation *, is an element i of S such that (j*i)=n and (i*j)=n. Here j and i need not be distinct; texts just use different letters so that they may be distinct, as they often are.
Now for the operation * on {a, b} described in the original post, we have b as the neutral element. We also have (a*a)=b. Thus, by Definition 2, the element "a" of {a, b} qualifies as its own inverse. So you have a simple example in the original post.
-
My confusion was based on the fact that, as you noted, texts often use different letters or define the inverse as "another" element. – MikeRand Jun 17 '11 at 18:08
@MikeRand: I agree it's a bit confusing to write 'another element' rather than just 'an element'. Using different letter is on the other hand quite sensible; if i and j are elements it is of course possible that i=j. – wildildildlife Jun 18 '11 at 11:00
http://physics.stackexchange.com/questions/45952/what-is-the-force-of-friction-between-two-bodies-given-their-masses-and-a-force/45954
# What is the force of friction between two bodies given their masses and a force pulling them as a unit accross a surface?
A force of 200 N pulls two blocks together (as one system) across a horizontal table top (µ = 0.800).
$m_A$ = 5.00 kg, $m_B$ = 10.0 kg
1. Find the acceleration of the system.
2. Find $f_k$ between B and A
I found $a$ to be 5.485 m/s², which agrees with the textbook's 5.5 m/s².
The textbook says (b) is 173 N, but I can't seem to get a number even close to that. How does one go about solving this kind of problem? Please provide calculations or formulas in the order they are needed, instead of just abstract steps.
-
## 2 Answers
If both block A, and B are moving together as a system, the two blocks will not have a kinetic friction between the two of them (because they are stationary to each other). Draw your free-body diagram of both blocks individually, and write an expression for all the forces acting on the each block. Share what you find by editing your question, so that we may know why you might not be getting the answer you desire.
$\sum F = ma$
Edit: You did not explain the question well enough, but I can see that the friction is indeed equal to 172.5 N assuming that mass B is on top of mass A, and the force is applied on the top mass.
-
OK, I figured it out. Sorry, the fact that block A is on top of block B is important.
$F_a$ = 5.485 m/s² × 5 kg = 27.425 N
$\Sigma F_x = 200N ∴ f_s = 200N - F_a = 200N - 27.425N = 172.575N$
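Putting the whole computation together as a sketch (assumptions: $g = 9.81\ \mathrm{m/s^2}$, block A on top of block B, the 200 N force applied to A, and µ acting between the bottom block and the table):

```python
# System: m_A = 5 kg on top of m_B = 10 kg, pulled by F = 200 N across a
# table with kinetic friction coefficient mu = 0.800 (g = 9.81 m/s^2 assumed).
g, mu = 9.81, 0.800
m_A, m_B = 5.00, 10.0
F = 200.0

a = (F - mu * (m_A + m_B) * g) / (m_A + m_B)  # acceleration of the system
f = F - m_A * a                                # friction force between B and A
print(round(a, 3))   # 5.485 m/s^2
print(round(f, 1))   # 172.6 N
```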
-
http://unapologetic.wordpress.com/2010/07/20/upper-and-lower-ordinate-sets/?like=1&source=post_flair&_wpnonce=bbf4ba6a50
The Unapologetic Mathematician
Upper and Lower Ordinate Sets
Let $(X,\mathcal{S})$ be a measurable space so that $X$ itself is measurable — that is, so that $\mathcal{S}$ is a $\sigma$-algebra — and let $(Y,\mathcal{T})=(\mathbb{R},\mathcal{B})$ be the real line with the $\sigma$-algebra of Borel sets.
If $f$ is a real-valued, non-negative function on $X$, then we define the “upper ordinate set” to be the subset $V^*(f)\subseteq X\times Y$ such that
$\displaystyle V^*(f)=\{(x,y)\in X\times Y\vert0\leq y\leq f(x)\}$
We also define the “lower ordinate set” to be the subset $V_*(f)\subseteq X\times Y$ such that
$\displaystyle V_*(f)=\{(x,y)\in X\times Y\vert0\leq y<f(x)\}$
We will explore some basic properties of these sets.
First, if $f$ is the characteristic function $\chi_E$ of a measurable subset $E\subseteq X$, then $V^*(\chi_E)$ is the measurable rectangle $E\times[0,1]$, while $V_*(\chi_E)$ is the measurable rectangle $E\times[0,1)$.
Next, if $f$ is a non-negative simple function, then we can write it as the linear combination of characteristic functions of disjoint subsets. If $f=\sum a_i\chi_{E_i}$, then for each $i$ the upper ordinate set $V^*(a_i\chi_{E_i})$ is the measurable rectangle $E_i\times[0,a_i]$ while the lower ordinate set $V_*(a_i\chi_{E_i})$ is the measurable rectangle $E_i\times[0,a_i)$. Since the $E_i$ are all disjoint, the upper ordinate set $V^*(f)$ is the disjoint union of all the $V^*(a_i\chi_{E_i})$, and similarly for the lower ordinate sets. Thus the upper and lower ordinate sets of a simple function are both measurable.
Next we have some monotonicity properties: if $f$ and $g$ are non-negative functions so that $f(x)\leq g(x)$ for all $x\in X$, then $V^*(f)\subseteq V^*(g)$ and $V_*(f)\subseteq V_*(g)$. Indeed, if $(x,y)\in V^*(f)$ then $0\leq y\leq f(x)\leq g(x)$, so $(x,y)\in V^*(g)$ as well, and similarly for the lower ordinate sets.
If $\{f_n\}$ is an increasing sequence of non-negative functions converging pointwise to a function $f$, then $\{V_*(f_n)\}$ is an increasing sequence of sets whose union is $V_*(f)$. That $\{V_*(f_n)\}$ is increasing is clear from the above monotonicity property, and just as clearly they’re all contained in $V_*(f)$. On the other hand, if $(x,y)\in V_*(f)$, then $0\leq y<f(x)$. But since $\{f_n(x)\}$ increases to $f(x)$, this means that $y<f_n(x)$ for some $n$, and so $(x,y)\in V_*(f_n)$. Thus $V_*(f)$ is contained in the union of the $V_*(f_n)$. Similarly, if $\{f_n\}$ is decreasing to $f$, then $\{V^*(f_n)\}$ decreases to $V^*(f)$.
Finally (for now), if $f$ is a non-negative measurable function, then $V^*(f)$ and $V_*(f)$ are both measurable. The lower ordinate set $V_*(f)$ is easier, since we know that we can pick an increasing sequence of non-negative measurable simple functions converging pointwise to $f$. Their lower ordinate sets are all measurable, and they form an increasing sequence of measurable sets whose union is $V_*(f)$. Since this is a countable union of measurable sets, it must be itself measurable.
For $V^*(f)$ we have to be a little trickier. First, if $g$ is bounded above by $c$, then $c-g$ is non-negative and also bounded above by $c$, and we can find an increasing sequence $\{g_n\}$ of non-negative measurable simple functions converging pointwise to $c-g$. Then $\{c-g_n\}$ is a decreasing sequence of non-negative simple functions converging pointwise to $g$. The catch is that the measurability of a simple function only asks that all the nonzero sets on which it is defined be measurable. That is, in principle the zero set of $g_n$ may be non-measurable. However, the zero set of $g_n$ is the complement of $N(g_n)$, and since this set is measurable we can use the assumed measurability of $X$ to see that $X\setminus N(g_n)$ is measurable as well. And so we see that $c-g_n$ is measurable as well. Thus $\{V^*(g_n)\}$ is a decreasing sequence of measurable sets, converging to $V^*(g)$, which must thus be measurable.
Now, for a general $f$, we can consider the sequence $\{f\cap n\}$ which replaces any value $f(x)>n$ with $n$ (write $f_n=f\cap n$). Each of these functions is still measurable (again using the measurability of $X$), and is now bounded. Thus $\{V^*(f_n)\}$ is an increasing sequence of measurable sets, and I say that now their union is $V^*(f)$. Indeed, each is contained in $V^*(f)$, so the union must be. On the other hand, if $(x,y)\in V^*(f)$, then $y\leq f(x)$. But since $f(x)\in\mathbb{R}$, there is some $N$ so that $f(x)\leq N$. Thus $y\leq\min(f(x),N)=f_N(x)$, and so $(x,y)\in V^*(f_N)$, and is in the union as well. Since $V^*(f)$ is the union of a countable sequence of measurable sets, it is itself measurable.
Incidentally, this implies that if $f$ is a non-negative measurable function, then the difference $V^*(f)\setminus V_*(f)$ is measurable. But we can calculate this difference as
$\displaystyle\begin{aligned}V^*(f)\setminus V_*(f)&=\{(x,y)\in X\times Y\vert 0\leq y\leq f(x)\}\setminus\{(x,y)\in X\times Y\vert 0\leq y<f(x)\}\\&=\{(x,y)\in X\times Y\vert 0\leq y\leq f(x)\}\cap\{(x,y)\in X\times Y\vert 0\leq y<f(x)\}^c\\&=\{(x,y)\in X\times Y\vert 0\leq y\leq f(x)\}\cap\{(x,y)\in X\times Y\vert y<0\textrm{ or }f(x)\leq y\}\\&=\{(x,y)\in X\times Y\vert 0\leq y\leq f(x)\textrm{ and }(y<0\textrm{ or }f(x)\leq y)\}\\&=\{(x,y)\in X\times Y\vert 0\leq y\leq f(x)\leq y\}\\&=\{(x,y)\in X\times Y\vert y=f(x)\}\end{aligned}$
That is, $V^*(f)\setminus V_*(f)$ is exactly the graph of the function $f$, and so we see that the graph of a non-negative measurable function is measurable.
Posted by John Armstrong | Analysis, Measure Theory
21 Comments »
1. Is this correct: the unit circle in R^2 has measure 0 with respect to the measure of the plane because the circle is the product of sets of at most two points on the vertical lines intersecting the circle (and thus each set having measure 0)?
This made perfect sense if there was a product measure theorem that allowed a product of measure spaces indexed by a continium of real line instead of a product of finite or countably many measure spaces. Have you heard of such a theorem in standard measure theory?
Comment by Hamid | July 20, 2010 | Reply
2. Not offhand, but I’m not sure why you think it would help. In what way is $\mathbb{R}^2$ such a product?
Comment by | July 20, 2010 | Reply
3. Taking only the portion of the x-axis between [-1,1] and considering only the upper half plane, then the intersections of the vertical lines through [-1, 1] with the function representing the upper unit circle are just singletons, each of measure zero. If this product theorem existed then the measure would be zero.
I think what you showed in your upper and lower ordinate sets readily implies that the set is measurable alright.
Actually, this was the remark that our analysis instructor made of the solution to one of the final exam problems right after the exam some years ago. Maybe he was joking, but I still can’t get over it. It seems too easy of a joke.
Comment by Hamid | July 20, 2010 | Reply
4. Of course I am not trying to make a joke here. I am still curious if this makes sense.
Comment by Hamid | July 20, 2010 | Reply
5. I don’t really see how it would help.
Comment by | July 21, 2010 | Reply
6. [...] Upper and Lower Ordinate Sets [...]
Pingback by | July 21, 2010 | Reply
7. Does it not help to take f(x)=sqrt(1-(sin pi x)^2) for x in [-1, 1] given that we also know somehow that the upper half of unit disk has area pi/2? Don’t ordinate sets apply to show that the graph of (x, f(x)) is measurable?
Comment by Hamid | July 21, 2010 | Reply
8. That’s not what I mean. Where do you see an uncountably-indexed product space?
Comment by | July 21, 2010 | Reply
9. Because first A=[-1, 1] is uncountable; and second, A can be taken as the index set of a product measure (if it exists which is my question) of that many measurable spaces of singletons as subsets of the same real measure spaces as copies of the real vertical lines.
I myself have troubles understanding this and thought afterwards that it must be wrong, but have you not heard of this anywhere either?
Comment by Hamid | July 21, 2010 | Reply
10. Why would the product of all those vertical lines mean anything? The circle lives in the union (with some additional topological information, no less), not their product.
Comment by | July 21, 2010 | Reply
11. True. Let me first again ask you if Fubini’s theorem on the product of measures does not have a generalization to indices larger than omega cardinality; maybe there is a categorical answer. If so, please disregard the rest of the trivialities below.
The thing with this example is that it is a trivial example of more complicated problems, I suppose. My intuition actually fails to see the “right” picture in its most generality. Fibre bundles like sphere bundles where the base is the unit circle, or certain leaves of foliations of a manifold might be good examples. How does one integrate on the manifold globally in local correspondence with integration on each leaf. E.g. The torus as a circle bundle on another circle inherits its topology from those of the circles and as you noted it is a union of circles. How can one build a product topology and a sigma-algebra of Borel sets out of the open sets of torus’ product topology. That again is trivial as the surface of the torus can be envisioned at most as the product of 2 real Lebesgue measures.
Comment by Hamid | July 22, 2010 | Reply
12. I really don’t know about Fubini’s theorem for uncountable products. I also really don’t know what any of the examples you suggest has to do with uncountable products.
Comment by | July 22, 2010 | Reply
13. I was just trying to show with no luck that the vertical lines passing through the circle are not just a union of lines. Consider the line bundles on the circle. One gets a cylinder, a Mobius strip, or putting more twists to the lines before gluing the lines, passing through 0 and 1 in the interval [0,1], together then one might get line bundles with higher "spins". In fact, e.g. for a cylinder we have the space of smooth vector bundles on the circle where instead of lines we have smoothly varying vectors on them.
But forget about this for now. Actually, Noncommutative Geometry 1994 of A. Connes chapter 1 section 4 might be of some use here as the pictures of the Kronecker foliation and the Lady in Blue, scrambled and non-scrambled, might also directly apply here.
Take a torus T again colored in black and white stripes or with more colors. What is a local or global diffeomorphism D of T so that each stripe of D(T) has measure 0? It seems like if we do not have an equal partition of the torus into 3 stripes and if we transform the colors according to H: (f(x), g(y)):[0,1]x[0,1]–>[0,1]x[0,1], where f and g are both the adjusted Cantor ternary functions (as in Royden's Real Analysis) applied to the (x,y) coordinates of the torus, then we would have been done except that Cantor ternary functions are not measurable. I was almost there if I could kind of forget about measurability. If the function is applied to 3 stripes in an exact kind of way, rotate the torus less than 1/3 before applying. H is also probably not a diffeomorphism but only a homeomorphism.
One might again ask where the uncountable product is. I am afraid that I would read thorough all of noncommutative geometry book and still not get it. Is it a von Neumann algebra of type II_1? I am now more confused due to lack of knowledge of higher category theory 2-Cat and higher. Uncountable product is probably wrong to begin with.
I promise not to say one more extra word to you again unless you say so otherwise. You probably have to stand on your head as John Baez mentions of cocompactness, co-this and co-that, to read the stuff in my website; it has to be forced into becoming strictly mathematical.
Comment by | July 23, 2010 | Reply
14. Uncountable product is probably wrong to begin with.
That’s pretty much been my point all along.
And what gave you the impression that I’m uncomfortable with category theory (co-this, and co-that)? Please, do go back and read the few-hundred posts I’ve made on that subject. Just because I’m talking about analysis now doesn’t mean I’m always and forever an analyst.
Comment by | July 23, 2010 | Reply
15. I just quoted John Baez from n-category on a homological algebra puzzle. That’s all.
Comment by | July 23, 2010 | Reply
16. [...] Measures of Ordinate Sets If is a -algebra and is a Borel-measurable function we defined the upper and lower ordinate sets and to be measurable subsets of . Now if we have a measure on and Lebesgue measure on the Borel [...]
Pingback by | July 26, 2010 | Reply
17. Dr. Armstrong,
Please excuse me by bothering you but I think you have a typo in the definition of the “lower ordinate set”.
Comment by | April 8, 2013 | Reply
18. That being?
Comment by | April 8, 2013 | Reply
19. If I am right it is a very minor typo: “(…) such that $V^{∗}(f)=$” instead of “(…) such that $V_{∗}(f)=$“.
Comment by | April 9, 2013 | Reply
20. Repetition: If I am right it is a very minor typo: “(…) such that $V_{\ast }(f)$ instead of “(…) such that $V^{\ast }(f)$“.
Comment by | April 9, 2013 | Reply
21. Ah, yes. Thanks.
Comment by | April 9, 2013 | Reply
http://physics.stackexchange.com/questions/tagged/resonance+frequency
# Tagged Questions
0answers
25 views
### Find Resonance Frequencies [closed]
How can I find the resonance frequencies for the damped harmonic oscillator when it is written in this form? $$y''\left(t\right)+2\zeta y'\left(t\right)+y\left(t\right)=\sin{(\omega t+\phi)}$$ where ...
3answers
132 views
### Frequency of a Tuning Fork
Question: Which of the following affect the frequency of a tuning fork? Tine stiffness Tine length The force with which it's struck Density of the surrounding air Temperature of the surrounding air ...
2answers
530 views
### Phase difference of driving frequency and oscillating frequency
If a mass is attached to a spring and is oscillating (SHM). If a driving force is applied it must be at the same frequency as the mass's oscillation frequency. However I'm told that the phase ...
1answer
268 views
### How would natural (resonant) frequencies affect amplitudes?
I read $y=A\sin(2\pi ft)$, where $A$=Amplitude, $f$=Frequency, $t$=Time and $y$=$Y$ position of the wave. Since natural frequencies only take the most effect when they are close to the frequency. How ...
0answers
400 views
### How to calculate the resonance frequencies of human being cells? [closed]
I would like to calculate the human being's cells natural resonance frequencies. can someone please help we with that? where should I start from?
1answer
272 views
### Resonance and Natural Vibrations in Vacuum
In my Physics textbook, it says that if two pendulums of the same natural frequency are placed next to each other and if one is set into vibration, the other starts resonating and when the first one ...
1answer
553 views
### static flow of water
The title, I don't know whether it's correct or not, but I came across a video in youtube, http://youtu.be/_PkgQQqpH2M. The author of video used the title and hence I used the same.. The video ...
1answer
198 views
### resonance frequency [closed]
A string has a mass per unit length of $9 \times 10^{-3}$ kg/m. What must be the tension in the string if its second harmonic has the same frequency as the second resonance mode of a 2m long pipe open at one end? ...
3answers
532 views
### How could this person have discovered the resonant frequency from this string of magnets?
I stumbled onto this page http://mylifeisaverage.com/story/1364811/ and the post states that they were all making strings and shapes with these sets of 216 really small spherical earth ...
http://www.physicsforums.com/showthread.php?p=1417400
Physics Forums
## Fermat's Last theorem
Here is a proof that comes awfully close to that:
Prove that there exist irrational $a$, $b$ such that $a^b$ is rational.
Look at $\sqrt{2}^{\sqrt{2}}$. It is not known (last time I checked) if that number is rational or irrational. However:
If it is rational, then we are done.
If it is irrational, then $$\left(\sqrt{2}^{\sqrt{2}}\right)^{\sqrt{2}} = \sqrt{2}^{\sqrt{2}\cdot\sqrt{2}} = \sqrt{2}^2 = 2$$ is rational.
We have proved that there exist irrationals $a$, $b$ such that $a^b$ is rational, but are unable to say what $a$ and $b$ are!
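The argument is non-constructive, but the arithmetic it relies on can at least be watched in floating point (an illustration, of course, not a proof):

```python
import math

# Whichever of the two cases holds, the number ((sqrt 2)^(sqrt 2))^(sqrt 2)
# itself equals sqrt(2)^2 = 2, up to floating-point rounding.
r2 = math.sqrt(2)
x = r2 ** r2        # candidate a^b with a = b = sqrt(2); rationality unknown
y = x ** r2         # if x is irrational, this is the rational witness

assert abs(y - 2.0) < 1e-9
```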
yeah that proof was in my discrete math book, it is cool. the example i was going to use was that given any number we know there is a bigger prime, despite the fact that at some point we won't be able to construct it. neither of these are the same thing, but i think one of the things that attracts me to math is that you can know things exist without having to see them exist. where in science statistical evidence is very important, in math it's almost irrelevant.
http://mathoverflow.net/questions/81569?sort=newest
## Example of a morphism between exterior algebras that is $\mathbb{Z}_2$ graded but not $\mathbb{Z}$ graded??
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
The title pretty much states my problem. I consider only finitely generated exterior algebras $\bigwedge V$. It is known that any morphism between exterior algebras is determined by its action on generators, i.e. its action on $V$. Does anyone know a good example of this kind of morphism?
By $\mathbb{Z}_2$ graded I mean a morphism of algebras such that the parity of the degree of a form $\eta$ is preserved by the morphism; and by $\mathbb{Z}$ graded I mean a morphism that preserves the degree of $\eta$, i.e. if $\eta$ is a $k$-form, then so is $f(\eta)$, where $f$ is the morphism in question. Thanks in advance.
-
2
Map the elements of a basis of $V$ to elements in the highest non-zero odd exterior power of $V$. – Mariano Suárez-Alvarez Nov 22 2011 at 2:53
## 1 Answer
There doesn't seem to be much more to say, so I'll just repeat Mariano's comment with an example. Let $V$ be a 3-dimensional vector space in degree 1, and consider any nonzero linear map from $V$ to $\wedge^3 V$. This induces an algebra endomorphism of $\bigwedge^\bullet V$ that preserves the $\mathbb{Z}/2\mathbb{Z}$-grading but not the $\mathbb{Z}$-grading.
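The example can be checked concretely with a small hand-rolled exterior algebra on three generators (a sketch for illustration, not a library; the dict-of-index-tuples representation is made up here):

```python
def sorted_with_sign(idx):
    """Sort an index tuple, tracking the permutation sign; sign 0 if indices repeat."""
    if len(set(idx)) < len(idx):
        return 0, ()
    lst, sign = list(idx), 1
    for i in range(len(lst)):           # bubble sort, counting transpositions
        for j in range(len(lst) - 1):
            if lst[j] > lst[j + 1]:
                lst[j], lst[j + 1] = lst[j + 1], lst[j]
                sign = -sign
    return sign, tuple(lst)

def wedge(a, b):
    """Wedge product of elements written as {sorted index tuple: coefficient}."""
    out = {}
    for s, ca in a.items():
        for t, cb in b.items():
            sg, u = sorted_with_sign(s + t)
            if sg:
                out[u] = out.get(u, 0) + sg * ca * cb
    return {k: v for k, v in out.items() if v}

def phi(x):
    """Endomorphism induced by sending every generator e_i to e_1 ^ e_2 ^ e_3."""
    out = {}
    for s, c in x.items():
        if len(s) == 0:                           # scalars map to themselves
            out[()] = out.get((), 0) + c
        elif len(s) == 1:                         # generators map to the top form
            out[(1, 2, 3)] = out.get((1, 2, 3), 0) + c
        # monomials of degree >= 2 map to a wedge of the top form with itself: 0
    return {k: v for k, v in out.items() if v}

e1, e2 = {(1,): 1}, {(2,): 1}
assert phi(wedge(e1, e2)) == wedge(phi(e1), phi(e2)) == {}   # multiplicative
assert phi(e1) == {(1, 2, 3): 1}   # degree 1 -> degree 3: parity kept, degree not
```

The two asserts exhibit exactly the claimed behavior: $\phi$ is an algebra map preserving the $\mathbb{Z}/2\mathbb{Z}$-grading (odd goes to odd) while sending 1-forms to 3-forms.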
-
http://mathoverflow.net/questions/56836?sort=oldest
## Proof of a fact about traces
I'm following the open courseware content on Machine Learning from Stanford University. In the lecture notes, it is given that
$$\Delta_A \ tr(ABA^TC) = CAB + C^TAB^T$$
which I tried but couldn't prove easily. It is not required to follow the course content but I just wondered and wanted to learn its proof. Any suggestions?
Update: $A$, $B$, and $C$ are matrices and $\Delta_A$ is the gradient operation on matrix $A$.
-
2
Dear Ismail: it's not necessary to open with a salutation or have a closing like "kind wishes". I guess $A$, $B$, $C$ are square matrices of the same size; could you say what is meant by $\Delta_A$ please? Also, a direct link to the lecture notes would be very nice. – Todd Trimble Feb 27 2011 at 16:56
1
Dear Todd: Thanks for you advice. I will take your remarks about how to ask a question into consideration. The course notes are here: stanford.edu/class/cs229/notes/cs229-notes1.pdf Page 9, Eq. 3. – İsmail Arı Feb 27 2011 at 17:07
2
I don't see the reason for getting a -1 for the question. Is that because I couldn't prove it easily? If so, why are there 8 current upvotes for the answer post. If I know the reason, I would update my questioning to mathoverflow community accordingly. – İsmail Arı Apr 10 2011 at 23:23
1
I didn't downvote the question, but I can see a good reason for doing so, namely, the carelessness with which the question was presented. No indication of what $A$, $B$, $C$, and $\Delta_A$ were, not editing the question to include this information when the defect was brought to your attention - in fact, I've just decided to downvote the question. – Gerry Myerson Apr 11 2011 at 2:18
Thanks for the warning. I updated the question. – İsmail Arı Apr 11 2011 at 8:27
## 1 Answer
I guess $\Delta_A$ denotes the derivative with respect to the elements of the matrix $A$ (more conventionally denoted by $\partial_{A}$).
To evaluate the derivative with respect to $A_{ij}$, write out the trace in terms of components and then use $\partial_{A_{ij}} A_{mn} = \delta_{im} \delta_{jn}$, $$\partial_{A_{ij}} \text{tr}(A B A^T C) = \partial_{A_{ij}}\sum_{mnkl} A_{mn} B_{nk} A_{lk} C_{lm}= \sum_{kl} B_{jk}A_{lk} C_{li} + \sum_{mn} A_{mn} B_{nj} C_{im}$$ $$= ( C^T A B^T+ C A B )_{ij}.$$ This is the component-wise version of your identity.
Note to the comment of Todd Trimble: the matrices $A,B$, and $C$ do not have to be necessarily square matrices. Their dimension just has to "match" ($A \in \mathbb{R}^{m \times n}$, $B\in \mathbb{R}^{n \times n}$, $C \in \mathbb{R}^{m \times m}$, with $m$ and $n$ arbitrary integers).
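The identity can also be spot-checked numerically, e.g. with NumPy and a central-difference gradient (the sizes below are arbitrary, matching the dimension constraints just noted):

```python
import numpy as np

# Spot-check  grad_A tr(A B A^T C) = C A B + C^T A B^T.
rng = np.random.default_rng(0)
m, n = 4, 3
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, n))
C = rng.standard_normal((m, m))

def f(A):
    return np.trace(A @ B @ A.T @ C)

# Numerical gradient via central differences, one entry of A at a time.
eps = 1e-6
num = np.zeros_like(A)
for i in range(m):
    for j in range(n):
        E = np.zeros_like(A)
        E[i, j] = eps
        num[i, j] = (f(A + E) - f(A - E)) / (2 * eps)

analytic = C @ A @ B + C.T @ A @ B.T
assert np.allclose(num, analytic, atol=1e-5)
```

Since $f$ is quadratic in $A$, the central difference is exact up to rounding, so the tolerance is generous.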
-
Dear Fabian: Your guess is correct. Sorry for missing definition. The operator in question is the gradient operation as you guessed. – İsmail Arı Feb 27 2011 at 17:14
http://crypto.stackexchange.com/questions/5170/why-do-we-need-in-rsa-the-modulus-to-be-product-of-2-primes
# Why do we need in RSA the modulus to be product of 2 primes?
I think I roughly understand how the RSA algorithm works.
However, I don't understand why we need the $N$, which we use as a modulus, to be $pq$ for some large primes $p, q$.
I vaguely know it has something to do with factorization, but I am kind of lost. So, hypothetical questions.
• What would happen if the $N$ was not $pq$, but just a big prime?
• What if $N$ would be some random composite (that's easy to factor)?
The other parts of RSA would stay the same.
-
## 1 Answer
RSA would still "work" with such $N$, but isn't secure for $N$ that are easily factored. If you know the factorization of $N$ (which is trivial for prime $N$) you can calculate the private key from the public key. This totally breaks the desired security properties of RSA.
The essential equation for RSA is that $m^{\phi(N)+1}= m \mod N$ for all $m$. This works for all $N$, but only for some $N$ it's hard to calculate $\phi(N)$. When using RSA we require $\phi(N)$ being hard to calculate, since once you know $\phi(N)$ you can get $d$ from $e$ by solving $e \cdot d = 1 \mod \phi(N)$ using the extended Euclidean algorithm (just like what you do when legitimately creating the key-pair).
If $N$ has more than two factors, but at least two of those are large and hard to guess, it's still secure. But almost nobody uses this RSA variation.
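The key-recovery step described above can be sketched with textbook-sized primes (completely insecure, purely for illustration): once an attacker factors $N$, $\phi(N)$ and hence $d$ are immediate.

```python
# Toy RSA with tiny primes. An attacker who factors N = p*q can compute
# phi(N) and then derive the private exponent d from the public exponent e.
p, q = 61, 53
N = p * q                     # 3233
phi = (p - 1) * (q - 1)       # 3120, easy once p and q are known
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # modular inverse via extended Euclid (Python 3.8+)

m = 65                        # a message, 0 <= m < N
c = pow(m, e, N)              # "encrypt"
assert pow(c, d, N) == m      # "decrypt" with the derived private key
```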
-
Oh.... so because the attacker would know $\phi(N)$, he would be able to deduce $d$ from $e$ because $de=1$ in mod $\phi(N)$. I think I am starting to get it. – Karel Bílek Oct 27 '12 at 19:24
1
Also, we need at least the key creator to be able to calculate $\phi(N)$, so a random non-factorable number doesn't fit, too. – Paŭlo Ebermann♦ Oct 27 '12 at 21:58
1
As explained, more than two factors for $N$ work. Combined with the CRT it is actually useful if you have fast hardware to perform $n$-bit modular exponentiation (e.g. $n=512$) and want more security than $2n$-bit RSA allows. This was noticed by several academics (including Pr. Jean-Jacques Quisquater, who has shown me the technique in the context of projected Smart Card signature system in the late 199x), patented in the US, and used to some degree. – fgrieu Oct 28 '12 at 9:13
2
@fgrieu: nice link. It might be worth noting that the encryption scheme also "works" with $N=p$ a big prime, but then becomes symmetric (and was actually invented by Polhig and Hellman) as opposed to RSA which is an asymmetric one. – bob Oct 28 '12 at 9:35
http://physics.stackexchange.com/questions/33372/how-is-fracdqt-measure-of-randomness-of-system/36102
# How is $\frac{dQ}{T}$ measure of randomness of system?
I am studying entropy and it's hard for me to grasp what exactly entropy is.
Many articles and books write that entropy is the measure of randomness or disorder of the system. They say that when a gas is allowed to expand, the randomness increases, etc. But they end up saying $\frac{\mathrm dQ}{T}$ is the measure of increase in randomness and is called the entropy.
Even if I believe that entropy is the measure of randomness of the system, I don't understand:
1. How does $\frac{\mathrm dQ}{T}$ hold the information about the randomness of the system?
2. How is entropy an independent property of any system? I suppose that any two parameters in the equation $PV=nRT$ should completely describe the system. Why would we need entropy?
Thank you.
-
4
My advice to you, based on personal experience, is to accept these confusions until you learn about the definition of entropy in statistical mechanics (which is what Nathaniel refers to below). Entropy is defined for any system where the specific state is chosen from a probability distribution, and it directly measures our ignorance about which state the system is in ($S=0$ means we know the state exactly, $S$ maximal means all states have equal probability). I find it very confusing to try to understand it through $dQ/T$, since that is a phenomenological rather than microscopic description. – Guy Gur-Ari Aug 3 '12 at 20:43
@GuyGur-Ari: Is there a formal definition for the word "phenomenological", one that contrasts it with the latter? – Nick Kidman Aug 4 '12 at 18:49
2
I don't know about a 'formal' definition, but a phenomenological description is a description that is at the level of the observations. We observe that a system has certain properties that we can measure, like temperature, pressure, and so on. And we observe certain relations between these properties. When we combine these into a (hopefully simple) model -- that is a phenomenological description. For example, the ideal gas law and Ohm's law are both phenomenological models. – Guy Gur-Ari Aug 4 '12 at 22:29
1
A microscopic description (like statistical mechanics) considers the underlying degrees of freedom, and should explain how the observed 'macroscopic' phenomena emerge from the detailed model. If you derive the ideal gas law by postulating tiny atoms bouncing around -- that is a microscopic description. Of course a microscopic model at one level may become phenomenological once you go to more detailed observations, so this distinction depends on one's point of view. People who study optics would not call particle physicists 'phenomenologists' -- but strings theorists would. – Guy Gur-Ari Aug 4 '12 at 22:33
## 7 Answers
This answer is somewhat hand-wavy, but I do believe it should help to grasp the concepts on an intuitive level.
First of all, entropy is not a measure of randomness. For an isolated system in equilibrium under the fundamental assumption of statistical mechanics, the entropy is just $$S=k\ln\Omega$$ where $\Omega$ is the number of microstates - microscopic system configurations - compatible with the given macrostate - the macroscopic equilibrium state characterised by thermodynamic variables.
It follows from the second law $$\delta Q = T\mathrm{d}S=T\mathrm{d}(k\ln\Omega)=kT\frac1\Omega\mathrm{d}\Omega$$ or equivalently $$\mathrm{d}\Omega = \Omega\frac{\delta Q}{kT}$$ The energy $kT$ is related to the average energy per degree of freedom, so this formula tells us that the transfer of heat into a system at equilibrium opens up a new number of microstates proportional to the number of existing ones and the number of degrees of freedom the transferred energy may excite.
-
This answer effectively complements what has been already said, and succintly shows the relation between added heat and the increase in the number of possible microstates corresponding to the same macrostate; and does it without assuming no more than an elementary background of statistical mechanics. +1 – Mono Aug 4 '12 at 17:26
In my opinion, it isn't strictly correct to say that entropy is "randomness" or "disorder". The entropy is defined in statistical mechanics as $-k_B \sum_i p_i \log p_i$, where $k_B$ is Boltzmann's constant (which is only there to put it into physically convenient units) and $p_i$ is the probability that the system is in state $i$. These probabilities do not mean that the system is "randomly" jumping from one state to another (although quite often it is), they just mean that you, as an experimenter observing the system, don't know exactly which state is in, but you think some are more likely than others. Since Shannon (1948) and Jaynes (1957), this formula for the entropy has been interpreted in terms of the information that an experimenter has about a system: the less information, the more entropy. (Those links are just for completeness - I wouldn't recommend reading them as your first introduction to the subject.) The amount of information an experimenter has about a system can decrease for many reasons, but the only way it can increase is if the experimenter makes a new measurement. This is the reason for the second law of thermodynamics.
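The statistical definition just quoted is easy to compute directly (a minimal sketch, with $k_B$ set to 1):

```python
import math

def entropy(p, k=1.0):
    """Gibbs-Shannon entropy  S = -k * sum_i p_i ln p_i  (terms with p_i = 0 contribute 0)."""
    return -k * sum(pi * math.log(pi) for pi in p if pi > 0)

# A state known with certainty carries no missing information: S = 0.
S_known = entropy([1.0, 0.0, 0.0])

# A uniform distribution over W equally likely microstates maximises S and
# reproduces Boltzmann's S = k ln(Omega) with Omega = W.
W = 8
S_uniform = entropy([1.0 / W] * W)

assert abs(S_known) < 1e-12
assert abs(S_uniform - math.log(W)) < 1e-12
```

The two extremes match the interpretation in the comments above: $S=0$ when the experimenter knows the state exactly, and $S$ maximal when all states are equally probable.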
It should be noted that there are many different perspectives on the meaning of entropy and the second law, and not everyone agrees with the one I outlined above. However, I will try to answer your two questions from this point of view.
1. From a modern perspective, it's better to view $dS = \frac{dQ}{T}$ as a definition of $Q$ rather than of $S$. After all, $S$ already has a definition in terms of the probabilities. If we view $dQ$ as being defined as $TdS$ we can see that it's equal to $dU + PdV - \sum_i \mu_i dN_i$ (by rearranging the fundamental equation of thermodynamics), which is equal to the total change in energy minus the energy that's transferred in the form of work. (Here I've defined work as "mechanical work" $PdV$ plus "chemical work" $-\mu_i dN_i$. You can also add terms for electrical work, etc.)
2. There are several reasons we need to consider the entropy of an ideal gas. One is that $T$, which appears in the ideal gas law, is defined as $T=\frac{\partial U}{\partial S}$, so the $S$ comes in that way. Another is that the equation $PV = nRT$ doesn't tell you how the temperature changes when you add energy to the system. For that you need to know the heat capacity, which is closely related to the entropy. Finally, the concept of entropy is extremely useful in understanding why you can't build a perpetual motion machine.
If this point of view sounds like it might make sense to you, it might be worthwhile reading this paper by Jaynes, which takes a historical perspective, explaining how entropy was first discovered (and defined in terms of $dQ/T$), and how it then unexpectedly turned out to be all about information.
-
1
We can't really use this equation to define $Q$ because it cannot be extended to systems which are not in equilibrium. For such systems, the heat flux and entropy are well defined but not necessarily $T$. Temperature is a macroscopic parameter which happens to equal the average kinetic energy for systems in equilibrium. For systems out of equilibrium, anything is possible: from multiple temperatures to no temperature at all. I don't see any simple explanation of the question outside of statistical physics. – Shaktyai Aug 4 '12 at 20:09
@Shaktyai I disagree that the heat flux can be well defined for systems with no definable $T$. The energy flux is always definable, but if there's no $T$ then there's no meaningful way to partition it into work versus heat. Or at least, I don't know of an example where this can be done. If you can show me one I'll change my answer. – Nathaniel Aug 4 '12 at 22:57
In non-equilibrium statistical mechanics, the heat flux is just the third-order moment of the velocity: $q=\int \frac{1}{2}m\,(v-V)^2\,v\,f(r,v,t)\,dv$, where $V$ is the average velocity, $V=\int v\,f(r,v,t)\,dv$. If the system is not in LTE, then $f(r,v,t)$ is not a Maxwellian and $T$ is not defined. In collisional-radiative models (stellar atmospheres or fusion plasmas) it is very common to encounter distribution functions with two temperatures or no temperature at all. – Shaktyai Aug 5 '12 at 17:42
It's worth noting that your definition of an elemental change in the entropy of a system, namely:
$dS=\displaystyle\frac{\delta Q}{T}$
It is valid only for an internally reversible change. This is not a technicality that can be omitted; I think part of your question might be related to the notions of heat (a measurable amount of energy transferred) and statistical uncertainty (which is, up to alternative and equivalent interpretations, the intrinsic meaning of entropy).
In an internally reversible process that involves heat addition to or subtraction from a system, the $T$ under the (inexact) heat differential must be a uniform temperature across the system's spatial extension up to its boundaries, so that at every moment the temperature of the system's boundaries is equal to its (unique) bulk temperature. That means there are no temperature gradients inside the system of interest, and because of that very fact there are no possible heat exchanges inside the system's boundaries. For a system to exchange heat with something else, there must be a difference in temperature between them, and if the difference is zero (they are equal) then no heat is transferred. If you think about it, this is a sound argument: a cold glass of water gets warmer when you leave it in a room, but once it reaches the temperature of the surrounding air there is no further change, and it stays there indefinitely.
Going back to the original equation, you can now interpret the RHS as telling you that, in situations where the system's temperature is uniform at every moment, the ratio of the infinitesimally small amount of heat added to or subtracted from the system by its environment to the unique temperature at every point of the system (which is nothing but a measure of the mean kinetic energy of the individual molecules that make it up) is equal to its change in entropy. And what is entropy? Macroscopically speaking, you can take what I've written above as a definition of entropy, and you can deduce thermodynamically that it is indeed a state function (it depends only on point properties of the system, like its pressure and temperature) and does not depend on the chain of events by which that state was reached.
On the other hand, statistical mechanics (which is a more recent way of addressing what we see macroscopically as thermodynamical properties, like entropy, starting from a mechanical description at the molecular level) gives us more details on the nature of entropy. I think it's better to think about it not as a measure of randomness but as the (macroscopic) uncertainty of the (microscopic) state of the system.
I'll give you a simple example: imagine you had a pool table with its top totally covered by an opaque fabric, with just one open end for introducing the cue stick. Assume now that you know (by some means) that the eight balls are distributed on the table forming a straight line with equal spacing between them, but you don't know exactly where this line stands in the table's rectangular area; and that, for the purpose of the experiment, the white one is just next to the hole (and of course you know it). Now, you take the cue stick, introduce it through the hole left open in the fabric, and strike the cue ball. After a few seconds of (hearing) collisions, you can be sure that movement has stopped under the fabric. What happened to your knowledge about the system?
Well, you don't know where each ball has gone (we've sealed the pockets, of course!), but you didn't know it before the strike either, did you? Back then, though, you at least knew they were forming a line, and that information is now gone. From your outside point of view, your prior information about the positions of the balls and the energy and momentum you introduced into the system through the strike isn't enough to rule out a huge number of possible actual distributions of the balls. At the beginning of the experiment, you could at least write down the number of possible positions of the line of balls (perhaps by drawing a grid over the table's area, with each cell's side length equal to a ball's diameter, and counting the number of longitudinal cell lines), but now the number of possible positions has multiplied. Both before and after, you only have partial knowledge of the system's configuration (all you can do is count the possible ones, based on what you know about the system from the outside, which restricts the possibilities), but that knowledge has decreased after the experiment. It has nothing to do with the physics of the collisions between the balls: it has to do with the fact that you can't see the balls from your point of view, and all you can do is retrieve partial information through indirect measurements.
The analogy with the example above in a statistical system is that by measurements of macroscopic observables (like temperature, pressure, density, etc.) we only measure mean microscopic properties. For example, temperature is a measure of the mean molecular kinetic energy, and pressure is a measure of the mean rate of momentum transferred by striking molecules per unit area. Measuring them gives us partial knowledge of the system's microscopic configuration (like the original information you held about the positions of the pool balls). And any change in the macroscopic observables is correlated with a change in the possible (i.e. not ruled out) microscopic configurations, and that in turn causes a change in our knowledge about it. It turns out that those changes can be measured, and that's indeed entropy variation, in the sense that an entropy increase correlates with an uncertainty increase, or a knowledge decrease. Showing that this relation holds, starting from a mechanical framework, is the whole point behind statistical mechanics.
Finally, I hope you can now see that $\displaystyle\frac{\delta Q}{T}$ is just analogous to the energy introduced by the strike in the experiment, relative to the previous knowledge of the positions of the balls (lower temperatures imply less translational, rotational and vibrational molecular movement, and vice versa, so it is actually a "partial measure" of their positions). So:
1. It doesn't hold the information about the randomness of the system, it is just a measure of the increase in uncertainty from a macroscopic perspective, and only holds for reversible processes (in general, entropy can increase without adding energy to a system).
2. As other answers have stated, entropy is needed to define some of the terms in any state equation (like the Ideal Gas law), and by the way, state equations are just approximations to the actual behavior of real substances (something pretty clear in the "ideal" part of the law you cite), so it's natural for them to be based on more fundamental concepts (like entropy).
EDIT: As Nathaniel rightly pointed out below, my original statement that the validity of the macroscopic definition of entropy in terms of heat and temperature depended on the (tacitly) total reversibility of the process was flawed. The only requirement for it to be valid is that the heat exchange process must be internally reversible, because we're only measuring this way the change in entropy inside the system (and so external irreversibilities associated with the process are irrelevant).
-
A wealth of meaningful info is contained in the above answers. However, a short and simple intuitive picture still seems missing.
The bottom line is that temperature measures the energy per degree of freedom, and hence $\frac{dQ}{T}$ measures nothing more than the number of degrees of freedom over which the energy has spread. The number of degrees of freedom describes the microscopic complexity of the system (as others have remarked, many consider the term 'randomness' less appropriate) - the amount of information needed to specify the system down to all its microscopic details. This quantity is known as the (statistical) entropy.
You might like this blog that discusses the subject.
-
You should think of the equation
$$dS = {dQ\over T}$$
as the definition of temperature, not of entropy. Entropy is more fundamental--- it's the size of the phase space, the log of the number of possible states. The temperature is a derivative of this with respect to energy.
To understand why this makes sense, put two systems side by side. If the energy flows from hot to cold, the loss of entropy in the hot system is more than compensated by the gain in entropy of the cold system. So energy will flow from hot to cold, statistically, on average.
It is not the properties of temperature which make $dQ\over T$ an entropy change; rather, it is the properties of the entropy which make the coefficient ${dS\over dQ}$ an inverse temperature.
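The "entropy is the log of the number of states" picture can be made concrete with a toy model (my own illustrative sketch, not from the answer above): two Einstein solids exchanging energy quanta, where the multiplicity of $q$ quanta among $N$ oscillators is $\binom{q+N-1}{q}$. The combined system drifts toward the energy split with the most microstates, which is exactly equal energy per degree of freedom, i.e. equal temperature.

```python
from math import comb

def multiplicity(N, q):
    """Microstate count for an Einstein solid:
    q energy quanta distributed among N oscillators."""
    return comb(q + N - 1, q)

# Two solids in thermal contact, sharing 100 quanta of energy.
NA, NB, q_total = 300, 200, 100

# Count total microstates for each way of splitting the energy;
# the most probable split maximizes the product of multiplicities.
best_split = max(range(q_total + 1),
                 key=lambda qA: multiplicity(NA, qA) * multiplicity(NB, q_total - qA))
print(best_split)  # 60 -> equal energy per oscillator (qA/NA == qB/NB)
```

The maximizer lands at $q_A = 60$, i.e. $q_A/N_A = q_B/N_B$: energy flows until the temperatures agree, purely by counting.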
-
A microscopic approach to entropy has led to great insight and is explained in detail in the given answers.
To understand the concept of entropy there is an equally valid but macroscopic approach that might complement the given answers. The idea has been developed on the basis of 'adiabatic accessibility', and the authors Elliott H. Lieb and Jakob Yngvason have done an excellent job explaining this concept, although a little heavy on the mathematical side (arxiv link). Their work has been summarized in the book The Entropy Principle by André Thess.
So for whoever is interested in a different approach to rigorously define entropy should give this concept a closer look.
-
"How does $\frac{dQ}{T}$ hold the information about the randomness of the system?"
The answer lies in the microscopic definition of heat. The velocity of any particle can be written $V = V_b + v_i$, where $V_b$ is the bulk velocity and $v_i$ is the "random" velocity, with $\langle v_i \rangle = 0$. The kinetic energy associated with $v_i$ is the heat. So measuring the heat is nothing else than measuring the degree of randomness of the molecules in the system. If all the molecules fly in the same direction then $v_i = 0$ and $V = V_b$: the kinetic energy is the macroscopic kinetic energy $E_c = \frac{1}{2} m V_b^2$; if all directions are equiprobable, $V_b = 0$ and the kinetic energy is purely heat.
"I suppose that any two parameters in the equation $PV=nRT$ should completely describe the system. Why would we need entropy?" Take two gases $(P_1,V_1,T_1)$ and $(P_2,V_2,T_2)$ and put them in contact. You can't predict how the temperature evolves without the entropy.
-
http://math.stackexchange.com/questions/1778/a-probability-game/1857
# A probability game
Motivation: A friend asked me this question.
The Problem: Suppose you start off with a dollar. You flip a fair coin; if it lands on heads you win 50 cents, otherwise you lose 50 cents. If after $n$ flips you have a nonzero amount of money, you win. What's the probability you win? What about the limiting case as $n$ tends to infinity?
edit: In this game you are not allowed to have negative money. Thanks, Jonathan Fischoff, the link helped greatly.
-
Are you allowed to have a negative amount of money or does the game end when you lose all your money? In either case I think the keyword you want is "one-dimensional random walk." – Qiaochu Yuan Aug 7 '10 at 4:31
Can the 'house' (the person betting against the player) ever run out of money? – Larry Wang Aug 7 '10 at 15:55
## 4 Answers
This type of problem on random walks is usually solved with the reflection principle, with the walk visualized as a lattice path. Strangely, I can't find an online reference to the solution, but it is given in Feller's book on probability theory, volume 1.
Here, measuring the money in units of 0.5 dollars, the walk, drawn in the $(x,y)$ plane, starts at $(0,2)$, moves by $(+1,\pm 1)$ at each step, and the question is the probability that over $n$ moves the walk's height stays $\geq 1$.
-
Here is a completely probabilistic proof inspired by Moron in http://math.stackexchange.com/questions/4044/finding-a-clever-solution-to-a-game-of-chance.
Let $p$ be the probability that I end up with a net loss of 50 cents; then $p = \frac{1}{2} + \frac{1}{2} p^3$. That is to say, I either lose 50 cents, or gain a dollar and then eventually lose a dollar and 50 cents. Solving for $p$ I get $1$, $(-1-\sqrt{5})/2$, and $(\sqrt{5}-1)/2$, and it is not hard to see that we can dismiss the first two. Now the probability I end up with a net loss of one dollar is $p^2 = (3-\sqrt{5})/2$.
-
As n goes to infinity, you run out of money with probability 1. This is known as gambler's ruin or the drunk walk--if you are winning/losing money or walking side to side such that you win/lose/go left/go right with probability 1/2 (a 1-d simple symmetric random walk), and there is a boundary like going broke or ending up in the gutter, as the number of games/steps goes to infinity, the probability of hitting that boundary goes to 1.
I don't know how to prove it, but per Joshua Zucker's comment on A001405 in the OEIS, and agreeing with T..'s answer, the probability of ending up with a positive amount of money after n flips if you started with no money would be $\frac{1}{2^n}{n\choose \left\lfloor\frac{n}{2}\right\rfloor}$. Because you start with one dollar instead of no money, this is offset so that the probability of having money after n flips is $\frac{1}{2^{n-1}}{n-1\choose \left\lfloor\frac{n-1}{2}\right\rfloor}$ for n≥1.
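Candidate closed forms like these are easy to check numerically. Here is an exact dynamic-programming computation of the survival probability (a sketch of my own: money is counted in 50-cent units, starting from 2, and $0 is treated as absorbing, per the edited question):

```python
from fractions import Fraction

def survival_probability(n_flips, start_units=2):
    """Exact probability of still having money after n_flips,
    starting with `start_units` half-dollars; 0 is absorbing."""
    # dist maps current holdings (in half-dollar units) -> probability
    dist = {start_units: Fraction(1)}
    for _ in range(n_flips):
        new = {}
        for units, p in dist.items():
            if units == 0:          # broke: game over, stay at 0
                new[0] = new.get(0, Fraction(0)) + p
            else:
                for step in (+1, -1):
                    s = units + step
                    new[s] = new.get(s, Fraction(0)) + p / 2
        dist = new
    return 1 - dist.get(0, Fraction(0))

print([survival_probability(n) for n in range(1, 5)])
# -> [Fraction(1, 1), Fraction(3, 4), Fraction(3, 4), Fraction(5, 8)]
```

The values decrease toward 0 as $n$ grows, consistent with the gambler's-ruin limit in the first paragraph.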
-
As Isaac pointed out, this is known as the Gambler's ruin problem. I recently wrote a couple of blog posts (this and the post linked from there) explaining how to calculate the ruin probability. I'll repeat part of one of the proofs here.
Problem Formulation
A gambler enters a casino with $n$ dollars in cash and starts playing a game where he wins with probability $p$ and loses with probability $q = 1-p$. The gambler plays the game repeatedly, betting $1$ dollar in each round. He leaves the game if his total fortune reaches $N$ or if he runs out of money (he is ruined), whichever happens first. What is the probability that the gambler is ruined?
A gambler's ruin can be modeled as a one-dimensional random walk in which we are interested in the hitting probability of the absorbing states. Calculating these probabilities is fairly straightforward. Let $P_N(n)$ denote the probability that the gambler's fortune reaches $N$ dollars before he is ruined on the condition that his current fortune is $n$. Then,
$P_N(n) = p P_N(n+1) + q P_N(n-1)$
which can be rewritten as
$\displaystyle [P_N(n+1) - P_N(n)] = \left(\frac q p \right)[ P_N(n) - P_N(n-1)]$
Since $P_N(0) = 0$, we have that
$\displaystyle P_N(2) - P_N(1) = \left(\frac qp \right) P_N(1)$
and similarly
$\displaystyle P_N(3) - P_N(2) = \left(\frac qp \right) [P_N(2) - P_N(1)] = \left( \frac qp \right)^2 P_N(1)$
Continuing this way, we get that
$\displaystyle P_N(n) - P_N(n-1) = \left( \frac qp \right)^{n-1} P_N(1)$.
and therefore, by adding the first $n$ such terms, we get
$\displaystyle P_N(n) = \sum_{k=0}^{n-1} \left( \frac qp \right)^k P_N(1)$.
Moreover, we know that
$\displaystyle P_N(N) = \sum_{k=0}^{N-1} \left( \frac qp \right)^k P_N(1) = 1$.
Thus,
$\displaystyle P_N(1) = \frac 1{\sum_{k=0}^{N-1} \left( \frac qp \right)^k} = \frac { 1 - (q/p)}{\strut 1 - (q/p)^N}, \quad p \neq q$
$P_N(1) = \frac 1N, \quad p = q$.
Combining with the previous expression for $P_N(n)$ we get,
$\displaystyle P_N(n) = \begin{cases} \dfrac{1 - (q/p)^n}{1 - (q/p)^N}, & p \neq 1/2 \\ \dfrac{n}{N}, & p = 1/2 \end{cases}$.
For ease of representation let $\lambda = q/p$. Then, the probability of winning is
$\displaystyle P_N(n) = \frac{ 1 - \lambda^n} {\strut 1 - \lambda^N}, \quad \lambda \neq 1$
$P_N(n) = \frac{n}{N}, \quad \lambda = 1$.
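The closed form can be sanity-checked against the boundary conditions and the defining recursion (a quick sketch; the function name is my own):

```python
def win_probability(n, N, p):
    """P_N(n): probability the fortune reaches N before ruin,
    starting from n, winning each round with probability p."""
    q = 1 - p
    if p == q:                      # the fair case, lambda = 1
        return n / N
    lam = q / p
    return (1 - lam**n) / (1 - lam**N)

p, N = 0.45, 10
assert win_probability(0, N, p) == 0 and win_probability(N, N, p) == 1
for n in range(1, N):               # P(n) = p*P(n+1) + q*P(n-1)
    assert abs(win_probability(n, N, p)
               - (p * win_probability(n + 1, N, p)
                  + (1 - p) * win_probability(n - 1, N, p))) < 1e-12
```

The ruin probability discussed above is then just $1 - P_N(n)$.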
-
That's a different problem. Gambler's ruin asks about the probability of reaching one of two absorbing barriers (0 and N) starting from a given point. The number of steps is unlimited, so one can just compute probabilities of reaching various states. In the problem posted by yjj, the number of steps is given and there is one absorbing barrier at 0. The solution is by the reflection principle, and I don't know if there is a closed form in the asymmetrical case $p \neq 1/2$. – T.. Aug 8 '10 at 20:20
Sorry. I didn't realize the difference. – Aditya Aug 8 '10 at 20:38
http://quant.stackexchange.com/questions/4908/how-to-choose-model-parameters
# How to choose model parameters?
I'm studying math and am attending a course on interest rates this semester. Now some questions come up about how exactly things work in the real world.
My examples will be about interest rate models, but I guess there is no need to restrict ourselves to this case.
When you want to price, for example, bonds, you do the following:
1. modelling
2. pricing
3. calibration
Let's assume there is an equivalent martingale measure $Q$ such that all the bond prices are martingales. The density $\frac{dQ}{dP}$ is of a Girsanov type. Concerning the above list, I would start by choosing a model for my interest rate, i.e. I would directly write down the $Q$ dynamics of $r$. For simplicity assume $r$ has the following dynamics:
$$dr(t)=(b+\beta r(t))dt + \sigma dW^*(t)$$
where $b,\beta,\sigma$ are parameters and $W^*$ is one dimensional $Q$-Brownian motion.
For the second point I would start pricing using the usual formula
$$\pi(t,T) = E_Q[\exp{(-\int_t^Tr(u) du)}|\mathcal{F}_t]$$
where $\pi(t,T)$ denotes the bond price at $t$ with maturity $T$. Since in this case $r(t)$ is normally distributed, I can compute the prices exactly in terms of the parameters $b,\beta,\sigma$.
Now the questions show up in the last point, i.e. in 3. Here I want to choose my parameters in such a way that the computed prices match the market prices. So far I have only been working with the risk-neutral measure $Q$. Do I really compare the prices estimated under $Q$ with the market prices? Or how exactly does this calibration work in practice?
Thanks in advance for your help.
-
## 1 Answer
Yes, you really do use market prices to calibrate models derived under the risk-neutral measure. That is the whole reason risk-neutral measures are used: a) they ease the calculations, but mostly b) under the no-arbitrage and law-of-one-price assumptions (among a couple of other assumptions), the price derived under the risk-neutral measure must equal the market price (otherwise you could arbitrage, which you just assumed does not exist). Thus, you can use market prices to calibrate risk-neutral models.
-
thanks a lot for your answer. Then there is just one "follow-up" question: Why do we care about $P$? It seems to me that we can actually totally forget about the real world measure. – hulik Jan 5 at 9:53
@hulik, correct. As long as we can find arbitrage portfolios that make us indifferent about future values of the underlying in 'P', we do not have to care about individuals' risk preferences. That is the beauty of risk-neutral pricing. Check out one of my earlier answers, I discussed it at length and it should make it clearer, including supplied references and links. – Freddy Jan 5 at 10:16
http://unapologetic.wordpress.com/2007/03/02/relations/?like=1&source=post_flair&_wpnonce=c1c9c39881
# The Unapologetic Mathematician
## Relations
Sooner or later I’ll have to use relations, so I may as well get the basics nailed down now.
A relation between two sets $X$ and $Y$ is just a subset $R$ of their Cartesian product $X\times Y$. That is, it’s a collection of pairs $(x,y)$. Often we’ll write $xRy$ when $(x,y)$ is in the relation.
A function $f$ is a special kind of relation where each element of $X$ shows up on the left of exactly one pair in $f$. In this case if $(x,y)$ is in the relation we write $f(x)=y$. Remember from our discussion of functions that I was saying every element of $X$ has to show up both at least once and at most once (that is, exactly once). A surjection is when every element of $Y$ shows up on the right side of a pair in $f$ at least once, and an injection is when each element in $Y$ shows up at most once.
Also interesting are the following properties a relation $R$ between a set $X$ and itself might have:
• A relation is “reflexive” if $xRx$ for every $x$ in $X$.
• A relation is “irreflexive” if $xRx$ is never true.
• A relation is “symmetric” if whenever $xRy$ then also $yRx$.
• A relation is “antisymmetric” if whenever $xRy$ and $yRx$ then $x$ and $y$ are the same.
• A relation is “asymmetric” if whenever $xRy$ then $yRx$ is not true.
• A relation is “transitive” if whenever $xRy$ and $yRz$ then also $xRz$.
A very important kind of relation is an “equivalence relation”, which is reflexive, symmetric, and transitive. We’ve already used these a bunch of times, actually. When a group $G$ acts on a set $X$, we can define a relation $\sim$ by saying $x\sim y$ whenever there is a group element $g$ so that $gx=y$. Since the identity of $G$ sends every element of $X$ to itself, this is reflexive. If there is a $g$ so that $gx=y$, then $g^{-1}y=x$, so this is symmetric. Finally, if we have $g$ and $h$ so that $gx=y$ and $hy=z$, then $(hg)x=z$, so the relation is transitive.
When we have an equivalence relation $\sim$ on a set $X$, we can break $X$ up into its “equivalence classes”. Any two elements $x$ and $y$ of $X$ are in the same equivalence class if and only if $x\sim y$. We write the set of equivalence classes as $X/\sim$. For group actions, these are the orbits of $G$.
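The orbit construction is easy to play with computationally. Here is a toy sketch of my own (not from the post): the cyclic group of order four acting on bit strings of length four by rotation, with the orbits as equivalence classes.

```python
def orbits(elements, group_action):
    """Partition `elements` into equivalence classes, where x ~ y
    iff some group element g (here: a rotation) sends x to y."""
    classes = []
    seen = set()
    for x in elements:
        if x in seen:
            continue
        orbit = frozenset(g(x) for g in group_action)
        seen |= orbit
        classes.append(orbit)
    return classes

# Z_4 acting on binary strings of length 4 by cyclic rotation.
rotations = [lambda s, k=k: s[k:] + s[:k] for k in range(4)]
strings = [format(i, "04b") for i in range(16)]
print(len(orbits(strings, rotations)))  # 6 orbits (binary necklaces of length 4)
```

Note that the classes really do partition the set: every string lies in exactly one orbit, just as the equivalence-relation axioms guarantee.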
Show for yourself that in our discussion of cosets that there’s an equivalence relation going on, and that the cosets of $H$ in $G$ are the equivalence classes of elements of $G$ under this relation.
### Like this:
Posted by John Armstrong | Fundamentals
## 2 Comments »
1. [...] Orders As a bonus today, I want to define a few more kinds of relations. [...]
Pingback by | March 11, 2007 | Reply
2. [...] given a topological space and an equivalence relation on the underlying set of we can define the quotient space to be the set of equivalence classes [...]
Pingback by | April 28, 2010 | Reply
« Previous | Next »
## About this weblog
This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”).
I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
http://crypto.stackexchange.com/questions/47/with-sufficient-randomness-is-xor-an-acceptable-mechanism-for-encrypting/52
# With sufficient randomness, is XOR an acceptable mechanism for encrypting?
I have heard criticism of various cryptosystems saying that "at their heart, they were just XOR."
Is this just ignorance, or is there something inherently wrong with XOR based ciphers?
-
## 4 Answers
It's important to make the distinction between ciphers which use XOR internally as a component operation (which is nearly all of them), and 'ciphers' which just XOR the plaintext with a secret. If the key is the same length as the plaintext, then it's a one time pad, so in some sense, yes, with "sufficient randomness" you can safely encrypt with XOR. The problem with this is that it only applies if the key is exactly as long as the plaintext, and is perfectly uniform random. If it's not, then you've basically created a Vigenere cipher, which can be trivially broken. It's interesting, and unfortunate, how the 'provable security' of the OTP fools people into thinking they can just take a little shortcut or two and still get something safe.
Typically, people who implement simple XOR ciphers compound their mistake by using hard-coded keys, in which case a single known plaintext/ciphertext pair is sufficient to decrypt anything. That's probably mostly because if you were using some kind of prebuilt crypto toolkit, it would be easier to just use AES along with your RSA key management, etc., versus using a decent key distribution scheme and then using XOR with your safely shared keys.
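A minimal sketch of that known-plaintext break (the key and messages here are made up):

```python
def xor_bytes(data, key):
    """'Encrypt' by XORing data with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"hardcoded!"
known_pt = b"attack at dawn, bring the usual supplies"
known_ct = xor_bytes(known_pt, key)

# One known pair leaks the keystream, since key = pt XOR ct.
recovered = xor_bytes(known_ct, known_pt)[: len(key)]
assert recovered == key

# ...which then decrypts every other message under the same key.
other_ct = xor_bytes(b"retreat at once", key)
print(xor_bytes(other_ct, recovered))   # b'retreat at once'
```

(In practice the attacker must also guess the key length, e.g. by looking for repeats, but that is routine.)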
-
If the key used to XOR your plaintext is any shorter than your plaintext, then the repeats will give it away. If the key is truly random and never reused, it is effectively a one-time pad.
The historical name for XOR encryption is Vernam cipher.
is there something inherently wrong with XOR based ciphers
The amount of effort you need to put into ensuring that your key is random enough is as much effort as you need to put into coming up with a more secure algorithm. By using XOR, there is a very weak link that you have to put huge amounts of effort into securing.
If you were doing something like in your other post, making the key out of the hash of the message, then your key is now the length of your hash output (160 bits for SHA-1, 512 bits for SHA-512). If your messages are guaranteed to be less than the length of the hash, then you're ok. If any message is longer, an attacker has a place to start.
-
But that's only true for text messages, right? What if we have, say, an executable file, or an image file? Is it still vulnerable? – Soumya Jul 12 '11 at 23:42
@Soumya92, even if it is a picture or exe, those file formats are well known, so you can still attack those. – Tangurena Jul 13 '11 at 0:44
I've got a feeling that I'm going to write this a lot on here: define "sufficient". The question you must answer is "what do you want to protect and how much is it worth to you"? In general, a plain XOR cypher with a key shorter than the total plaintext encrypted is pretty weak, and methods for decrypting are more or less trivial. So if there's a lot of value there, it's not adequate.
On the other hand, rot-13 is even more trivial to decrypt (after reading USENET for years and years I can almost read rot-13 directly.) It is, however, adequate (or nearly adequate, given the existence of freaks like me) for its usual purpose: preventing someone from being offended without having to take a purposeful step to agree to read something.
There is a good way to make this decision: base the decision on risk. By definition, risk is
R = P × H
where R is the risk, P is the probability of an undesired event happening, and H is the hazard, or consequences of that undesired event. Currency is almost always a good measure for H.
Now, consider the choice of a password rule. The probability of a password being guessed in one try of a brute force attack is roughly $2^{-h(p)}$ where $h(p)$ is the entropy of the password $p$ that satisfies some password rules, and the value is given in bits. (And entropy is $\log_2$ of the number of possible passwords that satisfy the rule, who says security is complicated?) The old fashioned all-lower-case alphabetic password of length $\le 8$ characters is about 38 bits. The entropy of an ATM card 4-digit pin number is only about 13 bits.
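The entropy figures quoted above are easy to reproduce (a sketch; I use a fixed length of 8 rather than $\le 8$, which changes the count only slightly):

```python
from math import log2

def entropy_bits(alphabet_size, length):
    """log2 of the number of equally likely passwords."""
    return length * log2(alphabet_size)

print(round(entropy_bits(26, 8), 1))  # ~37.6 bits for 8 lowercase letters
print(round(entropy_bits(10, 4), 1))  # ~13.3 bits for a 4-digit PIN
```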
Now, let's say that your average ATM card account has a balance of \$3000, and you don't want to lose more than about \$10 per account per year from guessed PIN numbers. Is a 4 digit PIN sufficient?
$$R = 2^{-13} \cdot \$3000 \approx \$0.37$$ per card per guessing attempt, comfortably under the \$10 threshold.
(Pop quiz: what about for the guy who apparently had a \$99 million checking account balance?)
A simple XOR cipher is easily decrypted using frequency analysis, so it's not very suitable for anything important. Why? Because as the length of the ciphertext grows, the probability of successfully decrypting the message approaches 1. Thus the risk approaches H. But if the value of what you're encrypting is near 0, then the risk is also near 0, and so an XOR encryption might be suitable.
-
There are non-linear risk measures - many people prefer losing only a little (with high probability) to losing much (with low probability), even when your formula says that it is the same risk. – Paŭlo Ebermann♦ Jul 13 '11 at 1:25
For your ATM-card/PIN example, you are supposing that for each account the card gets stolen once a year, and then only one PIN is tried there. – Paŭlo Ebermann♦ Jul 13 '11 at 14:25
RIght, that's utility weighted. Since everyone has a different utility function, that's not very useful. – Charlie Martin Jul 13 '11 at 19:20
@Paulo you're right that could be carried out a little further, say by having a rul that blocks the PIN after 3 unsuccessful tries. The important point, though, is still to identify risk as the driver. – Charlie Martin Jul 13 '11 at 19:22
The best-known example of just XOR would be the One-time pad (or at least, one of its implementations). It just takes a random key stream and XORs it with the plaintext stream to create the cipherstream. The one-time pad is also the only provably perfect cipher, where knowing any amount of ciphertext and plaintext does not help to learn any single additional bit of plaintext.
For a more practical example: the Output Feedback mode and the Counter mode for using block ciphers as stream ciphers are also essentially just XOR: each creates a key stream from the key, and then XORs the key stream with the plaintext to create the ciphertext, or XORs the key stream with the ciphertext to get back the plaintext.
Both the OTP and the OFB/CTR stream ciphers have the same problem, which is inherent in just XOR: if you are carrying out a man-in-the-middle (MITM) attack and can guess (parts of) the plaintext, you can flip the corresponding bits of the ciphertext to change the plaintext into something of your choosing, without knowing anything at all about the key.
Whether this is a real problem depends on your application, and it can be avoided by combining the encryption with a message authentication code (MAC).
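Here is a small sketch of that bit-flipping malleability on an XOR stream (the messages are made up; note the attacker never touches the key):

```python
import secrets

def xor_stream(data, keystream):
    """Encrypt/decrypt by XORing with a keystream of equal length."""
    return bytes(d ^ k for d, k in zip(data, keystream))

plaintext = b"pay alice $100"
keystream = secrets.token_bytes(len(plaintext))   # one-time pad
ciphertext = xor_stream(plaintext, keystream)

# The attacker guesses the plaintext and XORs in (old XOR new):
# the receiver then decrypts to the attacker's chosen message.
tampered = bytes(c ^ o ^ n
                 for c, o, n in zip(ciphertext, plaintext, b"pay mallo $900"))
print(xor_stream(tampered, keystream))  # b'pay mallo $900'
```

This is exactly why an authenticated mode (or a separate MAC) is needed on top of any XOR-based stream.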
-
http://mathoverflow.net/questions/761/undergraduate-level-math-books/4439
## Undergraduate Level Math Books [closed]
What are some good undergraduate level books, particularly good introductions to (Real and Complex) Analysis, Linear Algebra, Algebra or Differential/Integral Equations (but books in any undergraduate level topic would also be much appreciated)?
EDIT: More topics (Affine, Euclidian, Hyperbolic, Descriptive & Diferential Geometry, Probability and Statistics, Numerical Mathematics, Distributions and Partial Equations, Topology, Algebraic Topology, Mathematical Logic etc)
Please post only one book per answer so that people can easily vote the books up/down and we get a nice sorted list. If possible post a link to the book itself (if it is freely available online) or to its amazon or google books page.
-
It's no longer possible to add useful answers to this question (as there are too many!) and it's unclear whether this question would be "allowed" by modern standards -- far too broad. As it's been popping back to the front page fairly frequently, we've decided to close it. – Scott Morrison♦ Jul 11 2010 at 13:30
See discussion on meta: meta.mathoverflow.net/discussion/499/… (and remember to vote this comment up, so it is visible to others) – Victor Protsak Jul 14 2010 at 10:34
## 96 Answers
Searcóid: Elements of Abstract Analysis. I loved this book as an undergraduate, for many reasons, but mainly because it gave me an idea of the unity of mathematics. It starts from the axioms of set theory and takes you all the way to C*-algebras and the Gelfand-Naimark theorem. Here's the Google Books page.
-
Galois Theory by Ian Stewart is excellent. The third edition is quite different from the second and includes many more problems.
-
Jaenich: "Topology"
Introduces the concepts of point set topology ("paracompact" and all this stuff) motivating each via examples which are rigorously defined but also drawn. Other advantage: It is short!
-
I am surprised this has not been mentioned before (is it too advanced?):
Bott and Tu, Differential forms in algebraic topology.
The best introduction to de Rham cohomology, spectral sequences, characteristic classes from the algebraic point of view, and countless other topics.
-
... and seeing that Hatcher, Serre, Jacobson, Alperin, and Evans have been featured (some at the very top), I don't agree that it's "too tough for this list". – Victor Protsak May 25 2010 at 4:33
Algebra: Chapter 0, Paolo Aluffi
Best book on algebra I've had my hands on yet, and I love how it uses category theory. I wouldn't mind having a course taught from this one. Topics from group theory all the way through field theory, linear algebra, and homology. This book deserves more attention!
http://www.amazon.com/Algebra-Chapter-Graduate-Studies-Mathematics/dp/0821847813/ref=sr_1_1?ie=UTF8&s=books&qid=1278799249&sr=8-1
-
Terrific book for first year graduate algebra or honors undergraduate. – Andrew L Jul 10 2010 at 22:23
Also, I just started this book and absolutely love it
Geometry: Euclid and Beyond, Hartshorne
-
Apostol "Calculus"
-
Bartle "The Elements of Integration and Lebesgue Measure"
-
Real Mathematical Analysis by Charles Pugh
-
Thank you, someone finally mentioned this book. I'm hoping it supplants baby Rudin eventually. I affectionately call it "Rudin Done Right". – Andrew L Mar 18 2010 at 20:47
Lectures on Linear Algebra by I. M. Gel'fand
-
Alexandre Stefanov keeps an extensive list of free math books / lecture notes. The list is divided according to subject and updated frequently. I have found some very nice books there.
-
For a thorough introduction on Partial Differential Equations, read L.C. Evans, "Partial Differential Equations". Features both linear and nonlinear equations.
-
Kock, Vainsencher: An invitation to Quantum Cohomology.
Written in the most friendly and motivating style I have ever seen in a book. It has almost no prerequisites: you should know that there exists something like algebraic varieties - without having to know any technical details - and that $\mathbb{P}^1$ is such a thing. Everything else is provided in easy exercises or the text. It gives an excellent intuition for the subject, with lots of outlooks on a field of current research, and at the same time manages to be easily readable by undergraduates.
-
For a long time, Kolmogorov-Fomin's Introductory Real Analysis was my standard for a great mathematics textbook. I can't imagine a better introduction to serious analysis.
The translation I'm linking to is very good, and includes exercises (the original has many fewer), but it is incomplete (it's missing the chapter on Fourier Series). So if you can read Russian, I recommend you get the original.
-
I'm a big fan of John Hubbard's "Vector Calculus, Linear Algebra and Differential Forms" text. I was a TA for the course twice at Cornell and was amazed at how well it went. The text has an extremely pleasant "zest" to it. When Hubbard asked me to take a look at it my first response was the text is "overflowing with the spirit of calculus". I still believe that. I have a hard time containing my praise.
The main problem with the text is that it's so engrossing. It places more demands on the student than a traditional service course text would ever consider. But it's also far more rewarding. At Cornell it was taught as a branch of their traditional calculus sequence -- it was a course that was earmarked for keener students, mostly from other departments.
In short, if you want to have physics, engineering and economics students appreciating the derivative as a linear approximation, thinking Lipschitz bounds for functions are cool, being interested in the computation of norms of linear operators, etc, this is a great resource.
-
From the quick look I've had, not much representation theory has been mentioned so here goes for undergrad level rep theory (perhaps suitable for 3rd/4th year in a standard sequence of undergraduate study), roughly in the order of difficulty (from easiest to hardest):
• James & Liebeck - "Representations and Characters of Groups" (a very good introduction)
• Sagan - "The symmetric group: representations, combinatorial algorithms, and symmetric functions"; (the first two chapters here at least are representation theory) OR James & Kerber - "Representation Theory of the Symmetric Group" (this one includes some modular representations of $S_n$)
• Alperin - "Local Representation Theory" (basically, modular representation theory)
• Hall - "Lie groups, Lie Algebras and Representation Theory" (a solid introduction to Lie theory); for a more advanced perspective Harris & Fulton - "Representation Theory: A first course" (but it could be slightly terse at points, but not necessarily)
For algebraic geometry, the one book I'd suggest is "Algebraic Geometry: A first course" by Joe Harris, very nice and full of examples. For algebraic number theory, a very good introduction is Janusz - "Algebraic Number Fields" (followed perhaps by Childress - "Class Field Theory", or Silverman - "The Arithmetic of Elliptic Curves" to go in a slightly different direction).
-
Karen Smith et al., An Invitation to Algebraic Geometry
-
Since I haven't looked at this book but might be interested... This book is "undergraduate level" for whom? Presumably, many Harvard senior math majors would be able to tackle it. How many senior math majors at a mid-tier public research university? mid-tier liberal arts college? compass point state college? – Alexander Woo Dec 28 2009 at 21:23
Here is an undergraduate level math book recommendation from an early undergrad's position:
I like "Linear Algebra Done Right". I've looked at a bunch of books on linear algebra, and the usual matrix approach is to me a big turn-off when what you're really interested in is the abstract machinery of transformations between vector spaces. I'm not a research mathematician. In fact, I don't even study linear algebra yet, but as a student of mathematics who likes algebra, spaces, maps and all that good stuff, I find this to be a very readable account of linear algebra.
There are more abstract books on the subject, and my impression is that LADR prepares you for the next level way before you're usually "allowed to" by other accounts like Lax etc. The trade-off is that LADR is not a book for engineers, but this would be a sad world for a mathematician if that was something he had to worry about (in his spare time). Great for self-study. Reads like a novel. I'd probably prefer it if Axler used sets for span and bases instead of lists, but that's something you'll probably be able to shake off with the next book you read on the subject.
-
See the other answer on this book for my comments. – Gerald Edgar Dec 28 2009 at 15:24
Linear Algebra / Hoffman & Kunze - A book that truly develops linear algebra in a gradual manner. It starts with a basic discussion of systems of linear equations, matrices, Gaussian elimination, etc. and gradually progresses to the more abstract theory. Eventually it even touches upon subjects such as tensor products, the exterior algebra and the Grassmann ring. In short, it manages to cover a lot of linear algebra in a very leisurely and clear manner. I think that this is the quintessential example of how an undergraduate-level math book should be written. The only thing I don't like about it is the fact that quotient spaces aren't mentioned throughout the book (they're mentioned in the appendix, though).
-
There is a good list here, divided by subject, that also contains many links to freely available textbooks and lecture notes.
-
Kelley, General Topology
-
Linear Algebra and Its Applications by Gilbert Strang. You can also watch his video lectures at MIT OpenCourseWare
-
"Introduction to Mathematical Logic" by Ebbinghaus, Flum and Thomas
Careful introduction that addresses many doubts one might have about why logic is done this way and not some other, e.g. whether one is doing something circular when formulating set theory in 1st order logic; it also proves Lindström's Theorem, which says that classical 1st order logic has the greatest expressive power among logics satisfying completeness and Löwenheim-Skolem.
-
Metric Spaces by Mícheál Ó Searcóid
http://www.amazon.com/Metric-Spaces-Springer-Undergraduate-Mathematics/dp/1846283698/ref=sr_1_1?ie=UTF8&s=books&qid=1256082496&sr=1-1
It's an exhaustive introduction to analysis at the level of metric spaces that's well worth reading.
-
Introduction to Analysis by William R. Wade
http://www.amazon.com/Introduction-Analysis-4th-William-Wade/dp/0132296381/ref=sr_1_1?ie=UTF8&s=books&qid=1256082707&sr=1-1
This is a good transition from undergraduate calculus to analysis.
-
An Introduction to Manifolds by Loring W. Tu
http://www.amazon.com/Introduction-Manifolds-Universitext-Loring-W/dp/0387480986/ref=sr_1_1?ie=UTF8&s=books&qid=1256082981&sr=1-1
-
Riemannian Geometry: A Beginner's Guide by Frank Morgan
http://www.amazon.com/Riemannian-Geometry-Beginners-Frank-Morgan/dp/1568810733/ref=sr_1_1?ie=UTF8&s=books&qid=1256083041&sr=1-1
I love this book!
-
Fraleigh's "A First Course in Abstract Algebra"
http://www.amazon.com/First-Course-Abstract-Algebra-7th/dp/0201763907
-
How about an anti-recommendation? Someone in another answer mentioned Steven Axler's Linear Algebra Done Right. My comment, not as someone who has used this book in a class, but as someone who has taught the students from this class during the following term: It doesn't prepare the students to use linear algebra in engineering, in physics, in chemistry, or even in branches of mathematics other than abstract algebra.
-
@Gerald: I'm not sure omission of Cramer's Rule is such a big deal. Cramer's Rule is helpful for solving small linear systems (2 or 3 unknowns) since there are useful heuristics for calculating the determinants of 2x2 and 3x3 matrices, but Gaussian elimination is a more powerful and general algorithm for solving linear systems. – las3rjock Nov 7 2009 at 15:49
Then how about "Linear Algebra Done Wrong"? Looks good, actually. Suitable for students interested in other branches of mathematics! – Gerald Edgar Dec 30 2009 at 13:35
The book is for students already familiar with matrix algebra, so this is not a valid criticism of the book. – Michael Greinecker May 25 2010 at 20:51
General Topology, by Stephen Willard
http://physics.stackexchange.com/questions/1205/how-does-a-steady-wind-flow-generate-sound?answertab=oldest
How does a steady wind flow generate sound?
When wind blows past a sharp edge, say, the edge of a piece of paper, you can see the vibration of the paper and hear a sound.
For this type of oscillation, it should be a damped oscillation with external driving force $\mathcal{F}(x,t)$:
$\ddot{x}+2\lambda\dot{x}+\omega_{0}^{2}x=\mathcal{F}(x,t)$
with the driving force in the form of a sinusoid, $\mathcal{F}(x,t)\sim F\sin(\omega t)$ (it should be a 3D version, though). However, if the air flow is really steady, the external force should be constant, so there is no oscillation at all from this equation.
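This point can be checked numerically. The sketch below is my own illustration, not from the question: the parameter values and the simple integrator are arbitrary choices. It integrates $\ddot{x}+2\lambda\dot{x}+\omega_0^2 x=\mathcal{F}(t)$ once with a constant force and once with a sinusoidal one; the constant force leaves only a transient that decays to the static offset $F/\omega_0^2$, while the sinusoidal force sustains the oscillation.

```python
import math

def simulate(force, lam=0.2, omega0=2.0, dt=1e-3, t_end=60.0):
    """Semi-implicit Euler for x'' + 2*lam*x' + omega0**2 * x = force(t).
    Returns the late-time part of the trajectory (last 10 seconds)."""
    x, v, t = 0.0, 0.0, 0.0
    tail = []
    while t < t_end:
        a = force(t) - 2.0 * lam * v - omega0 ** 2 * x
        v += a * dt            # update velocity first (semi-implicit Euler)
        x += v * dt
        t += dt
        if t > t_end - 10.0:   # keep only the (near) steady state
            tail.append(x)
    return tail

# Constant force: the transient decays; x settles at F/omega0^2 = 0.25.
const_tail = simulate(lambda t: 1.0)
# Sinusoidal force at resonance: the oscillation is sustained indefinitely.
sine_tail = simulate(lambda t: math.sin(2.0 * t))

swing_const = max(const_tail) - min(const_tail)
swing_sine = max(sine_tail) - min(sine_tail)
print(swing_const, swing_sine)  # swing_const is ~0, swing_sine is order 1
```

With a truly constant force the late-time swing is negligible, which is exactly the puzzle posed here: a steady flow, modelled this way, produces no sound.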
A simple guess is that the motion of the paper influences the streamlines of the air flow so that the pressure gradient provides the exact sinusoidal driving force. This mechanism is plausible because a propagating sound wave has an oscillatory relationship between displacement and pressure. A question here is how the air flow sustains the energy lost to damping.
It is also not easy to think of the initial condition, or of the external force, for these oscillations. So
1) What exactly is the physical mechanism that generates sound when 'steady' air flows past a sharp edge?
2) A similar question is how the driving force acts in the resonance of a musical instrument such as a pipe. A 'steady' flow of air is also supplied somewhere to the pipe, yet the air oscillates inside the pipe.
Edit: From the answers, people suggest few mechanism for different situations. Few more questions:
3) For a static object, a Kármán vortex street can form behind the object. So is the sound frequency the same as the frequency of vortex generation, or the same as that of the expansion of the vortices? Sound is a spherical wave propagating outward, so identifying the sound frequency should locate the point of generation.
4) Where is the sound generated in situation (1)? Is the sound frequency the same as the frequency of the vibrating paper?
Fluid mechanics is a difficult topic, but there are approximations for special cases trying to explain the underlying mechanism, such as the flag fluttering cited by j.c.
-
This is a structure-fluid interaction type of problem, where the elasticity of the boundary interacts with the fluid turbulence to produce an amplifying effect. In effect, "steady flow" only exists in textbooks and not in nature. Even the most viscous flow has turbulence and non-laminar effects. – ja72 Nov 23 '10 at 19:35
4 Answers
There are really two parts to your question. First, how does the wind affect the motion of the paper, and how does that motion then couple to the behavior of the wind?
For the case of a flag fluttering due to the wind (which may or may not be applicable depending on what you have in mind), the physics of the instability leading to fluttering has been worked out in a fairly celebrated paper of 2005 by Argentina and Mahadevan. They argue that:
[...] in a particular limit corresponding to a low-density fluid flowing over a soft high-density flag, the flapping instability is akin to a resonance between the mode of oscillation of a rigid pivoted airfoil in a flow and a hinged-free elastic plate vibrating in its lowest mode.
I suspect this paper and probably some of the papers that cite it would be the right place to look. As you can see, this part of the question that you've asked is of fairly high interest in current research.
Second, how does this motion generate sound? This is also an interesting question but I know less about this. I would suppose that some resonant vibrational modes of the sheet of paper that you're asking about (those involved in the flapping modes described by Argentina and Mahadevan) generate the sound. But I don't know very much about sound generation or acoustics.
-
::fires up a groovy surf-rock bassline:: Good, good, good citations! – dmckee♦ Nov 23 '10 at 7:32
Flag fluttering is different from a paper edge because they have different boundary conditions at the fixed point. Good and interesting paper, though. – hwlau Nov 24 '10 at 4:25
It is always about a resonance between the vibrating element and a pressure mode in some resonator (in the case of a sharp edge, the edge itself oscillates).
I believe the oscillations themselves are produced by the turbulent vortices forming a Kármán street behind the reed or edge.
EDIT: I have even found a reference: http://arxiv.org/abs/physics/0008053v1
EDIT2: In detail, the process starts with a static edge in laminar flow; the speed of the flow increases and finally the Reynolds number exceeds a critical value at which streamlines start to disengage from the edge, first forming vortices in a Kármán street (this is called the Strouhal instability) and then a totally random turbulent flow. The vortices interact with the edge, "poking" it quite randomly (the Kármán street has some frequency, but I think it matters only in some narrow cases); now the process is driven by resonances. The edge has some resonance frequencies; the air around can also have them if enclosed in some container, and in the case of some instruments the musician's lips are also involved -- all those form some kind of mechanical filter that amplifies certain frequencies (instruments) or ranges of frequencies (random setups). Finally, those mechanical vibrations induce pressure waves in the air that we hear as a sound, again either clear as in the case of instruments, or noise as in the case of trees, roofs and other stuff.
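The frequency calculation in the reference above boils down to the Strouhal relation $f = \mathrm{St}\cdot U/D$. A minimal illustrative sketch (the input numbers are made up; $\mathrm{St}\approx 0.2$ is the standard empirical value for a circular cylinder over a wide range of Reynolds numbers):

```python
def shedding_frequency(u, d, strouhal=0.2):
    """Karman-street shedding frequency f = St * U / D,
    with flow speed u (m/s) and obstacle diameter d (m).
    St ~ 0.2 holds for a circular cylinder over a wide
    range of Reynolds numbers."""
    return strouhal * u / d

# Wind at 10 m/s past a 1 cm cable: roughly a 200 Hz hum.
print(shedding_frequency(u=10.0, d=0.01))  # 200.0
```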
-
I have taken a look at the reference. It only gives a simple calculation of the frequency, not the mechanism. – hwlau Nov 22 '10 at 17:44
@hwlau Yes; this is because fluid dynamics is too complex to make an analytical calculation. I'll try to edit my answer to give more details. – mbq♦ Nov 22 '10 at 19:58
The full description of viscous fluid flow (i.e. the Navier-Stokes equations) is non-linear and can be sensitively dependent on initial conditions. What this means in practical terms it that you can't always count on your intuition.
The evolution of the Kármán vortex streets linked by mbq from laminar flow around an obstacle is a classic demonstration. (And easy enough for a determined middle school student to create in the garage for a science project, though things are likely to get wet...)
Any way, once the wind starts doing non-linear things, it can generate periodic stresses, and from that you get the whistling or humming noise we all know and love. Add a resonant cavity and you can amplify nice pure tones which is the short-short version of how this works in wind instruments.
-
Kármán vortex streets seem to occur around a static object. What if the paper edge is vibrating? Does the sound come from the vortex or the vibrating sheet? Or, put more simply, is the frequency of vortex generation the same as that of the vibrating sheet? – hwlau Nov 22 '10 at 17:43
@hwlau: It depends. I believe a flute is driven by pressure oscillations in the wind stream, but reeds work off the resonant response of the reed. It's a complicated subject, and the motion or vibration of the obstacle must be taken into account. Closed-form solutions are generally impossible, and the numerical discipline, called Computational Fluid Dynamics, is very challenging. – dmckee♦ Nov 22 '10 at 18:54
This is a similar problem to the Kármán vortices developing around cables/pipes when wind blows around them. When each vortex sheds and releases from the boundary layer, there is a reactive aerodynamic "lift" developed which might move the structure. When the structure returns, it forces the next vortex to shed, amplifying the effect. It is like pushing a pendulum: if you push at the right time, even a tiny force can have a large effect.
Search for cable galloping, aeolian vibrations and structural wind noise. Note that with cables the wind might excite the 160th basic harmonic of the system, and not the first few as you might expect. That's because the internal turbulence in the wind resonates at a frequency set by the Strouhal number, which might be of the order of 40 Hz-160 Hz.
-
Simple question: if it is turbulence, how can it generate the periodic motion for the n-th harmonic? Or is my assumption wrong? – hwlau Nov 24 '10 at 4:27
http://math.stackexchange.com/questions/98873/which-cohomology-theories-have-a-formula-langle-omega-text-d-omega-rangle/100188
# Which cohomology theories have a formula $\langle \Omega,\text d \omega \rangle = \langle \partial \Omega,\omega \rangle$?
Is a formula
$$\langle \Omega,\text d \omega \rangle = \langle \partial \Omega,\omega \rangle$$
like Stokes theorem
$$\int_\Omega \text d \omega=\int_{\partial\Omega} \omega$$
common in cohomology theories?
Are there relevant examples and what is their interpretation?
-
Do you mean in extraordinary cohomology theories? – Grigory M Jan 16 '12 at 12:18
@Grigory M: Not necessarily, but since I don't know many cohomology theories to start with, I'm not restrictive. – Nick Kidman Jan 16 '12 at 14:39
A theory associated to a homology theory? One with a $d$-operator? – Nick Kidman Jan 16 '12 at 14:53
It is very infrequent that homology or cohomology theories in topology (algebraic topology) come from taking the homology of some complex. – Sean Tilson Jan 17 '12 at 4:18
@NickKidman There is standard definition of extraordinary (co)homology theory (Eilenberg-Steenrod axioms). If you mean something else -- what exactly? – Grigory M Jan 17 '12 at 8:21
## 2 Answers
The formula you ask about is reasonably common. E.g. in simplicial homology/cohomology, we triangulate a space into simplices. The chain complex made out of these simplices then computes the simplicial homology.
We can also define simplicial cochains: this is just the dual complex to the simplicial chain complex, so concretely a simplicial cochain just attaches a number to each simplex in our space.
If $\Omega$ is a simplex, and $\omega$ is a simplicial cochain, then by definition of the coboundary operator $d$ on simplicial cochains, one has the formula $$\langle \Omega, d\omega \rangle = \langle \partial \Omega, \omega\rangle.$$ This is more tautological than the de Rham cohomology case, though, because one doesn't have an a priori notion of simplicial cochains, or of the coboundary operator on them, so the whole theory of simplicial cohomology is defined to make this formula be true. (An entirely analogous story is true if we replace simplicial by singular everywhere in the above.)
What is somewhat special in the de Rham case is that we have an a priori notion both of simplices or other submanifold-like subobjects of a manifold and their boundaries, and of differential forms and the exterior derivative on forms. The formula relating them is then not a formality; indeed, it allows you to compare simplicial or singular cohomology with de Rham cohomology, and is really the most important ingredient in the fact that those cohomologies are isomorphic to de Rham cohomology.
If one keeps in the purely simplicial (or singular) context, at first the formula looks completely formal, and indeed simplicial (or singular) cohomology itself at first seems quite formal (and you can wonder why you need it, when you already have singular/simplicial homology). But there are other descriptions of singular cohomology, e.g. via obstruction theory. For example, one has the isomorphism $$H^1(X,\mathbb Z) = \text{ homotopy classes of maps } X \to S^1.$$ From this point of view simplicial or singular cohomology classes can take on a non-formal appearance, and in proving such statements, the formula you ask about plays a key role.
-
When people say cohomology theory, I usually think of some spectrum. The reason for that is Eilenberg and Steenrod developed axioms for what a cohomology theory on the category of pairs of "nice" spaces should do. They then showed that singular cohomology satisfied these and was essentially unique. De Rham cohomology is sort of miraculous in that it is easy to see the geometry. It is staring you right in the face, these things are forms on your manifold!
The pairing you see in Stokes' theorem is sort of a really special case for 2 reasons. The first is the inherent geometry in De Rham cohomology. We have a geometric interpretation of what it means to be a De Rham cocycle. And by geometric I mean that for every $\alpha \in H^n_{dR} X$ where $X$ is a manifold, we understand there is some intrinsic structure on $X$ that this is detecting. This is usually not the case for other theories. We have this in $K$-theory with vector bundles and cobordism with families of manifolds, but that is it. It would be a huge deal if someone understood what $\alpha \in tmf^*X$ was supposed to be on $X$; they would win a prize (like a hug or a job or something).
The second is that singular cohomology is one of the few cohomology theories that has a formulation as the homology of some chain complex. You could ask for homology theories on spaces to really be invariants that land in chain complexes and aren't all that well defined. I am pretty sure you are just left with singular cohomology.
So let me be liberal and say that a cohomology theory should be something that satisfies the Eilenberg-Steenrod axioms, except for the dimension axiom. Lets also suppose this cohomology theory has some sort of product, then Brown representability tells us we have a ring spectrum. This is what I mean by a represented cohomology theory. So in this setting $E^*X=[X,E]$ is just the homotopy classes of maps from $X$ into the spectrum $E$, whatever that means. Similarly $E_* X$ is just homotopy classes of maps from the sphere spectrum into $X \wedge E$.
While Stokes' theorem is really about de Rham theory, it is also foreshadowing.
Integration of differential forms against submanifolds is really a nice model of something called the cap product, which is only well-defined because of Stokes' theorem. The cap product is like a pairing between $E^*X$ and $E_*X$. The cap product is a little more subtle, so I won't talk about it (it's not hard, just a little involved).
To answer your very specific question, there is only such a formula in singular cohomology theories because the boundary operator only makes sense in that setting.
My suggestion is that you try to pair $a \in E^*X$ and $b \in E_*X$ to get an element in $\pi_*E$ (in the case you mention $E=H\mathbb{R}$ the Eilenberg-Maclane spectrum of the reals, which represents de Rham cohomology, and the output would be a real number ... an element of $\pi_0 H \mathbb{R}$).
Let me know if you need help, but remember to use the fact that we are working with represented cohomology and homology theories and that $E$ is a ring spectrum, so it has a product map $\mu: E \wedge E \to E$.
-
Well I don't understand most of this notation. Also I think there is a typo in each of the three parts of the answer. – Nick Kidman Jan 13 '12 at 23:06
The question is about a Stokes'-type theorem, not just about the existence of a pairing. I agree that the question itself doesn't make much sense though, at least as it's currently phrased. – Aaron Mazel-Gee Jan 17 '12 at 12:09
ah, I see what you mean. Maybe I will edit later, but I would interpret the stokes theorem as a result saying that there is a well defined pairing in terms of chains and cochains. This is the same as saying that the pairing I assert is exists and is well defined up to homotopy. At least that is what it seems like. – Sean Tilson Jan 17 '12 at 23:17
There should be a ceremony for the Tilson Hug right there together with the Fields medal in IMU meetings! – Mariano Suárez-Alvarez♦ Jan 18 '12 at 19:20
http://mathoverflow.net/questions/109378?sort=votes
## Finding Decision Boundary from empirical distribution
Based on measuring a certain characteristic, we want to classify measurements as coming from either of two populations. The true population distributions are unknown (and we don't want to make any strong assumptions about them), but we have a number of measurements from each population, providing us with two empirical distributions. Thus, we are looking for a method to determine a good (i.e., approximately optimal) decision boundary based on the empirical distributions. Do any of you happen to know a general and accurate way to determine optimal decision boundaries given two arbitrary empirical distributions?
-
## 1 Answer
There's always going to be a prior. Even a method that doesn't explicitly use one will be hiding one. The fact of the matter is, you expect your distributions to be simple in some respect, somewhat smooth, maybe belonging to a parametric family, etc. The more data you have, the less this assumption will matter, but it always will. You also need a prior on whether you think observations tend to come from $A$ or from $B$.
What you will prefer hearing:
Pick a Gaussian kernel with variance $s$ and compare
$$f_a(x) = \sum_{\mathbf{x_i} \in A} g(\mathbf{x}-\mathbf{x_i})$$ $$f_b(x) = \sum_{\mathbf{x_i} \in B} g(\mathbf{x}-\mathbf{x_i})$$
if $f_a > f_b$, assign to bin $A$, otherwise assign to bin $B$
(note, this assumes that the cardinalities of $A$ and $B$ are indicative of the prior probability of an element being drawn from either)
Adjust $s$ according to the variance of your empirical distributions.
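A one-dimensional sketch of this rule (the sample points and bandwidth below are invented for illustration; the Gaussian normalising constant is dropped since, with the same $s$ for both classes, it cancels in the comparison):

```python
import math

def kernel_score(x, sample, s):
    """Sum of Gaussian kernels centred at the sample points.
    Using the raw sum (not the mean) lets the sample sizes act
    as class priors, as noted above."""
    return sum(math.exp(-(x - xi) ** 2 / (2 * s * s)) for xi in sample)

def classify(x, a, b, s=1.0):
    """Assign x to 'A' or 'B' by comparing the two kernel sums."""
    return 'A' if kernel_score(x, a, s) >= kernel_score(x, b, s) else 'B'

# Two overlapping one-dimensional samples.
a = [0.1, 0.4, 0.8, 1.1, 1.3]
b = [2.2, 2.6, 3.0, 3.1, 3.5]
print(classify(0.5, a, b))  # A
print(classify(3.0, a, b))  # B
```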
-
http://mathoverflow.net/questions/71435/which-conjectures-only-need-the-grand-riemann-hypothesis-to-become-genuine-theore/71477
## Which conjectures only need the Grand Riemann Hypothesis to become genuine theorems?
Hello,
I've been interested in number theory for several years, and as time goes by, I read more and more articles in which theorems begin with "Assume the Riemann Hypothesis holds." But up to now, I have almost never seen any beginning with "Assume the Grand Riemann Hypothesis holds." So, which "theorems" only need the Grand Riemann Hypothesis to become genuine results?
-
Since this ask for a list, I suggest community wiki mode. – quid Jul 27 2011 at 20:36
The Wikipedia article (en.wikipedia.org/wiki/Grand_Riemann_hypothesis) has this wonderful remark: "The Siegel zero, conjectured not to exist, is a possible real zero of a Dirichlet L-series, rather near s = 1." – TonyK Jul 27 2011 at 20:42
A trivial answer to the question: the Grand Riemann Hypothesis only needs to Grand Riemann Hypothesis to proved in order to become a theorem ;) – Peter Humphries Jul 28 2011 at 10:36
This is a duplicate question. See mathoverflow.net/questions/17209/…. By the way, I am surprised you read "more and more" articles which only assume RH but have almost never seen any assuming GRH. What applications have you been reading? Most of the really juicy applications need GRH. – KConrad Jul 28 2011 at 15:21
@Ryan, maybe you should edit your question to let everyone know what is meant by "Grand" RH as distinguished from "Extended" RH and "Generalized" RH. For a sense of scope here, most applications of the Generalized RH (meaning for Dedekind zeta-functions of number fields) don't really need it for all number fields, but they do need it for infinitely many number fields. – KConrad Jul 28 2011 at 15:27
## 4 Answers
I like the phrase "only need the grand Riemann hypothesis"...
One of my favorite results known contingent on this result (rather, the weaker generalized Riemann hypothesis) is that the ring of integers in a number field (EDIT: with infinite unit group) is Euclidean with respect to some Euclidean algorithm if and only if it is a PID. Interestingly, the "amount" of GRH needed here far exceeds that for the field in question. One must assume GRH for an infinite number of extension fields as well.
-
@Ramsey: I am not seeing "except for the imaginary quadratic fields $\mathbb{Q}(\sqrt{-19}),\ldots,\mathbb{Q}(\sqrt{-163})$" in your answer. But it should be there, shouldn't it? – Pete L. Clark Jul 28 2011 at 11:12
I remember reading this a couple of years ago. The condition needed was stronger if I recall correctly, and it was something like: the unit rank is at least 3. – Dror Speiser Jul 28 2011 at 11:31
@Pete: Of course! My slip up. I meant to include "infinite unit group" to make the statement more relevant to the question (oh, and, uh, correct as well). I'll edit. – Ramsey Jul 28 2011 at 13:26
@Dror: Infinitely many units suffices (conditional on a bunch of GRH's). This was proven by P.J. Weinberger in "On Euclidean rings of algebraic integers." – Ramsey Jul 28 2011 at 13:35
Nick, it's par for the course that theorems which assume GRH do so for infinitely many number fields. I agree it's important to be aware when describing this theorem of Weinberger that GRH is used not just for the number field under discussion, but rather for infinitely many of its extensions. However, in light of how typical it is for infinitely many instances of GRH to be required for theorems "under GRH", it's not a surprise that the "amount" of GRH goes beyond the specific number field in question. – KConrad Jul 28 2011 at 15:16
For the Grand Riemann Hypothesis (RH for zeros of all automorphic $L$-functions), see the (somewhat technical) answer to
http://mathoverflow.net/questions/2826/equivalent-forms-of-the-grand-riemann-hypothesis
I think the Generalized Riemann Hypothesis (RH for zeros of Dirichlet $L$ functions) has the most significant number theoretic consequences. In addition to those listed at
http://en.wikipedia.org/wiki/Generalized_Riemann_hypothesis#Consequences_of_GRH
such as easy primality testing and good bounds on primes in arithmetic progressions, one also gets good lower bounds on class numbers for positive definite binary quadratic forms of discriminant $D$ (or equivalently, rings of integers in complex quadratic fields): for every $\epsilon>0$ there exists an effective constant $C(\epsilon)$ such that the class number satisfies $h(D)>C(\epsilon)|D|^{1/2-\epsilon}$.
-
The main result
''Assume that the generalized Riemann hypothesis (GRH) for zeta functions of number fields holds. There exists a deterministic algorithm that on input positive integers $n$ and $k$, together with the factorization of $n$ into prime factors, computes the element $T_n$ of the Hecke algebra $T(1, k)$ in running time polynomial in $k$ and $\log n$.''
of the recent book by Couveignes, Edixhoven, et al. (page 3)
http://www.math.univ-toulouse.fr/~couveig/book.htm
assumes the generalized Riemann hypothesis.
-
Consult the chapter entitled Assuming the Riemann Hypothesis and Its Extensions … on pages 61--67 of the recent book The Riemann Hypothesis: A Resource for the Afficionado and Virtuoso Alike by Peter Borwein, Stephen Choi, Brendan Rooney and Andrea Weirathmueller.
-
Squarely off-topic, but: is aficionado really the contrasting term to virtuoso? As I understand the meanings of these terms, there is a lot of overlap between the two. – Pete L. Clark Jul 28 2011 at 11:16
I find it akin to critic and performer. Gerhard "Ask Me About System Design" Paseman, 201.08.11 – Gerhard Paseman Aug 12 2011 at 1:17
"aficionado" implies firstly enthusiasm (affection, maybe akin to "amateur" in the sense of "amare") and secondly perhaps some competence, while "virtuoso" (skill) could be said to reverse the two. Yet I agree they do not directly contrast. – Junkie Aug 12 2011 at 11:47
Yes, I think they are using fancy words they don't fully understand. Probably they mean: A resource for both the enthusiastic amateur and the professional. – anon Aug 12 2011 at 12:42
http://mathhelpforum.com/differential-geometry/168303-euclidean-metric-metric-print.html
# Euclidean metric is a metric
• January 13th 2011, 10:59 PM
magus
Euclidean metric is a metric
I am trying to prove that the Euclidean metric is a metric. I was able to prove the first three properties easily, but the fourth is giving me trouble.
I have to prove that
$\displaystyle\sqrt{\sum_{i=1}^{n}(x_i-z_i)^2}\leq \sqrt{\sum_{i=1}^{n}(x_i-y_i)^2}+\sqrt{\sum_{i=1}^{n}(y_i-z_i)^2}$
Now in the outlined proof they ask you to first prove the CBS inequality and obtain the form
$\displaystyle\sum_{i=1}^{n}x_iy_i \leq \sqrt{\left(\sum_{i=1}^{n}x_i^2\right)\left(\sum_{i=1}^{n}y_i^2\right)}$
And I've done this.
What I don't get is the jump from this to the proof of the fourth condition.
Could someone give me a hint? (I'd actually prefer a hint rather than the whole solution if possible, or a guide; I want to get as much of this on my own as possible.)
Also, do I need to use $\sqrt{a+b}\leq \sqrt{a}+\sqrt{b}$ and if so how should I approach proving that?
• January 13th 2011, 11:50 PM
FernandoRevilla
$d^2(x,z)=\displaystyle\sum_{i=1}^n{(x_i-z_i)^2} =\displaystyle\sum_{i=1}^n(x_i-y_i+y_i-z_i)^2=$
$\displaystyle\sum_{i=1}^n(x_i-y_i)^2+\displaystyle\sum_{i=1}^n(y_i-z_i)^2+2\displaystyle\sum_{i=1}^n(x_i-y_i)(y_i-z_i)$
Now, use the Cauchy-Schwarz inequality:
$\displaystyle\sum_{i=1}^n(x_i-y_i)(y_i-z_i)\leq \left|\displaystyle\sum_{i=1}^n(x_i-y_i)(y_i-z_i)\right|\leq$
$\sqrt{\displaystyle\sum_{i=1}^n{(x_i-y_i)^2}}\sqrt{\displaystyle\sum_{i=1}^n{(y_i-z_i)^2}}$
You'll obtain:
$d^2(x,z)\leq \left(d(x,y)+d(y,z)\right)^2$
Fernando Revilla
• January 14th 2011, 12:13 AM
magus
$d^2(x,z)=\displaystyle\sum_{i=1}^n{(x_i-z_i)^2} =\displaystyle\sum_{i=1}^n(x_i-y_i+y_i-z_i)^2$
Makes everything crystal clear.
So from
$\displaystyle\sum_{i=1}^n(x_i-y_i)^2+\displaystyle\sum_{i=1}^n(y_i-z_i)^2+2\displaystyle\sum_{i=1}^n(x_i-y_i)(y_i-z_i)$
We get
$\displaystyle\sum_{i=1}^n(x_i-y_i)^2+\displaystyle\sum_{i=1}^n(y_i-z_i)^2+2\sqrt{\displaystyle\sum_{i=1}^n{(x_i-y_i)^2}}\sqrt{\displaystyle\sum_{i=1}^n{(y_i-z_i)^2}}$
Which becomes
$\displaystyle\sum_{i=1}^n(x_i-z_i)^2\leq\left(\sqrt{\displaystyle\sum_{i=1}^n(x_i-y_i)^2}+\sqrt{\displaystyle\sum_{i=1}^n(y_i-z_i)^2}\right)^2$
$\sqrt{\displaystyle\sum_{i=1}^n{(x_i-z_i)^2}}\leq\sqrt{\displaystyle\sum_{i=1}^n(x_i-y_i)^2}+\sqrt{\displaystyle\sum_{i=1}^n(y_i-z_i)^2}$
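As a quick numerical sanity check of the triangle inequality just derived (my own addition, not part of the original thread; the helper `d` below is a straightforward implementation of the Euclidean metric):

```python
import math
import random

def d(x, y):
    # Euclidean distance between two equal-length sequences
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

random.seed(0)
for _ in range(1000):
    n = random.randint(1, 10)
    x = [random.uniform(-5, 5) for _ in range(n)]
    y = [random.uniform(-5, 5) for _ in range(n)]
    z = [random.uniform(-5, 5) for _ in range(n)]
    # d(x,z) <= d(x,y) + d(y,z), up to floating-point tolerance
    assert d(x, z) <= d(x, y) + d(y, z) + 1e-12
```

Random vectors of course prove nothing, but a failing assertion would immediately expose an error in the derivation.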
Thank you so much for the help.
http://mathoverflow.net/questions/89431/why-the-gell-mann-matrices-in-the-su3-model-need-to-be-trace-orthogonal/89437
## Why the Gell-Mann matrices in the SU(3)-model need to be trace orthogonal?
Thank you Cristi Stoica for your answer to the previous post of this question. Your hint is to the point I think. We should look at the requirements to construct the corresponding root system.
My apologies to Yemon Choi, Will Jagy, Theo Johnson-Freyd and all other readers. My question was formulated extremely briefly, without any context. It is the first time I am asking the MathOverflow community for help, and it is hard to judge how far to go in the description of the theory behind the question and its context. So, let me try again. Cristi gave a good hint, I think, but the question is not answered yet.
The special unitary group SU(3) is an inherent component of the standard model of particle physics. It models the gauge field of the color charge property that is related to the strong particle interaction. Generally for SU(n), matrices are used as a representation of the generators of its Lie algebra. Those matrices (L_i) are complex, traceless and Hermitian (in the physics convention, so that the commutation relation below has real structure constants), and their Lie bracket is the commutator.
The structure constants f_abc are defined by [L_a,L_b] = 2i*f_abc*L_c (with summation over c). The matrices in this set that are diagonal form a basis for the Cartan subalgebra.
Gell-Mann proposed a set of eight 3D-matrices (L_i, i=1 to 8) to be used for SU(3) in the standard model. They are analogous to the Pauli matrices in the SU(2) case. Additionally, Gell-Mann requires that all eight matrices are trace orthogonal, i.e. that tr(L_a L_b) = 2*delta(a,b) for a and b = 1 to 8. This means that the 9-dimensional vectors corresponding to the matrices are orthogonal. The basis for the Cartan algebra in this case is the pair L_3 and L_8. All entries in L_3 and L_8 are zero except L_3(1,1) = -L_3(2,2) = sqrt(3)*L_8(1,1) = sqrt(3)*L_8(2,2) = -.5*sqrt(3)*L_8(3,3) = 1. The completely antisymmetric structure constants are: f_123 = 1; f_147 = f_165 = f_246 = f_257 = f_345 = f_376 = 1/2; f_458 = f_678 = sqrt(3)/2.
In G-M's particular choice of the matrices you see the appearance of sqrt(3). I have the feeling that the expressions could be simpler (without sqrt(3)) if L_8 were replaced by (L_3+L_8), so that the entries of L_8 are all zero except L_8(2,2) = -L_8(3,3) = 1. Of course the structure constants would change: f_123 = f_678 = f_458 = f_345 = 1, f_147 = f_165 = f_246 = f_257 = f_376 = f_128 = 1/2. However, we would then get tr(L_3 L_8) = .5 instead of 0. So the question is: why do the Gell-Mann matrices in the SU(3)-model need to be trace orthogonal?
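As a quick numeric illustration of the trace-orthogonality condition for the two diagonal generators (this check is my addition, not part of the original question; it uses the diagonal entries quoted above):

```python
import math

# diagonal entries of the two diagonal Gell-Mann matrices
L3 = [1.0, -1.0, 0.0]
L8 = [1.0 / math.sqrt(3), 1.0 / math.sqrt(3), -2.0 / math.sqrt(3)]

def tr_prod(a, b):
    # trace of the product of two diagonal 3x3 matrices
    return sum(ai * bi for ai, bi in zip(a, b))

assert abs(tr_prod(L3, L3) - 2.0) < 1e-12   # tr(L_3 L_3) = 2
assert abs(tr_prod(L8, L8) - 2.0) < 1e-12   # tr(L_8 L_8) = 2
assert abs(tr_prod(L3, L8)) < 1e-12         # tr(L_3 L_8) = 0

# The alternative diagonal matrix diag(0, 1, -1) proposed in the question
L8alt = [0.0, 1.0, -1.0]
print(tr_prod(L3, L8alt))  # nonzero: orthogonality to L_3 is lost
```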
-
This post is historically inaccurate. Gell-Mann proposed these matrices not for use in the $SU(3)$ gauge theory of the Standard Model but rather for the approximate $SU(3)$ flavor symmetry of the quark model. And as mentioned in the answer below, there is nothing special about this particular set of matrices. It is just one basis and any other basis would work equally well. Why is this question appropriate to MO? – Jeff Harvey Feb 26 2012 at 15:07
## 2 Answers
I'm no expert, and I haven't asked Professors Gell-Mann or Ne'eman, but with their choice of matrices, L_3 measures the familiar Heisenberg iso-spin quantum number, while L_8 measures the then-novel hypercharge. Mixing the operators would mix the quantum numbers.
-
Thanks Art. This is the answer I needed I guess. Indeed, the orthogonality must guarantee the independence of the quantum numbers. – HAJV Feb 26 2012 at 7:26
It's just a choice of a basis. Compare it to an orthogonal vector basis. And please... try to write math in LaTeX :) (see the "How to write math" box on the right and below).
-
Of course, it is a choice. Any idea why G-M preferred or required the one he chose? – HAJV Feb 24 2012 at 21:18
It's simply a choice which simplifies calculations. Choosing an orthonormal basis for a vector space simplifies any calculation involving an inner product, of which in computing Feynman diagrams there are a-plenty. (Is this really a question for MO?) – José Figueroa-O'Farrill Feb 26 2012 at 1:57
http://mathoverflow.net/questions/44687/adjunction-up-to-distributor/44721
## Adjunction up to distributor
Suppose we have functors $F : C \to D_1$ and $G : D_2 \to C$, together with a distributor (profunctor) $D : D_1^{\rm op} \times D_2 \to {\rm Set}$. We could define "$G$ is right adjoint to $F$ up to $D$" as the existence of a natural isomorphism $C(c, Gd) \cong D(Fc, d)$. We could also consider generalizing this by replacing $C$ by two categories $C_1$ and $C_2$ related by a distributor. Question: has this sort of "adjunction up to distributor" been studied somewhere, and/or is there a better way of formulating it?
-
Noam, you were right about my answer; I wrote too soon. I am retracting it. Regarding my remark about this being a special case of a representable profunctor: the condition that "$G$ is right adjoint to $F$ up to $D$" is that the profunctor $D \circ F^{op}$ is representable by the functor $G$; see the nLab page on profunctor. – Todd Trimble Nov 3 2010 at 19:22
## 1 Answer
I will slightly modify my earlier answer which I retracted. There is the notion of collage of a profunctor $R: C^{op} \times D \to Set$, a category whose collection of objects is $Ob(C) \sqcup Ob(D)$, and where $\hom(x,y)$ is the hom-set of $C$ if $x$ and $y$ are both objects of $C$, and the hom-set of $D$ if both are objects of $D$; where $\hom(x,y) = R(x,y)$ if $x \in Ob(C)$ and $y \in Ob(D)$; and $\hom(x,y)$ is empty if $x \in Ob(D)$ and $y \in Ob(C)$. Composition is just as you'd expect.
Now, in Noam's notation, consider taking the collage of the profunctor $R = D \circ F^{op}: C^{op} \times D_2 \to Set$ (the composition here is profunctor composition). There is an obvious inclusion functor $i: C \to Coll(R)$ (acting as the identity on objects and morphisms). Then Noam's "right adjoint $G$ of $F$ up to $D$" is essentially equivalent to an ordinary right adjoint $G'$ to the inclusion $i$. For such a $G': Coll(D \circ F^{op}) \to C$, there are natural isomorphisms
$$Coll(D \circ F^{op})(ic, c') \cong C(c, G'c')$$
$$Coll(D \circ F^{op})(ic, d') \cong C(c, G'd')$$
($c' \in Ob(C)$, $d' \in Ob(D_2)$), and following the definition of collage, we calculate that $G'c'$ is $c'$ up to isomorphism, and $C(c, G'd') \cong (D \circ F^{op})(ic, d') = D(Fc, d') \cong C(c, Gd')$. So $G'$ is canonically isomorphic to the evident functor
$$(1_C, G): Coll(D \circ F^{op}) \to C$$
where $G$ is a right adjoint to $F$ up to $D$.
-
thanks for this explanation, and for your comment above about representability. – Noam Zeilberger Nov 6 2010 at 9:44
http://inperc.com/wiki/index.php?title=Cell
This site contains: mathematics courses and book; covers: image analysis, data analysis, and discrete modelling; provides: image analysis software. Created and run by Peter Saveliev.
# Cell
### From Intelligent Perception
Unlike cells in cubical complexes, the cells in cell complexes are closed.
We define the $n$-dimensional cell, or simply $n$-cell, as a topological space $C$ homeomorphic to the closed ball ${\bf B}^n$ in ${\bf R}^n$:
$$C \approx {\bf B}^n = \{x \in {\bf R}^n: ||x|| \leq 1 \}.$$
Then, a $0$-cell is simply a point. A $1$-cell is homeomorphic to a closed interval. A $2$-cell is homeomorphic to a disk.
As a subspace of ${\bf R}^n$, ${\bf B}^n$ is partitioned into the interior and frontier:
$${\rm Int}({\bf B}^n) = \{x \in {\bf R}^n: ||x|| < 1 \},$$
$${\rm Fr}({\bf B}^n) = \{x \in {\bf R}^n: ||x|| = 1 \}.$$
It follows then that the cell $C$ is also partitioned into the interior and frontier:
$${\rm Int}(C) \approx {\bf R}^n,$$
$${\rm Fr}(C) \approx {\bf S}^{n-1}.$$
The latter is more commonly called the boundary of the cell and denoted by $\partial C$. Keep in mind that the same notation is used for the boundary of a cell in a cell complex (or cubical complex), in the algebraic sense.
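The partition above is easy to operationalize; the following small classifier (my addition, purely illustrative) locates a point of ${\bf R}^n$ relative to the closed unit ball using the norm conditions just stated:

```python
import math

def classify(x, tol=1e-12):
    """Locate a point of R^n relative to the closed unit ball B^n."""
    r = math.sqrt(sum(c * c for c in x))
    if r < 1.0 - tol:
        return "interior"   # ||x|| < 1
    if abs(r - 1.0) <= tol:
        return "frontier"   # ||x|| = 1
    return "outside"        # not in B^n

assert classify((0.0, 0.0)) == "interior"
assert classify((0.6, 0.8)) == "frontier"   # 0.36 + 0.64 = 1
assert classify((1.0, 1.0)) == "outside"
```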
http://stats.stackexchange.com/questions/32416/hypergeometric-card-question
# Hypergeometric card question
I have a set of 60 things. Of those things, 24 belong to one type, 8 to another type, and 4 to a third type. If you select 7 things from the set, what's the probability of getting at least one of each together?
-
What do you mean by 24/60 8/60 etc? How many cards do you have in your deck? Do they have suits? How are they marked/numbered? – Michael Chernick Jul 16 '12 at 23:10
srry, changed it to make it easier to understand. – Harrison Cho Jul 16 '12 at 23:15
Are these two separate questions? In the first question, are the sets of 24, 8 and 4 overlapping? Also, is this homework? – Macro Jul 16 '12 at 23:16
no its not hw, its an argument with a frend. 2 diff questions xD sorry. and its a probability for a game. – Harrison Cho Jul 16 '12 at 23:19
Do you want the probability of getting exactly one of each type or at least one of each type? – Macro Jul 16 '12 at 23:21
## 1 Answer
One way to solve this is with inclusion-exclusion. It is easy to count the $7$-tuples which definitely miss some set of types, which might or might not miss any of the others. This means we can evaluate the terms of the following:
$\#$ ways to include all types
$$= \sum_{S \subset \lbrace 1,2,3 \rbrace} (-1)^{|S|} (\# 7-\text{tuples missing types in}~ S)$$
$= {60 \choose 7}$
$-{60-24 \choose 7}-{60 - 8 \choose 7} -{60-4 \choose 7}$
$+{60-24-8 \choose 7}+{60-24-4 \choose 7}+{60-8-4 \choose 7}$
$-{60-24-8-4 \choose 7}$
$= 386,206,920 - 8,347,680 - \cdots + 73,629,072 - 346,104$
$= 89,990,144$
````Mathematica code:
b[vec_] := Binomial[60 - Total[vec], 7] * (-1)^Length[vec]
Total[Map[b, Subsets[{24, 8, 4}]]]
89990144
````
So about $90$ million out of $386$ million ($23.3\%$) of the combinations contain at least one of each type. If all $7$-tuples are equally likely, then the probability of getting one of each is $23.3\%$.
There are other methods. You can determine the number of ways to split the $7$ selections among the $4$ types so that the first $3$ types each have multiplicity at least $1$. For example, there could be $3$ of type $1$, $1$ of type $2$, $1$ of type $3$, and $2$ others, and the number of $7$-tuples with this multiplicity of types is ${24 \choose 3}{8 \choose 1}{4 \choose 1}{60-24-8-4 \choose 2}$. The number of terms you get here is ${7 \choose 3} = 35$ instead of $8$, and each term would be a product of $4$ binomial coefficients, so I think it would be more complicated, but it should give the same answer.
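Both counting methods described in this answer can be sketched and cross-checked in a few lines (my own verification, not part of the original answer; `math.comb` is the binomial coefficient):

```python
from math import comb
from itertools import combinations

types = [24, 8, 4]   # sizes of the three required types
N, k = 60, 7         # 60 things, draw 7

# Method 1: inclusion-exclusion over subsets of missing types
incl_excl = sum(
    (-1) ** len(S) * comb(N - sum(S), k)
    for r in range(len(types) + 1)
    for S in combinations(types, r)
)

# Method 2: sum over exact multiplicities (a, b, c >= 1) of the three types,
# with the remaining draws taken from the 24 "other" things
other = N - sum(types)
direct = sum(
    comb(24, a) * comb(8, b) * comb(4, c) * comb(other, k - a - b - c)
    for a in range(1, k + 1)
    for b in range(1, k + 1)
    for c in range(1, k + 1)
    if a + b + c <= k
)

assert incl_excl == direct == 89990144
print(incl_excl / comb(N, k))  # about 0.233, i.e. roughly 23.3%
```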
-
http://mathoverflow.net/questions/33152/is-there-a-reference-containing-standard-mathematical-notations/33233
## Is there a reference containing standard mathematical notations?
Suppose you are writing a mathematical text (say an article) and you want to call an object (for example, a set) by a letter. It would be cool then to have some reference (optimally available on the internet) where you could find some standard letters and notations of mathematical objects and pick one that you like. Does such a "notation dictionary" exist?
ADDED. Thanks everybody for the interesting answers! Maybe it is worth adding that I had in mind rather basic things. The question was triggered by my attempt to find a good letter to denote a subset of the segment $[0,1]$. Finally I decided to call it $T$ (in the course of the proof it turns out that $T$ is equal to $[0,1]$ :) ).
-
@Dmitri: This is not directly relevant, but you might want to check out mathoverflow.net/questions/18723/… – Harry Gindi Jul 23 2010 at 23:40
Probably the question should be community wiki. – Greg Kuperberg Jul 24 2010 at 0:19
Whereas people tend to standardize symbols for particular concrete objects (e.g. the Cantor set is C), there is no need to standardize variables. You can really denote your generic subset of $[0,1]$ by any letter, in lower or upper case, of the Latin, Greek, or other alphabets. – Pietro Majer Jul 26 2010 at 7:16
## 4 Answers
Really the closest that you can get is Wikipedia or the right kind of search in Google Scholar. The community needs tools to establish or recognize consensus, which of course is an open-ended problem. These tools, while they are certainly far from perfect, are the best tools that exist. If you did embark on a project to document standards, that could be a great thing to do, but it would probably eventually be co-opted by Wikipedia.
When I think about quantum algebra, a topic which is notorious for "notation sprawl", I use Wikipedia and Google Scholar. The more traditional method is follow a few respected papers and textbooks, and this is also still reasonable.
-
Greg, since you value Wikipedia so highly, would you like to rewrite the article "Quantum group" so that it contains something besides notation (whether standard or not)? It's one of the weakest advanced math articles I've seen there. By the way, there is no real quality check mechanism on WP: it's all consensus based and depends strongly on the level of expertise of a small set of contributors and degree of voluntary compliance with various policies. – Victor Protsak Jul 24 2010 at 4:29
Note, however, that notation is not uniformized even across various articles on the same topic on WP! That is the only sensible thing to do, since different people contribute to different articles and editorial coordination involves enormous amount of work and can be difficult to implement on an open wiki. – Victor Protsak Jul 24 2010 at 4:35
Regardless of my opinion of Wikipedia, it is objectively an extremely useful and influential mathematical reference despite its obvious shortcomings. It would be a great public service for someone to rewrite the article on quantum groups, and various other articles. Unfortunately most people have no professional incentive to do it, and I am no exception. It could be a good issue to discuss with a financial patron. Otherwise, all I can say to people who improve Wikipedia is, "thank you". – Greg Kuperberg Jul 24 2010 at 5:01
I don't understand the comment about "no professional incentive" to edit Wikipedia. Is there a professional incentive to participate in MathOverflow? – András Salamon Jul 25 2010 at 16:42
It's true that I don't "need" the visibility. Nonetheless, I enjoy interaction with colleagues, and in this respect MO is more fun than WP. – Greg Kuperberg Jul 26 2010 at 4:15
I'm not sure this is quite what you have in mind, but there is a "comprehensive" LaTeX symbol list: http://www.ctan.org/tex-archive/info/symbols/comprehensive/symbols-a4.pdf
Unfortunately, it doesn't make suggestions about what kinds of symbols should be used for what kinds of objects, but that's usually a moving target anyway.
-
Greg no. 2: +1 for a phantastic sense of humour! – Wadim Zudilin Jul 24 2010 at 11:14
The question presupposes the existence of some standard letters and notations of mathematical objects, which I'm doubtful about in many research areas. My experience with subjects that have a long history suggests that notation in mathematics evolves over time in less than logical ways. In some areas there simply isn't any "standard" notation, while in many others some influential sources have tended to establish a de facto standard. But quite a few mathematicians just make up their own symbols as they go along (I won't name any names), forcing readers to translate according to their own taste. It depends a lot on whose earlier work you most rely on. Even Chevalley, in volumes 2 and 3 of his abandoned series of books on the theory of Lie groups, made some really eccentric choices of fonts and letters.
On the other hand, there are some good lists of LaTeX symbols, as already pointed out. But such a list can only reflect overall usage, not tell you what is currently thought to be "standard" in a given subject area.
-
As a slight aside, there are mathematicians out there trying to improve mathematical notation. A good example of this is the book Concrete Mathematics (by Graham, Knuth and Patashnik) en.wikipedia.org/wiki/Concrete_Mathematics. They use (and possibly introduce) a much superior, memorable and obvious notation for rising factorials and falling factorials than the traditional notations (which are also subject to ambiguity between authors). There are many other instances of new notations. I hope that these are adopted widely enough to displace the poorer old notations. – Rhubbarb Jul 31 at 12:35
Indeed, when in your research you happen to meet an interesting object, totally unknown to you, first you have to check whether it has already been defined, as it's quite likely to be. How to find a reference then? One good thing in modern mathematics is that today we have reached a good level of standardization; very often names are given in the most reasonable and obvious way, avoiding fancy or weird terms. Therefore, in this situation the first question one has to ask is simply: "how would I name this guy?", and then just google it. So, yes, the dictionary does exist, and it is the whole Internet. In my experience, it's not harder than guessing the name of a TeX command: often trying the obvious term is even quicker than going to look at the user guide. Let me tell you an example. Some weeks ago, for some reason, I became interested in subsets of a Banach space that are stable under infinite convex combinations, i.e. with countably many positive coefficients summing to 1. How would you call such a special sort of convex set? (I leave it as a riddle; you can check it directly on Google.) Once one has found a reference, everything is there, including the notation (of course, guessing directly the letter denoting an object would be much more difficult). Oh, and then there is MO: what could you imagine better than a living dictionary?
-
http://mathoverflow.net/questions/37524/differential-inclusions-for-distributions
## Differential inclusions for distributions.
Given a set valued function $F$ such that for every $x\in M$ (a manifold) we have that $F(x)\subset T_xM$, a differential inclusion is the "equation", $\dot{x} \in F(x)$.
I was wondering if someone has studied an analog of this for distributions, that is, consider a function such that $F(x)\subset (T_xM)^k$ and search for foliations that at every point are tangent to the span of an element of $F(x)$.
I've tried googling some similar names and could not find anything, but maybe someone here knows the keyword I am looking for. Also, if there is any reference, that would be good.
-
I don't know about in general, and I suspect this is not exactly what you are looking for, but a very similar problem is used for the method of characteristics for fully nonlinear first order partial differential equations. See en.wikipedia.org/wiki/Monge_cone – Willie Wong Sep 2 2010 at 23:34
Thanks, it was useful, though not exactly what I am looking for. – rpotrie Sep 3 2010 at 14:43
Could you be more specific and tell us where you are heading with this? Could you give an example of what you are aiming for? By `distribution' do you mean (a) a la Schwartz (eg. the Dirac delta function,etc) or (b) a subbundle of the tangent bundle? And by $(T_x M)^k$ do you want symmetric powers $Symm^k (T_x M)$ or the full k-fold tensor product? If you mean $\Lambda^k (T_x M)$ then there is a literature, eg. EDS by Bryant, Chern, et al (available free at the MSRI website) – Richard Montgomery Sep 10 2010 at 3:26
Thanks for the references, I will look at them. By distribution, I mean a subbundle of the tangent bundle, I was wondering under which conditions one can integrate a distribution close to a given one. In mathoverflow.net/questions/37130/… I asked a similar question (based on my ignorance on the subject) and got an answer on some posible restrictions. I was wondering if someone had made a systematic study. – rpotrie Sep 10 2010 at 6:42
http://physics.stackexchange.com/questions/13167/pade-approximant
# Pade Approximant
I have some questions about Pade approximants.
1. Given a divergent power series $\sum_{n >0} a(n)x^{n}$ can we use a Pade Approximant to it $R(x)$ so we can obtain a SUM of the series for every $x$ ?
2. Given a Taylor power series $\sum_{n >0} c(n)x^{n}=O(x^{a})$ for $x >0$, for some positive but unknown $a$, can we obtain the value of $a$ by approximating the power series by a Pade approximant?
3. Can we compute a Borel transform of a series $\sum_{n>0} \frac{a_n}{n!}x^{n}$ by a Pade Approximant?
4. Can we use Pade approximation to obtain numerical integration of series and integrals ?
-
I think you could expect more answers explaining what these Pade approximants look like. Greets – Robert Filter Aug 4 '11 at 12:46
Maybe this belongs on math.se? I'll flag it so the mods can migrate. – Colin K Aug 5 '11 at 5:11
## 2 Answers
Pade approximants are sometimes an alternative to a Taylor approximation of a function. Imagine fitting (i.e., describing) some data with a Taylor series
$D(x) = D_0 + D_1x + D_2x^2$
If this is a "correct" description of the data (i.e. higher order terms are negligible), then a Pade approximant should also describe the data. For example
$D(x) = \frac{D_0}{1-D_1x-D_2x^2}$
The reason is that the Taylor expansion of the second one is the same as the first expression plus some higher order terms that are "small". (This by the way is an explicit example of your question 2).
In many practical situations, Pade approximants also do a nice job of describing data that have a singularity somewhere (not where you have data).
But I do not understand your questions:
1) If a series is divergent there is no way to sum it. A different thing is that you have the asymptotic expansion of a function (that can be a divergent series), and you want to reconstruct the original function (this is what Borel summation does, for example).
2) The rule is that if the taylor series is up to order $a$, then the sum of the orders of the numerator and denominator in the Pade approximant should also be $a$.
3) You can approximate the borel transform of a series by the Pade approximants, but I do not know if this would be useful...
4) Yes, they are a "good" approximation to a function, and then can be used as approximations to the integrals of the function.
Not sure if this is useful for you; in any case, if you tell us what you want to do, maybe we can help.
-
OK thanks Alberto :) :) – Jose Javier Garcia Aug 6 '11 at 10:56
I played around with Pade approximants for a couple of days once, but didn't find them very useful. The basic idea is that you have some sort of polynomial expansion, and you want to approximately match the first N terms by a function which is the ratio of a polynomial of degree N-M and a polynomial of degree M. If the coefficients of the original power series are known numeric values, then by multiplying both sides by the denominator polynomial and equating coefficients of equal powers, you get an (N+1)th-order linear system to solve for the Pade coefficients. By adjusting M you can usually find an approximant that does somewhat better on a finite interval than the original series. But I failed to see any magic there. Perhaps someone can enlighten us as to how this method can be made useful?
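The linear system described above can be written down directly. Here is a minimal sketch of the standard [L/M] construction (the function name and conventions here are mine, not the answer's):

```python
import numpy as np

def pade(c, L, M):
    """[L/M] Pade approximant from Taylor coefficients c[0..L+M].

    Returns (a, b): numerator and denominator coefficients with b[0] = 1,
    so that sum(a_k x^k) / sum(b_k x^k) matches the series to order L+M.
    """
    assert len(c) >= L + M + 1
    # Denominator: force the coefficients of x^{L+1}..x^{L+M} in the product
    # (denominator * series) to vanish:
    #   sum_{j=0}^{M} b[j] * c[L+k-j] = 0   for k = 1..M   (with b[0] = 1)
    A = np.array([[c[L + k - j] if L + k - j >= 0 else 0.0
                   for j in range(1, M + 1)] for k in range(1, M + 1)])
    rhs = -np.array([c[L + k] for k in range(1, M + 1)], dtype=float)
    b = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    # Numerator: a[k] = sum_j b[j] * c[k-j]  for k = 0..L
    a = np.array([sum(b[j] * c[k - j] for j in range(min(k, M) + 1))
                  for k in range(L + 1)])
    return a, b

# Example: [2/2] Pade of exp(x) from its Taylor coefficients 1/n!
c = [1.0, 1.0, 1/2, 1/6, 1/24]
a, b = pade(c, 2, 2)
```

For the exponential series this reproduces the known [2/2] approximant $(1 + x/2 + x^2/12)/(1 - x/2 + x^2/12)$.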
-
http://vladimirkalitvianski.wordpress.com/2013/01/06/popular-explanation-of-renormalization/
# Reformulation instead of Renormalizations
by Vladimir Kalitvianski
## A popular explanation of renormalization
I show where the error is made. Everyone can follow it.
Many think that renormalization belongs to relativistic quantum non-linear field theories, and it is true, but it is not the whole truth. The truth is that renormalization arises every time we undesirably modify the coefficients of our equations by introducing a somewhat erroneous "interaction"; we then return to the old (good) values and call it renormalization. Both modifications of coefficients show our shameful errors in modeling, and this can be demonstrated quite easily with the help of a simple and exactly solvable equation system.
Let us consider a couple of very familiar differential equations with phenomenological coefficients (two Newton equations):

$\normalsize \begin{cases}M_p\mathbf{\ddot{r}}_p = \mathbf{F}_{ext}(t),\\ M_{osc}\mathbf{\ddot{r}}_{osc}+k\mathbf{r}_{osc}=\alpha M_{osc}\mathbf{\ddot{r}}_{p},\end{cases}\qquad (1)$
One can see that the particle acceleration excites the oscillator, if the particle is in an external force. In this respect it is analogous to the electromagnetic wave radiation due to charge acceleration in Electrodynamics.
The oscillator equation system can be equivalently rewritten via the external force:

$\normalsize \begin{cases}M_p\mathbf{\ddot{r}}_p = \mathbf{F}_{ext}(t),\\ M_{osc}\mathbf{\ddot{r}}_{osc}+k\mathbf{r}_{osc}=\frac{\alpha M_{osc}}{M_p}\mathbf{F}_{ext}(t),\end{cases}\qquad (2)$
It shows that the external force application point, i.e., our particle, is a part of the oscillator, and this reveals how Nature works (remember P. Dirac’s: “One wants to understand how Nature works” in his talk “Does Renormalization Make Sense?” at a conference on perturbative QCD, AIP Conf. Proc. V. 74, pp. 129-130 (1981)).
Systems (1) and (2) look like they do not respect an "energy conservation law": the oscillator energy can change, but the particle equation does not contain any "radiation reaction" term. Our task is to complete the mechanical equation with a "radiation reaction" term, like in Classical Electrodynamics. It is precisely here that the error is usually made. Indeed, let me tell you without delay that the right "radiation reaction" term for our particle is the following:

$\normalsize \alpha M_{osc}\mathbf{\ddot{r}}_{osc}\qquad (3)$
If we inject it in system (2), we will obtain a correct equation system:

$\normalsize \begin{cases}M_p\mathbf{\ddot{r}}_p = \mathbf{F}_{ext}(t)+\alpha M_{osc}\mathbf{\ddot{r}}_{osc},\\ M_{osc}\mathbf{\ddot{r}}_{osc}+k\mathbf{r}_{osc}=\frac{\alpha M_{osc}}{M_p}\mathbf{F}_{ext}(t),\end{cases}\qquad (4)$
Here we are, nothing else is needed for “reestablishing” the energy conservation law. System (4) can be derived from a physical Lagrangian in a regular way (see formula (22) here). We can safely give (4) to engineers and programmers to perform numerical calculations. Period. But it is not what we actually do in theoretical physics.
Instead, we, roughly speaking, insert (3) into (1) with the help of a wrong ansatz on how the "interaction" should be written. Let us see what then happens:
$\normalsize \begin{cases}M_p\mathbf{\ddot{r}}_p = \mathbf{F}_{ext}(t)+\alpha M_{osc}\ddot{\mathbf{r}}_{osc},\\ M_{osc}\mathbf{\ddot{r}}_{osc}+k\mathbf{r}_{osc}=\alpha M_{osc}\mathbf{\ddot{r}}_{p},\end{cases}\qquad (5)$
Although it is not visible in (5) at first glance, the oscillator equation gets spoiled – even the free oscillator frequency changes. Consistency with experiment gets broken. Why? The explanation is simple: while developing the right equation system, we have to keep the right-hand side of the oscillator equation a known function of time, like in (2), rather than keep its "form" (1) (I call it "preserving the spirit, not the form"). Otherwise it will be expressed via the unknown variable $\mathbf{\ddot{r}}_{p}$, which is now coupled to $\mathbf{\ddot{r}}_{osc}$, and this modifies the coefficient at the oscillator acceleration when $\mathbf{\ddot{r}}_{p}$ in the oscillator equation is replaced with the right-hand side of the mechanical equation. In other words, if we proceed from (1), then we make an elementary mathematical error, because we not only add the right radiation reaction term, but also modify coefficients in the oscillator equation, contrary to our goal. As a result, both equations in (5) have wrong exact solutions. If we insist on this way, it is just our mistake (blindness, stubbornness), and no "bare" particles are responsible for the undesirable modifications of equation coefficients.
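The frequency shift in (5) is easy to check numerically. Below is a minimal sketch (parameter values and names are my own choices for illustration): with $F_{ext}=0$, solving the linear pair (5) for the two accelerations gives $\mathbf{\ddot{r}}_{osc}=-\omega_{eff}^{2}\mathbf{r}_{osc}$ with $\omega_{eff}^{2}=k/[M_{osc}(1-\alpha^{2}M_{osc}/M_{p})]$, not $k/M_{osc}$:

```python
import numpy as np

# Illustrative parameter values (assumed, not from the post)
Mp, Mosc, k, alpha = 2.0, 1.0, 4.0, 0.5

def accelerations(r_osc, F_ext=0.0):
    # System (5) is linear in the two accelerations; collect them:
    #    Mp*a_p          - alpha*Mosc*a_osc = F_ext
    #   -alpha*Mosc*a_p  +       Mosc*a_osc = -k*r_osc
    A = np.array([[Mp, -alpha * Mosc],
                  [-alpha * Mosc, Mosc]])
    return np.linalg.solve(A, np.array([F_ext, -k * r_osc]))

# Free oscillator (F_ext = 0): a_osc = -w_eff^2 * r_osc
a_p, a_osc = accelerations(1.0)
w_eff2 = -a_osc                  # effective squared frequency (since r_osc = 1)
w_bare2 = k / Mosc               # unspoiled squared frequency
w_pred2 = k / (Mosc * (1 - alpha**2 * Mosc / Mp))
print(w_eff2, w_bare2, w_pred2)  # the free frequency is shifted away from k/Mosc
```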
In CED and QED they advance such an “interaction Lagrangian” (self-action) that spoils both the “mechanical” and the “wave” equations because it preserves the equation “form”, not the “spirit”. In our toy model we too can explicitly spoil both equations and obtain:
by advancing a similar "interaction Lagrangian":
$\normalsize L_{int}=-\alpha M_{osc}\left(\mathbf{\dot{r}}_p\cdot\mathbf{\dot{r}}_{osc}-\frac{\eta}{2} \mathbf{\dot{r}}_p ^2\right).\qquad (7)$
Here in (6) $\tilde{M}_p=M_p+\delta M_p,\; \tilde{M}_{osc}=M_{osc}+\delta M_{osc}$ are masses with "self-energy corrections". Thus it is the "interaction Lagrangian" (7) added to (1) that is bad, not the original constants in (1), whichever smart arguments are invoked for proposing (7).
Moreover, there is a physical Lagrangian for the correct equation system (4). Therefore, we simply have not found it yet, so we are the main responsible for modifying the equation coefficients in our passage from (1) to (6), not some “bare particle interactions”.
In QFT they perform a second modification of coefficients, now in perturbative solutions of (6) to obtain perturbative solutions of (4), roughly speaking. Such a second modification is called “renormalization” and it boils down to deliberately discarding the wrong and unnecessary “corrections” $\delta M$ to the original coefficients in (6):
$\tilde{M}\to M$
In other words, renormalization is our "repair" of the coefficients of the original physical equations that we ourselves spoiled, whatever these equations are – classical or quantum. Although it helps sometimes, it is not a calculation in the true sense, but a "working rule" at best. A computer cannot perform such solution (curve) modifications numerically; they can only be done by hand, in analytical expressions. Such a renormalization can be implemented as a subtraction of some terms from (7), and this underlines again the initial wrongness of (7). It only may work by chance – if the remainder (3) is guessed right in the end, as in our toy model.
P. Dirac, R. Feynman, W. Pauli, J. Schwinger, S. Tomonaga, and many others were against such a "zigzag" way of doing physics: introducing something wrong and then subtracting it. However, nowadays this prescription is given a serious physical meaning; namely, they say that we discard nothing, but that it is the original coefficients that "absorb" our wrong corrections, because our original coefficients in (1) are "bare" and "running"! Of course, it is not true: nothing was bare/running in (1), nor is it in (4); this is simply how the blame is erroneously transferred from a bad interaction Lagrangian to good original equations and their constants. Both modifications of coefficients (self-action ansatz and renormalization) are presented as a great achievement today. It, however, does not reveal how Nature works, but how human nature works. Briefly, this is nothing else but self-fooling; let us recognize it. No grand unification is possible until we learn how to get to (4) directly from (1), without renormalization.
Most of our "theories" are non-renormalizable for just this reason: stubbornly counting on renormalization to help us out, we, by analogy, propose wrong "interaction Lagrangians" that not only modify the original coefficients in equations, but also bring wrong "radiation reaction" terms. Remember the famous $\mathbf{\dddot{r}}_p$ leading to runaway exact solutions in CED. We must stop keeping to this wrong way of doing physics and pretending that everything is alright.
http://mathoverflow.net/questions/9924?sort=newest
## Order of the Tate-Shafarevich group
I thought that the order of the Tate-Shafarevich group should always be a square (it's also supposed to be finite, but for the purposes of this question let's assume we know this) but I don't seem to find a good explanation; Wikipedia is silent on the matter.
While I know it may be an open problem, is there a good argument pro or contra this?
(I made this a community wiki as I mistakenly thought this was an open problem)
-
2
This should not be community wiki. As to the answer, there's more to be said... – Pete L. Clark Dec 28 2009 at 1:53
I thought it was an open problem -- I even put the tag [open-problem] originally. Once I realized it's solved, I undid the tag, but unfortunately CW cannot be undone :) You're welcome to post an answer, and if you want we can even do a new question/answer pair :) – Ilya Nikokoshev Dec 28 2009 at 2:41
It might be nice to start afresh with a new question. As to the answer, I have to give first dibs to Bjorn Poonen (and/or Michael Stoll). – Pete L. Clark Dec 28 2009 at 3:48
1
I'll supplement Pete's comment with this link: www-math.mit.edu/~poonen/papers/sha.ps – David Zureick-Brown♦ Dec 28 2009 at 7:47
## 3 Answers
The first example of an abelian variety with nonsquare Sha was discovered in a computation by Michael Stoll in 1996. He emailed it to me and Ed Schaefer, because his calculation depended on a paper that Ed and I had written. At first none of us believed that it was what it was: instead we thought it must be due to either an error in Stoll's calculations or an error in the Poonen-Schaefer paper. Stoll and I worked together over the next few weeks to develop a theory that explained the phenomenon, and this led to the paper http://math.mit.edu/~poonen/papers/sha.ps - that paper contains a detailed answer to your question.
To summarize a few of the key points: If the abelian variety over a global field $k$ has a principal polarization coming from a $k$-rational divisor (as is the case for every elliptic curve), then the order of Sha is a square (if finite), because it carries an alternating pairing - this is what Tate proved, generalizing Cassels' result for elliptic curves. For principally polarized abelian varieties in general, the pairing satisfies the skew-symmetry condition $\langle x,y \rangle = - \langle y,x \rangle$ but not necessarily the stronger, alternating condition $\langle x,x \rangle=0$, so all one can say is that the order of Sha is either a square or twice a square (if finite). Stoll and I gave an explicit example of a genus 2 curve over $\mathbf{Q}$ whose Jacobian had Sha isomorphic to $\mathbf{Z}/2\mathbf{Z}$ unconditionally (in particular, finiteness could be proved in this example).
If the polarization on the abelian variety is not a principal polarization, then the corresponding pairing need not be even skew-symmetric, so there is no reason to expect Sha to be even within a factor of $2$ of a square. And indeed, William Stein eventually found explicit examples and published them in the 2004 paper cited by Simon.
A final remark: Ironically, my result with Stoll quantifying the failure of Sha to be a square is used by Liu-Lorenzini-Raynaud to prove that the Brauer group $\operatorname{Br}(X)$ of a surface over a finite field is a square (if finite)!
-
Incidentally, it was thought for some time that the Brauer group $B(X)$ of a surface $X$ over a finite field could have order which was not a square. This turned out to be false : if $B(X)$ is finite, then its order is a square (Liu--Lorenzini--Raynaud, Inventiones, 2005). Conjecturally, the group $B(X)$ is always finite.
-
Brian Conrad told me that this is not always the case; Tate's paper where he claimed this was misunderstood until some counterexamples were found. William Stein has a paper "Shafarevich-Tate groups of nonsquare order'' with counterexamples; it's available online at http://modular.fas.harvard.edu/papers/nonsquaresha/final2.ps.
-
Okay, that's not an open problem then, I was wrong :) – Ilya Nikokoshev Dec 28 2009 at 1:46
http://stats.stackexchange.com/questions/18844/when-and-why-to-take-the-log-of-a-distribution-of-numbers?rq=1
# When (and why) to take the log of a distribution (of numbers)?
Say I have some historical data e.g., past stock prices, airline ticket price fluctuations, past financial data of the company...
Now someone (or some formula) comes along and says "let's take/use the log of the distribution" and here's where I go WHY?
Questions:
1. WHY should one take the log of the distribution in the first place?
2. WHAT does the log of the distribution 'give/simplify' that the original distribution couldn't/didn't?
3. Is the log transformation 'lossless'? I.e., when transforming to log-space and analyzing the data, do the same conclusions hold for the original distribution? How come?
4. And lastly WHEN to take the log of the distribution? Under what conditions does one decide to do this?
I've really wanted to understand log-based distributions (for example lognormal) but I never understood the when/why aspects - i.e., the log of the distribution is a normal distribution, so what? What does that even tell me, and why bother? Hence the question!
UPDATE: As per @whuber's comment I looked at the posts and for some reason I do understand the use of log transforms and their application in linear regression, since you can draw a relation between the independent variable and the log of the dependent variable. However, my question is generic in the sense of analyzing the distribution itself - there is no relation per se that I can conclude to help understand the reason of taking logs to analyze a distribution. I hope I'm making sense :-/
In regression analysis you do have constraints on the type/fit/distribution of the data and you can transform it and define a relation between the independent and (not transformed) dependent variable. But when/why would one do that for a distribution in isolation where constraints of type/fit/distribution are not necessarily applicable in a framework (like regression). I hope the clarification makes things more clear than confusing :)
This question deserves a clear answer as to "WHY and WHEN"
-
2
Because this covers almost the same ground as previous questions here and here, please read those threads and update your question to focus on any aspects of this issue that haven't already been addressed. Note, too, #4 (and part of #3) are elementary questions about logarithms whose answers are readily found in many places. – whuber♦ Nov 23 '11 at 20:46
1
The clarification helps. You might want to ponder the fact, though, that regression with only a constant term (and no other independent variables) amounts to assessing the variation of the data around their mean. Therefore, if you really understand the effects of taking logs of dependent variables in regression, you already understand the (simpler) situation you are asking about here. In short, once you have answers to all four questions for regression, you don't need to ask them again about "the distribution in isolation." – whuber♦ Nov 23 '11 at 21:15
@whuber: I see...so I do understand the reasons for taking logs in regression, but only because I had been taught so - I understand it from the need to do so perspective i.e., to make sure the data fits within the assumptions of linear regression. That's my only understanding. Maybe what I'm missing is "real understanding" of the effect of taking logs and hence the confusion...any help? ;) – PhD Nov 23 '11 at 21:24
2
Ah, but you know much more than that, because after using logs in regression, you know that the results are interpreted differently and you know to take care in back-transforming fitted values and confidence intervals. I'm suggesting that you might not be confused and that you probably already know many of the answers to these four questions, even though you weren't initially aware of it :-). – whuber♦ Nov 23 '11 at 21:29
@whuber: Ah, I see. Time for some introspection, it seems :) – PhD Nov 23 '11 at 23:45
show 1 more comment
## 2 Answers
If you assume a model form that is non-linear but can be transformed to a linear model, such as $\log Y = \beta_0 + \beta_1 t$, then one is justified in taking logarithms of $Y$ to meet the specified model form. In general, whether or not you have causal series, the only time you would be justified or correct in taking the log of $Y$ is when it can be proven that the variance of $Y$ is proportional to the expected value of $Y^2$. I don't remember the original source for the following, but it nicely summarizes the role of power transformations.
The optimal power transformation is found via the Box-Cox Test where
• -1.0 is a reciprocal
• -0.5 is a reciprocal square root
• 0.0 is a log transformation
• 0.5 is a square root transform, and
• 1.0 is no transform.
Note that when you have no predictor/causal/supporting input series, the model is $Y_t=u +a_t$, and no requirements are made about the distribution of $Y$ BUT they are made about $a_t$, the error process. In this case the distributional requirements about $a_t$ pass directly on to $Y_t$. When you have supporting series, such as in a regression or in an autoregressive–moving-average model with exogenous inputs (ARMAX), the distributional assumptions are all about $a_t$ and have nothing whatsoever to do with the distribution of $Y_t$. Thus in the case of an ARIMA or ARMAX model one would never assume any transformation of $Y$ before finding the optimal Box-Cox transformation, which would then suggest the remedy (transformation) for $Y$. In earlier times some analysts would transform both $Y$ and $X$ in a presumptive way just to be able to reflect upon the percent change in $Y$ as a result of the percent change in $X$ by examining the regression coefficient between $\log Y$ and $\log X$. In summary, transformations are like drugs: some are good and some are bad for you! They should only be used when necessary, and then with caution.
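The power-transform family listed above can be written down directly. A minimal sketch follows; the function name and the standard $(y^\lambda-1)/\lambda$ parametrization are my own choices, not from the answer:

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox power transform: (y^lam - 1)/lam, with log as the lam -> 0 limit."""
    y = np.asarray(y, dtype=float)  # requires y > 0
    if lam == 0.0:
        return np.log(y)
    return (y**lam - 1.0) / lam

y = np.array([1.0, 2.0, 4.0, 8.0])
print(box_cox(y, 0.0))   # lam = 0: the log transformation
print(box_cox(y, 1.0))   # lam = 1: just a shift y - 1, i.e., "no transform"
print(box_cox(y, -1.0))  # lam = -1: the reciprocal family, 1 - 1/y
```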
I have received two down votes BUT no explanation. This post is 100% correct and non-commercial. If people think there is something wrong or misguided they should say why. Perhaps I am being stalked by A..m who seems to have a contentious attitude.
-
– Andy W Nov 28 '11 at 13:22
@Andy W: thanks for your advice. – IrishStat Nov 30 '11 at 15:45
Log-scale informs on relative changes (multiplicative), while linear-scale informs on absolute changes (additive). When do you use each? When you care about relative changes, use the log-scale; when you care about absolute changes, use linear-scale. This is true for distributions, but also for any quantity or changes in quantities.
Note, I use the word "care" here very specifically and intentionally. Without a model or a goal, your question cannot be answered; the model or goal defines which scale is important. If you're trying to model something, and the mechanism acts via a relative change, log-scale is critical to capturing the behavior seen in your data. But if the underlying model's mechanism is additive, you'll want to use linear-scale.
Example. Stock market.
Stock A on day 1: \$100. On day 2, \$101. Every stock tracking service in the world reports this change in two ways! (1) +\$1. (2) +1%. The first is a measure of absolute, additive change; the second a measure of relative change.
Illustration of relative change vs absolute: Relative change is the same, absolute change is different
Stock A goes from \$1 to \$1.10. Stock B goes from \$100 to \$110.
Stock A gained 10%, stock B gained 10% (relative scale, equal)
...but stock A gained 10 cents, while stock B gained \$10 (B gained more in absolute dollar amount)
If we convert to log space, relative changes appear as absolute changes.
Stock A goes from $\log_{10}(\$1)$ to $\log_{10}(\$1.10)$ = 0 to 0.0413
Stock B goes from $\log_{10}(\$100)$ to $\log_{10}(\$110)$ = 2 to 2.0413
Now, taking the absolute difference in log space, we find that both changed by .0413.
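This equal-shift property is a one-liner to verify; a quick sketch using the numbers from the example above:

```python
import math

a_before, a_after = 1.00, 1.10
b_before, b_after = 100.0, 110.0

# Absolute changes differ by a factor of 100...
abs_a = a_after - a_before   # about 0.10
abs_b = b_after - b_before   # 10.0

# ...but in log space both 10% gains become the same additive shift.
log_a = math.log10(a_after) - math.log10(a_before)
log_b = math.log10(b_after) - math.log10(b_before)
print(log_a, log_b)  # both approximately 0.0413
```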
Both of these measures of change are important, and which one is important to you depends solely on your model of investing. There are two models. (1) Investing a fixed amount of principal, or (2) investing in a fixed number of shares.
Model 1: Investing with a fixed amount of principal.
Say yesterday stock A cost \$1 per share, and stock B costs \$100 a share. Today they both went up by one dollar, to \$2 and \$101 respectively. Their absolute change is identical (\$1), but their relative change is dramatically different (100% for A, 1% for B). Given that you have a fixed amount of principal to invest, say \$100, you can only afford 1 share of B or 100 shares of A. If you invested yesterday you'd have \$200 with A, or \$101 with B. So here you "care" about the relative gains, specifically because you have a finite amount of principal.
Model 2: fixed number of shares.
In a different scenario, suppose your bank only lets you buy in blocks of 100 shares, and you've decided to invest in 100 shares of A or B. In the previous price scenario, whether you buy A or B your gains will be the same: \$1 per share, or \$100 for your 100 shares.
Now suppose we think of a stock value as a random variable fluctuating over time, and we want to come up with a model that reflects generally how stocks behave. And let's say we want to use this model to maximize profit. We compute a probability distribution whose x-values are in units of 'share price', and y-values in probability of observing a given share price. We do this for stock A, and stock B. If you subscribe to the first scenario, where you have a fixed amount of principal you want to invest, then taking the log of these distributions will be informative. Why? What you care about is the shape of the distribution in relative space. Whether a stock goes from 1 to 10, or 10 to 100 doesn't matter to you, right? Both cases are a 10-fold relative gain. This appears naturally in a log-scale distribution, in that unit gains in log space correspond directly to fold gains: a unit gain from 1 to 2 or from 9 to 10 in log space corresponds to the same fold gain.
If you were to look at these same distributions in linear, or absolute space, you would think that higher-valued share prices correspond to greater fluctuations. For your investing purposes though, where only relative gains matter, this is not necessarily true.
Example 2. Chemical reactions. Suppose we have two molecules A and B that undergo a reversible reaction.
$A\Leftrightarrow B$
which is defined by the individual rate constants
$k_{ab}$ for $A\Rightarrow B$, and $k_{ba}$ for $B\Rightarrow A$.
Their equilibrium is defined by the relationship:
$K=\frac{k_{ab}}{k_{ba}}=\frac{[B]}{[A]}$
Two points here. (1) This is a multiplicative relationship between the concentrations of $A$ and $B$. (2) This relationship isn't arbitrary, but rather arises directly from the fundamental physical-chemical properties that govern molecules bumping into each other and reacting.
Now suppose we have some distribution of A or B's concentration. The appropriate scale for that distribution is log-space, because the model of how either concentration changes is defined multiplicatively (through the ratio of the two concentrations). In some alternate universe where $K^*=k_{ab}-k_{ba}=[B]-[A]$, we might look at this concentration distribution in absolute, linear space.
That said, if you have a model, be it for stock market prediction or chemical kinetics, you can always interconvert 'losslessly' between linear and log space, so long as your range of values is $(0,\infty)$. Whether you choose to look at the linear or log-scale distribution depends on what you're trying to obtain from the data.
EDIT. An interesting parallel that helped me build intuition is the example of arithmetic means vs geometric means. An arithmetic (vanilla) mean computes the average of numbers assuming a hidden model where absolute differences are what matter. Example: the arithmetic mean of 1 and 100 is 50.5. Suppose we're talking about concentrations, though, where the chemical relationship between concentrations is multiplicative. Then the average concentration should really be computed on the log scale. This is called the geometric average. The geometric average of 1 and 100 is 10! In terms of relative differences this makes sense: 10/1 = 10, and 100/10 = 10, i.e., the relative change between the average and the two values is the same. Additively we find the same thing: 50.5 - 1 = 49.5, and 100 - 50.5 = 49.5.
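The arithmetic/geometric contrast above is easy to verify: the geometric mean is just the arithmetic mean taken in log space. A quick sketch:

```python
import math

xs = [1.0, 100.0]

arith = sum(xs) / len(xs)  # average under an additive (absolute-change) model
geom = math.exp(sum(math.log(x) for x in xs) / len(xs))  # average in log space

print(arith)  # 50.5
print(geom)   # 10.0 (up to float rounding)
```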
-
http://agtb.wordpress.com/2009/08/18/communication-complexity-of-reaching-equilibrium/
# Turing's Invisible Hand
Feeds:
Posts
Comments
## Communication Complexity of Reaching Equilibrium
August 18, 2009 by algorithmicgametheory
## Motivation
The dogma of game theory (and economics) is that strategic situations will reach equilibrium — one way or another. How this will happen is often left unanswered, but from a computational point of view this is quite a central question. Indeed, a great amount of work has been done regarding the computational complexity of reaching an equilibrium of a game, reaching the conclusion that an infeasible amount of computation might be needed. In this post I want to discuss another aspect of the complexity of reaching equilibrium: the amount of information that needs to be transferred between the different players. The point of view here is that initially each player "knows" only his own utility function $u_i$ and then the players somehow interact until they find an equilibrium of the game $(u_1 .... u_n)$ defined by their joint utilities. The analysis here abstracts away the different incentives of the players (the usual focus of attention in game theory) and is only concerned with the amount of information transfer required between the players. I.e., we treat this as a purely distributed computation question where the input $(u_1 ... u_n)$ is distributed between the players, who all just want to jointly compute an equilibrium point. We assume perfect cooperation and prior coordination between the players, i.e. that all follow a predetermined protocol that was devised to compute the required output with a minimum amount of communication. The advantage of this model is that it provides lower bounds for all sorts of processes and dynamics that should eventually reach equilibria: if even under perfect cooperation much communication is required for reaching equilibrium, then certainly, whatever individual strategy each player follows, convergence to equilibrium cannot happen quickly.
The main result that we shall give in this post is that finding a pure Nash equilibrium (or determining that none exists) requires essentially communicating everything about the utility functions — no shortcuts are possible. This is the basic result that appears in two papers that studied the issue: Communication complexity as a lower bound for learning in games by Vincent Conitzer and Tuomas Sandholm that considered two-player games and How Long to Equilibrium? The Communication Complexity of Uncoupled Equilibrium Procedures by Sergiu Hart and Yishay Mansour whose focus is multi-player games. See also the presentation by Sergiu Hart. A future post will discuss the much more complicated case of finding a mixed Nash equilibrium, a question which is still mostly open, despite results given in the H&M paper.
## The Communication Complexity model
The basic communication complexity model considers $n$ players, where each player $i$ holds part of the input, which we assume is an $N$-bit string, $x^i \in \{0,1\}^N$. The usual focus is on the two-player case, $n=2$, in which case we simplify the notation to use $x$ and $y$ rather than $x^1$ and $x^2$. The joint goal of these players is to compute the value of $f(x^1 \dots x^n)$, where $f : (\{0,1\}^N)^n \rightarrow Z$ is some predetermined function. As $f(x^1 \dots x^n)$ depends on all values of $x^i$, the players will need to communicate with each other in order to do so (we will use “broadcasting” rather than player-to-player channels), and they will do so according to a predetermined protocol. Such a protocol specifies when each of them speaks and what he says, where the main point is that each player’s behavior must be a function of his own input as well as what was broadcast so far. We will assume that all communication is in bits (any finite alphabet can be represented in bits). The communication complexity of $f$ is the minimum number of bits of communication that a protocol for $f$ uses in the worst case. This model was introduced in a seminal paper by Andy Yao; the standard reference is still the (just slightly dated) book Communication Complexity I wrote with Eyal Kushilevitz, and online references include the wikipedia article and the chapter from the Arora-Barak computational complexity book.
## The Disjointness Function
Our interest will be in lower bounds on the communication complexity. There are several techniques known for proving such lower bounds, the easiest of which is the “fooling set method”, which we will use on the “disjointness function” for two players. The disjointness function, introduced in Yao’s original paper, plays a key role in communication complexity, and indeed it will turn out to be useful to us too: once we have a lower bound for it, it will be easy to deduce the lower bound for the function of interest, finding a pure Nash equilibrium, via a simple reduction.
The disjointness function: Alice holds $x \in \{0,1\}^N$ and Bob holds $y \in \{0,1\}^N$. They need to determine whether there exists an index $i$ where they both have 1, $x_i=1=y_i$. If we view $x$ as specifying a subset $S \subseteq \{1...N\}$ and $y$ as specifying $T \subseteq \{1 ... N\}$, then we are asking whether $S \cap T = \emptyset$.
Lemma: Any deterministic protocol for solving the disjointness problem requires at least $N$ bits of communication in the worst case.
Before proving this lemma, notice that it is almost tight, as Alice can always send all her input to Bob ($N$ bits), who then has all the information for determining the answer and sends it back to Alice (1 more bit). It is known that an $\Omega(N)$ lower bound also applies to randomized protocols, but the proof for the randomized case is harder, while the proof for the deterministic case is easy.
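This near-matching upper bound is easy to simulate. Here is a minimal Python sketch of the trivial protocol just described: Alice broadcasts her whole input bit by bit, and Bob broadcasts the one answer bit (the function and example inputs are my own illustration):

```python
def trivial_disjointness_protocol(x, y):
    """Alice broadcasts her entire input (N bits); Bob, who now knows both
    strings, broadcasts the 1-bit answer (1 = "disjoint")."""
    transcript = list(x)                      # Alice's N bits
    intersect = any(a == 1 == b for a, b in zip(x, y))
    transcript.append(0 if intersect else 1)  # Bob's single answer bit
    return transcript

t = trivial_disjointness_protocol([1, 0, 1, 0], [0, 1, 0, 0])
print(len(t), t[-1])  # 5 bits communicated in total; these sets are disjoint
```

The lemma says that, up to this one extra bit, no protocol can do better in the worst case.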
Proof: Let us look at the $2^N$ different input pairs of the form $\{<x,y> | \forall i,\: x_i+y_i=1\}$ (these are the “maximal disjoint” inputs). We will show that for no two of these input pairs can the protocol produce the same communication transcript (i.e. the exact same sequence of bits is sent throughout the protocol). It follows that the protocol must have at least $2^N$ different possible transcripts, and since these are all binary sequences, at least one of them must be of length at least $N$ bits.
Now comes the main point: why can’t two maximally disjoint pairs $<x,y>$ and $<x',y'>$ have the same transcript? Had we been in a situation where $f(x,y) \ne f(x',y')$, we would know that the transcripts must be different, since the protocol must output a different answer in the two cases; however, in our case all “maximal disjoint” input pairs give the same answer: “disjoint”. We use the structure of communication protocols — that each player’s actions can only depend on his own input (and on the communication so far) — to show that if $<x,y>$ and $<x',y'>$ have the same transcript then $<x,y'>$ has the same transcript too (and so does $<x',y>$). Consider the players communicating on inputs $<x,y'>$: Alice, who only sees $x$, cannot distinguish this from the case of $<x,y>$, and Bob, who only sees $y'$, cannot distinguish it from the case $<x',y'>$. Thus when Alice communicates she will not deviate from what she does on $<x,y>$, and Bob will not deviate from what he does on $<x',y'>$. Thus neither of them can be the first to deviate from the joint transcript, so neither ever deviates, and the joint transcript is also followed on $<x,y'>$. We now get our contradiction, since either $x$ and $y'$ are not disjoint or $x'$ and $y$ are not disjoint, and thus at least one of these pairs cannot share the same transcript, a contradiction. (The reason for non-disjointness is that since $x \ne x'$ there must be an index $i$ with $x_i \ne x'_i$. If $x_i = 1$ and $x'_i=0$ then $y_i = 0$ and $y'_i=1$, in which case $x$ and $y'$ are not disjoint. Similarly, if $x'_i = 1$ and $x_i=0$ then $y'_i = 0$ and $y_i=1$, in which case $x'$ and $y$ are not disjoint.)
## Finding a Pure Nash Equilibrium
Back to our setting, where each player $i$ holds his own utility function $u_i : S_1 \times \dots \times S_n \rightarrow \Re$, where the $S_i$‘s are the commonly known strategy sets, which we will assume have size $m$ each. Thus each player’s input consists of $m^n$ real numbers, which we will always assume are in some small integer range. Perhaps it is best to start with a non-trivial (just) upper bound. Let us consider the class of games that are (strictly) dominance solvable, i.e. those in which iterated elimination of strictly dominated strategies leaves a single strategy profile, which is obviously a pure Nash equilibrium. A simple protocol to find this equilibrium is the following:
• Repeat $mn$ times:
• For $i = 1 ...n$ do
• If player $i$ has a strategy that is dominated after the elimination of all previously removed strategies, then announce it, else say “pass”.
• Output the (single) profile of non-eliminated strategies.
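The elimination step that drives this protocol can be sketched in Python. The two-player version below only checks domination by pure strategies (the general definition also allows mixed dominators), which suffices for simple examples like the prisoner's dilemma; the payoff matrices are my own illustration:

```python
import itertools

def iterated_elimination(uA, uB):
    """Iteratively remove strictly dominated pure strategies in a two-player
    game (checking domination by pure strategies only).  uA[i][j] and uB[i][j]
    are the payoffs when the row player plays i and the column player plays j."""
    rows, cols = set(range(len(uA))), set(range(len(uA[0])))
    changed = True
    while changed:
        changed = False
        for r, r2 in itertools.permutations(sorted(rows), 2):
            if all(uA[r2][c] > uA[r][c] for c in cols):  # r2 strictly dominates r
                rows.remove(r)
                changed = True
                break
        else:
            for c, c2 in itertools.permutations(sorted(cols), 2):
                if all(uB[r][c2] > uB[r][c] for r in rows):  # c2 dominates c
                    cols.remove(c)
                    changed = True
                    break
    return sorted(rows), sorted(cols)

# Prisoner's dilemma: "defect" (index 1) strictly dominates "cooperate".
uA = [[3, 0], [5, 1]]
uB = [[3, 5], [0, 1]]
print(iterated_elimination(uA, uB))  # the single surviving strategy profile
```

In the protocol above, each announcement of a dominated strategy costs only $O(\log m)$ bits, which is why the total communication stays polynomial in $n$ and $m$.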
What is remarkable about this protocol is that the number of bits communicated is polynomial in $n$ and $m$, which may be exponentially smaller than the total size of the input, which is $O(m^n)$ numbers for each player. Can some protocol like this be designed for all games that have a pure Nash equilibrium? The basic answer is no:
Theorem: Every deterministic (or randomized) protocol that recognizes whether the given game has a pure Nash equilibrium requires $\Omega(m^n)$ bits of communication.
Note that this theorem implies in particular that one cannot design a more efficient protocol that finds an equilibrium just for games that are guaranteed to have one, as adding a verification stage (where each player checks whether the strategy suggested to him is indeed a best reply to those suggested to the others) would convert it to a protocol that recognizes the existence of a pure Nash.
Proof: The proof is by a reduction from the disjointness problem. We will show that a protocol for determining the existence of a pure Nash equilibrium for games with $n$ players each having $m$ strategies can be used to solve the disjointness problem on strings of length $N = \Omega(m^n)$.
Let us start with the case $n=2$ and solve disjointness for length $N=(m-2)^2$. The idea is very simple: Alice will build a utility matrix $u_A$ by filling it with the values of her bit string $x$ in some arbitrary but fixed order, and Bob will build a matrix $u_B$ from $y$ using the same order. Now, any cell of the two matrices where $x_i=y_i=1$ will turn out to be a Nash equilibrium since both players get the highest utility possible in this matrix. For this idea to formally work, we just need to make sure that no other cell is a Nash equilibrium. This can be done in one of several ways (and different ones are used in the C&S and H&M papers), the simplest of which is adding two more rows and columns as follows:
With the addition of these rows and columns, each player’s best-reply always obtains a value of 1, and thus only a (1,1) entry may be a Nash equilibrium.
Now for general $n$. We add to Alice and Bob $n-2$ players whose utilities are identically 0. Thus any strategy will be a best reply for these new players, and a pure Nash equilibrium is determined solely by Alice and Bob. The main change that the new players bring is that the utility functions of Alice and Bob are now $n$-dimensional matrices of total size $m^n$. Thus Alice and Bob can fold their length-$N$ bit strings into this larger table (as before, keeping two strategies from their own strategy space to ensure no “non-(1,1)” equilibrium points), allowing them to answer disjointness on $N=(m-2)^2 \cdot m^{n-2}$-bit long vectors.
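The two-player reduction can be checked by brute force. The post's figure with the extra rows and columns did not survive, so the sketch below uses one concrete gadget of my own with the stated property: Alice's two extra rows give her utility 1 in every original column (and symmetrically for Bob's extra columns), while the extra 2x2 corner is a matching-pennies game with no pure equilibrium:

```python
import random

def reduction_game(x, y, k):
    """Embed a k*k-bit disjointness instance (x, y) into a (k+2)x(k+2) game
    in which the pure Nash equilibria are exactly the common 1-entries."""
    m = k + 2
    uA = [[0] * m for _ in range(m)]
    uB = [[0] * m for _ in range(m)]
    for i in range(k):
        for j in range(k):
            uA[i][j] = x[i * k + j]    # Alice folds her bit string into the table
            uB[i][j] = y[i * k + j]    # Bob does the same with his
    for t in range(k):
        uA[k][t] = uA[k + 1][t] = 1    # Alice's two extra rows
        uB[t][k] = uB[t][k + 1] = 1    # Bob's two extra columns
    # Matching-pennies corner: no pure equilibrium among the gadget cells.
    uA[k][k], uA[k][k + 1], uA[k + 1][k], uA[k + 1][k + 1] = 1, 0, 0, 1
    uB[k][k], uB[k][k + 1], uB[k + 1][k], uB[k + 1][k + 1] = 0, 1, 1, 0
    return uA, uB

def pure_nash(uA, uB):
    m = len(uA)
    return [(i, j) for i in range(m) for j in range(m)
            if uA[i][j] == max(uA[r][j] for r in range(m))    # Alice best-replies
            and uB[i][j] == max(uB[i][c] for c in range(m))]  # Bob best-replies

k = 3
x = [random.randint(0, 1) for _ in range(k * k)]
y = [random.randint(0, 1) for _ in range(k * k)]
eqs = pure_nash(*reduction_game(x, y, k))
common = [(t // k, t % k) for t in range(k * k) if x[t] == 1 == y[t]]
print(eqs == common)  # equilibria coincide with the common 1-entries
```

With this gadget, every best reply earns exactly 1, so a cell is an equilibrium iff both players hold a 1 there, which is exactly a witness of non-disjointness.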
## Variants
There are several interesting variants on the basic question:
1. The lower bound relies heavily on the fact that the utilities of different strategies may be equal and thus there are many possible best replies. Would finding a Nash equilibrium be easier for games in “general position” — specifically, if there were always a single best reply for each player? For the case of $n=2$ players, C&S show that the complexity does go down to $\Theta(m \log m)$. (Exercise: show that the randomized communication complexity in this case is $\Theta(m)$.) As $n$ grows, the savings are not exponential though, and H&M show a $2^{\Omega(n)}$ lower bound for constant $m$. A small gap remains.
2. We have seen that finding an equilibrium of a dominance solvable game is easy. Another family that is guaranteed to possess a pure equilibrium is potential games. However, H&M show that finding a Nash equilibrium in an ordinal potential game may require exponential communication. I don’t know whether things are better for exact potential games.
3. The efficient protocol for dominance solvable games was certainly artificial. However, an efficient natural protocol exists as well: if players just best reply in round robin fashion then they will quickly converge to a pure Nash equilibrium. (This seems to be well known, but the only link that I know of is to one of my papers.)
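Round-robin best replies are easy to illustrate on an identical-interest game, a special case of an exact potential game in which the common payoff itself is the potential (the random game below is my own example; each improving move strictly raises the potential, so the dynamics must stop at a pure Nash equilibrium):

```python
import itertools
import random

random.seed(0)
n, m = 4, 5  # 4 players with 5 strategies each

# Identical-interest game: every player receives the common payoff P(s),
# so P is an exact potential function for the game.
P = {s: random.random() for s in itertools.product(range(m), repeat=n)}

s = [0] * n
while True:
    improved = False
    for i in range(n):  # round-robin: each player best-replies in turn
        best = max(range(m), key=lambda a: P[tuple(s[:i] + [a] + s[i + 1:])])
        if P[tuple(s[:i] + [best] + s[i + 1:])] > P[tuple(s)]:
            s[i] = best  # improving move: the potential strictly increases
            improved = True
    if not improved:  # no player can improve: s is a pure Nash equilibrium
        break
print(s)
```

Convergence here is guaranteed by the potential argument; the point of the lower bounds above is that for general (or even ordinal potential) games no such dynamics can always converge quickly.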
### 6 Responses
2. I wonder what the communication complexity of finding a Nash equilibrium using replicator dynamics would be. If you start from the interior, the system will converge to a Nash.
In terms of communication each agent will need to transmit their strategy profile each round. To make it rigorous, I suppose we should approximate strategy profiles using fixed point arithmetic and one could look at the average bits of communication to converge to within epsilon of an equilibrium. The epsilon being limited to how finely you wish to represent a strategy profile. A setup like that would allow one to measure communication in bits anyway.
My experience with the replicator dynamics for routing games is that they converge pretty quickly. I am not sure how fast the number of bits transmitted would go up as epsilon shrinks though.
http://en.wikipedia.org/wiki/Replicator_equation
• While I have not talked about convergence to mixed Nash equilibria yet, Replicator dynamics are certainly an example of the natural dynamics that the communication complexity approach aims to rule out as general mechanisms that always converge quickly. The numeric precision is not the problem but rather the high dimensionality.
For the case of two players (even in an evolutionary single-population setting), if the best reply to each strategy i is strategy i+1, then it will certainly take about m steps (m is the size of the strategy space) to “march all the way” to have enough weight on strategy m as required for equilibrium. For n players, it is possible to design utility functions that essentially “fold” a sequence of best replies that is exponential in n into the product of the strategy spaces. For such utilities we get an exponential convergence time for replicator dynamics as well as other “sequential reply strategies”, independently of epsilon.
|
http://mathhelpforum.com/calculus/7431-divergence-series.html
|
# Thread:
1. ## Divergence of Series
Can someone explain to me why one series is convergent and the other one divergent? Both seem to tend to 0 to me. Thanks a lot for your help
http://www.math.uni-siegen.de/numeri...ine/img883.gif
http://www.math.uni-siegen.de/numeri...ine/img888.gif
2. Originally Posted by sportillo
Can someone explain to me why one series is convergent and the other one divergent? Both seem to tend to 0 to me. Thanks a lot for your help
http://www.math.uni-siegen.de/numeri...ine/img883.gif
http://www.math.uni-siegen.de/numeri...ine/img888.gif
The general term goes to zero as n becomes large, but in neither case can the sum be zero, since these are sums of positive terms and therefore each sum must be bigger than its first term.
The first is the harmonic series, which can be shown to diverge by grouping the terms:
$\sum_{k=1}^{\infty} \frac{1}{k} = 1 + \frac{1}{2} + \left(\frac{1}{3} + \frac{1}{4}\right) + \left(\frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8}\right) + \cdots$
Each group after the first has $2^{n-1}$ terms, the smallest of which is $1/2^n$, so each group sums to more than $1/2$:
$\sum_{k=1}^{\infty} \frac{1}{k} > 1 + \frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \cdots$
which shows that the series is divergent.
Showing that the second converges is fairly simple; the integral test should do the job. Determining what it sums to is trickier, but Robin Chapman gives a number of methods here.
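A quick numerical illustration of the contrast (my own addition): the partial sums of $1/k$ keep growing like $\ln n$, while the partial sums of $1/k^2$ settle down near $\pi^2/6$:

```python
import math

H = Q = 0.0
n = 1_000_000
for k in range(1, n + 1):
    H += 1 / k      # harmonic series: partial sums keep growing
    Q += 1 / k**2   # converges

print(H - math.log(n))    # approaches the Euler-Mascheroni constant 0.5772...
print(Q, math.pi**2 / 6)  # Q is already within about 1e-6 of pi^2/6
```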
RonL
|
http://mathhelpforum.com/calculus/85682-derivative-application.html
|
# Thread:
1. ## derivative application
Find the equation of any quadratics that passes through the origin and are tangent to both y=-2x-4 and y=8x-49.
I have thought about this problem for most of the afternoon now and got nowhere. I tried drawing it to see the problem, but no good. I just don't know where to start or how to approach this problem. What/how should I be thinking?
Thank you
2. Hi
The quadratics you are looking for pass through the origin, so their generic equation is $f(x) = ax^2 + bx$
The tangent at a point whose abscissa is $x_0$ is :
$T_0 : y = (2ax_0+b)x-ax_0^2$
The quadratics are tangent to both y=-2x-4 and y=8x-49 iff there exist $x_0$ and $x_1$ such that
$2ax_0+b = -2$
$-ax_0^2 = -4$
$2ax_1+b = 8$
$-ax_1^2 = -49$
Solve for a, b, x0 and x1.
You will find 2 quadratics.
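The system can be reduced by hand and then solved numerically; the variable elimination in the comments below is my own working, not part of the original hint:

```python
import math

# From -a*x0^2 = -4 and -a*x1^2 = -49: a*x0^2 = 4 and a*x1^2 = 49.
# Subtracting the slope conditions 2*a*x0 + b = -2 and 2*a*x1 + b = 8
# gives a*(x1 - x0) = 5; dividing a*(x1^2 - x0^2) = 45 by it gives
# x0 + x1 = 9.  Substituting x1 = 9 - x0 and a = 5/(9 - 2*x0) into
# a*x0^2 = 4 yields the quadratic 5*x0^2 + 8*x0 - 36 = 0.
disc = math.sqrt(8**2 + 4 * 5 * 36)  # discriminant, = 28
solutions = []
for x0 in ((-8 + disc) / 10, (-8 - disc) / 10):
    a = 5 / (9 - 2 * x0)
    b = -2 - 2 * a * x0
    solutions.append((a, b))
    print(f"f(x) = {a:.6g}*x^2 + {b:+.6g}*x")
```

This recovers the two quadratics: $f(x) = x^2 - 6x$ and $f(x) = \frac{25}{81}x^2 + \frac{2}{9}x$.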
|
http://math.stackexchange.com/questions/19642/can-a-rule-be-formulated-to-explain-this-to-7-year-old
|
# Can a rule be formulated to explain this to 7 year old?
I'm trying to teach math to my 7-year-old daughter. I'm teaching the following type of equations. $$\cdots - x = y$$
I'm able to explain her the rule that:
when $\cdots - x = y$, we can always take $x$ (the value on the left of the equation) to the other side of the $=$ sign, flip its sign ($-$ to $+$ and vice versa), and get the answer.
Meaning when $\cdots - x = y$, we can always do $\cdots = y + x$ and get the answer.
This rule works for \begin{align*} x + \cdots &= y \\ \cdots + x &= y\\ \cdots - x &= y \\ \end{align*}
But it doesn't work for $x - \cdots = y$, because if you apply the rule you get -(answer) and not just (answer)
My question is given that I'm trying to teach this to 7 year old, is there any better method where one rule would cover all 4 cases? Any ideas, thoughts...
\begin{align*} - x + \cdots &= y\\ -\cdots + x &= y\\
- \cdots - x &= y\\
- x - \cdots &= y \end{align*}
-
4
I always try to avoid that rule (moving over and replacing the sign) as it doesn't give any insight into the problem. If you have, for example, 3+x=5, I always say we add -3 to both sides (because that doesn't change equality), and we get -3+3+x=-3+5, hence x=8. – Fredrik Meyer Jan 31 '11 at 1:55
6
I agree with Fredrik that thinking in terms of doing the same thing to both sides is a better approach. However, I disagree with his arithmetic... :-) – Jesse Madnick Jan 31 '11 at 2:04
I've to come up with examples where i avoid negative numbers, they haven't reached there yet. – zobars Jan 31 '11 at 2:18
## 2 Answers
You shouldn't be using "rules" at all; that is not what mathematics is about, at any level. (This is an enormous pet peeve of mine. There is a commercial floating around Hulu about some kind of online tutoring program where a woman describes to a girl the rule for computing the area of a triangle given its base and height, and then completely fails to draw the diagram that explains why this rule works. It annoys me to no end. (This particular example is also brought up in the infamous Lockhart's lament.))
I have some amount of money in my bank account. When I withdraw $x$ dollars, I have $y$ dollars left. How much money did I have originally? $x + y$. How much money do I have now? $(x + y) - x = y$.
I have some amount of money in my bank account. When I deposit $x$ dollars, I now have $y$ dollars. How much money did I have originally? $y - x$. How much money do I have now? $(y - x) + x = y$.
At some point it is probably a good idea to mention that $x + y$ is the same as $y + x$ (that is, depositing $x$ dollars and then depositing $y$ dollars is the same as depositing $y$ dollars and then depositing $x$ dollars). Then you've covered all of the "cases."
Alternately, a physical analogy ought to work well. I am some distance away from a wall. When I move $x$ feet towards the wall, I am $y$ feet away from the wall. How far was I originally away from the wall? $x + y$. How far am I away from the wall now? $(x + y) - x = y$.
I am some distance away from a wall. When I move $x$ feet away from the wall, I am $y$ feet away from the wall. How far was I originally away from the wall? $y - x$. How far am I away from the wall now? $(y - x) + x = y$.
-
Hmm.. Why do you say math is not about rules ? I agree that i would really like them to learn what's actually going on when i'm teaching them x + .. = y type of equations and for single digit integers she's able to do mental math(similar to your bank deposit examples) and come up with answers, it's when i reached to double digit numbers i had to use something more than mental math. What would that be ? Am i just trying to teach something that has to wait ?? – zobars Jan 31 '11 at 2:07
2
@zobars: mathematics is as much about following rules as literature is about writing words. If you'd like a thorough discussion of this, you can read Lockhart's lament, which I've linked to above. – Qiaochu Yuan Jan 31 '11 at 2:12
Alright Qiaochu, i get the point. I would personally not like to go by rules. But i'm trying to find out best way to teach an elementary level kid and i get the idea that i could still do that without forming rules, they would get better idea with physical analogy. I'll give it a try and see how it goes.. Thanks for your answer and time. – zobars Jan 31 '11 at 2:15
3
@zobars: There's a place for rules and memorization, and a place for understanding. I would say everyone should memorize the multiplication tables, simply because having them at your fingertips is so much more useful than trying to figure them out from scratch every time; but understanding what multiplication is will be more useful than not understanding it. But once you get past the very basics, memorization just tends to get in the way of both understanding and ability to use the material. Some memorization is still useful (e.g., $(\sin x)'=\cos x$), but much less than people think. – Arturo Magidin Jan 31 '11 at 3:09
@Arturo, can't agree more with you. Just want to tailor this to elementary kids. – zobars Jan 31 '11 at 19:10
I'm not clear on what the "rule" you say "doesn't work" is... Still...
As Qiaochu says, don't do "rules". The key to all of these manipulations is:
If two things are equal, and you do the same thing to each of them, the results will also be equal.
So, if $A$ is equal to $B$, then adding $2$ to $A$ will result in the same thing as adding $2$ to $B$: if $A=B$, then $A+2 = B+2$.
If you have $\cdots - x = y$, then you have two things that are equal. Adding $x$ to both will still give you equal things, so $$(\cdots - x) + x = y + x.$$ Then using the fact that $-x+x = 0$, you get $\cdots = y+x$.
All of the manipulations you propose are instances of this: if you have two equal things, and you do the same thing to both, the results are still equal.
-
Yes Arturo, i'm realizing exactly how i should go about this now. I was just presuming that it would be hard for them to really understand equality, but i guess i don't know until i've tried. Thanks. – zobars Jan 31 '11 at 2:16
@zobars: You may want to go over the points of equality; they are intuitive enough, so perhaps they won't have any trouble with them. Everything is equal to itself; if $A$ is equal to $B$, then $B$ is equal to $A$; and if $A$ is equal to $B$, and $B$ is equal to $C$, then $A$ is equal to $C$. Also, you may want to delete one of the two comments above. – Arturo Magidin Jan 31 '11 at 2:55
Yes that makes sense. Thanks. i finally figured out how to delete a comment... – zobars Jan 31 '11 at 3:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 55, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.962649941444397, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/tagged/geometry?sort=votes&pagesize=30
|
Tagged Questions
The geometry tag has no wiki summary.
2answers
160 views
Generalized Complex Geometry and Theoretical Physics
I have been wondering about some of the different uses of Generalized Complex Geometry (GCG) in Physics. Without going into mathematical detail (see Gualtieri's thesis for reference), a Generalized ...
2answers
486 views
The Reeh-Schlieder theorem and quantum geometry
There have been some very nice discussions recently centered around the question of whether gravity and the geometry and topology of the classical world we see about us, could be phenomena which ...
6answers
1k views
Experimental evidence of a fourth spatial dimension?
As human beings, we observe the world in which we live in three dimensions. However, it is certainly theoretically possible that more dimensions exist. Is there any direct or indirect evidence ...
3answers
290 views
Question about associative 3-cycles on G2 manifolds
Let $X$ be a manifold with $G_2$ holonomy and $\Phi$ be the fundamental associative 3-form on $X$. Let $*\Phi$ be the dual co-associative 4-form on $X$. Now consider a particular associative 3-cycle ...
4answers
533 views
Why is there a search for an exchange particle for gravity?
Here's a question on something I've been wondering about for quite some time. (I am not a physicist.) If I understand correctly, according to Einstein's General Theory of Relativity, mass results in ...
4answers
8k views
Does the rotation of the earth dramatically affect airplane flight time?
Say I'm flying from Sydney, to Los Angeles (S2LA), back to Sydney (LA2S). During S2LA, travelling with the rotation of the earth, would the flight time be longer than LA2S on account of Los Angeles ...
2answers
376 views
Why are conformal transformations so prevalent in physics?
What is it about conformal transformations that make them so widely applicable in physics? These preserve angles, in other words directions (locally), and I can understand that might be useful. Also, ...
1answer
277 views
Why are fractal geometries useful for compact antenna design?
While most of what I've read about fractals has been dubious in nature, over the years, I keep hearing that these sorts of self-similar (or approximately self-similar) geometries are useful in the ...
1answer
401 views
Is C60 really the “most spherical” fullerene?
In the late 80's and early 90's, Smalley and others made claims that the C60 fullerene bearing icosahedral symmetry was the most spherical molecule known, and perhaps the most spherical that could ...
1answer
302 views
The role of metric in the Wave Equation
The wave equation is often written in the form $$(\partial^2_t-\Delta)u=0,$$ involving the Laplace-Beltrami operator $\Delta$. However, the Laplace-Beltrami operator $\Delta$ is defined only in the ...
1answer
148 views
An astronaut and a vengeful pole
Imagine an astronaut floating in free-space with no significant nearby gravitational influences. The astronaut takes an arbitrarily thin pole of uniform density with length $l$ and mass $m$, orients ...
2answers
207 views
Space-time geometry and metric
I am confused in one question in general relativity, why we can always express a space-time geometry only by metric. It means a metric, which is just about distance in tangent space, can tell us all ...
2answers
212 views
Can the electroweak/strong forces, and/or quantum mechanics be thought of as geometric?
Can the electroweak and strong forces be written as geometric theories? - Why and why not? Can quantum mechanics in general? For example, the Kaluza-Klein theory explains the electromagnetic field ...
2answers
149 views
Is a semi-Euclidean space possible?
Does exists a geometry (3d for example) which is Euclidean in 2 dimensions (x and y coordinates) and non-Euclidean when the third dimension (z) is taken into account? In other words a space where it ...
2answers
344 views
Prerequisites to start the study of noncommutative geometry in physics
What are prerequisites (in mathematics and physics), that one should know about for getting into use of ideas from noncommutative geometry in physics?
1answer
227 views
Relativistic space-time geometry
What subject (suggest book titles, etc.) should I study to get a clear grasping of hypersurfaces, 2-surfaces, and integration on them, mostly in special relativity (I'm not messing with general ...
1answer
97 views
Is there an upper bound on the gauge group rank in F-theory compactifications on CY 4-folds?
It is known that in F-theory compactifications on CY 4-folds one can get gauge groups with very large ranks. The largest single factor* gauge group for compact CY 4-folds I found in the literature is ...
1answer
128 views
How is the equation of motion on an ellipse derived?
I would like to show that a particle orbiting another will follow the trajectory \begin{equation} r = \frac{a(1-e^2)}{1 + e \cos(\theta)}. \end{equation} I would like to do this with minimal ...
1answer
95 views
Flat space metrics
This question concerns the metric of a flat space: $$ds^2=dr^2+cr^2\,\,d\theta^2$$ where $c$ is a constant. Why is it necessary to set $c=1$ to avoid singularities and to restrict $r\ge 0$? Thanks.
11answers
749 views
Is it possible for a physical object to have a irrational length?
Suppose I have a caliper that is infinitely precise. Also suppose that this caliper returns not a number, but rather whether the precise length is rational or irrational. If I were to use this ...
2answers
369 views
Where do I start with Non-Euclidean Geometry?
I've been trying to grok General Relativity for a while now, and I've been having some trouble. Many physics textbooks gloss over the subject with an "it's too advanced for this medium", and many ...
Nanotube chiral angle as a function of $n$ and $m$
I'm looking into nanotubes and I thought I'd assure myself that the basic geometry equations are indeed correct. No problems for the radius, I quickly found the known formula: R = ...
Is Dyson Sphere a stable construction?
Suppose that a star is encompassed by a Dyson Sphere. Do we need a position control system for the Dyson Sphere to keep its origin always aligned with the center of the star? Will it stay aligned ...
Gravitation is not force?
Einstein said that gravity can be looked at as curvature in space- time and not as a force that is acting between bodies. (Actually what Einstein said was that gravity was curvature in space-time and ...
What's a pseudo-rotation?
I'm sorry for this lexical, probably extremely elementary, question. But what is a pseudo-rotation? I just read this term for the first time, in the beginning of the 4th chapter book of CFT by Di ...
Uniqueness and existence of polygonal orbits through a spherical shell
Say we have a spherical wire mesh raised to a negative voltage. Then let's say we release a proton from near the surface, and away from the surface, at some angle and speed. Also, imagine that the ...
How would one calculate the amount of water contained in a cloud?
So I was looking out the sky one day and I wondered how I would go about calculating how much water was contained in a cloud. I figured the following simple outline 1) We need to roughly know how big ...
Formulation of general relativity
EDIT: I think I can pinpoint my confusion a bit better. Here comes my updated question (I'm not sure what the standard way of doing things is - please let me know if I should delete the old version). ...
How does holographic voxel density scale with holographic film metrics?
I'm trying to understand how one can generate bounds on the effective number of voxels (volumetric pixels) in a hologram, or information density, provided various metrics for the two-dimensional ...
Why was the truncated icosahedron (i.e. soccer ball) geometry chosen for the implosive lenses in the “Fat Man” atomic bomb?
Quoting from Wolfram Mathworld: " It is the shape used in the construction of soccer balls, and it was also the configuration of the lenses used for focusing the explosive shock waves of the ...
Shape of electric charges on sphere in equilibrium state
When electric charges of equal magnitude and sign are released on a regular sphere (and assume that they stick to the surface of the sphere, but they are free to move along its surface), what is the ...
Why is physical space equivalent to $\mathbb{R}^3$?
Why is physical space equivalent to $\mathbb{R}^3$, as opposed to e.g. $\mathbb{Q}^3$? I am trying to understand what would be the logical reasons behind our assumption that our physical space is ...
The shape of the earth$\ldots$
....is an oblate spheroid because centrifugal force stretches the tropical regions to a point farther from the center than they would be if the planet did not rotate. So we all learned in childhood, ...
How to calculate the projected area at different angles/vectors?
Please help me with the following. I want to know if there is an equation/set of equations to find out the projected area of a (3-D) cube when it is oriented at different angles of attack to the fluid ...
Tiling hexagons on a sphere surface
In attemopt to understand basic principles of non-Euclidean geometry and its relation to physical space, I am reading General Relativity by Ben Crowell. On page 149 there is a discussion of hexagons ...
Determining the center of mass of a cone
I'm having some trouble with a simple classical mechanics problem, where I need to calculate the center of mass of a cone whose base radius is $a$ and height $h$..! I know the required equation. But, ...
No attraction radially in an cylinder of spherical magnets
I have a set of small magnetic spheres the size of ball bearings. When many of them are built into a cylinder such that they are hexagonally packed, there is no magnetic attraction radially (between ...
Is it possible to mechanically isomerize an sp3 hybridized carbon center?
Imagine I have an sp3 hybridized carbon attached to four separate polyethylene chains. By pulling on the polyethylene chains in some manner, is it possible for me to mechanically isomerize the chiral ...
Can we project a 4D world using 3D video technology?
Traditional movies, TV, etc, faithfully show our 3-dimensional world using 2 dimensions. So can we have a movie that shows a 4-dimensional world using 3D technology?
Why is the world sheet of an open string a cylinder?
I went to a lecture a few weeks ago and was told the following: The world sheet of a closed string is a normal, standing cylinder. The world sheet of an open string is a cylinder on its side. This ...
How far does typical view of clouds/atmosphere extend?
The specific "sub questions" I'm asking are: When you are looking at clouds just on the horizon, how far away would they be? How wide (in km) is that total field of vision at roughly cloud height. ...
Stroboscope-and-telegraph problem
Narrative: Consider, in a suitably flat region, two straight lines which don't necessarily intersect. Let vector $\mathbf{x}$ point along one line, and vector $\mathbf{y}$ point along the other. Let ...
Proof that a spherical lens is stigmatic
In geometric optics, we generally allow that, for example in the case of a convex lens, rays coming from a particular point get refracted towards another particular point on the opposite side of the ...
Dirichlet's work on gravity in non-Euclidean space?
In the book The Norton History of Astronomy and Cosmology by the late John North I have found the following statement (page 514): "The German mathematician Lejeune Dirichlet studied the law of ...
Getting the AdS metric from maximally symmetric spaces
I am familiar with the way we derive the form of the FRW metric by just using the fact that we have a maximally symmetric space i.e the universe is homogeneous and isotropic in spatial coordinates. ...
Are there any clear and expressive plainword sense of metric tensor components?
Can the following groups of components of metric tensor can assigned a clear sense? https://docs.google.com/drawings/pub?id=1kVqkN1gT-a2fDy2S851l9iQKaMfaatCDo517OSZBHEo&w=467&h=228 I have ...
Is it necessary to embed a 4D surface in 5D space?
Lets consider the line element: $$ds^2=dr^2+r^2[d\theta^2+\sin^2\theta d\phi^2]$$ There are three variables r,theta and phi. If we use a surface constraint like r=constant the number of independent ...
Rømer's determination of the speed of light
I am trying to understand Rømer's determination of the speed of light ($c$). The geometry of the situation is shown in the image below. The determination involves measuring apparent fluctuations in ...
Does a cycle (in Simple Harmonic Motion) have to equal 2π?
So, I search for the definition of cycle and I get this in Wikipedia: A turn is a unit of angle measurement equal to 360° or 2π radians (or ...). A turn is also referred to as a revolution or ...
How to deduce this free body diagram?
Can someone provide a trigonometry/geometry insight to deduce the angle of the plane is the same as the angle of the component of the weight?
http://nrich.maths.org/5830
Very Old Man
Stage: 5 Challenge Level:
Li Ching-Yuen was a Chinese herbalist and longevity expert who was known to have died in 1928. He claimed to have been born in 1734, giving him a lifespan of 196 years. Investigations into birth records indicated that he was actually born in 1678, giving an even longer lifespan of 250 years!
Whilst this may seem unbelievable, is it? In this question we use statistics to look into the lifespan of very old people.
Whilst there is no conclusive historical evidence to support the birth date of Li Ching-Yuen, the following data concerning lifespans are known [at the time of writing this question (October 2008); sources given below]
• There were about 450000 people in the world aged over 100.
• There were 82 living people who were known to be over the age of 110
• There were 2 people known to be over the age of 115 (ages 115 and 116)
• There are 31 unverified claims of people over the age of 110, two of whom claimed to be aged 115 and 116.
• In the past 50 years, 25 people are known for certain to have lived beyond the age of 115.
• In the past 50 years, 2 people are known for certain to have lived beyond the age of 120 (dying at ages 120 and 122).
A hypothesis $H$ is made saying: Once you make it to your 100th birthday there is a fixed probability $p$ of surviving to your next birthday on any given subsequent birthday. For example, if $p$ were $0.05$ then the hypothesis says that on my 100th birthday there is a $5$% chance of surviving until I am $101$; on my $101$st birthday there would be a $5$% chance of surviving until I am $102$ and so on.
Does the data approximately fit this hypothesis? What values of $p$ would seem most appropriate?
Assume that the hypothesis is true with a generous value of $p=0.5$. With this hypothesis, how many 100 year olds would need to be in a room before we might feel confident that one would live to the age of 196 suggested by Li Ching-Yuen himself? How does this number compare with the number of people on earth today (6.7 billion)?
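The last question above can be answered with a few lines of arithmetic. This is a quick sanity-check sketch (not part of the original problem page); the probability and population figures come from the problem text, everything else is plain calculation:

```python
# Under hypothesis H with p = 0.5, surviving from age 100 to age 196
# means surviving 96 consecutive birthdays, each with probability 0.5.
p = 0.5
years = 196 - 100
prob_reach_196 = p ** years        # chance a single 100-year-old reaches 196 (~1.26e-29)
room_size = 1 / prob_reach_196     # 100-year-olds needed to expect one survivor (~7.9e28)
world_population = 6.7e9

print(f"P(reach 196 | reached 100) = {prob_reach_196:.2e}")
print(f"100-year-olds needed: {room_size:.2e}")
print(f"Ratio to world population: {room_size / world_population:.2e}")
```

The required "room" of 100-year-olds is about $10^{19}$ times the population of the earth, which is the point of the exercise.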
Extension: There are many statistical complications involved in predicting death rates. How many can you think of? How might these affect these statistics in future?
The data in this problem was collected from the websites of The Gerontologists and The Centenarians .
Living is a risky business. To see more about the statistics concerning living and for an estimate of your life expectancy, see the Understanding Uncertainty pages.
The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
http://mathhelpforum.com/pre-calculus/174576-dis-continuous-function-sketch-graph.html
1. ## dis/continuous function - Sketch Graph
Hello,
how can I sketch the Graph of the following function?
$f(x) = \begin{cases} 4 & x < 2 \\ x^2 & 2 \le x \end{cases}$
and I have to know whether this function is continuous or not, but I think I can work that out after sketching the function?
Infi
2. Originally Posted by Infi
Hello,
how can I sketch the Graph of the following function?
$f(x) = \begin{cases} 4 & x < 2 \\ x^2 & 2 \le x \end{cases}$
and I have to know whether this function is continuous or not, but I think I can work that out after sketching the function?
Infi
Hi Infi,
$f(x) = \left \{\begin {array}{cc}4, & \mbox{ if } x < 2 \\ x^2, & \mbox{ if } x \ge 2 \end {array}\right$
This is a piecewise function. The first branch is a horizontal line at f(x) = 4 for all x less than 2; f(2) is not defined on this branch.
The second branch is the right half of the parabola f(x) = x^2, defined for all x greater than or equal to 2.
Since the first branch approaches 4 as x approaches 2 from the left, and the second branch gives f(2) = 2^2 = 4, the two branches meet at (2, 4), so the function is continuous for all values of x.
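As a numerical cross-check of the continuity argument (my own sketch, not part of the thread), one can compare the two one-sided limits at x = 2 with f(2):

```python
# f as defined in the thread: 4 for x < 2, x^2 for x >= 2.
def f(x):
    return 4 if x < 2 else x ** 2

left = [f(2 - 10 ** -k) for k in range(1, 8)]    # approaching 2 from below
right = [f(2 + 10 ** -k) for k in range(1, 8)]   # approaching 2 from above

print(left[-1], right[-1], f(2))  # both one-sided limits agree with f(2) = 4
```

Both sequences tend to 4, matching f(2), which is exactly the continuity condition at the joining point.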
http://www.physicsforums.com/showthread.php?p=4087769
Physics Forums
## Simple Integration Doubt regarding integral of dy
1. The problem statement, all variables and given/known data
What is the result of ∫dx? Is it x or x + C? I thought about this two ways:
1) Through indefinite integration, it gives x+C
2) If I take a geometric interpretation, this integral gives me the area under the [f(x) and x] graph where f(x)=1 so by that the integral must be x(is there a C????)
If it were x + C, then in the Khan Academy video
http://www.khanacademy.org/math/calc...tial-equations
He puts
∫dy = y; wouldn't it be ∫dy = y + C?
2. Relevant equations
3. The attempt at a solution
Getting confused!
Let us assume dy = dx and integrate from 0 to y and from a to x respectively. Here the value of a determines the expression of the function, namely the C in your equation. In the case where the result is just x, a is taken as 0; when a is any other given number, we obtain the result x + C.
Thanks for the reply. I understood what you wrote above and it is quite helpful. But in the case of solving differential equations or in any other application of calculus, will we take it as x or x+C similarly will we take it as y or y+C (I am self-studying calculus right now so I will be posting many doubts which I have. Help like this will be appreciated) :)
If it is in INDEFINITE integration, there will ALWAYS be an arbitrary constant C.
If it is a DEFINITE integration, there is NEVER a C.
If there are other conditions given to you (say the value of the integrate at some point), you might be able to calculate C.
Then, in the Khan Academy video whose link I have given above, shouldn't it be y + C in the first part of the video? If there is, then wouldn't the C's cancel out, giving only the variables?
Each indefinite integration yields an arbitrary constant C. So ∫dy will give a constant $C_y$, ∫dx will give another constant $C_{x_1}$ and ∫x²dx another. Having said that, Khan Academy has absorbed all three of them into one and named it C, which is a valid thing to do as long as they are all additive constants.
In the case of a simple differential equation, if we know the initial conditions, we can work out C by substituting values into the expression. It is equivalent to integrating $\int^{y}_{y_{0}}g(y)dy = \int^{x}_{x_{0}}f(x)dx$ where $y_{0}$ and $x_{0}$ are the initial conditions. I must apologise for the ambiguities above. I was trying to express the indefinite integration with definite integration. Both expressions are equivalent in this case.
Quote by sarvesh0303 1. The problem statement, all variables and given/known data What is the result of ∫dx??? is it x or x+C I thought about this two ways: 1) Through indefinite integration, it gives x+C 2) If I take a geometric interpretation, this integral gives me the area under the [f(x) and x] graph where f(x)=1 so by that the integral must be x(is there a C????)
The area of what region? You have an upper boundary (y= 1) and lower boundary (y= 0), but have not specified left and right boundaries. If you take some fixed value, $x_0$, as left boundary and the variable x as right boundary, you have a rectangle of height 1 and width $x- x_0$. The integral is $x- x_0$ which is the same as x+ C for C equal to $x_0$.
Quote by sarvesh0303 1. The problem statement, all variables and given/known data What is the result of ∫dx??? is it x or x+C I thought about this two ways: 1) Through indefinite integration, it gives x+C 2) If I take a geometric interpretation, this integral gives me the area under the [f(x) and x] graph where f(x)=1 so by that the integral must be x(is there a C????) If it were x+C then in the khan academy video http://www.khanacademy.org/math/calc...tial-equations He puts ∫dy=y, wouldn't it be ∫dy=y+C 2. Relevant equations 3. The attempt at a solution Getting confused!
Constants of integration are arbitrary as long as no additional information is fed into the problem. So, if we have and equation of the form dy = dx, we can integrate on both sides to get y + K = x + L, where K and L are two separate constants of integration. We can, of course, re-write this as y = x + C, where C = L - K is also an arbitrary constant. So, for example, y = x, y = x+5, y = x - 3, y = x + 2π, ... all satisfy dy = dx.
RGV
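A small numerical check of the point above (my own sketch, not from the thread): every member of the family y = x + C has the same derivative, which is why dy = dx alone cannot pin down the constant.

```python
def deriv(g, x, h=1e-6):
    """Central-difference approximation of g'(x)."""
    return (g(x + h) - g(x - h)) / (2 * h)

# y = x, y = x + 5, y = x - 3, ... all satisfy dy/dx = 1
slopes = [deriv(lambda t, C=C: t + C, 1.234) for C in (-3.0, 0.0, 5.0, 6.2832)]
print(slopes)  # each slope is 1 (up to floating-point error)
```

Only extra information, such as an initial condition y(x0) = y0, selects one curve from the family.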
http://stats.stackexchange.com/questions/13314/is-r2-useful-or-dangerous/13317
# Is $R^2$ useful or dangerous?
I was skimming through some lecture notes by Cosma Shalizi (in particular, section 2.1.1 of the second lecture), and was reminded that you can get very low $R^2$ even when you have a completely linear model.
To paraphrase Shalizi's example: suppose you have a model $Y = aX + \epsilon$, where $a$ is known. Then $\newcommand{\Var}{\mathrm{Var}}\Var[Y] = a^2 \Var[X] + \Var[\epsilon]$ and the amount of explained variance is $a^2 \Var[X]$, so $R^2 = \frac{a^2 \Var[X]}{a^2 \Var[X] + \Var[\epsilon]}$. This goes to 0 as $\Var[X] \rightarrow 0$ and to 1 as $\Var[X] \rightarrow \infty$.
Conversely, you can get high $R^2$ even when your model is noticeably non-linear. (Anyone have a good example offhand?)
So when is $R^2$ a useful statistic, and when should it be ignored?
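The first point is easy to see by simulation. This is my own sketch in plain Python; the model $Y = aX + \epsilon$ is from the paraphrase above, while the seed, sample size, and standard deviations are arbitrary choices:

```python
import random
import statistics

def corr(xs, ys):
    """Sample Pearson correlation."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

random.seed(0)
a, n = 2.0, 20_000
r2_by_sd = {}
for sd_x in (0.1, 1.0, 10.0):
    xs = [random.gauss(0.0, sd_x) for _ in range(n)]
    ys = [a * x + random.gauss(0.0, 1.0) for x in xs]  # Y = aX + eps, same model each time
    r2_by_sd[sd_x] = corr(xs, ys) ** 2
    print(f"sd(X) = {sd_x:4.1f} -> R^2 = {r2_by_sd[sd_x]:.3f}")
# R^2 climbs from roughly 0.04 toward 1 even though the (perfectly linear)
# model never changes; only Var[X] does.
```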
I have nothing statistical to add to the excellent answers given (esp. the one by @whuber) but I think the right answer is "R-squared: Useful and dangerous". Like pretty much any statistic. – Peter Flom Jul 21 '11 at 10:47
The answer to this question is: "Yes" – EpiGrad Apr 23 '12 at 20:52
## 6 Answers
To address the first question, consider the model
$$Y = X + \sin(X) + \varepsilon$$
with iid $\varepsilon$ of mean zero and finite variance. As the range of $X$ (thought of as fixed or random) increases, $R^2$ goes to 1. Nevertheless, if the variance of $\varepsilon$ is small (around 1 or less), the data are "noticeably non-linear." In the plots, $var(\varepsilon)=1$.
Incidentally, an easy way to get a small $R^2$ is to slice the independent variables into narrow ranges. The regression (using exactly the same model) within each range will have a low $R^2$ even when the full regression based on all the data has a high $R^2$. Contemplating this situation is an informative exercise and good preparation for the second question.
Both the following plots use the same data. The $R^2$ for the full regression is 0.86. The $R^2$ for the slices (of width 1/2 from -5/2 to 5/2) are .16, .18, .07, .14, .08, .17, .20, .12, .01, .00, reading left to right. If anything, the fits get better in the sliced situation because the 10 separate lines can more closely conform to the data within their narrow ranges. Although the $R^2$ for all the slices are far below the full $R^2$, neither the strength of the relationship, the linearity, nor indeed any aspect of the data (except the range of $X$ used for the regression) has changed.
(One might object that this slicing procedure changes the distribution of $X$. That is true, but it nevertheless corresponds with the most common use of $R^2$ in fixed-effects modeling and reveals the degree to which $R^2$ is telling us about the variance of $X$ in the random-effects situation. In particular, when $X$ is constrained to vary within a smaller interval of its natural range, $R^2$ will usually drop.)
The basic problem with $R^2$ is that it depends on too many things (even when adjusted in multiple regression), but most especially on the variance of the independent variables and the variance of the residuals. Normally it tells us nothing about "linearity" or "strength of relationship" or even "goodness of fit" for comparing a sequence of models.
Most of the time you can find a better statistic than $R^2$. For model selection you can look to AIC and BIC; for expressing the adequacy of a model, look at the variance of the residuals.
This brings us finally to the second question. One situation in which $R^2$ might have some use is when the independent variables are set to standard values, essentially controlling for the effect of their variance. Then $1 - R^2$ is really a proxy for the variance of the residuals, suitably standardized.
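The first example can be reproduced numerically. In this sketch (mine, not whuber's code) the model $Y = X + \sin(X) + \varepsilon$ with $var(\varepsilon) = 1$ is taken from the answer above; the three ranges for $X$ and the seed are my choices:

```python
import math
import random

def r_squared(xs, ys):
    """R^2 of the least-squares line fit to (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy ** 2 / (sxx * syy)

random.seed(1)
results = {}
for half_range in (2.0, 10.0, 100.0):
    xs = [random.uniform(-half_range, half_range) for _ in range(20_000)]
    ys = [x + math.sin(x) + random.gauss(0.0, 1.0) for x in xs]
    results[half_range] = r_squared(xs, ys)
    print(f"X in [-{half_range:g}, {half_range:g}] -> R^2 = {results[half_range]:.4f}")
# The relationship is identical in every run; only the range of X changes,
# yet R^2 approaches 1 as the range grows.
```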
What an amazingly thorough and responsive answer by @whhuber – Peter Flom Jul 21 '11 at 10:40
Don't AIC and BIC explicitly adjust for the number of estimated parameters? If so, a comparison to an unadjusted R^2 seems unfair. So I ask, does your critique hold for adjusted R^2? It seems like if you were penalized for 'slicing', adjusted R^2 would be able to go back to telling you about the goodness of fit of the model. – Russell S. Pierce Jul 22 '11 at 13:56
@dr My critique applies perfectly to adjusted $R^2$. The only cases where there's much of a difference between $R^2$ and the adjusted $R^2$ are when you are using loads of parameters compared to the data. In the slicing example there were almost 1,000 data points and the slicing added only 18 parameters; the adjustments to $R^2$ wouldn't even affect the second decimal place, except possibly in the end segments where there were only a few dozen data points: and it would lower them, actually strengthening the argument. – whuber♦ Jul 22 '11 at 14:01
The answer to the question in your first comment ought to depend on your objective and there are several ways to interpret "testing for a linear relationship." One is, you want to test whether the coefficient is nonzero. Another is, you want to know whether there is evidence of nonlinearity. $R^2$ (by itself) isn't terribly useful for either, although we know that a high $R^2$ with plenty of data means their scatterplot looks roughly linear--like my second one or like @macro's example. For each objective there is an appropriate test and its associated p-value. – whuber♦ Jul 22 '11 at 16:41
For your second question we ought to wonder what might be meant by "best" linear fit. One candidate would be any fit that minimizes the residual sum of squares. You could safely use $R^2$ as a proxy for this, but why not examine the (adjusted) root mean square error itself? It's a more useful statistic. – whuber♦ Jul 22 '11 at 16:44
Your example only applies when the variable $\newcommand{\Var}{\mathrm{Var}}X$ should be in the model. It certainly doesn't apply when one uses the usual least squares estimates. To see this, note that if we estimate $a$ by least squares in your example, we get:
$$\hat{a}=\frac{\frac{1}{N}\sum_{i=1}^{N}X_{i}Y_{i}}{\frac{1}{N}\sum_{i=1}^{N}X_{i}^{2}}=\frac{\frac{1}{N}\sum_{i=1}^{N}X_{i}Y_{i}}{s_{X}^{2}+\overline{X}^{2}}$$ Where $s_{X}^2=\frac{1}{N}\sum_{i=1}^{N}(X_{i}-\overline{X})^{2}$ is the (sample) variance of $X$ and $\overline{X}=\frac{1}{N}\sum_{i=1}^{N}X_{i}$ is the (sample) mean of $X$
$$\hat{a}^{2}\Var[X]=\hat{a}^{2}s_{X}^{2}=\frac{\left(\frac{1}{N}\sum_{i=1}^{N}X_{i}Y_{i}\right)^2}{s_{X}^2}\left(\frac{s_{X}^{2}}{s_{X}^{2}+\overline{X}^{2}}\right)^2$$
Now the second term is always less than $1$ (equal to $1$ in the limit) so we get an upper bound for the contribution to $R^2$ from the variable $X$:
$$\hat{a}^{2}\Var[X]\leq \frac{\left(\frac{1}{N}\sum_{i=1}^{N}X_{i}Y_{i}\right)^2}{s_{X}^2}$$
And so unless $\left(\frac{1}{N}\sum_{i=1}^{N}X_{i}Y_{i}\right)^2\to\infty$ as well, we will actually see $R^2\to 0$ as $s_{X}^{2}\to\infty$ (because the numerator goes to zero, but the denominator goes to $\Var[\epsilon]>0$). Additionally, we may get $R^2$ converging to something in between $0$ and $1$ depending on how quickly the two terms diverge. Now the above term will generally diverge faster than $s_{X}^2$ if $X$ should be in the model, and slower if $X$ shouldn't be in the model. In both cases $R^2$ goes in the right direction.
And also note that for any finite data set (i.e. a real one) we can never have $R^2=1$ unless all the errors are exactly zero. This basically indicates that $R^2$ is a relative measure, rather than an absolute one. For unless $R^2$ is actually equal to $1$, we can always find a better fitting model. This is probably the "dangerous" aspect of $R^2$: because it is scaled to be between $0$ and $1$, it seems like we can interpret it in an absolute sense.
It is probably more useful to look at how quickly $R^2$ drops as you add variables into the model. And last, but not least, it should never be ignored in variable selection, as $R^2$ is effectively a sufficient statistic for variable selection - it contains all the information on variable selection that is in the data. The only thing that is needed is to choose the drop in $R^2$ which corresponds to "fitting the errors" - which usually depends on the sample size and the number of variables.
+1 Lots of nice points. The calculations add quantitative insights to the previous replies. – whuber♦ Aug 23 '11 at 16:44
If I can add an example of when $R^2$ is dangerous. Many years ago I was working on some biometric data and being young and foolish I was delighted when I found some statistically significant $R^2$ values for my fancy regressions which I had constructed using stepwise functions. It was only afterwards looking back after my presentation to a large international audience did I realize that given the massive variance of the data - combined with the possible poor representation of the sample with respect to the population, an $R^2$ of 0.02 was utterly meaningless even if it was "statistically significant"...
Those working with statistics need to understand the data!
+1 Very interesting example (and a good story). – whuber♦ Jan 31 '12 at 16:58
No statistic is dangerous if you understand what it means. Sean's example has nothing special to do with R square it is the general problem of being enamored with statistical significance. When we do statistical testing in practice we are only interested in meaningful differences. Two populations never have identical distributions. If they are close to equal we don't care. With very large sample sizes we can detect small unimportant differences. That is why in my medical research consulting I emphasize the difference between clinical and statistical significance. – Michael Chernick May 6 '12 at 1:16
Initially my clients often think that statistical significance is the goal of the research. They need to be shown that it is not the case. – Michael Chernick May 6 '12 at 1:16
When you have a single predictor $R^{2}$ is exactly interpreted as the proportion of variation in $Y$ that can be explained by the linear relationship with $X$. This interpretation must be kept in mind when looking at the value of $R^2$.
You can get a large $R^2$ from a non-linear relationship only when the relationship is close to linear. For example, suppose $Y = e^{X} + \varepsilon$ where $X \sim {\rm Uniform}(2,3)$ and $\varepsilon \sim N(0,1)$. If you do the calculation of
$$R^{2} = {\rm cor}(X, e^{X} + \varepsilon)^{2}$$
you will find it to be around $.914$ (I only approximated this by simulation) despite that the relationship is clearly not linear. The reason is that $e^{X}$ looks an awful lot like a linear function over the interval $(2,3)$.
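The $\approx .914$ figure is easy to reproduce. This is my own simulation sketch of the calculation above (plain Python; the distributions are from the answer, the seed and sample size are arbitrary):

```python
import math
import random

random.seed(2)
n = 100_000
xs = [random.uniform(2.0, 3.0) for _ in range(n)]        # X ~ Uniform(2, 3)
ys = [math.exp(x) + random.gauss(0.0, 1.0) for x in xs]  # Y = e^X + eps

# squared sample correlation of X and Y
mx, my = sum(xs) / n, sum(ys) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
sxx = sum((x - mx) ** 2 for x in xs)
syy = sum((y - my) ** 2 for y in ys)
r2 = sxy ** 2 / (sxx * syy)
print(f"R^2 = {r2:.3f}")  # close to 0.914: e^X is nearly linear on (2, 3)
```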
To the remarks below by Erik and Macro I don't think anyone has it out for me and it is probably better to have one combined answer instead of three separate ones but why does it matter to the point that so much discussion centers around how you write things and where you write it instead of fcusing on what is said? – Michael Chernick May 6 '12 at 4:13
@MichaelChernick, I don't think there is "so much" discussion about how one writes things. The guidelines we've tried to help you with are more along the lines of "if everyone did that, this site would be very disorganized and hard to follow". It may seem like there is a lot of discussion about these things, but that's probably just because you've been a very active participant since you joined, which is great, since you clearly bring a lot to the table. If you want to talk more about this, consider starting a thread on meta rather than a comment discussion under my unrelated answer :) – Macro May 6 '12 at 4:40
One situation you would want to avoid $R^2$ is multiple regression, where adding irrelevant predictor variables to the model can in some cases increase $R^2$. This can be addressed by using the adjusted $R^2$ value instead, calculated as
$\bar{R}^2 = 1 - (1-R^2)\frac{n-1}{n-p-1}$ where $n$ is the number of data samples, and $p$ is the number of regressors not counting the constant term.
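A direct transcription of that formula (the function name and the sample values below are mine, chosen for illustration):

```python
def adjusted_r2(r2, n, p):
    """Adjusted R^2 with n samples and p regressors (constant term excluded)."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Same raw fit quality, more regressors -> a lower adjusted value:
print(round(adjusted_r2(0.80, n=100, p=5), 3))   # 0.789
print(round(adjusted_r2(0.80, n=100, p=20), 3))  # 0.749
```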
-
Note that adding irrelevant variables is guaranteed to increase $R^2$ (not just in "some cases") unless those variables are completely collinear with the existing variables. – whuber♦ Jan 9 '12 at 18:57
1. A good example of a high R square with a nonlinear relationship is the quadratic function y=x^2 restricted to the interval [0,1]. With 0 noise it will not have an R square of 1 if you have 3 or more points, since they will not fit perfectly on a straight line. But if the design points are scattered uniformly on [0, 1], the R square you get will be high, perhaps surprisingly so. This may not be the case if you have a lot of points near 0 and a lot near 1 with little or nothing in the middle.
2. R square will be poor in the perfect linear case if the noise term has a large variance. So you can take the model Y = x + e, which is technically a perfect linear model, but let the variance of e tend to infinity and you will have R square going to 0. In spite of its deficiencies, R square does measure the percentage of variance explained by the data, and so it does measure goodness of fit. A high R square means a good fit, but we still have to be careful about the good fit being caused by too many parameters for the size of the data set that we have.
3. In the multiple regression situation there is the overfitting problem. Add variables and R square will always increase. The adjusted R square remedies this somewhat as it takes account of the number of parameters.
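The overfitting effect described in point 3 is easy to demonstrate numerically. The sketch below (mine, assuming NumPy is available; all names are illustrative) fits a response on one real predictor, then on the same predictor plus ten irrelevant ones:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=(n, 1))
y = 2 * x[:, 0] + rng.normal(size=n)

def r_squared(X, y):
    # Ordinary least squares with an intercept; R^2 = 1 - RSS/TSS.
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_base = r_squared(x, y)
junk = rng.normal(size=(n, 10))               # predictors unrelated to y
r2_aug = r_squared(np.column_stack([x, junk]), y)
print(r2_aug >= r2_base)  # True: R^2 cannot decrease when columns are added
```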
-
Someone undeleted my answers. Answers 2 and 3 were merged by chl and therefore he deleted my separate answers. It is a procedural issue, not one about content. The content was preserved. I don't object, but I think it would be more appropriate to let me decide whether or not to edit it. – Michael Chernick May 5 '12 at 17:35
Your two other replies are still deleted, AFAICT. I thought it would be helpful for you if I merged your separate replies into the present one. My comment above is just to let you know about site policy. (I'm sure you will get used to this site shortly.) Of course, you are free to edit the present material as you like now. – chl♦ May 5 '12 at 18:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 93, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.951625406742096, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/5323?sort=newest
|
## Infinitely many primes of the form 2^n+c as n varies?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
At the time of writing, question 5191 is closed with the accusation of homework. But I don't have a clue about what is going on in that question (other than part 3) [Edit: Anton's comments at 5191 clarify at least some of the things going on and are well worth reading] [Edit: FC's excellent answers shows that my lack of clueness is merely due to ignorance on my own part] so I'll ask a related one.
My impression is that it's generally believed that there are infinitely many Mersenne primes, that is, primes of the form $2^n-1$. My impression is also that it's suspected that there are only finitely many Fermat primes, that is, primes of the form $2^n+1$ (a heuristic argument is on the wikipedia page for Fermat primes). [EDIT: on the Wikipedia page there is also a heuristic argument that there are infinitely many Fermat primes!]
So I'm going to basically re-ask some parts of Q5191, because I don't know how to ask that a question be re-opened in any other way, plus some generalisations.
1) For which odd integers $c$ is it generally conjectured that there are infinitely many primes of the form $2^n+c$? For which $c$ is it generally conjectured that there are only finitely many? For which $c$ don't we have a clue what to conjecture? [Edit: FC has shown us that there will be loads of $c$'s for which $2^n+c$ is (provably) prime only finitely often. Do we still only have one $c$ (namely $c=-1$) for which it's generally believed that $2^n+c$ is prime infinitely often?]
2) Are there any odd $c$ for which it is a sensible conjecture that there are infinitely many $n$ such that $2^n+c$ and $2^{n+1}+c$ are simultaneously prime? Same question for "finitely many $n$".
3) Are there any pairs $c,d$ of odd integers for which it's a sensible conjecture that $2^n+c$ and $2^n+d$ are simultaneously prime infinitely often? Same for "finitely often".
-
Fermat primes have the form $2^{2^n}+1$. – Kevin O'Bryant Oct 21 2011 at 1:21
Yes but it's an easy exercise to check that if $2^n+1$ is prime then it's a Fermat prime. – Kevin Buzzard Oct 22 2011 at 8:37
## 6 Answers
Buzzard is correct to be skeptical of the most naive arguments: Erdos observed that $2^n + 9262111$ is never prime.
Question one is an incredibly classical problem, of course. Observe that the proof that $2^n + 3$ and $2^n + 5$ are both prime finitely often can plausibly work for a single expression $2^n + c$ for certain $c$. It suffices to find a finite set of pairs $(a,p)$ where $p$ are distinct primes such that every integer is congruent to $a$ modulo $p - 1$ for at least one pair $(a,p)$. Then take $-c$ to be congruent to $2^{a}$ modulo $p$. (Key phrase: covering congruences). I could write some more, but I can't really do any better than the following very nice elementary talk by Carl Pomerance:
www.math.dartmouth.edu/~carlp/PDF/covertalkunder.pdf
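The covering-congruence mechanism can be checked by machine. The sketch below (mine, not from the talk) uses the classical Sierpiński example $78557\cdot 2^n+1$ — a different family from $2^n+c$, but the same idea: every term is divisible by a member of a fixed finite set of primes.

```python
cover = [3, 5, 7, 13, 19, 37, 73]  # classical covering set for 78557
for n in range(1, 2000):
    assert any((78557 * 2**n + 1) % p == 0 for p in cover), n
print("78557 * 2^n + 1 always has a factor in", cover)
```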
Apparently the collective number theory brain of mathoverflow is remaking 150-year-old conjectures that have been known to be false for over 50 years! I was going to let this post consist of the first line, but I guess I'm feeling generous today. On the other hand, I'm increasingly doubtful that I'm going to get an answer to question 2339.
-
I didn't make a conjecture---I asked a question ;-) Fabulous post FC; thanks. – Kevin Buzzard Nov 13 2009 at 15:38
Actually, looking at 2339 I see you've accused me of making a conjecture there too, whereas if I look at the source I again see that I only asked a question :-) – Kevin Buzzard Nov 13 2009 at 15:41
@buzzard: it happens to the best of us, even Serre and Poincare. – Harrison Brown Nov 13 2009 at 15:42
@buzzard: I think Harrison is referring to people turning other people's "questions" into "conjectures". – Lavender Honey Nov 13 2009 at 16:16
Finally I'll remark that it was Swinnerton-Dyer that has caused all this trouble. I had a couple of pints of cider with him once and then he let fly about conjectures these days not being worth anything, and how he remembered the good old days (by which I think he meant the sixties) when conjectures were things that were most certainly true, and everything else was just a question. Since that conversation I've been much more reluctant to conjecture anything! – Kevin Buzzard Nov 13 2009 at 16:38
A heuristic approach might be to examine how many large primes are known for a given $c$.
A good source is Probable Primes Top 10000
The search form is here
Some results from the PRP Top 10000 database:
````c largest-n number-of-primes-in-the-database:
3 479844 11
5 193965 4
7 566496 6
9 173727 6
11 345547 6
13 175628 2
````
Probably the computational effort for different $c$ in the database is quite different.
OEIS has data too.
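For small exponents one can regenerate such data directly. This sketch (mine, using a Miller–Rabin test that is deterministic below 3,215,031,751) lists the $n\le 30$ with $2^n+3$ prime:

```python
def is_prime(n, bases=(2, 3, 5, 7)):
    # Deterministic Miller-Rabin: these bases are exact for n < 3_215_031_751.
    if n < 2:
        return False
    if n in bases:
        return True
    if n % 2 == 0:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

exponents = [n for n in range(1, 31) if is_prime(2**n + 3)]
print(exponents)  # begins 1, 2, 3, 4, 6, 7, 12, 15, ...
```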
-
Hi there, I hope I'm not duplicating anything with what I will write. Here goes: If one considers the Bateman-Horn conjecture, it predicts that $$\sum_{n \leq x}\Lambda(f(n)) \sim \prod_p\left(\frac{p-n_p}{p-1}\right)x$$
where $\Lambda(n)$ is the von Mangoldt function and $n_p$ is the number of solutions to the equation $f(n) \equiv 0 \bmod p$ in $\mathbb{Z}/p\mathbb{Z}$. The reason for the form of each Euler factor is as follows: For each $p$, usually there is $(p-1)/p$ chance of nondivisibility by $p$ in the integers. But if the set in consideration is the image of the polynomial $f$, then the new probability of nondivisibility by $p$ is $(p-n_p)/p$. So the Euler factor is $((p-n_p)/p)/((p-1)/p) = (p-n_p)/(p-1)$. Whether the infinite product over these factors is zero or not is like a competition between primes with $n_p=0$ and $n_p>1$. It really depends on the density of these primes with various $n_p$.
Therefore I'd be curious to know whether the above Bateman-Horn conjecture can be generalized to this case as follows(?): $$\sum_{n \leq x}\Lambda(2^n+c) \sim \prod_p\left(\mbox{something}\right)x.$$
Or is there any reason why heuristics for $2^n+c$ must be treated differently from $f \in \mathbb{Z}[x]$?
It is interesting to consider something analogous to $n_p$ in the case of $2^n+c \equiv 0 \bmod p$. For $p|c$, we have $n_p=0$ since $2^n$ will never be $0 \bmod p$. For $p\nmid c$, one must consider the cyclic subgroup $<2>$ generated by $2$ in $(\mathbb{Z}/p\mathbb{Z})^{*}$. Let $h$ be the order of this subgroup. We have $h | p-1$. Let $\delta_p = 1$ if $c \in <2>$ and $\delta_p=0$ otherwise.
Then for each $p$, numbers of the form $2^n+c$ have probability $$(\delta_p \times (p-1)/h)/(p-1) = \delta_p / h$$ of divisibility by $p$. Therefore, perhaps the Euler factor here should be (?) $$\left(\frac{(h-\delta_p)/h}{(p-1)/p}\right).$$
Putting all this together, can one conjecture (?) $$\sum_{n \leq x}\Lambda(2^n+c) \sim \prod_{p|c}\left(\frac{p}{p-1}\right)\prod_{p \nmid c}\left(\frac{(h-\delta_p)/h}{(p-1)/p}\right)x.$$
Thanks.
-
That seems unreasonable: the Euler product form in the twin prime, Schinzel, Bateman-Horn, etc, conjectures have their source in the Chinese Remainder Theorem (independence of reductions of integers modulo distinct primes). The analogous property fails for powers of $2$. (Or of any other integer $a\geq 2$). This is also why, for instance, sieve methods are unsuccessful for these types of questions. – Denis Chaperon de Lauzières Oct 20 2011 at 7:11
Ah, thanks very much. I really should have read the Lenstra-Pomerance-Wagstaff conjectures (en.wikipedia.org/wiki/Mersenne_conjectures) before I said the above. Thanks. – Timothy Foo Oct 20 2011 at 9:39
Hi, I believe that there is always an infinitude of primes of every form 2^n + c except c = 1 (Fermat numbers). I don't have a proof, but I am gathering some data in my research on the forms 2^x+3 and 2^x+5. I am interested because together they would produce infinitely many twin primes and prime arithmetic progressions (3, 2^x+3, 2^{x+1}+3); again, this is just my belief, based on an algorithm I am working on.
-
In the other thread there is already a proof that 2^x+3 and 2^x+5 will not produce infinitely many twin primes, and there is also a bunch of evidence to suggest that 3,2^x+3,2^{x+1}+3 will not produce infinitely many sets of three primes in an AP. – Kevin Buzzard Nov 13 2009 at 14:21
Thanks for this; now I concede this result. Of course my original question still holds. How big is the exponent x? I would imagine it to be big. – Jaime Montuerto Nov 13 2009 at 18:41
There are certainly (many!) pairs of integers c, d for which it's known that they can't be simultaneously prime infinitely often -- take c = 1, d = -1, for instance, where 2^n - 1 can only be prime if n is prime, but 2^n + 1 can only be prime if n is a power of 2. Alternatively, taking everything (mod 3), we see that c = -1, d = 7 are only simultaneously prime at n = 2.
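The mod 3 obstruction for c = -1, d = 7 can be verified mechanically (a sketch of my own, not part of the answer):

```python
# 2^n mod 3 alternates 2, 1: even n makes 3 | 2^n - 1, odd n makes 3 | 2^n + 7.
for n in range(3, 200):
    assert (2**n - 1) % 3 == 0 or (2**n + 7) % 3 == 0
print("for n >= 3 one of the pair is a proper multiple of 3; n = 2 gives 3 and 11")
```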
So there are lots of local obstructions to these pairs being simultaneously prime, although probably not enough to rule out all but finitely many of them.
[Edited because this is unlikely to fit in a comment]: Here's a rough stab at a really basic heuristic (for part 1 and part 3) which I don't see a way to make actually work, but maybe someone else will...
So let's fix c and consider the (mod p) behavior of 2^n + c for each odd prime p. Since 2^n is periodic (mod p), for large n we have some congruence classes (mod p-1) for which 2^n + c is always composite. To simplify the analysis, we can just consider the behavior at p such that 2 is a primitive root (mod p), in which case there's exactly one such congruence class (mod p-1) for every p.
The problem now is that there's no obvious way to sieve this, since the primes minus one don't behave very nicely with respect to multiplication. My initial hunch -- which is only a hunch, not backed up by anything resembling logic -- is that generically speaking, we might see roughly the same behavior as if we were to sieve by the primes, i.e., 2^n + c is coprime to all the primes for which 2 is a primitive root with probability 1/log n. This is much larger than the PNT predicts, of course, but notably it's still small enough that (assuming my back-of-the-envelope calculation and wildly speculative hunch are correct) we should expect that pairs 2^n + c, 2^n + d are typically simultaneously prime only finitely often.
[Edit^2]: So after some more thought I see where my hunch is horribly, horribly wrong, namely that there's no reason to believe that a = b (mod p-1) and a = d (mod q-1) has any solutions. But I do suspect that we do get some "new" forbidden exponents for almost every prime, which ::waves hands vigorously:: suggests that the values of n with 2^n + c prime do have density 0 and in particular probably have density at most O(1/log n), which is still good for a heuristic for (3). Can anyone make this more precise?
-
Yes. Indeed Anton's comment in 5191 gives another example (c=3,d=5) where there's a (similar-looking but slightly harder) proof that they can both only be prime finitely often. I also agree that probably the local conditions won't always save you (again follow Anton's idea and find explicit n,c,d such that 2^n+c and 2^n+d aren't prime but only have nice juicy prime factors of size > 10^6). So now we need some sensible heuristics to continue, if there are any sensible heuristics on these matters. I guess it's also worth remarking that no-one has said anything about Q1 yet. – Kevin Buzzard Nov 13 2009 at 12:25
Naive conjecture for (2) and (3) would be that there are only finitely many such $n$ (unless $c=d$, since then (3) becomes (1)). The reason is that heuristically the 'probability' that a number of order $m$ is prime is roughly $1/\log(m)$, and the sum $$\sum_n \frac{1}{\log(2^n+c)\log(2^n+d)}$$ converges.
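The partial sums of this series are easy to tabulate (an illustration of mine — a finite computation of course proves nothing about the limit, but the terms decay like $1/(n\log 2)^2$, so the tail contributes very little):

```python
import math

def partial_sum(c, d, N):
    # Partial sum of 1 / (log(2^n + c) * log(2^n + d)) for n = 1..N.
    return sum(
        1 / (math.log(2**n + c) * math.log(2**n + d))
        for n in range(1, N + 1)
    )

s100, s1000 = partial_sum(3, 5, 100), partial_sum(3, 5, 1000)
print(s100, s1000)  # going from N=100 to N=1000 changes the sum only slightly
```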
-
In general I am nervous about these sorts of heuristics, because applied naively they would predict infinitely many Fermat primes, whereas applied more sensibly (i.e. think about which n can occur from an elementary viewpoint first and then heuristicise) they predict finitely many. On the other hand I don't think this specific objection (my nervousness about heuristics) applies to your heuristics (because if we made intelligent elementary observations which ruled out certain n first then the sum would only get smaller). So I think you've probably done (2) and (3). – Kevin Buzzard Nov 13 2009 at 9:19
Actually, suddenly I am nervous about this argument. If c=1 then one can say a lot about possible factors of 2^n+c, for example, and a competing heuristic on the Wikipedia page for Fermat primes seems to say that incorporating these facts screws up the heuristic that there are only finitely many Fermat primes. Does a general heuristic look like this: "here are some things I can think of, now let's assume everything else is random and sum 1/log"? So implicit in such a heuristic is the assertion that you've not missed anything? – Kevin Buzzard Nov 13 2009 at 10:13
Ok here is an explicit comment about your heuristic that I hope worries you. If 2 is not a primitive root mod a prime number q, then q might never divide 2^n+c (e.g. 7 never divides 2^n+9 and surely there will be other primes, possibly infinitely many more, with this property---certainly 2 never divides 2^n+9 either). Hence 2^n+9 is less likely to be composite than a random large number. For a proper heuristic you need to take this into account and recalculate. What I'm saying is that your heuristic might be too naive, and fixing it up to incorporate my comments might give a different answer – Kevin Buzzard Nov 13 2009 at 10:17
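Kevin's example is quickly checked: $2^n \bmod 7$ cycles through $2, 4, 1$, so $2^n + 9 \equiv 2^n + 2 \pmod 7$ is never $0$ (a sketch of mine, not part of the comment):

```python
residues = {pow(2, n, 7) for n in range(1, 4)}  # the full cycle of 2^n mod 7
assert residues == {1, 2, 4}
assert all((r + 9) % 7 != 0 for r in residues)
print("7 never divides 2^n + 9")
```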
I share your anxiety about these heuristics in general, and the extreme naivete of mine in particular. For example, these heuristics would give that if alpha is a real number, then floor(alpha^(3^n)) is prime finitely often. However, it is false for some alpha. – Boris Bukh Nov 13 2009 at 10:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 76, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9575842618942261, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/103795/are-faces-of-a-compact-convex-body-opposed-iff-their-extreme-points-are-pairwi
|
## Are faces of a compact, convex body “opposed” iff their extreme points are pairwise “opposed”?
Let $P$ be a compact, convex subset of $\mathbb{R}^n$ (infinite-dimensional generalisations welcome, but not necessary). Let's say that disjoint subsets $W_1$, $W_2$ $\subset P$ are opposed if there exist parallel hyperplanes $H_1$, $H_2$ supporting $P$, such that $W_i \subset H_i \cap P$.
Let $F_1$ and $F_2$ be faces of $P$, such that their extreme points are pairwise opposed: i.e. $v_1, v_2 \in P$ are opposed whenever $v_i$ is an extreme point of $F_i$. Are $F_1$ and $F_2$ opposed?
I have a tentative proof when $P$ has affine dimension equal to 2, which I am struggling to generalise even to 3 dimensions. The converse is trivial. I'd also be interested to know if a proof of this requires some restriction on $P$ (e.g. letting $P$ be a polytope).
-
Are not the vertices of a tetrahedron (more generally a simplex) always opposed? Gerhard "Ask Me About System Design" Paseman, 2012.08.02 – Gerhard Paseman Aug 3 at 0:02
@Gerhard: Notice that he defined subset opposition to require disjointness, whereas no two facets of a simplex are disjoint. – Joseph O'Rourke Aug 3 at 0:59
That is right. If the poster had requested as a condition that the faces F_i be disjoint, there would be less of an issue. Also, I am not sure what dimension a facet is, but I note that some edges of a simplex are opposed in pairs. Gerhard "Ask Me About System Design" Paseman, 2012.08.02 – Gerhard Paseman Aug 3 at 2:56
## 1 Answer
NO.
Let $P$ be the convex hull of two parabolic arcs, say $$\{\,(x,0,z)\in \mathbb R^3\mid 1\ge z=x^2\,\}$$ and $$\{\,(0,y,z)\in \mathbb R^3\mid 0\le z=-y^2+\varepsilon\cdot y+1\,\}.$$ Take $$F_1=\{\,(x,0,1)\in \mathbb R^3\mid |x|\le 1\,\}$$ and $$F_2=\{\,(0,y,0)\in \mathbb R^3\mid 0\le -y^2+\varepsilon\cdot y+1\,\}$$
You can approximate it by a polyhedron in such a way that $F_1$ and $F_2$ are still edges. This leads to a polyhedral example.
-
I don't understand the definition of the first parabolic arc. What is $y$? – Will Sawin Aug 2 at 20:15
@Will, $y=x$, now it is fixed. – Anton Petrunin Aug 2 at 21:33
Thank you very much, no wonder I was having no luck. I'm guessing that the convex hull of the points (0,0,0), (0,1,0), (0,e,1+e), (1,0,1), (-1,0,1), for small e, would work for a similar reason? – Sabri Aug 3 at 11:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9288658499717712, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/100962/actions-of-the-discrete-heisenberg-group-by-formal-power-series-of-two-variables
|
## Actions of the discrete Heisenberg group by formal power series of two variables
I am interested in faithful actions of the discrete Heisenberg group $H$ by smooth diffeomorphisms of a surface $S$, that is, 1-1 homomorphisms $\phi \colon H \to \text{Diff}^{\infty}(S)$.
We say $p \in S$ is a fixed point if, for every $g \in H$, $\phi(g)(p) = p$. In this case, picking coordinates about $p$, the derivative gives us an action $H \to GL_2(\mathbb{R})$.
Let $G$ be the following group: $G$ is the set of pairs of formal power series $(\sum_{i, j} a_{i, j}x^iy^j, \sum_{i, j} b_{i, j}x^iy^j)$ such that $a_{0,0} = b_{0,0} = 0$, and the determinant $a_{1,0}b_{0,1} - a_{0,1}b_{1,0} \neq 0$. This forms a group under composition.
If we keep track of higher derivatives for the action $\phi$ at the fixed point $p$, we get an action $\psi \colon H \to G$. We have thus turned the analytic problem of understanding homomorphisms $\phi \colon H \to \text{Diff}^{\infty}(S)$ into the algebraic problem of understanding homomorphisms $\psi \colon H \to G$.
I know an example of an injective homomorphism $\psi \colon H \to G$. Ideally, I would like to classify (up to conjugacy) all possible injective homomorphisms $\psi \colon H \to G$. Does anyone know something about the algebra of the group $G$, which might help me move in that direction?
-
Just a remark: the only solvable subgroups of the mapping class group are virtually abelian. So you may assume that the image of your group in $Diff(S)$ must have center which is isotopic to the identity, since the image in $Mod(S)$ will be abelian. – Agol Jul 1 at 22:21
That's true; thanks for your remark. – Kiran Parkhe Jul 10 at 14:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9119661450386047, "perplexity_flag": "head"}
|
http://unapologetic.wordpress.com/2009/10/09/product-and-quotient-rules/?like=1&_wpnonce=67418a1937
|
# The Unapologetic Mathematician
## Product and Quotient rules
As I said before, there’s generally no product of higher-dimensional vectors, and so there’s no generalization of the product rule. But we can multiply and divide real-valued functions of more than one variable. Finding the differential of such a product or quotient function is a nice little exercise in using Cauchy’s invariant rule.
For all that follows we’re considering two real-valued functions of $n$ real variables: $f(x)=f(x^1,\dots,x^n)$ and $g(x)=g(x^1,\dots,x^n)$. We’ll put them together to give a single map from $\mathbb{R}^n$ to $\mathbb{R}^2$ by picking orthonormal coordinates $u$ and $v$ on the latter space and defining
$\displaystyle\begin{aligned}u&=f(x)\\v&=g(x)\end{aligned}$
We also have two familiar functions that we don’t often think of explicitly as functions from $\mathbb{R}^2$ to $\mathbb{R}$:
$\displaystyle\begin{aligned}p(u,v)&=uv\\q(u,v)&=\frac{u}{v}\end{aligned}$
Now we can find the differentials of $p$ and $q$
$\displaystyle\begin{aligned}dp(u,v)&=vdu+udv\\dq(u,v)&=\frac{1}{v}du-\frac{u}{v^2}dv=\frac{vdu-udv}{v^2}\end{aligned}$
Notice that the differential for $q$ is exactly the alternate notation I mentioned when defining the one-variable quotient rule!
With all this preparation out of the way, the product function $f(x)g(x)$ can be seen as the composition $p(f(x),g(x))$, while the quotient function $\frac{f(x)}{g(x)}$ can be seen as the quotient $q(f(x),g(x))$. So to calculate the differentials of the product and quotient we can use Cauchy’s invariant rule to make the substitutions
$\displaystyle\begin{aligned}u&=f(x)\\v&=g(x)\\du&=df(x)\\dv&=dg(x)\end{aligned}$
The upshot is that just like in the case of one variable we can differentiate a product of two functions by differentiating each of the functions, multiplying by the other function, and adding the two resulting terms. We just use the differential instead of the derivative.
$\displaystyle d\left[fg\right](x)=df(x)g(x)+f(x)dg(x)$
Similarly, we can differentiate the quotient of two functions just as in the one-variable case, but using the differential instead of the derivative.
$\displaystyle d\left[\frac{f}{g}\right](x)=\frac{df(x)g(x)-f(x)dg(x)}{g(x)^2}$
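Although not part of the original post, both displayed formulas can be sanity-checked numerically against central finite differences; the sample functions and test point below are arbitrary choices of mine:

```python
import math

def f(x1, x2):
    return math.sin(x1) * x2 + x1**2

def g(x1, x2):
    return math.exp(x1) + x2**2 + 1.0  # kept away from zero for the quotient

def grad(h, x, eps=1e-6):
    # Central-difference approximation to the two partial derivatives of h at x.
    return tuple(
        (h(*[x[j] + (eps if j == i else 0) for j in range(2)])
         - h(*[x[j] - (eps if j == i else 0) for j in range(2)])) / (2 * eps)
        for i in range(2)
    )

x = (0.7, -1.3)
df, dg = grad(f, x), grad(g, x)
fx, gx = f(*x), g(*x)

# The product and quotient rules, applied componentwise to the differentials:
prod_rule = tuple(df[i] * gx + fx * dg[i] for i in range(2))
quot_rule = tuple((df[i] * gx - fx * dg[i]) / gx**2 for i in range(2))

# Direct finite differences of the product and quotient functions:
prod_direct = grad(lambda a, b: f(a, b) * g(a, b), x)
quot_direct = grad(lambda a, b: f(a, b) / g(a, b), x)

assert all(abs(p - q) < 1e-4 for p, q in zip(prod_rule, prod_direct))
assert all(abs(p - q) < 1e-4 for p, q in zip(quot_rule, quot_direct))
print("product and quotient differentials agree with finite differences")
```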
Posted by John Armstrong | Analysis, Calculus
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 22, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9174859523773193, "perplexity_flag": "head"}
|
http://mathhelpforum.com/advanced-applied-math/91588-question-about-schroedinger-s-wave-equation.html
|
# Thread:
1. ## A question about Schrödinger's Wave Equation.
So, we're going over the Born interpretation of the function, and it gives rise to the normalization condition:
$\int |\Psi (\bold{r}, t)|^2 ~d^3\bold{r} = 1$
My professor wasn't very descriptive of it, but the question I have is what does the $d^3\bold{r}$ mean? I know it represents a volume, and in the integral it is over all space... So, what does it mean mathematically?
2. Originally Posted by Aryth
So, we're going over the Born interpretation of the function, and it gives rise to the normalization condition:
$\int |\Psi (\bold{r}, t)|^2 ~d^3\bold{r} = 1$
My professor wasn't very descriptive of it, but the question I have is what does the $d^3\bold{r}$ mean? I know it represents a volume, and in the integral it is over all space... So, what does it mean mathematically?
It means the particle is somewhere, it is the integral of what is essentially a probability density over all space. You are normalising the wave function so that $|\Psi (\bold{r}, t)|^2$ may be interpreted as a pdf (possibly in the sense of a generalised function/density, but that's just the way physics works).
CB
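A concrete numerical illustration (mine, not part of the answer): for a 1D Gaussian packet $\psi(x) = (2\pi\sigma^2)^{-1/4} e^{-x^2/(4\sigma^2)}$ — the 3D integral factors into three copies of this — the trapezoidal rule recovers $\int |\psi|^2\,dx = 1$, with $|\psi|^2$ the $N(0,\sigma^2)$ density:

```python
import math

sigma = 1.0

def psi(x):
    # Gaussian wave packet; |psi|^2 is the N(0, sigma^2) density.
    return (2 * math.pi * sigma**2) ** -0.25 * math.exp(-x**2 / (4 * sigma**2))

# Trapezoidal rule for the Born normalization integral on [-10 sigma, 10 sigma].
N, a, b = 4000, -10.0 * sigma, 10.0 * sigma
h = (b - a) / N
total = sum(psi(a + i * h) ** 2 * (h / 2 if i in (0, N) else h) for i in range(N + 1))
print(abs(total - 1.0) < 1e-8)  # True: the packet is normalized
```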
3. Ah, I see. That makes sense. I appreciate it.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9549798965454102, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/106827?sort=oldest
|
## Fourier inversion formula for complex-valued random variables?
The characteristic function of a complex-valued random variable $X$ with pdf $\mu$ is given by $$\phi(t) = \int \exp[i \Re(\bar{t} X)] \; d\mu$$ (or, so says Wikipedia). How does one recover the pdf from $\phi$, i.e., what is the Fourier inversion formula for measures on $\mathbb{C}$? The $\mu$ I am working with is as "nice" as one could ask for.
P.S. Where would I find such a result? (Of course, I could try to work out the exact form of Pontryagin Duality for $\mathbb{C}$ from the definitions, but presumably somebody has done this before.)
-
## 1 Answer
You have to take the real part of $\bar{t}X$ in the exponent in order for the integral to make sense; if you then decompose $t=t_1+it_2$ and $X=X_1+iX_2$ into real and imaginary parts, you just have a conventional two-dimensional Fourier transform:
$\phi(t_1,t_2)=\int_{-\infty}^{\infty}dX_1 \int_{-\infty}^{\infty}dX_2 \exp(it_1 X_1+i t_2 X_2) P(X_1,X_2)$
The inverse is, as usual,
$P(X_1,X_2)=(2\pi)^{-2}\int_{-\infty}^{\infty}dt_1 \int_{-\infty}^{\infty}dt_2 \exp(-it_1 X_1-i t_2 X_2) \phi(t_1,t_2)$
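One can check the forward formula numerically for a case with a known answer — the standard 2D Gaussian, whose characteristic function is $e^{-(t_1^2+t_2^2)/2}$. (A sketch of mine; the grid parameters are arbitrary.)

```python
import cmath
import math

def phi_numeric(t1, t2, L=8.0, N=160):
    # Brute-force 2D trapezoidal approximation of E[exp(i(t1 X1 + t2 X2))]
    # for the standard 2D Gaussian density P(X1, X2).
    h = 2 * L / N
    total = 0j
    for i in range(N + 1):
        x1 = -L + i * h
        wi = 0.5 if i in (0, N) else 1.0
        for j in range(N + 1):
            x2 = -L + j * h
            wj = 0.5 if j in (0, N) else 1.0
            P = math.exp(-(x1 * x1 + x2 * x2) / 2) / (2 * math.pi)
            total += wi * wj * cmath.exp(1j * (t1 * x1 + t2 * x2)) * P
    return total * h * h

exact = math.exp(-(0.5**2 + 1.2**2) / 2)
approx = phi_numeric(0.5, 1.2)
print(abs(approx - exact) < 1e-6)  # True: matches exp(-|t|^2 / 2)
```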
-
http://mathoverflow.net/questions/71584/hyperarithmetic-statements-decidable-by-induction-up-to-a-recursive-ordinal/72322
## Hyperarithmetic statements decidable by induction up to a recursive ordinal
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
The first version of this question received a helpful answer but was too vague to fully convey what I intended. I hope this version remedies that problem. For any hyperarithmetic set of integers $S$, does there exist a single recursive process that can determine if $s\in S$ for all integers provided it also has access to a notation from Kleene's $\mathcal{O}$ for a sufficiently large (which can vary with $s$) recursive ordinal? More precisely, given any hyperarithmetic set of integers $S$, is there a recursive function $f(s,n)$ where $s$ is an integer and $n$ is an ordinal notation in Kleene's $\mathcal{O}$ and the following holds? (Note if $n\in\mathcal{O}$ then $n_o$ is the ordinal represented by $n$.)\ $$(\forall s\in\omega)(\forall n\in\mathcal{O}) \hspace{.045in}f(s,n)= \mbox{ 0 (false), 1 (true) or 2 (unknown)}$$ $$(\forall s\in\omega)(\forall n\in\mathcal{O}) \hspace{.045in}((s\in S\equiv f(s,n)) \vee f(s,n)=2)$$ $$(\forall s\in\omega)(\exists n\in\mathcal{O})\hspace{.045in}((s\in S\equiv f(s,n))\wedge (\forall m\in\mathcal{O})\hspace{.045in} (n_o\leq m_o \rightarrow (s\in S\equiv f(s,m))))$$
-
the first version: mathoverflow.net/questions/70683/… – Kaveh Jul 29 2011 at 18:56
There's a problem: Kleene's O in its full generality goes beyond any computable theory, because any such theory has a limit computable ordinal which is not provably well ordered. Your question sounds like a strong statement of Cohen's "article of faith" from "Set Theory and the Continuum Hypothesis" that any arithmetic question is resolved by a large enough axiom of infinity (if it is understood that any large axiom of infinity is equivalent to the well-foundedness of a recursive ordinal). So how can your statement be proven? Even replacing "hyperarithmetic" by "pi-0-1" (halting problem)? – Ron Maimon Jul 30 2011 at 18:23
@Ron: I don't see why that is an obstacle to Paul's question having an answer. We can prove things about Kleene's O even if we don't have a complete theory of it. In fact, if in the third condition, if $n_o \le m_o$ is replaced by $n \;\le_\mathcal{O}\; m$ then I do believe we can prove the answer is yes, and nothing in your objection seems to hinge on that detail. – Daniel Mehkeri Jul 31 2011 at 5:43
## 1 Answer
Well this has gone unanswered for a while, so I'll give a pseudo-answer. First I'll answer yes to a slightly different question, then I'll guess that the answer to the question as stated is no, then I'll make some random comments.
My proposed alterations are: (1) instead of a three-valued computable function, have a two-valued partial computable function ("unknown" becomes a non-terminating computation); (2) replace $n_{\mathcal{O}} \le m_{\mathcal{O}}$ with $n \;\le_\mathcal{O}\; m$. The relation $\le_\mathcal{O}$ is computably enumerable - or rather, there is a two-valued partial computable relation that agrees with $\le_\mathcal{O}$ over the elements of $\mathcal{O}$, and that is sufficient here.
Suppose $A$ is a $\Pi^1_1$ set. As mentioned in the answer to the first question, any membership question can be reduced to a single query to $\mathcal{O}$; there is a computable $g$ such that $(\forall s \in \mathbb{N}) s \in A \leftrightarrow g(s) \in \mathcal{O}$
Suppose $A$ is $\Delta^1_1$. Then not only is it $\Pi^1_1$, but so is $\mathbb{N} \setminus A$. So there is also a computable $h$ such that $(\forall s \in \mathbb{N}) s \not\in A \leftrightarrow h(s) \in \mathcal{O}$
So define $f$ as: $f(s,n) = true$ if $g(s) \;\le_\mathcal{O}\; n$, $f(s,n) = false$ if $h(s) \;\le_\mathcal{O}\; n$, undefined otherwise. This is a partial computable function.
This meets the second condition because if $f(s,n)=true$ and $n \in \mathcal{O}$ then $g(s) \in \mathcal{O}$ so $s \in A$, and similarly for $f(s,n)=false$. As to the third, if $s \in A$ then $g(s) \in \mathcal{O}$ and $f(s,n)=true$ for all $n \in \mathcal{O}$ such that $n \;\ge_\mathcal{O}\; g(s)$, and similarly if $s \not\in A$.
You didn't say anything about how $A$ is given, like asking for a single $f(A,s,n)$ that takes as input a particular type of code for $A$. So I just directly identified hyperarithmetic sets with $\Delta^1_1$.
I believe the answer to the question as stated is no, and even if we make just the alteration (1) above:
$\mathcal{O}$ gives an infinite number of different notations to each infinite recursive ordinal. Let's say we had a "branch" $\mathcal{B} \subset \mathcal{O}$, by which I mean that $\mathcal{B}$ contains exactly one notation for each recursive ordinal and is totally ordered by $\le_\mathcal{O}$. Now if your three conditions held, they would still hold with $\mathcal{B}$ replacing $\mathcal{O}$: universal quantifiers just restrict to the subset, and as for the one existential quantifier in the third condition, it follows from the assumed property of $\mathcal{B}$ that some member of $\mathcal{B}$ would satisfy that condition, if $\mathcal{O}$ did.
However Wikipedia implies there does exist such a subset $\mathcal{B}$ which does not even decide all $\Pi^0_1$ sentences. If so, then your three conditions can't hold for that $\mathcal{B}$, so they can't hold for $\mathcal{O}$.
Now for some random comments. There are some very unnatural notations in $\mathcal{O}$. In my positive answer, we're allowed to cook up unnatural notations $g(s)$ and $h(s)$ which basically just encode the sentences $s \in A$ and $s \not\in A$. In my negative answer, we require all notations for greater recursive ordinals to also be able to decide the question, and this includes unnatural notations.
For an example of an unnatural notation: given some formal system $T$, for every $n$ it is decidable whether there is a contradiction in $T$ of size $\le n$. We can basically choose a notation so that $\alpha_n$ is the n'th finite ordinal if there isn't, and something ill-founded if there is. From the assumption that transfinite induction up to $\alpha$ is valid, we can prove $T$ consistent. Conversely from the assumption that $T$ is consistent, we can prove $\alpha$ is well-founded, and that actually its order type is $\omega$. So I could say, any consistent formal system is provably consistent by transfinite induction up to $\omega$ .. but that's a very misleading way to put it!
The question of natural ordinal notations is the second of three conceptual questions that bug Solomon Feferman.
Maybe one could ask if there is some branch $\mathcal{B}$ such that, for all hyperarithmetic sets, the three conditions hold.
-
Thanks for the answer. It covers the question thoroughly but I do have a question about the Wikipedia entry you reference. I do not understand the informal argument in "There exist $\aleph_0$ paths through $\mathcal{O}$ which are $\Pi^1_1$. Given a progression of recursively enumerable theories based on iterating Uniform Reflection, each such path is incomplete with respect to the set of true $\Pi^0_1$ sentences." I would appreciate a reference or a more complete proof. – Paul Budnik Aug 10 2011 at 16:14
Not sure where they got that, or if they stated the result right for that matter. (Hence "pseudo-answer".) Maybe someone more knowledgeable will speak up. – Daniel Mehkeri Aug 11 2011 at 9:28
This post is way late, but I believe I posted the "number of paths through O based on Uniform Reflection". I was referencing the Feferman-Spector paper: "Incompleteness Along Paths in Progressions of Theories. Journal of Symbolic Logic 27 (4):383-390" published in 1962. – Everett Piper Aug 23 at 22:18
http://mathoverflow.net/revisions/50925/list
## Return to Answer
3 added 135 characters in body
Let $v_1$, $\ldots$, $v_N$ be the vertical streets and let $h_{1,j}$, $\ldots$, $h_{4,j}$ be the horizontal edges between $v_{j-1}$ and $v_j$. An admissible path $\gamma$ induces a coloring of the horizontal edges as follows: Consider a vertical street $v_j$. The path $\gamma$ uses either one or three of the incoming edges $h_{k,j}$ $(1\le k\le 4)$. If $\gamma$ uses one edge, color it black and the three other edges white. If $\gamma$ uses three edges, two of them are linked to each other by a part of $\gamma$ extending only to the left of $v_j$. Color these two edges red, the third edge black and the unused edge white. In all, there are 16 possible colorings $c:\lbrace 1,2,3,4\rbrace\to\lbrace b, r, w\rbrace$ that can result in this way. There is a $16\times 16$ transition matrix $T$ that encodes the possible matchings between the coloring $c$ of the edges $h_{k,j}$ and the coloring $c'$ of the edges $h_{k,j+1}$ (e.g., circles must be avoided). This matrix $T$ has to be determined "the hard way", i.e., by listing for each $c$ the possible $c'$. The number of admissible paths $\gamma$ is then obtained by applying $T^{N-1}$ to a suitable starting vector; so there is indeed a linear recurrence for the number of these paths.
An example: If $c$ contains one black and two red edges, then using the vertical edges on $v_j$ in an admissible way one may
(a) continue the black and the two red edges into the next column individually, maybe at a different level,
or
(b) connect the black end of $c$ to either one of the red ends by a vertical segment creating a $\supset$ and continue the other red edge of $c$ into the next column, but as a black edge,
and, if room on $v_j$ permits, one may
(c) throw in two red edges beginning on $v_j$ which are connected by a vertical segment creating a $\subset$.
Here is a pictorial list (hopefully complete) of the possible transitions $c\to c'$:
http://www.math.ethz.ch/~blatter/grid.pdf
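The 16×16 matrix itself has to be built by hand as described, but the general mechanism — encode boundary states, build a transfer matrix $T$, and apply $T^{N-1}$ to a start vector — is easy to illustrate on a smaller problem. Here is a sketch (my own toy example, not the matrix of this answer) counting domino tilings of a $2\times n$ strip, where the transfer matrix is 2×2 and the counts satisfy the Fibonacci recurrence:

```python
import numpy as np

def count_tilings(n):
    # State 0: the boundary between processed and unprocessed columns is flat.
    # State 1: two horizontal dominoes stick one cell into the next column.
    # (In a 2-row strip both rows protrude together, so one bit of state suffices:
    # a lone horizontal would leave a single cell that no domino can fill.)
    T = np.array([[1, 1],    # flat -> flat (vertical domino), flat -> protruding
                  [1, 0]],   # protruding -> flat (complete the horizontals)
                 dtype=np.int64)
    return np.linalg.matrix_power(T, n)[0, 0]   # start flat, end flat

print([count_tilings(n) for n in range(1, 8)])  # Fibonacci: 1, 2, 3, 5, 8, ...
```

The entry of $T^n$ is a linear recurrence in $n$, exactly the phenomenon claimed for the 16-state path count.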
http://mathhelpforum.com/calculus/52638-parametric-representation-curve-print.html
parametric representation of a curve
Printable View
• October 8th 2008, 10:03 AM
superfets
parametric representation of a curve
Find a parametric representation for the curve x^2 + y + z = 2 and xy + z = 1.
• October 8th 2008, 12:52 PM
Opalg
Quote:
Originally Posted by superfets
Find a parametric representation for the curve x^2 + y + z = 2 and xy + z = 1.
This looks a bit tricky, because the curve splits into two separate pieces. If you subtract the second equation from the first then you get $x^2+y-xy=1$, which factorises as $(x-1)(1+x-y)=0$.
So the curve consists of the two pieces $x=1,\ y+z=1$ and $y=x+1,\ z=1-x-x^2$. It's easy enough to parametrise the two pieces separately, and that is presumably the best that can be done.
• October 8th 2008, 01:21 PM
superfets
thanks for the effort
Don't think that's what i'm looking for. The first eq'n is a surface (parabolic cylinder) and the second eq'n is a plane. The intersection of the surface and the plane in 3 dimensions will give you a curve. It can't be in 2 pieces. I need to describe this curve by the three parametric functions. i.e. C; x = x(t) , y = y(t) , z = z(t). But thanks for trying.
• October 8th 2008, 03:26 PM
mr fantastic
Quote:
Originally Posted by superfets
Find a parametric representation for the curve x^2 + y + z = 2 and xy + z = 1.
There's an infinite number of possible answers. One possible answer:
Let x = t.
t is the parameter.
Now solve for y and z simultaneously in terms of t:
y + z = 2 - t^2 .... (1)
ty + z = 1 .... (2)
You can think about the possible values t can have ......
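Carrying this out symbolically (a sketch of my own, using sympy) recovers Opalg's two pieces at once: the generic solution below is only valid when the system is nondegenerate, and the special value t = 1 is exactly where it breaks down:

```python
import sympy as sp

t, y, z = sp.symbols('t y z')

# mr fantastic's parametrisation: set x = t, then solve the remaining system.
eqs = [sp.Eq(y + z, 2 - t**2),   # x^2 + y + z = 2 with x = t
       sp.Eq(t*y + z, 1)]        # x*y  + z = 1 with x = t
sol = sp.solve(eqs, (y, z), dict=True)[0]

# Generic solution (valid for t != 1): y = t + 1, z = 1 - t - t**2.
print(sp.simplify(sol[y]))
print(sp.simplify(sol[z]))

# At t = 1 both equations collapse to y + z = 1, giving the line x = 1
# that Opalg pointed out.
```

Subtracting the equations gives $(t-1)y = t^2-1$, so dividing by $t-1$ is exactly what fails at $t=1$.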
• October 9th 2008, 12:01 AM
Opalg
Quote:
Originally Posted by superfets
Don't think that's what i'm looking for. The first eq'n is a surface (parabolic cylinder) and the second eq'n is a plane. The intersection of the surface and the plane in 3 dimensions will give you a curve. It can't be in 2 pieces.
The second equation is certainly not a plane. The equation xy + z = 1 is not linear. It represents a hyperbolic surface which contains lines, including the line parametrised by (x,y,z) = (1,t,1-t) that I indicated in my previous comment.
You can also see this by following mr fantastic's method and paying particular attention to his final paragraph (look carefully at what happens at the particular value t=1 in his parametrisation).
Copyright © 2005-2013 Math Help Forum. All rights reserved.
http://mathoverflow.net/questions/78796/when-are-roots-of-power-series-algebraic/78871
## When are roots of power series algebraic?
Let $K$ be a field and consider a power series $f(T) \in K[[T]]$. Under what conditions (on $K$ and/or on $f$) can we conclude that if $\alpha$ is a root of $f(T)$ then $\alpha$ is in fact algebraic over $K$?
This question is inspired by the following: In this paper of R. Pollack, in the proof of his Lemma 3.2, the author states that if $K$ is a finite extension of $\mathbb{Q}_p$ and $f(T)$ converges on the open unit disc of $\mathbb{C}_p$ and has finitely many roots, then each of these roots is algebraic over $K$.
I haven't yet come up with a proof of this statement (and any proofs offered would be appreciated), and perhaps having a proof in hand would help me to answer my question on my own, but I'm now curious about other situations in which this can occur.
-
1
@Qiaochu: I think he is implicitly applying the evaluation homomorphism, and also implicitly saying that this power series must (at least locally) converge, to give a well-defined function. – Jacques Carette Oct 21 2011 at 22:02
14
For K equal to any complete extension field of Q_p, this is a consequence of the Weierstrass Preparation Theorem for formal power series whose coefficients tend to 0 (which is what convergence at 1 requires). Or it's a consequence of Strassman's theorem for power series. The point is that you can factor the power series as a polynomial multiplied by a power series which is invertible on the closed unit disc. – KConrad Oct 21 2011 at 22:17
6
@Jeff H: The Weierstrass preparation theorem isn't just an example of this phenomenon, it is the phenomenon. – David Loeffler Oct 22 2011 at 7:41
3
@KConrad: This is an answer, not a comment. – Martin Brandenburg Oct 22 2011 at 9:50
7
In the case where $K$ is a finite extension of ${\bf Q}_p$, the argument I had in mind was the following: for every $\sigma$ in $Aut({\bf C}_p/K)$, $\sigma(\alpha)$ is again a root of $f$. Thus, $\alpha$ only has finitely many conjugates and thus must be algebraic. – Robert Pollack Oct 23 2011 at 4:11
show 7 more comments
## 3 Answers
At first I thought the question concerned the closed unit disc. Since it involves the open unit disc in ${\mathbf C}_p$ we need a little detail to see why the Weierstrass preparation theorem for series on the closed unit disc can be used. We'll pass to a suitable finite extension of K to pull this off.
Since there are assumed to be only finitely many zeros of $f(x)$ in the open unit disc of ${\mathbf C}_p$, let $\alpha$ be one of those zeros with largest absolute value. A priori $\alpha$ might be transcendental over $K$, but note that $|\alpha|_p$ is a rational power of $p$ and there are definitely $t$ in $\overline{K}$ with $|t|_p = |\alpha|_p$ (just take $t$ to be a suitable rational power of $p$ in ${\mathbf C}_p$). Let $L = K(t)$, which is a finite extension of $K$. Since $|t|_p < 1$, the power series $g(x) = f(tx)$ has coefficients in $L$ and converges on the closed unit disc in $L$ (equivalently, in ${\mathbf C}_p$). By the Weierstrass Preparation Theorem for (nonzero!) power series in $L[[x]]$ which converge on the closed unit disc, we can write $g(x) = W(x)U(x)$ where $W(x)$ is a polynomial in $L[x]$ and $U(x)$ is a power series in $L[[x]]$ with nonzero constant term which converges on the closed unit disc along with its inverse formal power series. Therefore on any complete extension field of $K$, $U(x)$ converges on its closed unit disc and is nonvanishing. For any root $r$ of $f(x)$ in ${\mathbf C}_p$, we have $|r|_p \leq |t|_p$, so $0 = f(r) = g(r/t) = W(r/t)U(r/t)$. The number $U(r/t)$ is nonzero, so $W(r/t) = 0$, which implies $r/t$ is algebraic over $L$, and thus over $K$. Since $t$ is algebraic over $K$, $r$ is algebraic over $K$.
Instead of assuming $f(x)$ has finitely many zeros in the open unit disc of ${\mathbf C}_p$ we only need the (superficially) weaker assumption that the roots of $f(x)$ in the open unit disc of ${\mathbf C}_p$ are uniformly bounded in absolute value away from 1. That is, assume that for some $\varepsilon > 0$ the two conditions $|r|_p < 1$ and $f(r) = 0$ in ${\mathbf C}_p$ imply $|r|_p \leq 1 - \varepsilon$. Then we can conclude the roots of $f(x)$ in the open unit disc of ${\mathbf C}_p$ are both algebraic over $K$ and finite in number. Picking $t$ in $\overline{K}$ so that $1 - \varepsilon \leq |t|_p < 1$, which is definitely possible, we can run through the previous argument and see that if $|r|_p < 1$ and $f(r) = 0$ in ${\mathbf C}_p$ then $W(r/t) = 0$, so not only does $r$ lie in a finite extension of $K$ but there are finitely many choices for $r$.
-
2
@KConrad: you don't even need to assume anything about your original power series other than that it is convergent on the open unit disc. Indeed, your argument shows that any root of a convergent power series which is contained in some closed disc is necessarily algebraic. But of course any point in the open disc is contained in some closed disc! Thus all roots of any convergent power series on the open unit disc are algebraic. – Robert Pollack Oct 23 2011 at 13:56
2
That's almost true. The zero series converges on the open unit disc and has quite a few non-algebraic roots. And you want the radius of a closed disc to be in $|K^\times|^{1/∞}=\{\sqrt[n]{|a|}:a\in K,\ n≥1\}$. But I agree the proof shows for any nonzero series $f(x)$ with coefficients in $K$ and any complete extension field $F$ of $K$, the roots of $f(x)$ are algebraic over $K$ in any open disc of $F$ where $f(x)$ converges. That's also true for closed discs in $F$ with radius in $|K^\times|^{1/∞}$. Is there a counterexample for a closed disc in $F$ with radius not in $|K^\times|^{1/∞}$? – KConrad Oct 23 2011 at 14:56
Is it true that any transcendental element of Cp has uncountably many conjugates? If so, then the argument I gave in the comments above would also show that all zeroes of convergent power series are algebraic. The argument would be, if there were a transcendental root then there would be uncountably many roots. But then in some closed disc there would be infinitely many zeroes. – Robert Pollack Oct 23 2011 at 15:24
2
Actually it's clear for a very simple reason why there's not going to be a root at an "inaccessible" radius: in the notation of my previous comment, say $f(x)$ converges on $\{z∈F:|z|≤R\}$ and $R$ is in $|F^\times|$ and not in $|K^\times|^{1/∞}$ (a better notation for that would be $|K^\times|^{\mathbf Q}$). If $|z|=R$ in $F$ then no two nonzero terms in the series for $f(z)$ have equal absolute value, and the terms in the series are tending to 0, so by the ultrametric inequality $|f(z)|$ is the absolute value of its largest term. So in particular $f(z) \not= 0$. – KConrad Oct 23 2011 at 16:04
@KConrad: Thank you, this is great! – Jeff H Oct 23 2011 at 18:05
I feel considerable trepidation in muddying the waters that Keith has so nicely clarified, but it seems to me, in view of the discreteness of the valuation on $K$, that first, the hypothesis that there are only finitely many roots is unnecessary, and second, it's unnecessary to go by way of the closed unit disk.
As an example of a series with infinitely many roots, hold in mind the logarithmic series, $\sum_n (-1)^{n+1}x^n/n$, convergent on the open unit disk. Its roots are the numbers $\zeta-1$, $\zeta$ running through the $p$-power roots of unity. For a series to be convergent on the open disk of ${\bf C}_p$, it's necessary and sufficient that the limit slope of the Newton polygon be nonnegative. You can define this as $\lim_n v(a_n)/n$ if you like. In case the series has positive limit slope, then there's a segment with positive slope, and you can use Weierstrass Prep directly to show that there are only finitely many roots of the series in the open disk, and these are roots of an integral monic $K$-polynomial. If, as in the case of the log, your series has limit slope zero, then you need a suitably jazzed-up version of W-Prep saying every vertex $V$ of the polygon gives you a monic polynomial factor, still with $K$-integers for coefficients. Or, since in this case we're only worried about the roots being algebraic, you can just replace $f(x)$ by $f(p^\lambda x)$, where $\lambda$ is a positive rational between the negatives of the slopes on either side of $V$. And then apply the first case, 'cause now the limit slope is $\lambda$. In any case, and by whatever method, the roots of $f$ are all algebraic.
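Lubin's limit-slope criterion is easy to probe numerically. A small sketch (my addition; $v$ is the $p$-adic valuation, as in the answer) computes $v(a_n)/n$ for the logarithm series, where $a_n = \pm 1/n$ gives $v(a_n) = -v_p(n)$, and watches the slopes dip to $-k/p^k$ at $n=p^k$ before tending to the limit slope $0$:

```python
from fractions import Fraction

def vp(n, p):
    """p-adic valuation of a positive integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

p = 2
# For log(1+x) = sum (-1)^(n+1) x^n / n we have v(a_n) = -v_p(n),
# so the Newton-polygon data points have "slope" v(a_n)/n = -v_p(n)/n.
slopes = [Fraction(-vp(n, p), n) for n in range(1, 1 + 2**10)]

# The deepest dips occur at n = p^k, where the value is -k/p^k -> 0,
# so the liminf (Lubin's limit slope) is 0, consistent with convergence
# exactly on the open unit disc.
print([slopes[2**k - 1] for k in range(11)])
print(min(slopes))
```

The global minimum $-1/2$ (at $n=2$ and $n=4$) is irrelevant to the limit slope; only the tail behaviour matters.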
-
@Lubin: This is a great (counter?)example. Thanks for this alternate perspective! – Jeff H Oct 24 2011 at 1:14
Some sloppiness on my part: that should have been the lim inf of those numbers in the def. of the limit slope. – Lubin Oct 24 2011 at 17:53
Here's another proof of the fact that any root of a (non-zero) $p$-adic power series (convergent on the open unit disc) is algebraic. The proof is a little silly in that it uses Tate's theorem on Galois invariants of ${\mathbb C}_p$ (which is quite deep) instead of the Weierstrass preparation theorem (which is fairly elementary). But here it is in any case.
The key fact I need is the following which I'll prove after explaining how it completes the proof.
Claim: Any transcendental element of ${\mathbb C}_p$ has uncountably many conjugates (under $\text{Gal}(\overline{\mathbb Q}_p/{\mathbb Q}_p))$.
Accepting this claim for the moment, let $\alpha$ be a transcendental zero of $f(x)$, our convergent power series. Then all conjugates of $\alpha$ are also zeroes of $f(x)$ in the open unit disc. Thus, by the above claim, $f(x)$ has uncountably many zeroes in the open unit disc of ${\mathbb C}_p$. However, any uncountable set in a separable metric space (e.g. ${\mathbb C}_p$) has an accumulation point. But then the zeroes of $f(x)$ have an accumulation point, necessarily in the open unit disc since that is closed in ${\mathbb C}_p$, which forces $f(x)$ to be identically zero.
Returning now to the claim, let $G=\text{Gal}(\overline{\mathbb Q}_p/{\mathbb Q}_p)$ and let $H$ denote the subgroup of $G$ which stabilizes $\alpha$. Since the conjugates of $\alpha$ are in one-to-one correspondence with $G/H$, we will show that $H$ has uncountable index.
First note that $G$ acts continuously on ${\mathbb C}_p$ where we give ${\mathbb C}_p$ the $p$-adic topology (not the discrete topology). Thus $H$ is a closed subgroup and hence of the form $\text{Gal}(\overline{\mathbb Q}_p/M)$ for some algebraic extension $M/{\mathbb Q}_p$.
Assume $G/H$ is finite, and thus that $M$ is a finite extension of ${\mathbb Q}_p$. But then by Tate's theorem, $\alpha$ must be in $M$ as it is fixed by $\text{Gal}(\overline{\mathbb Q}_p/M)$. This is impossible as $\alpha$ is transcendental.
Thus, $G/H$ is infinite, and we must now show that it is uncountable. We use the following (standard) fact:
Fact: Any infinite compact Hausdorff space with no isolated points is uncountable.
Since $G$ is compact, $G/H$ is compact. The coset space $G/H$ is Hausdorff since $H$ is closed. To see that $G/H$ has no isolated points, note that if it has one isolated point, then all of its points are isolated as $G$ acts transitively (by left multiplication) on $G/H$ by homeomorphisms. But then $G/H$ is discrete which is impossible as it is infinite and compact.
Thus, $G/H$ is uncountable, and $\alpha$ has uncountably many conjugates.
-
Rob, this is wonderful...thank you!! – Jeff H Oct 28 2011 at 15:09
http://quant.stackexchange.com/questions/2959/can-i-perform-an-asset-allocation-optimization-if-assets-are-perfectly-uncorrela
# Can I perform an asset allocation optimization if assets are perfectly uncorrelated?
(Here is a link to the original post)
I received this interesting problem from a friend today:
Assume that you are a portfolio manager with \$10 million to allocate to hedge funds. The due diligence team has identified the following investment opportunities (here Expected Return and Expected StdDev stand for Expected Monthly Return and Expected Standard Deviation of Monthly Return and Price = Price of each investment unit):
Hedge Fund 1: Expected Return = .0101, Expected StdDev = .0212, Price = \$2 million
Hedge Fund 2: Expected Return = .0069, Expected StdDev = .0057, Price = \$8 million
Hedge Fund 3: Expected Return = .0096, Expected StdDev = .0241, Price = \$4 million
Hedge Fund 4: Expected Return = .0080, Expected StdDev = .0316, Price = \$1 million
What is the optimal allocation to each hedge fund (use MATLAB)?
The responses to the original post were things I had considered, but the loss of correlation among assets still seems like a big issue. Under the assumption that the assets are independent, the covariance matrix is diagonal, and using the standard constrained portfolio allocation tools in MATLAB seem to fail. Should I be choosing a specific objective function like Mike Spivey suggested in the original post while assuming independence?
-
## 3 Answers
There is nothing wrong with using mean-variance on a collection of assets that are uncorrelated (which is almost impossible in practice, by the way). The algorithm should converge.
Mean-variance optimization basically aims to take advantage of diversification, which is trivially impossible where assets are perfectly uncorrelated, so you won't get amazing results.
If you want to use MATLAB, I'd suggest you use frontcon which should enable you to compute an efficient frontier with your data.
Note that your setup requires you to implement constraints, as you would like to spend the totality of the available 10M, but certain assets are available for a limited amount. You can define the constraints as follows, expressing them as a percentage of the total value of the portfolio.
$$\mathbf{w}=(w_1,w_2,w_3,w_4)' \quad \text{and} \quad I_4 \mathbf{w} \leq (0.2,0.8,0.4,0.1)'$$
and
$$w_i \geq 0 \quad \forall i$$
Since MV would not produce nice results (not well diversified), you could look at equal risk contribution algorithms, which would allow you to spread the risk over all your available assets. I understand it is commonly used in hedge fund allocation.
-
There's no problem at all using mean-variance optimization when correlations are zero. Any Quadratic Program solver will give you optimal weights. The problem is that the optimal weights a QP gives you will not, in general, result in dollar allocations that are integer multiples of the Price. To enforce that constraint, you could look into Integer Program solvers, which are designed to work with that type of constraint. Though, given how small your problem is, it would likely be easier to just list all possible combinations of allocations (by my count, there are only a couple hundred feasible allocations), and calculate whatever criterion you use for allocation decisions (Sharpe ratio?) for each possibility.
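A sketch of that enumeration approach, not the answerer's actual code. Assumptions I'm making explicit: funds are bought in whole units, the full \$10M is spent, same-fund units are perfectly correlated with each other while funds are mutually uncorrelated, and the criterion is a Sharpe-like ratio of expected dollar return to dollar standard deviation.

```python
from itertools import product

# Expected monthly return, monthly st.dev., and unit price ($M) per fund
mu = [0.0101, 0.0069, 0.0096, 0.0080]
sigma = [0.0212, 0.0057, 0.0241, 0.0316]
price = [2, 8, 4, 1]
budget = 10

best = None
# A fund's dollar st.dev. scales linearly in its own units (perfect
# self-correlation); dollar variances add across funds (zero correlation).
for alloc in product(range(6), range(2), range(3), range(11)):
    if sum(n * p for n, p in zip(alloc, price)) != budget:
        continue
    exp_ret = sum(n * p * m for n, p, m in zip(alloc, price, mu))
    var = sum((n * p * s) ** 2 for n, p, s in zip(alloc, price, sigma))
    ratio = exp_ret / var ** 0.5  # Sharpe-like criterion
    if best is None or ratio > best[0]:
        best = (ratio, alloc)
```

Under these assumptions the search picks one unit each of funds 1 and 2, which spends the budget exactly and rides the dominant risk-adjusted return of fund 2.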
-
Of course you need an objective function; otherwise you would optimally allocate 100% to the fund with the highest risk-adjusted returns.
-
Hmmm, I asked the wrong question I guess. When I use MATLAB's portfolio allocation toolbox, I think it does mean-variance analysis. Perhaps a better question is assuming I've set up the problem correctly assuming independence among monthly returns, would this assumption (0 correlation among assets) give results that make any sense (I suspect no)? I have read before that classical mean variance optimization is very sensitive to its inputs, so without correlation data, is MV analysis the best approach here? Thanks. – David Feb 13 '12 at 2:57
You are right, MV analysis makes no sense without utility function, otherwise what do you intend to optimize for? You need to specify a function that dictates the goal of your optimization. With independence assumption and no utility your allocation is gonna be 100% to the fund that maximizes E(r)/ E(sd), simple as that. – matt Feb 13 '12 at 3:04
OK thanks Matt. Now I'm going back to the basics using Lagrange multipliers. Given the information, one constraint seems simple: 2x + 8y + 4w + z = 10 (where x, y, w, z are the amounts of assets from hedge funds 1, 2, 3, and 4 respectively). Now for argument's sake, let's say I choose the utility function U(x,y,w,z) = xywz based purely on symmetry considerations. But with the specification thus far, I haven't included any of the standard deviation data. Any suggestions on how I may proceed from here? Thanks in advance. – David Feb 13 '12 at 3:32
– chrisaycock♦ Feb 13 '12 at 5:07
David, you can simply scale your weights by volatility or even better, risk adjusted expected return within your utility function. Given your (simplistic) assumptions of zero correlations, I would suggest your utility function ONLY reflect weights of risk adjusted returns. You can scale the weights by your preference, thats what the definition of Utility is actually all about. However, you have not given any indications that may allow me determine what your utility should look like. – matt Feb 13 '12 at 6:05
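A minimal sketch of the scaling rule matt describes, under the stated zero-correlation assumption. Weighting in proportion to E(r)/sd is one illustrative choice of utility, not a prescription:

```python
mu = [0.0101, 0.0069, 0.0096, 0.0080]
sigma = [0.0212, 0.0057, 0.0241, 0.0316]

# Weight each fund in proportion to its risk-adjusted expected return E(r)/sd
ratios = [m / s for m, s in zip(mu, sigma)]
weights = [r / sum(ratios) for r in ratios]
```

Fund 2, with the best risk-adjusted return, ends up with roughly half the portfolio; the dollar amounts would still need rounding to unit prices.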
|
http://math.stackexchange.com/questions/101928/what-do-we-know-about-graph-degree-sequences?answertab=votes
|
# What do we know about graph degree sequences?
The sequence of sizes of single-vertex cuts of a graph is called its degree sequence. Is there an agreed-upon name for the sequence of sizes of $k$-vertex cuts? What can be said about two graphs which have the same sequences for all $k$?
-
– Paul Jan 24 '12 at 10:15
@Paul: Yes. The point of the question is that the degree sequence records the number of edges connecting each vertex to the rest of the graph, which the OP generalizes to the sequence of numbers of edges connecting each set of $k$ vertices to the rest of the graph, and then asks what happens if $G$ and $H$ have the same generalized degree sequences for all $k$. – Louis Jan 24 '12 at 13:18
@Louis: Oh I see. Thank you for the explanation. – Paul Jan 24 '12 at 13:41
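A brute-force sketch of the generalized sequence Louis describes, using the assumed definition (for each $k$-subset, count the edges with exactly one endpoint in the subset). This is exponential in $k$, so it is only practical for small graphs:

```python
from itertools import combinations

def cut_sequence(adj, k):
    """Sorted sizes of the edge boundaries of all k-vertex subsets."""
    sizes = []
    for subset in combinations(adj, k):
        s = set(subset)
        # count edges with exactly one endpoint inside s
        sizes.append(sum(1 for v in s for u in adj[v] if u not in s))
    return sorted(sizes)

# path graph 0 - 1 - 2, as an adjacency-set dictionary
path = {0: {1}, 1: {0, 2}, 2: {1}}
```

For $k=1$ this recovers the ordinary degree sequence; for the path above, the $k=1$ and $k=2$ sequences happen to coincide.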
|
http://physics.stackexchange.com/questions/21086/what-is-meant-by-the-rest-energy-of-non-composite-particle
|
# What is meant by the rest energy of non-composite particle?
When talking about the rest energy of a composite particle such as a proton, part of the rest energy is accounted for by the internal kinetic energy of its constituent quarks. But what is physically meant by the rest energy of non-composite particles such as quarks?
-
The energy obtained by turning the quark into pure energy (eg light &c). I guess you already know this, though.. I think rest energy is just another property you take for granted (no intuition behind it), like the spin of an electron.. – Manishearth♦ Feb 16 '12 at 15:41
– Qmechanic♦ Feb 16 '12 at 15:54
Note that the valence quark masses make up several percent of the proton's mass. – dmckee♦ Feb 16 '12 at 16:11
## 1 Answer
One has to be familiar with four-vectors. In the same way that the length of a three-vector is an invariant obtained from the dot product of the vector with itself, the "length" of the relativistic four-vector is, by definition, the rest mass:
the mass $m$ entering the relativistic equation
$E^2 - p^2c^2 = m^2c^4,$
where $p$ is the momentum and $E$ the total energy; when $p=0$, the energy $E=mc^2$ is the rest energy.
In a composite particle the invariant mass, even when the particle is at rest, is the invariant mass of the four-vector obtained by adding the four-vectors of all the constituent particles. A composite particle thus displays an effective rest mass.
For a non-composite particle, such as the electron, the energy when at rest, $p=0$, is determined entirely by its mass: $E = mc^2$.
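A numerical sketch of the four-vector bookkeeping described above, in units with $c=1$ (the function name and the photon example are mine, chosen for illustration):

```python
import math

def invariant_mass(fourvectors):
    """Invariant mass of a system from (E, px, py, pz) components, c = 1."""
    E = sum(v[0] for v in fourvectors)
    px = sum(v[1] for v in fourvectors)
    py = sum(v[2] for v in fourvectors)
    pz = sum(v[3] for v in fourvectors)
    # clamp tiny negative rounding errors before the square root
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# a single photon is massless, yet two back-to-back photons of energy E
# form a system with invariant mass 2E -- the "effective rest mass" above
photon_pair = [(1.0, 1.0, 0.0, 0.0), (1.0, -1.0, 0.0, 0.0)]
```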
-
|
http://mathoverflow.net/questions/58448/virtual-chain-conditions-in-groups/58449
|
## virtual chain conditions in groups
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
In group theory, it's often very useful to know whether a family of subgroups (e.g. normal subgroups, Zariski-closed subgroups, ...) satisfies an ascending chain condition or a descending chain condition (that is, all ascending/descending chains in this family are finite). What I'm interested in are weaker 'virtual' chain conditions: a virtual DCC would be that given a sequence $G_1 > G_2 > \dots$ of subgroups (of some special kind) such that $G_{i+1}$ has infinite index in $G_i$ for all $i$, then the sequence must terminate. One can define virtual ACCs similarly.
Does anyone know of work that has been done on conditions of this kind, either showing that a family of subgroups satisfies the conditions or deriving consequences from them? References for analogous conditions in other algebraic contexts would also be interesting.
-
## 1 Answer
The virtual DCC doesn't seem so different from the notion of Krull dimension $1$ that I explained in answer to this http://mathoverflow.net/questions/2525/different-definitions-of-the-dimension-of-an-algebra/2588#2588 question.
-
|
http://nrich.maths.org/6331&part=
|
# Iffy Logic
##### Stage: 4 Short Challenge Level:
Mathematical logic and thinking are grounded in a clear understanding of how the truths of various mathematical statements are linked together.
For example, for any number $x$ the expressions $x> 1$ and $x^2> 1$ are both mathematical statements which might be true or might be false. However, we always know that $x^2> 1$ IF $x> 1$, whereas it is not always the case that $x> 1$ IF $x^2> 1$ (consider $x=-2$, for example). Thus:
It is correct to write $\quad\quad x^2> 1$ IF $x> 1$
It is incorrect to write $\quad\quad x> 1$ IF $x^2> 1$
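The two statements can be checked mechanically on sample values (a sketch; the samples are illustrative, not a proof):

```python
def implies(p, q):
    """Material implication: the statement 'q IF p'."""
    return (not p) or q

samples = [-3, -2, -1, -0.5, 0, 0.5, 1, 1.5, 2, 3]

# "x^2 > 1 IF x > 1" survives every sample ...
forward = all(implies(x > 1, x * x > 1) for x in samples)
# ... but the converse fails, e.g. at x = -2, where x^2 > 1 yet x < 1
converse = all(implies(x * x > 1, x > 1) for x in samples)
```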
Test out your logical thinking with these statements, where $n$ and $m$ are positive integers, assuming any obvious properties about numbers (full screen version).
You can view and print out the cards in this Word document.
Are there multiple solutions? If not, how do you know?
How would the logic change if $n$ and $m$ were not necessarily positive or not necessarily integers?
Extension: Note that this activity does not prove that the statements are true. How might you go about proving that certain combinations are correct? How might you go about proving that certain combinations are incorrect?
|
http://motls.blogspot.com/2012/06/speculating-on-cms-hints-of-320gev-mssm.html?m=0
|
The Reference Frame
Tuesday, June 12, 2012
Speculating on CMS hints of $320\GeV$ MSSM Higgs
TRF has considered the $125\GeV$ Higgs to be a sure thing since December 14th, 2011. New signals appear every day, as experimental physicists say that they're indeed closer to the most official discovery you may get.
But I am convinced that even those people who say that they remain "open-minded" won't really be thrilled once the signal reaches the "official discovery" threshold. A convention may say that 4.5 sigma (the highest confidence level obtained from the 2011 combinations) and 6 sigma (which you may get now) are qualitatively different signals – only the latter is a discovery – but everyone correctly feels that the difference between them is relatively small.
In the absence of clear enough deviations of the published LHC data from the Standard Model, we may focus on the Higgs charts. If you play with the viXra combo Higgs java applet, you may try to search for other excesses besides the obvious $125\GeV$ line.
Luciano Maiani, a senior Italian physicist – who is famous for having stopped the LEP collider in 2000 when it was promising to discover a $115\GeV$ Higgs boson (it seems likely now that there's nothing over there and $125\GeV$ seems too far, so Nature has endorsed Maiani's decision now) – and his two collaborators fell in love with a rather small 2-sigma excess seen especially by the CMS detector especially in the $ZZ$ and perhaps $\gamma\gamma$ channel.
If you go to Phil Gibbs' combo applet, you may switch the experiment to "custom" and pick CMS; and switch the channel to "custom" and pick $H\to ZZ$. Where will you see the greatest peak? Yes, it will be near $320\GeV$, about 2 standard deviations above the mean.
This value of the mass is of course excluded as the mass of the (only) Higgs boson of the Standard Model, but much less safely excluded than the adjacent masses. More importantly, it could be another Higgs if supersymmetry is right and if its Higgs sector is not decoupled (note that Kane et al., for example, predict that only the single MSSM Higgs is light enough). Recall that supersymmetry implies that the God particle has five faces.
So Maiani, together with Polosa and Riquer, boldly assume that the extra excess near $320\GeV$ is a sign of the heavier CP-even neutral Higgs boson, $H$, and they try to check whether the emerging picture with the $h,H$ Higgs bosons at $125$ and $320\GeV$ seems sensible and consistent with the data.
Probing Minimal Supersymmetry at the LHC with the Higgs Boson Masses (arXiv)
In this freshly improved and updated February 2012 paper, they calculate various relationships between the masses given the assumption and the cross sections, and their answer is that the hypothesis does seem sensible. What are the other supersymmetric parameters they're able to predict from their assumptions?\[
\begin{aligned}
\tan\beta&\approx 2\\
M_H &= M_{H^\pm}=M_A= 320\GeV\\
\sqrt{M_{\tilde t_R} M_{\tilde t_L}} &\approx 3.9\TeV
\end{aligned}
\] You see that $\tan\beta$ is relatively small in this scenario and all the four other Higgses clump near $320\GeV$. They also say that the number of $H$ events decaying to $ZZ$ should be about 3 times smaller than in the Standard Model. The charged and CP-odd Higgses would have clear signals while decaying to bottom quark pairs and/or $t\bar b$. The "geometric average" stop squark mass would be near $4\TeV$.
The heavy $H$ Higgs boson is created and decays to two photons at a rate that is 4 times lower than the figure for the Standard Model.
I think that one additional number they mention is rather worrisome for their point in the parameter space and, in fact, MSSM in general. The $\gamma\gamma$ decay channel of the light Higgs is predicted to be 20% weaker than in the Standard Model; preliminary inaccurate data suggest that it could, on the contrary, be up to 100% greater.
Of course, all these speculations are experimentally supported by a 2-sigma bump only so it may easily go away and it probably will. But once again, there's no law that would say that all bumps will go away and there will never be any new physics anymore...
Posted by Luboš Motl
|
Other texts on similar topics: experiments, LHC, string vacua and phenomenology
snail feedback (2):
reader Brian G Valentine said...
"there's no law that would say that all bumps will go away and there will never be any new physics anymore... "
The second part of this does NOT follow from the first - the bumps could go away, but it would certainly take some "physics" to make that happen.
I did not try the applet, so I do not know if it is tainted.
(Like AGE applets. Input amount of CO2 that will be added to atmosphere, find out how much Earth's temperature will "rise")
reader Luboš Motl said...
Thanks for your comment, Brian, but there must be some misunderstanding here.
The two parts of the sentence say exactly the same thing. If there's a "bump [deviation from the Standard Model charts/predictions] that doesn't go away", it's exactly the same thing as "new physics".
If there's no bump that is more than noise that goes away, it means that the Standard Model is OK and there's no visible new physics. And vice versa. If there's a bump that gets greater as you collect more data, it proves that the Standard Model is wrong and there are new physical phenomena.
I don't understand your "tainted applet" comment. The applet is a straightforward program to sum up - and do other simple enough operations - out of the publicly known data about the number of events of various kinds at the LHC.
Phil's formulae for the combinations could be a bit less sophisticated and precise than the "official ones" but they have turned out to be almost indistinguishable visually - and one may be sure that if there are big bumps in one source, there will also be bumps in the other source.
Cheers
LM
|
http://cms.math.ca/10.4153/CJM-2002-044-3
|
Canadian Mathematical Society
www.cms.math.ca
# Multipliers on Vector Valued Bergman Spaces
http://dx.doi.org/10.4153/CJM-2002-044-3
Canad. J. Math. 54(2002), 1165-1186
Published:2002-12-01
Printed: Dec 2002
• Oscar Blasco
• José Luis Arregui
## Abstract
Let $X$ be a complex Banach space and let $B_p(X)$ denote the vector-valued Bergman space on the unit disc for $1\le p<\infty$. A sequence $(T_n)_n$ of bounded operators between two Banach spaces $X$ and $Y$ defines a multiplier between $B_p(X)$ and $B_q(Y)$ (resp.\ $B_p(X)$ and $\ell_q(Y)$) if for any function $f(z) = \sum_{n=0}^\infty x_n z^n$ in $B_p(X)$ we have that $g(z) = \sum_{n=0}^\infty T_n (x_n) z^n$ belongs to $B_q(Y)$ (resp.\ $\bigl( T_n (x_n) \bigr)_n \in \ell_q(Y)$). Several results on these multipliers are obtained, some of them depending upon the Fourier or Rademacher type of the spaces $X$ and $Y$. New properties defined by the vector-valued version of certain inequalities for Taylor coefficients of functions in $B_p(X)$ are introduced.
MSC Classifications: 42A45 - Multipliers 46E40 - Spaces of vector- and operator-valued functions
|
http://math.stackexchange.com/questions/265279/determining-automorphism-group?answertab=votes
|
# Determining automorphism group
Let $K=\mathbb{Q}[\sqrt{2},\sqrt{3}]$. In the book Abstract Algebra by Dummit and Foote, page 563, the author gave an example of finding the Galois group of $K/\mathbb{Q}$. Here are some arguments that relate to my question :
The extension $\mathbb{Q}[\sqrt{2},\sqrt{3}]$ is Galois over $\mathbb{Q}$ since it is the splitting field of $(x^2-2)(x^2-3)$. Any automorphism $\sigma$ is completely determined by its action on the generators $\sqrt{2},\sqrt{3}$, which must be mapped to $\pm\sqrt{2}, \pm\sqrt{3}$, respectively.
Hence the only possibilities for automorphisms are the maps: $$\begin{cases} \sqrt{2}\mapsto \sqrt{2} \\ \sqrt{3}\mapsto \sqrt{3} \end{cases} \qquad \begin{cases} \sqrt{2}\mapsto -\sqrt{2} \\ \sqrt{3}\mapsto \sqrt{3} \end{cases} \qquad \begin{cases} \sqrt{2}\mapsto \sqrt{2} \\ \sqrt{3}\mapsto -\sqrt{3} \end{cases} \qquad \begin{cases} \sqrt{2}\mapsto -\sqrt{2} \\ \sqrt{3}\mapsto -\sqrt{3} \end{cases}$$
My question is :
1. Why does the automorphism have to map $\sqrt{2}$ to $\pm\sqrt{2}$, and $\sqrt{3}$ to $\pm\sqrt{3}$? Why can't we choose an automorphism like $$\begin{array}{l} \sqrt{2}\mapsto \sqrt{3} \\ -\sqrt{2}\mapsto -\sqrt{3} \end{array}$$ I tried to prove that the above map is not an automorphism but my attempt failed. Where am I wrong?
2. Let $\sigma$ be the 2nd automorphism, $\tau$ be the 3rd automorphism, then what is: $\sigma(-\sqrt{2})$ and $\tau(-\sqrt{3})$ ?
P.S.: I do not know the LaTeX code for the bracket; mod, please help me. Thanks.
-
## 1 Answer
If $\alpha^2=2$ then $\sigma(\alpha)^2=\sigma(\alpha^2)=\sigma(2)=2$. In general, if $f(\alpha)=0$ with $f\in\mathbb Q[X]$, then also $f(\sigma\alpha)=0$ (because $\sigma f=f$).
For your second question note that with $u,v\in \mathbb Q$ you have $\sigma(u\alpha+v\beta)=u\sigma(\alpha)+v\sigma(\beta)$.
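The four maps can be checked mechanically. A sketch (my own encoding, not from the book) representing an element $a+b\sqrt2+c\sqrt3+d\sqrt6$ of $K$ as the tuple $(a,b,c,d)$:

```python
def mul(x, y):
    # product in Q(sqrt2, sqrt3) on the basis (1, sqrt2, sqrt3, sqrt6),
    # using sqrt2*sqrt3 = sqrt6, sqrt2*sqrt6 = 2*sqrt3, sqrt3*sqrt6 = 3*sqrt2
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    return (a1*a2 + 2*b1*b2 + 3*c1*c2 + 6*d1*d2,
            a1*b2 + b1*a2 + 3*(c1*d2 + d1*c2),
            a1*c2 + c1*a2 + 2*(b1*d2 + d1*b2),
            a1*d2 + d1*a2 + b1*c2 + c1*b2)

def sigma(x):
    # sqrt2 -> -sqrt2, sqrt3 fixed (hence sqrt6 -> -sqrt6)
    a, b, c, d = x
    return (a, -b, c, -d)

def tau(x):
    # sqrt3 -> -sqrt3, sqrt2 fixed (hence sqrt6 -> -sqrt6)
    a, b, c, d = x
    return (a, b, -c, -d)

sqrt2, sqrt3 = (0, 1, 0, 0), (0, 0, 1, 0)
```

In particular `sigma((0, -1, 0, 0)) == (0, 1, 0, 0)`, i.e. $\sigma(-\sqrt2)=\sqrt2$, answering question 2 by linearity. And a map sending $\sqrt2\mapsto\sqrt3$ would have to send $2=(\sqrt2)^2$ to $(\sqrt3)^2=3$, contradicting that automorphisms fix $\mathbb{Q}$, which answers question 1.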
-
Thanks @Hagen von Eitzen for your very nice answer and edit. I have one more question: how can we determine the Galois group of the splitting field of some separable polynomial? For example, determining the Galois group of the splitting field of $x^3-2=0$, there are 9 possibilities for the automorphism but the Galois group is only of order 6? – knot Jan 1 at 15:37
|
http://mathhelpforum.com/differential-equations/161569-finding-general-solution-2nd-order-linear-ordinary-differential-equation.html
|
# Thread:
1. ## Finding the General Solution of a 2nd Order Linear Ordinary Differential Equation
Question: Find the general solution to the equation: $t^2y'' - 2y = t$.
Attempt at solution: The main catch to this question that causes me problems is the fact that the coefficients are not numbers but rather variables such as t in the question. I know to use the method of undetermined coefficients to solve this question, but i am just stuck on that part!
Thanks to all who contribute!
2. The equation is of the Cauchy Euler type.
Make the substitution
$z=\ln(t)$ then
$\frac{dy}{dt}=\frac{dy}{dz}\frac{dz}{dt}=\frac{1}{t}\frac{dy}{dz}$
$y''=\frac{d}{dt}\frac{dy}{dt}=-\frac{1}{t^2}\frac{dy}{dz}+\frac{1}{t^2}\frac{d^2y}{dz^2}$
Can you finish from here?
This gives the ODE
$t^2\left(-\frac{1}{t^2}\frac{dy}{dz}+\frac{1}{t^2}\frac{d^2y}{dz^2}\right)-2y=e^{z}$
$y''-y'-2y=e^{z}$, where primes now denote derivatives with respect to $z$.
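Finishing from there (a sketch filling in the steps left to the reader): the characteristic equation $r^2-r-2=0$ gives $r=2,-1$, and undetermined coefficients with $y_p=Ae^{z}$ gives $-2A=1$, so $A=-\tfrac12$. Back in the variable $t=e^{z}$, the general solution is $y = C_1 t^2 + C_2 t^{-1} - t/2$, which can be confirmed by direct substitution:

```python
def y(t, c1, c2):
    # proposed general solution y = C1*t**2 + C2/t - t/2
    return c1 * t * t + c2 / t - t / 2

def ypp(t, c1, c2):
    # its second derivative, computed by hand: 2*C1 + 2*C2/t**3
    return 2 * c1 + 2 * c2 / t ** 3

# residual of t^2 y'' - 2 y = t should vanish for any choice of constants
checks = [abs(t * t * ypp(t, c1, c2) - 2 * y(t, c1, c2) - t)
          for t in (0.5, 1.0, 2.0, 3.7)
          for c1, c2 in ((0, 0), (1, 0), (0, 1), (2, -3))]
```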
|
http://mathoverflow.net/questions/39124/fibonacci-sequence-inversion/39177
|
## Fibonacci sequence inversion
How do I get the index in the sequence from the Fibonacci number?
0 1 1 2 3 5 8 13 21 34...
For example
N(3) = 4 (starting from zero)
N(34) = 9 (starting from zero)
..
N(X) = ?
I've seen an equation on Wikipedia.
Are there other ways to compute it?
-
What's wrong with the logarithm? – Qiaochu Yuan Sep 17 2010 at 18:02
There's a sense in which the logarithm seems inelegant here, I think. – Michael Lugo Sep 17 2010 at 18:43
If a logarithm is distasteful, then you could also just iteratively compute the Fibonacci sequence until you find a number which is equal to or greater than the number you are looking for. By doing this, you would ultimately find the index you want in time proportional to its location in the sequence. Since the Fibonacci terms grow exponentially, this algorithm would be polynomial in the size of the input number (in bits), and so it isn't really that inefficient. – Mikola Sep 17 2010 at 20:11
The logarithm seems inevitable, but you can get rid of the floor function at the expense of some square roots. Just observe that $5F_n^2-2=\varphi^{2n}+\varphi^{-2n}$ and solve a quadratic equation in $\varphi^{2n}$. – Sergei Ivanov Sep 17 2010 at 20:21
## 6 Answers
As the previous answers have stated the map from $F_n \rightarrow n$ is essentially a logarithm. Since the binary representation of $F_n$ has about $c n$ bits (for the appropriate constant $c = \log_2 \phi$, where $\phi = (1+\sqrt{5})/2$), the bit complexity of calculating the logarithms is about $c' \log^2 n$, for some constant $c'$. Here's another simpler method which has the same complexity:
If you are given $F_n$ for some unknown $n$ if you knew $F_{n-1}$ with $n$ subtractions you can find out what $n$ is (run the fibonacci recursion in reverse). But $F_n/F_{n-1} \approx \phi$. So if you know $1/\phi$ to about $2n$ bits of precision you can find $F_{n-1}$ by multiplying by $1/\phi$ and rounding to the nearest integer. This calculation is certainly simpler than taking logs.
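A sketch of that reverse-recursion method. Double precision for $1/\varphi$ is adequate up to roughly $F_{70}$; because $F_1=F_2=1$, the ambiguous input 1 is reported with index 2 here by convention:

```python
def fib_index(f):
    """Return n such that F_n == f, by running the recursion backwards."""
    if f == 0:
        return 0
    phi = (1 + 5 ** 0.5) / 2
    a, b = f, round(f / phi)   # b approximates F_{n-1}, exact after rounding
    idx = 1                    # when b hits 0 we have reached (F_1, F_0)
    while b > 0:
        a, b = b, a - b        # (F_k, F_{k-1}) -> (F_{k-1}, F_{k-2})
        idx += 1
    return idx
```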
Another alternative is to take $p$-adic logarithms: the function $n \rightarrow F_n$ is $p$-adically continuous, so since $\mathbb{Z}$ is dense in $\mathbb{Z}_p$ it defines a unique $p$-adically continuous function, which by a criterion of Mahler is analytic. You can invert the function via Newton's method -- you just need to find the answer mod $p$ to get a starting point.
-
Hendrik Lenstra's paper "Profinite Fibonacci Numbers" is relevant to the $p$-adic version mentioned above math.leidenuniv.nl/~hwl/papers/fibo.pdf – Victor Miller Sep 18 2010 at 2:23
For what it's worth, this is essentially A072649 in Sloane (http://www.research.att.com/~njas/sequences/A072649). All the formulas given there are either in terms of the logarithm or assume the Fibonacci numbers are already known and just search that sequence.
-
Well, you can almost certainly use integer and modular arithmetic, with the Chinese remainder theorem, because the sequence is periodic for any modulus n you want. This requires some pre-computation, but probably you can predict how much in advance if you know ahead of time how large the Fibonacci number is.
For example, I know the index for 34 must be a multiple of 3, just because 34 is even.
Edit: In practical terms perhaps you just sieve on a range of integers instead of using the Chinese remainder theorem (see comments)? Using the number of bits of input as the complexity measure N, you'd need a range of length proportional to N (take some easy rational upper and lower bounds to log 2/log (golden ratio)). Then looking mod 2 you can strike out at least one third of the numbers. Modulo other small primes you strike out some proportion which depends on case but is not too small. You are going to continue until only one integer from the range remains as a candidate. This really doesn't look too bad: the period mod p may be large or small, but what matters is the proportion of the time a given residue class appears.
-
You shouldn't need much precomputation. Just knowing the number of digits of the Fibonacci number in question, you can compute n to within a constant factor, and after that it shouldn't be hard to use modular considerations to pin the index down precisely (if you really, really want to avoid using logarithms for some reason). – Qiaochu Yuan Sep 17 2010 at 18:49
1
If you wanted to compute this quickly, logs are hard to compute exactly. – Michael Lugo Sep 17 2010 at 18:58
In particular the sequence is periodic with period at most $n^2-1$ modulo any integer $n \ge 1$ (the proof is elementary and nearly obvious). So to do CRT you just need a bunch of primes (or maybe prime powers?). Shouldn't be much bigger than $n^2$? – drvitek Sep 18 2010 at 0:14
Any such formula has a problem in that two consecutive elements of the sequence are equal to 1. So any formula applies only for sufficiently large members of the sequence.
Another question is whether you know that the number is a Fibonacci number and want to find the index, or whether the question involves detecting whether the number is a Fibonacci number and also determining its position in the sequence.
-
Invert the formula $F(n)= (r^n - (1-r)^n)/\sqrt{5}$ where $r=(1+\sqrt{5})/2$ by the Lagrange Inversion Formula (LIF). Let $X=\sqrt{5}*F(n)$ and $s=1-r$ so $X=r^n - s^n$. So
$$X=\sum_{j=0}^\infty \frac{(\ln(r))^j - (\ln(s))^j}{j! n^j}$$
Then n = X + sum of X^j * sum over all sequences (b(2),b(3),....,) of nonnegative integers of (-1)^(sum of b(i) from i=2 to j) * ( (sum of b(i) from i=2 to j) + j-1)!/(j!) * product from i=2 to j of (((ln(r))^i - (ln(s))^i)/i!)^b(i) / (b(i)!) ) such that sum of (i-1)*b(i) from i=2 to j equals j-1 and pray for convergence everywhere.
I got this form of the LIF from page 264 of G.P. Egorychev's book, "Integral Representations of Combinatorial Sums"
-
@resolvent: the LaTeX support exists for a reason. Use it and your formulae will appear much nicer (I fixed up the first paragraph for ya!). Also, proper handling of equation makes it clearer what you meant. For the displayed equation for $X = \ldots$ above, I had to guess whether you meant that $n^j$ factor to go in the denominator or the numerator. I probably guessed wrong, but it wasn't clear from how you written it in in-line text form. Please try to see if you can fix up the second paragraph? – Willie Wong Oct 4 2010 at 16:58
(I mean, it is okay if it is some simple things, but with all the sums and square roots floating around, you really should take advantage of the LaTeX display.) – Willie Wong Oct 4 2010 at 17:00
You can find the index using only integer arithmetic.
Since $F_n$ is monotone one can use Binary search using the fact that $F_i$ can be computed in $O(\log(i))$.
This method works for all monotone sequences and can be used to check if a number is in the sequence.
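A sketch of this answer (function names are my own): fast doubling computes $F_i$ in $O(\log i)$ multiplications via $F_{2k}=F_k(2F_{k+1}-F_k)$ and $F_{2k+1}=F_k^2+F_{k+1}^2$, and a binary search then locates the index.

```python
def fib_pair(i):
    """Fast doubling: return (F_i, F_{i+1}) in O(log i) arithmetic operations."""
    if i == 0:
        return (0, 1)
    a, b = fib_pair(i >> 1)
    c = a * (2 * b - a)      # F_{2k}
    d = a * a + b * b        # F_{2k+1}
    return (d, c + d) if i & 1 else (c, d)

def index_of(F):
    """Binary search for n with F_n == F (take F >= 2 so the index is unique).
    Returns None if F is not a Fibonacci number."""
    hi = 1
    while fib_pair(hi)[0] < F:   # grow an upper bound
        hi *= 2
    lo = hi // 2
    while lo < hi:               # standard binary search on the monotone tail
        mid = (lo + hi) // 2
        if fib_pair(mid)[0] < F:
            lo = mid + 1
        else:
            hi = mid
    return lo if fib_pair(lo)[0] == F else None
```

As the answer notes, the same search also detects whether the input is a Fibonacci number at all.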
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 40, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9286609888076782, "perplexity_flag": "head"}
|
http://mathhelpforum.com/advanced-algebra/2092-sets-counting.html
|
# Thread:
1. ## Sets and counting
If A union B = an empty set which of the following are true? (give all correct choices)
a. A is a proper set of B
b. B is a proper set of A
c. A intersection B = an empty set (correct)
d. A = an empty set (Correct)
e. B = an empty set (Correct)
Please let me know if I have picked all the correct choices and if not explain why, because my textbook does not explain this very well.
2. Originally Posted by d.darbyshire
If A union B = an empty set which of the following are true? (give all correct choices)
a. A is a proper set of B
b. B is a proper set of A
c. A intersection B = an empty set (correct)
d. A = an empty set (Correct)
e. B = an empty set (Correct)
Please let me know if I have picked all the correct choices and if not explain why, because my textbook does not explain this very well.
Looks OK to me; the condition implies A and B are both the empty set, so cases c through e are true and a and b are false.
RonL
3. Originally Posted by d.darbyshire
If A union B = an empty set which of the following are true? (give all correct choices)
a. A is a proper set of B
b. B is a proper set of A
c. A intersection B = an empty set (correct)
d. A = an empty set (Correct)
e. B = an empty set (Correct)
Please let me know if I have picked all the correct choices and if not explain why, because my textbook does not explain this very well.
If $A\cup B=\{\}$ then both $A,B$ must be empty sets. Because if $A,B$ are not then there is an element of $A \cup B$, but then it is also an element of $\{\}$ (by definition of equality). But this is a contradiction because $\{\}$ has no elements. Thus, both $A \mbox{ and } B=\{\}$.
A proper subset of a set is a subset which is not equal to that set. In mathematical terms,
$P\subset S\mbox{ iff } P\subseteq S,P\not =S$. Since $A=B$ as demonstrated in the previous paragraph they cannot be proper subsets of each other. Thus, the answer to #1 and #2 is no.
Assume $A\cap B$ is not empty. Then there exists an element which is common to both $A\mbox{ and }B$(by definition of intersection) which is not possible because $A,B$ are both empty. Thus, the answer to #3 is yes.
Questions #4 and #5 were demonstrated in the first paragraph, that they are both empty.
4. A question on notation here. The book I learned set theory from has the statement $A \subset B$ defined as "A is a subset of B" implying that all elements of A are contained in B. This means that $A=B$ is a possibility. However, I've noted on a number of occasions that members of the forum are using $A \subset B$ to mean that A cannot equal B. Such as in the previous post:
$P \subset S\mbox{ iff } P\subseteq S,P\not=S$
Is my book using non-standard notation, or are there different conventions in use?
-Dan
5. Originally Posted by topsquark
A question on notation here. The book I learned set theory from has the statement $A \subset B$ defined as "A is a subset of B" implying that all elements of A are contained in B. This means that $A=B$ is a possibility. However, I've noted on a number of occasions that members of the forum are using $A \subset B$ to mean that A cannot equal B. Such as in the previous post:
Is my book using non-standard notation, or are there different conventions in use?
-Dan
Look it up on Wikipedia!
http://en.wikipedia.org/wiki/Subset
Yes, I believe that your book is wrong (or non-standard) if your memory is correct.
6. Originally Posted by topsquark
A question on notation here. The book I learned set theory from has the statement $A \subset B$ defined as "A is a subset of B" implying that all elements of A are contained in B. This means that $A=B$ is a possibility. However, I've noted on a number of occasions that members of the forum are using $A \subset B$ to mean that A cannot equal B. Such as in the previous post:
Is my book using non-standard notation, or are there different conventions in use?
-Dan
Conventions change with time and from author to author. In general you
have to look at what an author has defined their symbols to mean.
However, having said that since we have both symbols $\subset$ and $\subseteq$ it seems
silly not to take advantage and let them denote proper subset, and subset
which takes advantage of the analogy with $>$ and $\ge$.
RonL
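As an aside on the convention question: Python's built-in set operators happen to implement exactly the analogy RonL describes, with `<` for proper subset and `<=` for subset (a small illustration, not from the thread):

```python
A = {1, 2}
B = {1, 2, 3}

assert A <= B and A < B          # A is a subset, and a proper subset, of B
assert B <= B and not (B < B)    # every set is a subset, but never a proper subset, of itself
assert set() | set() == set()    # the union of empty sets is empty, as in the question
```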
7. Originally Posted by ThePerfectHacker
Look it up on Wikipedia!
http://en.wikipedia.org/wiki/Subset
Yes, I believe that your book is wrong (or non-standard) if your memory is correct.
I don't mean to say anything bad about Wikipedia, but as it is written by users (aka not professionally written) I don't take anything I see there as gospel. I'm not saying that anything there has been deliberately written incorrectly, and it definitely IS useful, but I don't know what kind of checks they use to make sure mistakes don't get in. I HAVE heard that errors are in there. So I like to check other places for verification.
-Dan
8. Originally Posted by topsquark
I don't mean to say anything bad about Wikipedia, but as it is written by users (aka not professionally written) I don't take anything I see there as gospel. I'm not saying that anything there has been deliberately written incorrectly, and it definately IS useful, but I don't know what kind of checks they use to make sure mistakes don't get in. I HAVE heard that errors are in there. So I like to check other places for verification.
-Dan
Just because a person does not have a Ph.D. in math does not mean their writing is not professional; some amateurs know much more than professors.
9. Originally Posted by ThePerfectHacker
Just because a person does not have a Ph.D. in math does not mean their writing is not professional; some amateurs know much more than professors.
When I used the phrase "not professionally written" I was referring to the fact that my 10 year old niece can post on Wikipedia. That's nice, but is someone going to go through her post and correct any errors? I don't know who is running the site and what kind of editing they do. Until I find out I will regard any information from the site as suspect.
I was most certainly not taking a poke at those without PhDs, as I don't have one either.
-Dan
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 26, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9717491865158081, "perplexity_flag": "head"}
|
http://quant.stackexchange.com/questions/2199/what-is-the-forward-rate-for-a-black-karasinski-interest-rate-model?answertab=votes
|
# What is the forward rate for a Black-Karasinski interest rate model?
I was wondering if anyone could help me with the instantaneous forward rate equation for a Black-Karasinski interest rate model?
I was also after the Black-Karasinski Bond Option Pricing Formula.
-
1
Hi Ian, welcome to QuantSE. In order to maximize your chances to get an answer, please provide a link to a description of the model you are mentioning, or, even better, add the dynamics $dr$ to question. This will also make the site more readable for other users. – SRKX♦ Oct 19 '11 at 7:42
Also, you should provide us with what you've come up so far. – SRKX♦ Oct 19 '11 at 7:50
## 1 Answer
Hi, the forward rate equation does not depend on the model; it is computed from the prices of zero-coupon bonds via the following equation:
$$P(t,T)=\exp\left(-\int_t^T f_t(u)\,du\right)$$
If you have a continuum of zero coupon bond prices which are sufficiently smooth then you can deduce from it that :
$$f_0(T)=-\frac{\partial \ln P(0,T)}{\partial T}$$
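A small numerical illustration of this relation (the zero-coupon curve below is made up for the example, not part of the question):

```python
from math import exp, log

def P(T):
    # hypothetical smooth zero-coupon curve P(0,T); its exact instantaneous
    # forward rate is f(0,T) = 0.03 + 0.002*T
    return exp(-(0.03 * T + 0.001 * T * T))

def forward(T, h=1e-5):
    # f(0,T) = -d ln P(0,T) / dT, approximated by a central finite difference
    return -(log(P(T + h)) - log(P(T - h))) / (2.0 * h)
```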
Anyway, I think that what you are really asking for is the set of SDEs followed by those instantaneous forward rates under the proper measure. I haven't done the calculations (they are tedious), but I can indicate the following procedure: you have to reconcile HJM with BK, which are respectively given by:
$$d (\ln r_t)= \kappa(\theta(t) - \ln r_t)\,dt +\sigma\, dW_t$$
and :
$$r_t=f(0,t)+\int_0^t\sigma'(u,t)[\int_u^t\sigma'(u,s)ds]du+\int_0^t\sigma'(u,t)dW_u$$
where
$$df_t(u)=\sigma'(t,u)[\int_t^u\sigma'(t,s)ds]dt+\sigma'(t,u)dW_t$$ and $r_t=f(t,t)$.
Anyway, to my knowledge there are no analytical bond or bond option prices in this model.
By the way, there are finite explosion time problems in this model. You could try the Hull & White model for tractability (though its rates can go negative), or the CIR model if you want your rates to stay positive.
Best Regards
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9523995518684387, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/27002/instantons-anomalies-and-1-loop-effects
|
# Instantons, anomalies, and 1-loop effects
A symmetry is anomalous when the path-integral measure does not respect it. One way this manifests itself is in the inability to regularize certain diagrams containing fermion loops in a way compatible with the symmetry. Specifically, it seems that the effect is completely determined by studying 1-loop diagrams. Can someone give a heuristic explanation as to why this is the case? And is there a more rigorous derivation than "I just can't find any good way to regularize this thing."?
An alternative approach, due to Fujikawa, is to study the path integral of the fermions in an instanton background. Then one sees that the zero modes are not balanced with respect to their transformation under the symmetries, leading to an anomalous transformation of the measure under this symmetry. Specifically, the violation is proportional to the instanton number, and thus one finds the non-conservation of the current is proportional to the instanton density. This is also found by the perturbative method above.
My question, which is a little heuristic, is how is it that the effect seems perturbative (and exact at 1-loop) on the one hand, and yet related to instantons, which are non-perturbative, on the other?
-
## 1 Answer
These are all good questions. Perhaps I can answer a few of them at once. The equation describing the violation of current conservation is
$$\partial^\mu j_\mu=f(g)\epsilon^{\mu\nu\rho\sigma}F_{\mu\nu}F_{\rho\sigma}$$
where f(g) is some function of the coupling constant. It is not possible to write any other candidate answer by dimensional analysis and by parity (assuming the current is the ordinary axial current...)
Now we integrate both sides over $\int d^4x$, and we find on the left hand side $\Delta Q$, meaning, now that the current is violated, the charge can change while the system evolves, while the right hand side is $$f(g)\int d^4x \epsilon^{\mu\nu\rho\sigma}F_{\mu\nu}F_{\rho\sigma}$$
The object on the right hand side is a known topological invariant of the gauge bundle, and it is an integer (if all the charges are appropriately quantized). So on the left hand side we get $\Delta Q$, which must be an integer (if all fundamental particles carry integer charge) and the right hand side is an integer too, up to the function $f(g)$.
This means that the function $f(g)$ cannot, in fact, depend on $g$. (More precisely, there is a scheme where it does not.) Hence, it is exact at one loop. This is the modern proof (without any computation) of the ABJ theorem about one-loop exactness of the anomaly.
So you see the deep connection between one loop and instantons... The violation of the conservation equation is at one loop, but to lead to interesting consequences we need to have a nontrivial gauge bundle.
About some of the other comments you made: ANY regularization scheme that respects Bose symmetry will lead to the anomaly, it is totally unavoidable. This is proven in http://inspirehep.net/record/154341?ln=en.
Another comment: anomalies can also arise from boson loops, for example, the trace anomaly. (It is not one-loop exact in any sense I am aware of.)
-
4
I really wish this site was not about to be shut down. There is nowhere else you can reliably expect to see discussion of QFT at this level. – Mitchell Porter May 2 '12 at 2:03
1
I noticed this just now ;) too bad. But it seems all the material won't disappear into thin air, which is good. – Zohar Ko May 2 '12 at 2:31
I like this short argument. But if f(g) is independent of g, shouldn't the effect already arise at 0 loop, hence classically? – Arnold Neumaier May 2 '12 at 18:46
No, the charges are normalized to one when there is a 1/g^2 in the expression for the current, so a constant $f$ is one-loop. – Zohar Ko May 2 '12 at 19:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9380077719688416, "perplexity_flag": "head"}
|
http://nrich.maths.org/6959/index?nomenu=1
|
## 'Inverting Rational Functions' printed from http://nrich.maths.org/
These questions concerning rational functions were inspired by responses to the function machine project Steve's Mapping. In this problem use the definition that a rational function is defined to be any function which can be written as the ratio of two polynomial functions.
Consider these two rational functions
$$f(x)=\frac{2x+9}{x+2}\qquad g(x)=\frac{9-2x}{x-2}$$
Show that they are inverses of each other, in that
$$g(f(x))=f(g(x))=x$$
What happens for the values $x=\pm 2$?
Can you invert the rational function
$$h(x)=\frac{x-7}{2x+1}$$
Do rational functions always have inverse functions? Why?
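As a quick sanity check on inverting $h$, one can solve $y=\frac{x-7}{2x+1}$ for $x$ by hand, getting $h^{-1}(x)=-\frac{x+7}{2x-1}$ (itself a rational function), and verify both compositions with exact rational arithmetic (a sketch using Python's `fractions`):

```python
from fractions import Fraction

def h(x):
    return (x - 7) / (2 * x + 1)

def h_inv(x):
    # candidate inverse, obtained by solving y = (x-7)/(2x+1) for x
    return -(x + 7) / (2 * x - 1)

# exact rational arithmetic confirms h_inv undoes h (away from the poles)
for v in [Fraction(p) for p in (-5, -1, 0, 2, 10)]:
    assert h_inv(h(v)) == v and h(h_inv(v)) == v
```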
In the examples given here, the inverses of our rational functions were also rational functions. Will this be the case more generally? Why not explore more generally or try to find inverse pairs of rational functions?
As you consider these rational functions, many questions might emerge in your mind such as: "do rational functions have fixed points?" or "Is there a relationship between the asymptotes in a function and the zeroes of its inverse?". Why not make a note of these questions and ask your teacher, yourself or your friends to try to solve them?
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9139940738677979, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/calculus/154343-detetmining-equation-path-traced-out-point-two-seperate-graphs.html
|
# Thread:
1. ## Determining equation of a path traced out by a point from two separate graphs?
(at some points below I use the symbols "{" and "}" to denote an ordered pair, instead of "(" and ")" since some of the ordered pairs are functions of numbers, and would result in two ")" next to each other. Just making a note to ensure clarity)
Okay, I was trying to do this problem earlier in my notebook, and I just couldn't figure it out. Consider a circle, which we'll call circle $C$, of radius 2, centered at the origin, $O = (0, 0) \;$.
Now, consider also an ellipse, which has its first focus, which I will call:
$f_1$,
placed on circle $C$. So, this means:
$f_1 = (2, 0)$.
Also, some more details: this ellipse (at its starting point) has a horizontal major axis (and so a vertical minor axis), its major axis lies on top of the x-axis (remember, this is just at its starting point), and the center of the ellipse is:
$c = (4, 0)$
and therefore the second focus is:
$f_2 = (6, 0)$
Remember though, all the above is only describing the ellipse's starting position. Now imagine that we have a time variable, $t$, which will help us keep track of all the movement, and might make it easier to finally write an equation in the end, parametrically maybe. But anyway, so the position of the circle and the ellipse was described at:
$t = 0$
Just a few more things about this ellipse, at $t=0$, of course. Its major vertices will be called, at time $t$, $v_1(t)$ and $v_2(t)$, and are given to be (at $t = 0$):
$v_1(0) = (0, 0)$
and:
$v_2(0) = (8, 0)$
also, we will only have $t$ such that:
$0 \leq t$
Now, as $t$ increases, picture the ellipse moving, such that if $f_1(t) = [a_1(t), b_1(t)]$ at time $t$, then no matter the movement, the position of $f_1$ will always satisfy:
$[a_1(t)]^2 + [b_1(t)]^2 = r^2$
Where $r= \emph{The \; radius \; of \; circle \;} C = 2$
In other words, $f_1$ stays on the path of circle C no matter what time $t$ we have.
Now, the ellipse moves in such a manner that (given that we define the second focus at time $t$ to be $f_2(t) = [a_2(t), b_2(t)]$) we define a line $N_c$ at time $t$, as:
$N_c(t) = (m)[x-a_1(t)] + b_1(t)$
where
$m = \frac{b_2(t) - b_1(t)}{a_2(t) - a_1(t)}$
We define this line $N_c$ also, if the above did not cover it, to always be normal to the circle $C$...
In other words, as the point $f_1$ moves around the circle, the ellipse does not dilate or stretch, but rather it moves so that its major axis always "makes a right angle with the circle $C$" (more properly, it makes a 90° angle with the line tangent to the circle $C$ at $f_1$).
And, finally, imagine a point that lies on the ellipse. At $t=0$ this point would be at the leftmost vertex of the ellipse, $v_1(0)$, that is. Let's denote this point as:
$\Delta$
and remember that:
$\Delta(0) = v_1(0) = (0, 0)$
Now, imagine $\Delta$ following the path of the ellipse over values of $t$ so that it makes a full revolution first at $t=5$, and consequently continues to do so at every multiple of $5$, but of course only speaking of its movement with respect to the ellipse alone, for now. The direction of its movement on the ellipse shall always stay the same, and will start as being clockwise, in terms of clockwise at the ellipse's position at $t=0$; so the point $\Delta$ moves upward at first and slowly curves to the right, and then halfway through its orbit it hits the major axis coming from the top.
Also, the ellipse will move around the circle $C$ so that it makes a full orbit (speaking of the whole ellipse as one object) first at $t = 12$ and then at every multiple of $12$ afterward.
So, with all that in mind, how would one go about deriving the equation of the path that the point $\Delta$ would trace out? I know where to start, but I keep getting stuck, and am having trouble figuring it out. Any help would be much appreciated.
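One possible starting point: assume both angular parameters advance uniformly, so the major-axis direction completes a turn at $t=12$ and the point's eccentric anomaly completes a turn at $t=5$ (this is one reading of the problem; the point's speed along the ellipse is then not constant). With semi-major axis $a=4$, focal distance $c=2$, and semi-minor axis $b=\sqrt{a^2-c^2}=2\sqrt{3}$, a sketch:

```python
from math import cos, sin, pi, sqrt, hypot

A_MAJ = 4.0                # semi-major axis (major vertices are 8 apart)
B_MIN = 2.0 * sqrt(3.0)    # semi-minor axis: b**2 = a**2 - c**2 = 16 - 4

def delta(t):
    """Position of the tracing point Delta at time t (uniform-angle assumption)."""
    th = 2.0 * pi * t / 12.0         # orbit of the ellipse around circle C
    ux, uy = cos(th), sin(th)        # radial unit vector = major-axis direction
    vx, vy = -sin(th), cos(th)       # perpendicular unit vector
    psi = pi - 2.0 * pi * t / 5.0    # eccentric anomaly; psi(0)=pi puts Delta at v_1(0)
    cx, cy = 4.0 * ux, 4.0 * uy      # ellipse centre: 2 beyond f_1 on the circle
    return (cx + A_MAJ * cos(psi) * ux + B_MIN * sin(psi) * vx,
            cy + A_MAJ * cos(psi) * uy + B_MIN * sin(psi) * vy)
```

Sanity checks under this assumption: $\Delta(0)=(0,0)$, and at $t=2.5$ (half a revolution of the ellipse) the point sits at the far vertex, distance 8 from the origin.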
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 48, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9561640620231628, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/204812/pointwise-convergence-of-continuous-functions-implies-uniform-convergence-on-som?answertab=votes
|
# Pointwise convergence of continuous functions implies uniform convergence on some open subset
For some Banach space $A$ we have a sequence of continuous functions $g_n:A\rightarrow \mathbb{R}$ pointwise converging to some $g:A\rightarrow\mathbb{R}$. Prove that for any $\epsilon>0$ there exist $\emptyset\not=U\subset A$ open and $N\in\mathbb{N}$ such that for all $n>N$ we have $\sup_{x\in U}\left|g_n(x)-g(x)\right|<\epsilon$.
I'm not sure how to approach this problem. Is it a good idea to prove something like local boundedness first?
-
Can you prove this for $A=\mathbb{R}$? – Chris Eagle Sep 30 '12 at 10:00
No, i'm not sure how to do this either. – davidg Sep 30 '12 at 10:06
Are you sure it holds? It might assume enough compact sets, and I'm not sure that every Banach space has such.. Also, is $g$ known to be continuous, as well? – Berci Sep 30 '12 at 11:28
What's wrong with the following?: Fix any $a \in A$. Since $g_n$ are continuous there is $\delta$ such that $|g_n(a) - g_n (x)| < \varepsilon / 3$ for $\|a-x\| < \delta$. – Matt N. Sep 30 '12 at 11:31
Since $g_n \to g$ pointwise there is $N$ such that $|g_n(a) -g(a)| < \varepsilon / 3$ for $n > N$. – Matt N. Sep 30 '12 at 11:32
## 1 Answer
Fix $\epsilon > 0$. For a given $N$ define $$B_N = \{x \in A : \forall m,n > N\,\,|g_m(x) - g_n(x)| \leq 2\epsilon\}$$ $$= \bigcap_{m,n > N} \{x \in A : |g_m(x) - g_n(x)| \leq 2\epsilon\}$$ Each set $\{x \in A : |g_m(x) - g_n(x)| \leq 2\epsilon\}$ is closed since $g_m$ and $g_n$ are continuous. Therefore the intersection $B_N$ is also closed. Because for each $x$ the sequence $\{g_n(x)\}$ is convergent, for each $x$ it is also a Cauchy sequence, so $\bigcup_N B_N$ is all of $A$. By the Baire Category theorem, some $B_N$ contains an open set $U$. For all $x \in U$ and all $m,n > N$ one has $|g_m(x) - g_n(x)| \leq 2\epsilon$. Taking limits as $m$ goes to infinity, one has $|g(x) - g_n(x)| \leq 2\epsilon$ for $n > N$, so in particular $|g(x) - g_n(x)| < \epsilon$ for $n > N$ as needed.
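Note the theorem only guarantees uniform convergence on *some* nonempty open set. A toy numerical illustration (my own example, not from the thread) with $g_n(x)=x^n$ on $[0,1]$: the pointwise limit is $0$ on $[0,1)$ and $1$ at $x=1$, convergence is not uniform on $[0,1]$, but on the open set $U=(0,1/2)$ the sup-distance is $(1/2)^n\to 0$:

```python
def sup_on_U(n, grid=1000):
    # approximate sup over U = (0, 1/2) of |g_n - g| = x**n on a sample grid
    xs = [0.5 * k / grid for k in range(grid)]
    return max(x**n for x in xs)

sups = [sup_on_U(n) for n in (1, 5, 10, 20)]   # should decrease towards 0
```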
-
thank you! nice argument! – davidg Sep 30 '12 at 14:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 40, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.951888918876648, "perplexity_flag": "head"}
|
http://mathhelpforum.com/differential-geometry/95931-outer-measure.html
|
# Thread:
1. ## Outer Measure
So one of the axioms of outer measure is the following:
Countable subadditivity axiom: If $A = \bigcup_{n=1}^{\infty} A_n$ then $m^{*}A \leq \sum_{n=1}^{\infty} m^{*}A_n$.
Well it's not really an axiom since we want/have to prove it. But what is the point of using the "$\varepsilon/2^n$ trick" to prove this? E.g. we have the following:
$\sum_{k=1}^{\infty} |I_{kn}| < m^{*} A_n+ \frac{\varepsilon}{2^n}$ (1)
This is analogous to the fact that for any $\varepsilon > 0$ there is some $x \in S$ with $x < \inf S + \varepsilon$. But what is special about $\varepsilon/2^n$? Why not just use $\varepsilon$?
2. Hello,
Because $\sum_{n=1}^\infty \frac{\varepsilon}{2^n}=\varepsilon$, wouldn't it ?
3. Originally Posted by Moo
Hello,
Because $\sum_{n=1}^\infty \frac{\varepsilon}{2^n}=\varepsilon$, wouldn't it ?
yes, but the sum doesn't have to sum to $\varepsilon$ since it's arbitrary. I guess it's convention.
4. Originally Posted by Sampras
yes, but the sum doesn't have to sum to $\varepsilon$ since it's arbitrary. I guess it's convention.
Not really...
As you're dealing with a limit, you want $\varepsilon$ to appear. This one is arbitrary !
Since you'll sum from 1 to infinity, if you keep $\varepsilon$, you'll sum something positive an infinite amount of times.
Which is irrelevant for your problem.
$\varepsilon/2^n$ is arbitrary and > 0. It's a good candidate, because of this infinite sum.
And you'll get $\varepsilon$ in the end.
You could also have taken $\varepsilon/4^n$ or whatever you want. The main point is to get a converging series, arbitrary small.
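A quick numerical check of Moo's point that the slacks are summable (trivial, but it shows why $\varepsilon/2^n$ is the standard choice: a constant slack of $\varepsilon$ for every $A_n$ would add up to infinity):

```python
eps = 0.1
# geometric series of slacks: sum_{n>=1} eps/2**n = eps
total = sum(eps / 2**n for n in range(1, 60))
```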
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9727256894111633, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/225265/prove-sum-product-rule
|
# Prove sum/product rule
How do I prove the sum/product rule in combinatorics? I think I should use induction but how do I start?
What should be the base case and the ... erm induction step?
Sum rule: suppose that an operation can be broken down into two tasks $A$ and $B$. If there are $N_a$ ways to do task $A$ and $N_b$ ways to do task $B$, then the number of ways to do the operation is $N_a + N_b$.
For the product rule it's the same, only that it's $N_aN_b$.
Once the question is precisely formulated, the answer is clear, and probably does not require proof. But one can write a formal proof. It amounts to showing that $x+(y+1)=(x+y)+1$ and $x(y+1)=xy+y$, which are respectively part of the definition of sum and product. – André Nicolas Oct 30 '12 at 17:14
## 1 Answer
I assume all sets are finite, though the argument below can be extended to deal with the infinite case.
$\bullet$ The sum rule essentially states that $|A \cup B| = |A| + |B| - |A \cap B|$. How do you show that if $A \cap B = \emptyset$, then $|A \cup B| = |A| + |B|$?
This is accomplished by simply noticing that if $A = \{x_1, \dots , x_n\}$ and $B = \{y_1, \dots , y_m\}$, then $A \cup B = \{x_1, \dots , x_n, y_1, \dots , y_m\}$. Note also that $x_i \neq y_j$ for any pair $(i,j)$. Hence, $A \cup B$ is a set with $n + m$ distinct elements, and so $|A \cup B| = n + m = |A| + |B|$.
What if $A \cap B \neq \emptyset$? Then, write $A \cup B = (A \cap B^c) \cup (A \cap B) \cup (A^c \cap B)$, which is a union of pairwise disjoint sets, and apply the above argument to these disjoint sets.
$\bullet$ The product rule essentially states that $|A \times B| = |A| |B|$. This follows essentially as before. Write out $A \times B$, and find the cardinality of the set.
Once you establish the sum-rule and product-rule for two sets, use induction for the general case.
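A toy check of both rules (Python; the set elements are made up) mirrors the counting argument above:

```python
# Sum rule on disjoint sets, product rule via Cartesian products,
# and inclusion-exclusion for the overlapping case.
A = {"a1", "a2", "a3"}   # N_a = 3 ways to do task A
B = {"b1", "b2"}         # N_b = 2 ways to do task B

# Sum rule: A and B disjoint, so |A ∪ B| = |A| + |B|
assert len(A | B) == len(A) + len(B)

# Product rule: |A × B| = |A| * |B|
pairs = {(x, y) for x in A for y in B}
assert len(pairs) == len(A) * len(B)

# Inclusion-exclusion when the sets overlap: |A ∪ C| = |A| + |C| - |A ∩ C|
C = {"a3", "c1"}
assert len(A | C) == len(A) + len(C) - len(A & C)
```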
http://mathhelpforum.com/calculus/111048-quadratic-approximation.html
# Thread:
1. ## Quadratic Approximation
Hi!
Any help?
Q:
Determine the quadratic approximation surface at the point (0,0) on the surface of
z=sqrt(x+1)/(y+1)
2. Start by defining the "quadratic surface" by the standard form $q(x,y)=ax^2+bx+cxy+dy+ey^2+f$; we'll find values for the coefficients such that $q(0,0)=z(0,0)$, $q_x(0,0)=z_x(0,0)$, $q_y(0,0)=z_y(0,0)$, $q_{xx}(0,0)=z_{xx}(0,0)$, $q_{xy}(0,0)=z_{xy}(0,0)$, $q_{yy}(0,0)=z_{yy}(0,0)$. Since this is a system of six equations in six unknowns, we are guaranteed a unique answer.
Derivatives of z:
$z(x,y)=(x+1)^{1/2}(y+1)^{-1}, z(0,0)=+1$
$z_x(x,y)=\frac12(x+1)^{-1/2}(y+1)^{-1}, z_x(0,0)=+\frac12$
$z_y(x,y)=-(x+1)^{1/2}(y+1)^{-2}, z_y(0,0)=-1$
$z_{xx}(x,y)=-\frac14(x+1)^{-3/2}(y+1)^{-1}, z_{xx}(0,0)=-\frac14$
$z_{xy}(x,y)=-\frac12(x+1)^{-1/2}(y+1)^{-2}, z_{xy}(0,0)=-\frac12$
$z_{yy}(x,y)=+2(x+1)^{1/2}(y+1)^{-3}, z_{yy}(0,0)=+2$
Derivatives of q:
$q(x,y)=ax^2+bx+cxy+dy+ey^2+f, q(0,0)=f=z(0,0)=1, f=1$
$q_x(x,y)=2ax+b+cy, q_x(0,0)=b=z_x(0,0)=\frac12, b=\frac12$
$q_y(x,y)=cx+d+2ey, q_y(0,0)=d=z_y(0,0)=-1, d=-1$
$q_{xx}(x,y)=2a, q_{xx}(0,0)=2a=z_{xx}(0,0)=-\frac14, a=-\frac18$
$q_{xy}(x,y)=c, q_{xy}(0,0)=c=z_{xy}(0,0)=-\frac12, c=-\frac12$
$q_{yy}(x,y)=2e, q_{yy}(0,0)=2e=z_{yy}(0,0)=2, e=1$
So, $q(x,y)=-\frac18x^2+\frac12x-\frac12xy-y+y^2+1$ is a quadratic function that matches the value and all first- and second-order partial derivatives of $z(x,y)$ at $(0,0)$. Notice that this is always possible, as long as all of these derivatives exist and are defined at the point of interest.
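A quick numeric spot check (Python; the step sizes and tolerance are illustrative) confirms the result: near $(0,0)$ the quadratic $q$ tracks $z$, with the error shrinking roughly like the cube of the distance from the origin, as expected for a second-order approximation:

```python
import math

def z(x, y):
    # The original surface z = sqrt(x+1)/(y+1)
    return math.sqrt(x + 1) / (y + 1)

def q(x, y):
    # Coefficients found above: a = -1/8, b = 1/2, c = -1/2, d = -1, e = 1, f = 1
    return -x**2 / 8 + x / 2 - x * y / 2 - y + y**2 + 1

for h in (0.1, 0.01):
    err = abs(z(h, h) - q(h, h))
    assert err < 10 * h**3   # second-order Taylor error is O(h^3)
```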
http://catalog.flatworldknowledge.com/bookhub/reader/13?e=mcafee-ch02_s01
# Introduction to Economic Analysis, v. 1.0
by R. Preston McAfee and Tracy R. Lewis
## 2.1 Demand and Consumer Surplus
### Learning Objectives
1. What is demand?
2. What is the value to buyers of their purchases?
3. What assumptions are commonly made about demand?
4. What causes demand to rise or fall?
5. What is a good you buy only because you are poor?
6. What are goods called that are consumed together?
7. How does the price of one good influence demand for other goods?
Eating a french fry makes most people a little bit happier, and most people are willing to give up something of value—a small amount of money or a little bit of time—to eat one. The personal value of the french fry is measured by what one is willing to give up to eat it. That value, expressed in dollars, is the willingness to pay for french fries. So, if you are willing to give up 3 cents for a single french fry, your willingness to pay is 3 cents. If you pay a penny for the french fry, you've obtained a net of 2 cents in value. Those 2 cents—the difference between your willingness to pay and the amount you pay—are known as consumer surplus. Consumer surplus is the value of consuming a good, minus the price paid.
The value of items—like french fries, eyeglasses, or violins—is not necessarily close to what one must pay for them. For people with bad vision, eyeglasses might be worth $10,000 or more, in the sense that people would be willing to pay this amount or more to wear them. The fact that one doesn't have to pay nearly this much for eyeglasses means that the consumer surplus derived from eyeglasses is enormous. Similarly, an order of french fries might be worth $3 to a consumer, but since it is available for $1, the consumer obtains a surplus of $2 from the purchase.
How much is a second order of french fries worth? For most of us, the first order is worth more than the second one. If a second order is worth $2, we would still gain from buying it. Eating a third order of fries is worth less still, and at some point we're unable or unwilling to eat any more fries even when they are free, which implies that the value of additional french fries eventually becomes zero.
We will measure consumption generally as units per period of time, for example, french fries consumed per month.
Many, but not all, goods have this feature of diminishing marginal value—the value of the last unit declines as the number consumed rises. If we consume a quantity q, the marginal value, denoted by v(q), falls as the number of units rises. (When marginal value rises, which may occur with beer consumption, constructing demand takes some additional effort, but this is of no great consequence: buyers will still choose to buy a quantity where marginal value is decreasing.) An example is illustrated in Figure 2.1 "The demand curve", where the value is a straight line, declining in the number of units.
Figure 2.1 The demand curve
Demand needn't be a straight line, and indeed could be any downward-sloping curve. Contrary to the usual convention for graphing a function, the quantity demanded at any price is read off the horizontal axis, while the price is plotted on the vertical axis.
It is often important to distinguish the demand curve—the relationship between price and quantity demanded—from the quantity demanded. Typically, “demand” refers to the curve, while “quantity demanded” is a point on the curve.
For a price p, a consumer will buy units q such that v(q) > p, since those units are worth more than they cost. Similarly, a consumer would not buy units for which v(q) < p. Thus, the quantity q0 that solves the equation v(q0) = p is the quantity the consumer will buy. This value is illustrated in Figure 2.1 "The demand curve". (We will treat units as continuous, even though they are discrete. This simplifies the mathematics; with discrete units, the consumer buys those units with value exceeding the price and doesn't buy those with value less than the price, just as before. However, since the value function isn't continuous, much less differentiable, it would be an accident for marginal value to exactly equal price. It isn't particularly difficult to accommodate discrete products, but it doesn't enhance the model, so we opt for the more convenient representation.) Another way of expressing this insight is that the marginal value curve is the inverse of the demand function, where the demand function gives the quantity purchased at a given price. Formally, if x(p) is the quantity a consumer buys at price p, then $v(x(p))=p.$
But what is the marginal value curve? Suppose the total value of consumption is u(q). A consumer who pays u(q) for the quantity q is indifferent to receiving nothing and paying nothing. For each quantity, there should exist one and only one price that makes the consumer indifferent between purchasing and receiving nothing. If the consumer is just willing to pay u(q), any additional amount exceeds what the consumer should be willing to pay.
The consumer facing price p receives consumer surplus of CS = u(q) – pq. In order to obtain the maximal benefit, the consumer chooses q to maximize u(q) – pq. When the function CS is maximized, its derivative is zero. This implies that the quantity maximizing the consumer surplus must satisfy
$0 = \frac{d}{dq}\left( u(q) - pq \right) = u'(q) - p.$
Thus, $v(q) = u'(q)$; the marginal value is the derivative of the total value.
Consumer surplus is the value of the consumption minus the amount paid, and it represents the net value of the purchase to the consumer. Formally, it is u(q) – pq. A graph of consumer surplus is generated by the following identity:
$CS = \max_q \left( u(q) - pq \right) = u(q_0) - p q_0 = \int_0^{q_0} \left( u'(x) - p \right) dx = \int_0^{q_0} \left( v(x) - p \right) dx.$
This expression shows that consumer surplus can be represented as the area below the demand curve and above the price, as illustrated in Figure 2.2 "Consumer surplus". The consumer surplus represents the consumer’s gains from trade, the value of consumption to the consumer net of the price paid.
Figure 2.2 Consumer surplus
The consumer surplus can also be expressed using the demand curve, by integrating from the price up to where the demand curve intersects with the price axis. In this case, if x(p) is demand, we have
$CS = \int_p^{\infty} x(y) \, dy.$
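As a concrete sketch (Python; the demand function and price are illustrative), take the linear demand x(p) = 1 − p, for which v(q) = 1 − q and the closed form is CS(p) = (1 − p)²/2. Integrating the demand curve from p up to the choke price of 1 (demand is zero above it, so the infinite upper limit is harmless) recovers the same number:

```python
# Consumer surplus for linear demand x(p) = 1 - p, checked two ways:
# closed form (1 - p)**2 / 2 vs. numerical integration of demand above the price.

def x(p):
    # Demand; zero beyond the choke price of 1.
    return max(0.0, 1.0 - p)

def trapezoid(f, lo, hi, n=10_000):
    # Composite trapezoid rule on [lo, hi].
    h = (hi - lo) / n
    return h * (f(lo) / 2 + sum(f(lo + i * h) for i in range(1, n)) + f(hi) / 2)

p = 0.25
cs_numeric = trapezoid(x, p, 1.0)      # integrate demand from p to the choke price
cs_closed = (1 - p)**2 / 2
assert abs(cs_numeric - cs_closed) < 1e-6
```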
When you buy your first car, you experience an increase in demand for gasoline because gasoline is pretty useful for cars and not so much for other things. An imminent hurricane increases the demand for plywood (to protect windows), batteries, candles, and bottled water. An increase in demand is represented by a movement of the entire curve to the northeast (up and to the right), which represents an increase in the marginal value v (movement up) for any given unit, or an increase in the number of units demanded for any given price (movement to the right). Figure 2.3 "An increase in demand" illustrates a shift in demand.
Similarly, the reverse movement represents a decrease in demand. The beauty of the connection between demand and marginal value is that an increase in demand could, in principle, have meant either more units demanded at a given price or a higher willingness to pay for each unit, but those are in fact the same concept. Both changes create a movement up and to the right.
For many goods, an increase in income increases the demand for the good. Porsche automobiles, yachts, and Beverly Hills homes are mostly purchased by people with high incomes. Few billionaires ride the bus. Economists aptly named goods whose demand doesn't increase with income inferior goods, with the idea that people substitute to better quality, more expensive goods as their incomes rise. When demand for a good increases with income, the good is called a normal good. It would have been better to call such goods superior, but it is too late to change such a widely accepted convention.
Figure 2.3 An increase in demand
Another factor that influences demand is the price of related goods. The dramatic fall in the price of computers over the past 20 years has significantly increased the demand for printers, monitors, and Internet access. Such goods are examples of complements. Formally, for a given good x, a complement is a good whose consumption increases the value of x. Thus, the use of computers increases the value of peripheral devices like printers and monitors. The consumption of coffee increases the demand for cream for many people. Spaghetti and tomato sauce, national parks and hiking boots, air travel and hotel rooms, tables and chairs, movies and popcorn, bathing suits and sunscreen, candy and dentistry—all are examples of complements for most people. Consumption of one increases the value of the other. The complementary relationship is typically symmetric—if consumption of x increases the value of y, then consumption of y must increase the value of x. The basis for this insight can be seen by denoting the total value in dollars of consuming goods x and y as u(x, y). Then the demand for x is given by the partial derivative $\frac{\partial u}{\partial x}$. The statement that y is a complement means that the demand for x rises as y increases; that is, $\frac{\partial^2 u}{\partial x \partial y} > 0$. But then, with a continuous second derivative, $\frac{\partial^2 u}{\partial y \partial x} > 0$, which means the demand for y, $\frac{\partial u}{\partial y}$, increases with x. From this we can predict that if the price of good x decreases, then the demand for good y, a complement to x, will rise: consumers purchase more of good x when its price decreases, which makes good y more valuable, and hence consumers purchase more of good y as well.
The opposite case of a complement is a substitute. For a given good x, a substitute is a good whose consumption decreases the value of x. Colas and root beer are substitutes, and a fall in the price of root beer (resulting in an increase in the consumption of root beer) will tend to decrease the demand for colas. Pasta and ramen, computers and typewriters, movies (in theaters) and sporting events, restaurants and dining at home, spring break in Florida versus spring break in Mexico, marijuana and beer, economics courses and psychology courses, driving and bicycling—these are all examples of substitutes for most people. An increase in the price of a substitute increases the demand for a good; and, conversely, a decrease in the price of a substitute decreases demand for a good. Thus, increased enforcement of the drug laws, which tends to increase the price of marijuana, leads to an increase in the demand for beer.
Much of demand is merely idiosyncratic to the individual—some people like plaids, some like solid colors. People like what they like. People often are influenced by others—tattoos are increasingly common, not because the price has fallen but because of an increased acceptance of body art. Popular clothing styles change, not because of income and prices but for other reasons. While there has been a modest attempt to link clothing style popularity to economic factors (skirts are allegedly shorter during economic booms and lengthen during recessions), by and large there is no coherent theory determining fads and fashions beyond the observation that change is inevitable. As a result, this course, and economics generally, will accept preferences for what they are without questioning why people like what they like. While it may be interesting to understand the increasing social acceptance of tattoos, it is beyond the scope of this text and indeed beyond most, but not all, economic analyses. We will, however, account for some of the effects of the increasing acceptance of tattoos through changes in the number of parlors offering tattooing, changes in the variety of products offered, and so on.
### Key Takeaways
• Demand is the function that gives the number of units purchased as a function of the price.
• The difference between your willingness to pay and the amount you pay is known as consumer surplus. Consumer surplus is the value in dollars of a good minus the price paid.
• Many, but not all, goods have the feature of diminishing marginal value—the value of the last unit consumed declines as the number consumed rises.
• Demand is usually graphed with price on the vertical axis and quantity on the horizontal axis.
• Demand refers to the entire curve, while quantity demanded is a point on the curve.
• The marginal value curve is the inverse of the demand function.
• Consumer surplus is represented in a demand graph by the area between demand and price.
• An increase in demand is represented by a movement of the entire curve to the northeast (up and to the right), which represents an increase in the marginal value v (movement up) for any given unit, or an increase in the number of units demanded for any given price (movement to the right). Similarly, the reverse movement represents a decrease in demand.
• Goods whose demand doesn’t increase with income are called inferior goods, with the idea that people substitute to better quality, more expensive goods as their incomes rise. When demand for a good increases with income, the good is called normal.
• Demand is affected by the price of related goods.
• For a given good x, a complement is a good whose consumption increases the value of x. The complementarity relationship is symmetric—if consumption of x increases the value of y, then consumption of y must increase the value of x.
• The opposite case of a complement is a substitute. An increase in the consumption of a substitute decreases the value for a good.
### Exercises
1. A reservation price is a consumer's maximum willingness to pay for a good that is usually bought one at a time, like cars or computers. Graph the demand curve for a consumer with a reservation price of \$30 for a unit of a good.
2. Suppose the demand curve is given by x(p) = 1 – p. The consumer’s expenditure is p * x(p) = p(1 – p). Graph the expenditure. What price maximizes the consumer’s expenditure?
3. For demand x(p) = 1 – p, compute the consumer surplus function as a function of p.
4. For demand $x(p) = p^{-\varepsilon}$, for ε > 1, find the consumer surplus as a function of p. (Hint: Recall that the consumer surplus can be expressed as $CS = \int_p^{\infty} x(y) \, dy$.)
5. Suppose the demand for wheat is given by qd = 3 – p and the supply of wheat is given by qs = 2p, where p is the price.
1. Solve for the equilibrium price and quantity.
2. Graph the supply and demand curves. What are the consumer surplus and producer profits?
3. Now suppose supply shifts to qs = 2p + 1. What are the new equilibrium price and quantity?
6. How will the following affect the price of a regular cup of coffee, and why?
1. Droughts in Colombia and Costa Rica
2. A shift toward longer work days
3. The price of milk falls
4. A new study that shows many great health benefits of tea
7. A reservation price is a consumer’s maximum willingness to pay for a good that is usually bought one at a time, like cars or computers. Suppose in a market of T-shirts, 10 people have a reservation price of $10 and the 11th person has a reservation price of$5. What does the demand “curve” look like?
8. In Exercise 7, what is the equilibrium price if there were 9 T-shirts available? What if there were 11 T-shirts available? How about 10?
9. A consumer's value for slices of pizza is given by the following table. Graph this person's demand for slices of pizza.

| Slices of pizza | Total value |
|-----------------|-------------|
| 0               | 0           |
| 1               | 4           |
| 2               | 7           |
| 3               | 10          |
| 4               | 12          |
| 5               | 11          |
http://openwetware.org/index.php?title=User:Timothee_Flutre/Notebook/Postdoc/2011/11/10&diff=658224&oldid=658168
# User:Timothee Flutre/Notebook/Postdoc/2011/11/10
## Revision as of 19:01, 23 November 2012
Project name Main project page
Previous entry Next entry
## Bayesian model of univariate linear regression for QTL detection
This page aims at helping people like me, interested in quantitative genetics, to get a better understanding of some Bayesian models, most importantly the impact of the modeling assumptions as well as the underlying maths. It starts with a simple model, and gradually increases the scope to relax assumptions. See references to scientific articles at the end.
• Data: let's assume that we obtained data from N individuals. We note $y_1,\ldots,y_N$ the (quantitative) phenotypes (e.g. expression levels at a given gene), and $g_1,\ldots,g_N$ the genotypes at a given SNP (encoded as allele dose: 0, 1 or 2).
• Goal: we want to assess the evidence in the data for an effect of the genotype on the phenotype.
• Assumptions: the relationship between genotype and phenotype is linear; the individuals are not genetically related; there are no hidden confounding factors in the phenotypes.
• Likelihood: we start by writing the usual linear regression for one individual
$\forall i \in \{1,\ldots,N\}, \; y_i = \mu + \beta_1 g_i + \beta_2 \mathbf{1}_{g_i=1} + \epsilon_i \; \text{ with } \; \epsilon_i \; \overset{i.i.d}{\sim} \; \mathcal{N}(0,\tau^{-1})$
where $\beta_1$ is in fact the additive effect of the SNP, noted $a$ from now on, and $\beta_2$ is the dominance effect of the SNP, $d = ak$.
Let's now write the model in matrix notation:
$Y = X B + E \text{ where } B = [ \mu \; a \; d ]^T$
This gives the following multivariate Normal distribution for the phenotypes:
$Y | X, \tau, B \sim \mathcal{N}(XB, \tau^{-1} I_N)$
Even though we can write the likelihood as a multivariate Normal, I still keep the term "univariate" in the title because the regression has a single response, Y. It is usual to keep the term "multivariate" for the case where there is a matrix of responses (i.e. multiple phenotypes).
The likelihood of the parameters given the data is therefore:
$\mathcal{L}(\tau, B) = \mathsf{P}(Y | X, \tau, B)$
$\mathcal{L}(\tau, B) = \left(\frac{\tau}{2 \pi}\right)^{\frac{N}{2}} exp \left( -\frac{\tau}{2} (Y - XB)^T (Y - XB) \right)$
• Priors: we use the usual conjugate prior
$\mathsf{P}(\tau, B) = \mathsf{P}(\tau) \mathsf{P}(B | \tau)$
A Gamma distribution for τ:
$\tau \sim \Gamma(\kappa/2, \, \lambda/2)$
which means:
$\mathsf{P}(\tau) = \frac{\left(\frac{\lambda}{2}\right)^{\frac{\kappa}{2}}}{\Gamma(\frac{\kappa}{2})} \tau^{\frac{\kappa}{2}-1} e^{-\frac{\lambda}{2} \tau}$
And a multivariate Normal distribution for B:
$B | \tau \sim \mathcal{N}(\vec{0}, \, \tau^{-1} \Sigma_B) \text{ with } \Sigma_B = diag(\sigma_{\mu}^2, \sigma_a^2, \sigma_d^2)$
which means:
$\mathsf{P}(B | \tau) = \left(\frac{\tau}{2 \pi}\right)^{\frac{3}{2}} |\Sigma_B|^{-\frac{1}{2}} exp \left(-\frac{\tau}{2} B^T \Sigma_B^{-1} B \right)$
• Joint posterior (1):
$\mathsf{P}(\tau, B | Y, X) = \mathsf{P}(\tau | Y, X) \mathsf{P}(B | Y, X, \tau)$
• Conditional posterior of B:
$\mathsf{P}(B | Y, X, \tau) = \frac{\mathsf{P}(B, Y | X, \tau)}{\mathsf{P}(Y | X, \tau)}$
Let's neglect the normalization constant for now:
$\mathsf{P}(B | Y, X, \tau) \propto \mathsf{P}(B | \tau) \mathsf{P}(Y | X, \tau, B)$
Similarly, let's keep only the terms in $B$ for the moment:
$\mathsf{P}(B | Y, X, \tau) \propto exp \left( -\frac{\tau}{2} B^T \Sigma_B^{-1} B \right) \, exp \left( -\frac{\tau}{2} (Y-XB)^T(Y-XB) \right)$
We expand, dropping the $Y^TY$ term as it is constant in $B$:
$\mathsf{P}(B | Y, X, \tau) \propto exp \left( -\frac{\tau}{2} \left( B^T \Sigma_B^{-1} B - Y^TXB - B^TX^TY + B^TX^TXB \right) \right)$
We factorize some terms:
$\mathsf{P}(B | Y, X, \tau) \propto exp \left( -\frac{\tau}{2} \left( B^T (\Sigma_B^{-1} + X^TX) B - Y^TXB - B^TX^TY \right) \right)$
Importantly, let's define:
$\Omega = (\Sigma_B^{-1} + X^TX)^{-1}$
We can see that $\Omega^T = \Omega$, which means that $\Omega$ is a symmetric matrix. This is particularly useful here because we can insert the identity $\Omega^{-1}\Omega^T = I$:
$\mathsf{P}(B | Y, X, \tau) \propto exp \left( -\frac{\tau}{2} \left( B^T \Omega^{-1} B - (X^TY)^T\Omega^T\Omega^{-1}B - B^T\Omega^{-1}\Omega^TX^TY \right) \right)$
This now becomes easy to factorize completely, up to another term constant in $B$:
$\mathsf{P}(B | Y, X, \tau) \propto exp \left( -\frac{\tau}{2} (B - \Omega X^TY)^T\Omega^{-1}(B - \Omega X^TY) \right)$
We recognize the kernel of a Normal distribution, allowing us to write the conditional posterior as:
$B | Y, X, \tau \sim \mathcal{N}(\Omega X^TY, \tau^{-1} \Omega)$
• Posterior of τ:
Similarly to the equations above:
$\mathsf{P}(\tau | Y, X) \propto \mathsf{P}(\tau) \mathsf{P}(Y | X, \tau)$
But now, to handle the second term, we need to integrate over B, thus effectively taking into account the uncertainty in B:
$\mathsf{P}(\tau | Y, X) \propto \mathsf{P}(\tau) \int \mathsf{P}(B | \tau) \mathsf{P}(Y | X, \tau, B) \mathsf{d}B$
Again, we use the priors and likelihoods specified above (but everything inside the integral is kept inside it, even if it doesn't depend on B!):
$\mathsf{P}(\tau | Y, X) \propto \tau^{\frac{\kappa}{2} - 1} e^{-\frac{\lambda}{2} \tau} \int \tau^{3/2} \tau^{N/2} exp(-\frac{\tau}{2} B^T \Sigma_B^{-1} B) exp(-\frac{\tau}{2} (Y - XB)^T (Y - XB)) \mathsf{d}B$
As we used a conjugate prior for $\tau$, we know that we expect a Gamma distribution for the posterior. Therefore, we can take $\tau^{N/2}$ out of the integral and start guessing what looks like a Gamma distribution. We also factorize inside the exponential:
$\mathsf{P}(\tau | Y, X) \propto \tau^{\frac{N+\kappa}{2} - 1} e^{-\frac{\lambda}{2} \tau} \int \tau^{3/2} exp \left[-\frac{\tau}{2} \left( (B - \Omega X^T Y)^T \Omega^{-1} (B - \Omega X^T Y) - Y^T X \Omega X^T Y + Y^T Y \right) \right] \mathsf{d}B$
We recognize the conditional posterior of B. This allows us to use the fact that the pdf of the Normal distribution integrates to one:
$\mathsf{P}(\tau | Y, X) \propto \tau^{\frac{N+\kappa}{2} - 1} e^{-\frac{\lambda}{2} \tau} exp\left[-\frac{\tau}{2} (Y^T Y - Y^T X \Omega X^T Y) \right]$
We finally recognize a Gamma distribution, allowing us to write the posterior as:
$\tau | Y, X \sim \Gamma \left( \frac{N+\kappa}{2}, \; \frac{1}{2} (Y^T Y - Y^T X \Omega X^T Y + \lambda) \right)$
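The two conditional posteriors derived above are easy to exercise numerically. Below is a small sketch in Python with NumPy (Python rather than R, purely for illustration; the simulated data and hyperparameter values are made up, not prescribed by the text):

```python
import numpy as np

rng = np.random.default_rng(1)

# --- simulate toy data (made-up values, only to exercise the formulas) ---
N = 200
g = rng.binomial(2, 0.3, size=N)                       # genotypes coded 0/1/2
X = np.column_stack([np.ones(N), g, (g == 1).astype(float)])
B_true = np.array([0.5, 0.4, 0.2])                     # mu, a, d
tau_true = 1.0
Y = X @ B_true + rng.normal(0.0, 1.0 / np.sqrt(tau_true), size=N)

# --- hyperparameters (illustrative values) ---
kappa, lam = 1.0, 1.0
Sigma_B = np.diag([10.0**2, 0.4**2, 0.1**2])           # sigma_mu^2, sigma_a^2, sigma_d^2

# Omega = (Sigma_B^{-1} + X^T X)^{-1}; posterior mean of B is Omega X^T Y
Omega = np.linalg.inv(np.linalg.inv(Sigma_B) + X.T @ X)
B_post_mean = Omega @ X.T @ Y

# tau | Y, X ~ Gamma(shape=(N+kappa)/2, rate=(Y^T Y - Y^T X Omega X^T Y + lambda)/2)
shape = (N + kappa) / 2.0
rate = (Y @ Y - Y @ X @ Omega @ X.T @ Y + lam) / 2.0
print(B_post_mean, shape / rate)                       # shape/rate = posterior mean of tau
```

With enough data, the posterior mean of $B$ lands close to the simulated effects, and `shape / rate` close to the simulated precision.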
• Joint posterior (2): sometimes it is said that the joint posterior follows a Normal Inverse Gamma distribution:
$B, \tau | Y, X \sim \mathcal{N}IG(\Omega X^TY, \; \tau^{-1}\Omega, \; \frac{N+\kappa}{2}, \; \frac{\lambda^\ast}{2})$
where $\lambda^\ast = Y^T Y - Y^T X \Omega X^T Y + \lambda$
• Marginal posterior of B: we can now integrate out τ:
$\mathsf{P}(B | Y, X) = \int \mathsf{P}(\tau | Y, X) \, \mathsf{P}(B | Y, X, \tau) \, \mathsf{d}\tau$
$\mathsf{P}(B | Y, X) = \frac{\left(\frac{\lambda^\ast}{2}\right)^{\frac{N+\kappa}{2}}}{(2\pi)^\frac{3}{2} |\Omega|^{\frac{1}{2}} \Gamma(\frac{N+\kappa}{2})} \int \tau^{\frac{N+\kappa+3}{2}-1} exp \left[-\frac{\tau}{2} \left( \lambda^\ast + (B - \Omega X^TY)^T \Omega^{-1} (B - \Omega X^TY) \right) \right] \mathsf{d}\tau$
Here we recognize a Gamma integral, $\int_0^{\infty} \tau^{a-1} e^{-b\tau} \mathsf{d}\tau = \Gamma(a) \, b^{-a}$:
$\mathsf{P}(B | Y, X) = \frac{\left(\frac{\lambda^\ast}{2}\right)^{\frac{N+\kappa}{2}} \Gamma(\frac{N+\kappa+3}{2})}{(2\pi)^\frac{3}{2} |\Omega|^{\frac{1}{2}} \Gamma(\frac{N+\kappa}{2})} \left( \frac{\lambda^\ast + (B - \Omega X^TY)^T \Omega^{-1} (B - \Omega X^TY)}{2} \right)^{-\frac{N+\kappa+3}{2}}$
And we now recognize a multivariate Student's t-distribution:
$\mathsf{P}(B | Y, X) = \frac{\Gamma(\frac{N+\kappa+3}{2})}{\Gamma(\frac{N+\kappa}{2}) \pi^\frac{3}{2} |\lambda^\ast \Omega|^{\frac{1}{2}} } \left( 1 + \frac{(B - \Omega X^TY)^T \Omega^{-1} (B - \Omega X^TY)}{\lambda^\ast} \right)^{-\frac{N+\kappa+3}{2}}$
We hence can write:
$B | Y, X \sim \mathcal{S}_{N+\kappa}(\Omega X^TY, \; (Y^T Y - Y^T X \Omega X^T Y + \lambda) \Omega)$
• Bayes Factor: one way to answer our goal above ("is there an effect of the genotype on the phenotype?") is to do hypothesis testing.
We want to test the following null hypothesis:
$H_0: \; a = d = 0$
In Bayesian modeling, hypothesis testing is performed with a Bayes factor, which in our case can be written as:
$\mathrm{BF} = \frac{\mathsf{P}(Y | X, a \neq 0, d \neq 0)}{\mathsf{P}(Y | X, a = 0, d = 0)}$
We can shorten this into:
$\mathrm{BF} = \frac{\mathsf{P}(Y | X)}{\mathsf{P}_0(Y)}$
Note that, compared to frequentist hypothesis testing, which focuses on the null, the Bayes factor requires us to explicitly model the data under the alternative. This makes a big difference when interpreting the results (see below).
$\mathsf{P}(Y | X) = \int \mathsf{P}(\tau) \mathsf{P}(Y | X, \tau) \mathsf{d}\tau$
First, let's calculate what is inside the integral:
$\mathsf{P}(Y | X, \tau) = \frac{\mathsf{P}(B | \tau) \mathsf{P}(Y | X, \tau, B)}{\mathsf{P}(B | Y, X, \tau)}$
Using the formula obtained previously and doing some algebra gives:
$\mathsf{P}(Y | X, \tau) = \left( \frac{\tau}{2 \pi} \right)^{\frac{N}{2}} \left( \frac{|\Omega|}{|\Sigma_B|} \right)^{\frac{1}{2}} exp\left( -\frac{\tau}{2} (Y^TY - Y^TX\Omega X^TY) \right)$
Now we can integrate out τ (note the small typo in equation 9 of supplementary text S1 of Servin & Stephens):
$\mathsf{P}(Y | X) = (2\pi)^{-\frac{N}{2}} \left( \frac{|\Omega|}{|\Sigma_B|} \right)^{\frac{1}{2}} \frac{\left(\frac{\lambda}{2}\right)^{\frac{\kappa}{2}}}{\Gamma(\frac{\kappa}{2})} \int \tau^{\frac{N+\kappa}{2}-1} exp \left( -\frac{\tau}{2} (Y^TY - Y^TX\Omega X^TY + \lambda) \right) \mathsf{d}\tau$
Inside the integral, we recognize the almost-complete pdf of a Gamma distribution. As it has to integrate to one, we get:
$\mathsf{P}(Y | X) = (2\pi)^{-\frac{N}{2}} \left( \frac{|\Omega|}{|\Sigma_B|} \right)^{\frac{1}{2}} \left( \frac{\lambda}{2} \right)^{\frac{\kappa}{2}} \frac{\Gamma(\frac{N+\kappa}{2})}{\Gamma(\frac{\kappa}{2})} \left( \frac{Y^TY - Y^TX\Omega X^TY + \lambda}{2} \right)^{-\frac{N+\kappa}{2}}$
We can use this expression also under the null. In this case, as we need neither $a$ nor $d$, $B$ is simply $\mu$, $\Sigma_B$ is $\sigma_{\mu}^2$ and $X$ is a vector of 1's. We can also define $\Omega_0 = ((\sigma_{\mu}^2)^{-1} + N)^{-1}$. In the end, this gives:
$\mathsf{P}_0(Y) = (2\pi)^{-\frac{N}{2}} \frac{|\Omega_0|^{\frac{1}{2}}}{\sigma_{\mu}} \left( \frac{\lambda}{2} \right)^{\frac{\kappa}{2}} \frac{\Gamma(\frac{N+\kappa}{2})}{\Gamma(\frac{\kappa}{2})} \left( \frac{Y^TY - \Omega_0 N^2 \bar{Y}^2 + \lambda}{2} \right)^{-\frac{N+\kappa}{2}}$
We can therefore write the Bayes factor:
$\mathrm{BF} = \left( \frac{|\Omega|}{\Omega_0} \right)^{\frac{1}{2}} \frac{1}{\sigma_a \sigma_d} \left( \frac{Y^TY - Y^TX\Omega X^TY + \lambda}{Y^TY - \Omega_0 N^2 \bar{Y}^2 + \lambda} \right)^{-\frac{N+\kappa}{2}}$
When the Bayes factor is large, we say that there is enough evidence in the data to support the alternative. Indeed, the Bayesian testing procedure corresponds to measuring support for the specific alternative hypothesis compared to the null hypothesis. Importantly, note that, for a frequentist testing procedure, we would say that there is enough evidence in the data to reject the null. However we wouldn't say anything about the alternative as we don't model it.
The threshold to say that a Bayes factor is large depends on the field. It is possible to use the Bayes factor as a test statistic when doing permutation testing, and then control the false discovery rate. This can give an idea of a reasonable threshold.
• Hyperparameters: the model has 5 hyperparameters, $\{\kappa, \, \lambda, \, \sigma_{\mu}, \, \sigma_a, \, \sigma_d\}$. How should we choose them?
Such a question is never easy to answer. But note that not all hyperparameters are equally important, especially in typical quantitative genetics applications. For instance, we are mostly interested in those that determine the magnitude of the effects, $\sigma_a$ and $\sigma_d$, so let's deal with the others first.
As explained in Servin & Stephens, when taking the limits $\sigma_{\mu} \rightarrow \infty$, $\lambda \rightarrow 0$ and $\kappa \rightarrow 0$, the posteriors for τ and B change appropriately with shifts ($y + c$) and scalings ($y \times c$) of the phenotype. Taking these limits also gives us a new Bayes factor, the one used in practice (see Guan & Stephens, 2008):
$\mathrm{lim}_{\sigma_{\mu} \rightarrow \infty \; ; \; \lambda \rightarrow 0 \; ; \; \kappa \rightarrow 0 } \; \mathrm{BF} = \left( \frac{N}{|\Sigma_B^{-1} + X^TX|} \right)^{\frac{1}{2}} \frac{1}{\sigma_a \sigma_d} \left( \frac{Y^TY - Y^TX (\Sigma_B^{-1} + X^TX)^{-1} X^TY}{Y^TY - N \bar{Y}^2} \right)^{-\frac{N}{2}}$
Now, for the important hyperparameters, $\sigma_a$ and $\sigma_d$, it is usual to specify a grid of values, i.e. $M$ pairs $(\sigma_a, \sigma_d)$. For instance, Guan & Stephens used the following grid:
$M=4 \; ; \; \sigma_a \in \{0.05, 0.1, 0.2, 0.4\} \; ; \; \sigma_d = \frac{\sigma_a}{4}$
Then, we can average the Bayes factors obtained over the grid using, as a first approximation, equal weights:
$\mathrm{BF} = \sum_{m \, \in \, \text{grid}} \frac{1}{M} \, \mathrm{BF}(\sigma_a^{(m)}, \sigma_d^{(m)})$
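In practice Bayes factors are computed on the log10 scale, so averaging them over the grid requires a log-sum-exp-style trick to avoid overflow. A minimal sketch in Python (the function name is my own; it assumes the per-grid-point log10 Bayes factors are already computed):

```python
import numpy as np

def average_log10_bfs(log10_bfs):
    """Average BFs with equal weights, working on the log10 scale.

    Computes log10( (1/M) * sum_m 10^log10_bfs[m] ) without overflow,
    by factoring out the largest term first (log-sum-exp trick).
    """
    log10_bfs = np.asarray(log10_bfs, dtype=float)
    m = log10_bfs.max()
    return m + np.log10(np.mean(10.0 ** (log10_bfs - m)))

# e.g. three grid points with log10(BF) = 1, 2 and 3:
# average BF = (10 + 100 + 1000) / 3 = 370, so log10 ≈ 2.568
print(average_log10_bfs([1.0, 2.0, 3.0]))
```

Note that naively exponentiating first would overflow for log10 BFs in the hundreds, which do occur for strong associations; the subtraction of the maximum keeps everything in range.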
• Implementation: the following R function is adapted from Servin & Stephens supplementary text 1.
```BF <- function(G=NULL, Y=NULL, sigma.a=NULL, sigma.d=NULL, get.log10=TRUE){
stopifnot(! is.null(G), ! is.null(Y), ! is.null(sigma.a), ! is.null(sigma.d))
subset <- complete.cases(Y) & complete.cases(G)
Y <- Y[subset]
G <- G[subset]
stopifnot(length(Y) == length(G))
N <- length(G)
X <- cbind(rep(1,N), G, G == 1)
inv.Sigma.B <- diag(c(0, 1/sigma.a^2, 1/sigma.d^2))
inv.Omega <- inv.Sigma.B + t(X) %*% X
inv.Omega0 <- N
tY.Y <- t(Y) %*% Y
log10.BF <- as.numeric(0.5 * log10(inv.Omega0) -
0.5 * log10(det(inv.Omega)) -
log10(sigma.a) - log10(sigma.d) -
(N/2) * (log10(tY.Y - t(Y) %*% X %*% solve(inv.Omega)
%*% t(X) %*% cbind(Y)) -
log10(tY.Y - N*mean(Y)^2)))
if(get.log10)
return(log10.BF)
else
return(10^log10.BF)
}
```
In the same vein as what is explained here, we can simulate data under different scenarios and check the BFs:
```N <- 300 # play with it
PVE <- 0.1 # play with it
grid <- c(0.05, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2)
MAF <- 0.3
G <- rbinom(n=N, size=2, prob=MAF)
tau <- 1
a <- sqrt((2/5) * (PVE / (tau * MAF * (1-MAF) * (1-PVE))))
d <- a / 2
mu <- rnorm(n=1, mean=0, sd=10)
Y <- mu + a * G + d * (G == 1) + rnorm(n=N, mean=0, sd=sqrt(1/tau)) # tau is a precision
for(m in 1:length(grid))
print(BF(G, Y, grid[m], grid[m]/4))
```
• Binary phenotype: using a similar notation, we model case-control studies with a logistic regression where the probability to be a case is $\mathsf{P}(y_i = 1) = p_i$.
There are many equivalent ways to write the likelihood, the usual one being:
$y_i \; \overset{i.i.d}{\sim} \; Bernoulli(p_i) \; \text{ with } \; \mathrm{ln} \frac{p_i}{1 - p_i} = \mu + a \, g_i + d \, \mathbf{1}_{g_i=1}$
Using $X_i$ to denote the $i$-th row of the design matrix $X$ and keeping the same definition as above for $B$, we have:
$p_i = \frac{e^{X_i^TB}}{1 + e^{X_i^TB}}$
As the $y_i$'s can only take 0 and 1 as values, the likelihood can be written as:
$\mathcal{L}(B) = \prod_{i=1}^N p_i^{y_i} (1-p_i)^{1-y_i}$
We still use the same priors as above for B and the Bayes factor now is:
$\mathrm{BF} = \frac{\int \mathsf{P}(B) \mathsf{P}(Y | X, B) \mathrm{d}B}{\int \mathsf{P}(\mu) \mathsf{P}(Y | X, \mu) \mathrm{d}\mu}$
The interesting point here is that there is no way to calculate these integrals analytically. Therefore, we will use Laplace's method to approximate them, as in Guan & Stephens (2008).
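As a rough illustration of how such a Laplace approximation can be set up (this is only a sketch under simplifying assumptions, written in Python, and not the actual implementation of Guan & Stephens), one finds the posterior mode by Newton-Raphson and uses the Hessian there to approximate the integral by a Gaussian one:

```python
import numpy as np

def log10_laplace_evidence(X, y, prior_var):
    """Laplace approximation to log10 of ∫ N(B; 0, diag(prior_var)) L(B) dB,
    where L(B) is the logistic-regression likelihood (a sketch only)."""
    prior_var = np.asarray(prior_var, dtype=float)
    P = np.diag(1.0 / prior_var)                        # prior precision matrix
    B = np.zeros(X.shape[1])
    for _ in range(100):                                # Newton-Raphson to the posterior mode
        p = 1.0 / (1.0 + np.exp(-(X @ B)))
        grad = X.T @ (y - p) - P @ B                    # gradient of the log posterior
        H = X.T @ (X * (p * (1.0 - p))[:, None]) + P    # minus its Hessian
        step = np.linalg.solve(H, grad)
        B += step
        if np.max(np.abs(step)) < 1e-10:
            break
    p = 1.0 / (1.0 + np.exp(-(X @ B)))
    log_lik = np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
    log_prior = -0.5 * (B @ P @ B) - 0.5 * np.sum(np.log(2.0 * np.pi * prior_var))
    # Laplace: log ∫ ≈ log posterior at the mode + (d/2) log(2π) - (1/2) log|H|
    d = X.shape[1]
    log_ev = log_lik + log_prior + 0.5 * d * np.log(2.0 * np.pi) \
             - 0.5 * np.linalg.slogdet(H)[1]
    return log_ev / np.log(10.0)

# log10 BF = log10 evidence of the full model minus that of the intercept-only model
```

The same routine serves for both the numerator (full design matrix) and the denominator (intercept-only design matrix), and the log10 Bayes factor is their difference.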
• Link between Bayes factor and P-value: see Wakefield (2008)
to do
• Hierarchical model: pooling genes, learn weights for grid and genomic annotations, see Veyrieras et al (PLoS Genetics, 2010)
to do
• Multiple SNPs with LD: joint analysis of multiple SNPs, handle correlation between them, see Guan & Stephens (Annals of Applied Statistics, 2011) for MCMC, see Carbonetto & Stephens (Bayesian Analysis, 2012) for Variational Bayes
to do
• Confounding factors in phenotype: factor analysis, see Stegle et al (PLoS Computational Biology, 2010)
to do
• Genetic relatedness: linear mixed model
to do
• Discrete phenotype: count data as from RNA-seq, Poisson-like likelihood
to do
• Multiple phenotypes: matrix-variate distributions, tensors
to do
• Non-independent genes: enrichment in known pathways, learn "modules"
to do
• References:
• Servin & Stephens (PLoS Genetics, 2007)
• Guan & Stephens (PLoS Genetics, 2008)
• Stephens & Balding (Nature Reviews Genetics, 2009)
http://physics.stackexchange.com/questions/13639/what-is-the-basic-postulate-on-which-qm-rests?answertab=votes
# What is the basic postulate on which QM rests
What is the basic postulate on which QM rests? Is it that the position of a particle can only be described in the probabilistic sense given by the state function $\psi(r)$? We can even go ahead and abandon the particle formalism as well. So what is QM all about? A probabilistic description of the physical world, and nothing more?
There isn't one basic postulate of QM. There are several postulates that fit together into a surprising and beautiful theory. For some reason, people are upvoting one of these postulates and downvoting another, and haven't even mentioned others (e.g., the superposition principle). – Peter Shor Aug 17 '11 at 11:13
@Peter: that's only half-true. There are many other theories that share lots of properties of QM (e.g. any theory given by linear PDE will have superpositions). Similarly, classical logic and quantum logic are basically the same except for one axiom. Therefore if one is really after one thing that makes QM special, one is inevitably lead to non-commutativity. After all, if there was none of it but everything else was kept untouched (formally, $\hbar \to 0$), you'd get back your plain old boring Poisson algebra on the phase space. – Marek Aug 17 '11 at 21:10
## 2 Answers
Existence of non-compatible observables: measuring one of them (say, coordinate) leads to an unavoidable uncertainty in the result of a subsequent measurement of the other (say, momentum). This is the essence of the Heisenberg uncertainty principle in the kinematics of your system. There is a detailed discussion along these lines in the beginning of the Quantum Mechanics volume (volume III) in the Course of Theoretical Physics by Landau and Lifshitz. Any measurable (physical) system, be it particle, atom or anything else, is quantum only if you can identify a manifestation of Heisenberg uncertainty principle (non-commutativity of observables).
"Existence of non-compatible observables", great to know this term..+1. – Rajesh D Aug 16 '11 at 17:29
This non-compatibility of physical observables is the empirical reason why non-commutative objects (operators or matrices) need to be assigned to them in the quantum formalism... – Slaviks Aug 16 '11 at 17:34
+1, this is precisely what sets QM apart from other theories. In mathematics the term quantization and deformation (in the sense of deforming commutative algebras to non-commutative ones) are basically equivalent. – Marek Aug 16 '11 at 17:41
"deformation of algebras", great to know the term. +1 – Slaviks Aug 16 '11 at 17:46
To me, the most basic postulate is that energy comes in discrete packages of $h \nu$. Based on this assumption, you get much of the rest of basic quantum mechanics.
In fact, the Schrödinger equation is related to the Hamilton-Jacobi formulation of classical mechanics with the added assumption of quantized energy, and the Heisenberg picture follows directly from the Poisson bracket formulation of classical mechanics, assuming quantized energy.
Wave-particle duality is also a very important assumption- this gives us a way to interpret exactly what these equations express, which is the probability of locating a particle per some (small) volume of position- or momentum-space.
Discreteness is not really that important (although it had been a century ago when QM was born; it's also where the term quantum comes from). There are many systems for which all important observables have continuous spectrum. – Marek Aug 16 '11 at 17:44
That's true! I guess it's not the best way to think about deriving all of QM. (Non-commuting observables probably are) In my mind, I like to put everything in a historical context. – specterhunter Aug 16 '11 at 18:02
http://physics.aps.org/synopsis-for/print/10.1103/PhysRevB.79.184429
# Synopsis:
Challenge to a dual
#### Non-Abelian descendant of Abelian duality in a two-dimensional frustrated quantum magnet
Michael Hermele
Published May 28, 2009
Duality mapping is a powerful tool in high-energy, condensed matter, and statistical physics that establishes a connection between seemingly unrelated theories. In a nutshell, local observables of one theory are mapped onto spatially extended objects of the twin dual theory and vice versa. In some cases, especially in many-body problems involving strongly interacting particles, such duality mappings provide the only way to solve a problem, since their dual counterparts are theories with small coupling constants.
In a paper appearing in Physical Review B, Michael Hermele from the University of Colorado in the US proposes a new duality transformation that connects two distinct models—quantum chromoelectrodynamics in three space-time dimensions, or QCED3, and the $\Phi^4$ Ginzburg-Landau model with $O(4)$ symmetry—both of which can be applied to understanding the behavior of frustrated planar antiferromagnets, such as Cs$_2$CuCl$_4$ or $\kappa$-(ET)$_2$Cu$_2$(CN)$_3$. An unusual and valuable property of the duality found by Hermele is that the theories on both ends of the duality mapping are susceptible to perturbative analysis, which could be important for understanding the nature of the phase transitions and the critical behavior of these systems.
Beyond the applications to frustrated quantum antiferromagnets, these results are interesting in a wider context: Applied to other lattice spin models, the procedure suggested by Hermele might lead to a class of new duality transformations. Potentially, this duality will lead to a greater understanding of related fermionic theories such as QED3 (quantum electrodynamics in 2+1 dimensions), which was suggested as one of the candidates for the effective theory of the pseudogap phase in cuprate superconductors. – Ashot Melikyan
http://mathhelpforum.com/pre-calculus/25839-composition-two-functions.html
# Thread:
1. ## Composition of Two Functions
Let $f(x) = x + 4$ and $h(x) = 4x - 1$. Find a function $g$ such that $g \circ f = h$.
$g(f(x)) = h$
$g(f) = h$
$g(x + 4) = 4x - 1$
$g = \frac {4x - 1}{x + 4}$
And textbook answer is $g(x) = 4x - 17$
I don't know how to get the textbook answer.......?
2. Originally Posted by Macleef
Let $f(x) = x + 4$ and $h(x) = 4x - 1$. Find a function $g$ such that $g \circ f = h$.
$g(f(x)) = h$
$g(f) = h$
$g(x + 4) = 4x - 1$
$\color{red}g = \frac {4x - 1}{x + 4}$
And textbook answer is $g(x) = 4x - 17$
the line in red is just wrong. g(x + 4) is NOT a product. you do not think of it as g times (x + 4). it is g of (x + 4), it is function notation, you cannot manipulate it as you did.
now when you have g(x + 4), it means you took some function g(x) and shifted it 4 units to the left. so given g(x + 4), we must replace x with x - 4, that way, we have g(x - 4 + 4) = g(x), what we did here was shift the function back to the right. thus we know that to get g from h, we must replace the x in h(x) with x - 4 (essentially what we are doing is shifting the h(x) function to the right to match up with the g(x)).
so, we have:
g(x + 4) = h(x)
replace x everywhere with x - 4, we get:
g(x) = h(x - 4)
=> g(x) = 4(x - 4) - 1
=> g(x) = 4x - 17
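The answer is easy to sanity-check numerically, for instance in Python:

```python
# the three functions from the thread
f = lambda x: x + 4
h = lambda x: 4 * x - 1
g = lambda x: 4 * x - 17   # the textbook answer

# g∘f should reproduce h for every input
for x in range(-10, 11):
    assert g(f(x)) == h(x)
print("g(f(x)) == h(x) for all tested x")
```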
http://cs.stackexchange.com/questions/6741/probability-wheel-redistribution-of-probabilities
probability wheel, redistribution of probabilities
I have a contiguous ordered data structure (0 based index):
````x= [1/3, 1/3, 1/3]
````
Let's say I selected index 1 and increased the probability by 1/3. Rest of the probabilities each decrease by 1/6 and the total probability remains P = 1.
````x= [1/6, 2/3, 1/6]
````
Let's say I selected index 2 and increased the probability by 1/3. Rest of the probabilities in total need to decrease by 1/3 to make the total probability remain P= 1.
````x= [1/10, 2/5, 1/2]
````
Is there a name for this kind of data structure? I'd like to research that name and use a library instead of my custom rolled code if possible.
1 Answer
This can be easily achieved with an array $A$ and an additional normalizing variable $m$. The entry $A[i]$ stores the probability of element $i$ times $m$.
````A=[1,1,1], m=3
````
Then you increase the probability of the second element by 1/3. This can be carried out by setting $A[2]=4$ and $m=6$, which gives
````A=[1,4,1], m=6
````
In general you could set the probability of element $k$ to $p$ by solving the system $$\sum_i A[i] = m \text{ and } A[k]=p\cdot m,$$ for the unknowns $A[k]$ and $m$.
So all you need to do is to set $m$ to $$m = \frac{\sum_{i\neq k} A[i]}{1-p},$$ and $A[k]=p\cdot m$ for the new $m$.
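A minimal sketch of this scheme in Python (using exact fractions to match the hand computations above; the function name is my own):

```python
from fractions import Fraction

def set_probability(A, k, p):
    """Set the probability of element k to p, returning the new normalizer m.

    Only A[k] and m change; the other entries keep their stored values, so
    their relative proportions are preserved: m = sum_{i != k} A[i] / (1 - p).
    """
    m = sum(a for i, a in enumerate(A) if i != k) / (1 - p)
    A[k] = p * m
    return m

A = [Fraction(1), Fraction(1), Fraction(1)]          # uniform, m = 3
m = set_probability(A, 1, Fraction(2, 3))            # raise index 1 to 2/3
m = set_probability(A, 2, Fraction(1, 2))            # then raise index 2 to 1/2
print([str(a / m) for a in A])                       # → ['1/10', '2/5', '1/2']
```

This reproduces exactly the sequence of distributions in the question, each update touching only one entry plus the normalizer.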
@pwned: yepp. Fixed it. – A.Schulz Nov 18 '12 at 20:08
@ASchulz I see where this is going, I will profile this compared to my auto normalizing version. In the meanwhile, would you help me find a data structure similar to this? I know this isn't about how memory is laid out, but it supports all the operations a data structure would have. (mutation of elements, growth, etc) It's about probability distributions and histograms or roulette wheel, or maybe it doesn't exist and I'm making something new. I just need some more keywords. – pwned Nov 18 '12 at 21:08
@pwned: I suggest you post your request as a new question and give all the details. Describe precisely what set of operations should be supported and what your other requirements are. – A.Schulz Nov 19 '12 at 9:02
http://mathforum.org/mathimages/index.php?title=The_Golden_Ratio&diff=34261&oldid=31343
# The Golden Ratio
### From Math Images
(Difference between revisions)
(Fixed hides. Forced the MME section to hide (as it should); 29 intermediate revisions not shown.)

The revised text reads:

{{Image Description Ready
|ImageName=The Golden Ratio
|Image=Goldenratioapplet1.jpg
|ImageIntro=The '''golden number,''' often denoted by lowercase Greek letter "phi", is <math>{\varphi}=\frac{1 + \sqrt{5}}{2} = 1.61803399...</math>. The term '''golden ratio''' refers to any ratio which has the value phi. The image to the right illustrates dividing and subdividing a rectangle into the golden ratio. The result is [[Field:Fractals|fractal-like]]. This page explores real world applications for the golden ratio, common misconceptions about the golden ratio, and multiple derivations of the golden number.
|ImageDescElem=[[Image:Monalisa01.jpg|Does the Mona Lisa exhibit the golden ratio?|thumb|400px|right]]The golden number, approximately 1.618, is called golden because many geometric figures involving this ratio are often said to possess special beauty. Be that true or not, the ratio has many beautiful and surprising mathematical properties. The Greeks were aware of the golden ratio, but did not consider it particularly significant with respect to aesthetics. It was not called the "divine" proportion until the 15th century, and was not called "golden" ratio until the 18th century. <ref>[http://en.wikipedia.org/wiki/Golden_ratio "Golden ratio"], Retrieved on 20 June 2012.</ref>
[[Image:Finalpyramid1.jpg|Markowsky has determined the above dimensions to be incorrect.|thumb|400px|left]]
Since then, it has been claimed that the golden ratio is the most aesthetically pleasing ratio, and claimed that this ratio has appeared in architecture and art throughout history. Among the most common such claims are that the Parthenon and Leonardo Da Vinci's Mona Lisa use the golden ratio. Even more esoteric claims propose that the golden ratio can be found in the human facial structure, the behavior of the stock market, and the Great Pyramids.
| | | + | However, such claims have been criticized in scholarly journals as wishful thinking or sloppy mathematical analysis. Additionally, there is no solid evidence that supports the claim that the golden rectangle is the most aesthetically pleasing rectangle.<ref>[http://www.math.nus.edu.sg/aslaksen/teaching/maa/markowsky.pdf "Misconceptions about the Golden Ratio"], Retrieved on 24 June 2012.</ref> |
| | | + | |
| | | | |
| | ===Misconceptions about the Golden Ratio=== | | ===Misconceptions about the Golden Ratio=== |
| - | In his paper, ''Misconceptions about the Golden Ratio,'' George Markowsky investigates many claims about the golden ratio appearing in man-made objects and in nature. Specifically, he claims that the golden ratio does not appear in the Parthenon or the Great Pyramids, two of the more common beliefs. He also disputes the belief that the human body exhibits the golden ratio. To read more, [http://www.math.nus.edu.sg/aslaksen/teaching/maa/markowsky.pdf click here!] | + | Many rumors and misconceptions surround the golden ratio. There have been many claims that the golden ratio appears in art and architecture. In reality, many of these claims involve warped images and large margins of error. One claim is that the Great Pyramids exhibit the golden ratio in their construction. This belief is illustrated below. |
| | | | |
| | | + | |
| | | + | In his paper, ''Misconceptions about the Golden Ratio,'' George Markowsky disputes this claim, arguing that the dimensions assumed in the picture are not anywhere close to being correct. Another belief is that a series of [[The Golden Ratio#Jump2|golden rectangles]] appears in the ''Mona Lisa''. |
| | | + | However, the placing of the golden rectangles seems arbitrary. Markowsky also disputes the belief that the human body exhibits the golden ratio. To read more, [http://www.math.nus.edu.sg/aslaksen/teaching/maa/markowsky.pdf click here!] |
| | | + | : |
| | | + | : |
| | ====''What do you think?''==== | | ====''What do you think?''==== |
| - | [[Image:Golden ratio parthenon.jpg|300px]] | + | George Markowsky argues that, like the ''Mona Lisa,'' the Parthenon does not exhibit a series of golden rectangles (discussed below). Do you think the Parthenon was designed with the golden ratio in mind or is the image below simply a stretch of the imagination? |
| - | <ref>[http://lotsasplainin.blogspot.com/2008/01/wednesday-math-vol-8-phi-golden-ratio.html "Parthenon"], Retrieved on 16 May 2012.</ref> | + | :[[Image:Golden ratio parthenon.jpg|300px]]<ref>[http://lotsasplainin.blogspot.com/2008/01/wednesday-math-vol-8-phi-golden-ratio.html "Parthenon"], Retrieved on 16 May 2012.</ref> |
| | | | |
| | ==A Geometric Representation== | | ==A Geometric Representation== |
| | | | |
| | ===The Golden Ratio in a Line Segment=== | | ===The Golden Ratio in a Line Segment=== |
| - | [[Image:Golden_segment.jpg|400px]][[Image:Goldenratiolabeled1.jpg]] | + | [[Image:Golden_segment.jpg|400px]][[Image:Animation2.gif]] |
| | | | |
| | | | |
| - | The golden ratio can be defined using a line segment divided into two sections, of lengths a and b, respectively. If a and b are appropriately chosen, the ratio of a to b is the same as the ratio of a + b to a and both ratios are equal to <math>\varphi</math>. The value of this ratio turns out not to depend on the particular values of a and b, as long as they satisfy the proportion. The line segment above exhibits the golden proportions. | + | The golden number can be defined using a line segment divided into two sections of lengths ''a'' and ''b''. If ''a'' and ''b'' are appropriately chosen, the ratio of ''a'' to ''b'' is the same as the ratio of ''a'' + ''b'' to ''a'' and both ratios are equal to <math>\varphi</math>. The line segment above (left) exhibits the golden proportion. The line segments above (right) are also examples of the golden ratio. In each case, |
| - | | + | |
| | | | |
| - | The line segments below are all examples of the golden ratio. | + | <math>\frac{{\color{Red}\mathrm{red}}+\color{Blue}\mathrm{blue}}{{\color{Blue}\mathrm{blue}} }= \frac{{\color{Blue}\mathrm{blue}} }{{\color{Red}\mathrm{red}} }= \varphi . </math> |
| - | | + | |
| - | :[[Image:Animation2.gif]] | + | |
| - | In each case, <math>\frac{{\color{Red}\mathrm{red}}+\color{Blue}\mathrm{blue}}{{\color{Blue}\mathrm{blue}} }= \frac{{\color{Blue}\mathrm{blue}} }{{\color{Red}\mathrm{red}} }= \varphi . </math> | + | |
| - | | + | |
| - | | + | |
| - | The golden rectangle is made up of line segments exhibiting the golden proportion. Remarkably, when a square is cut off of the golden rectangle, the remaining rectangle also exhibits the golden proportions. This continuing pattern is visible in the golden rectangle above. | + | |
| | | | |
| | | + | <div id="Jump2"></div> |
| | | + | ===The Golden Rectangle=== |
| | | + | A '''golden rectangle''' is any rectangle where the ratio between the sides is equal to phi. When the sides lengths are proportioned in the golden ratio, the rectangle is said to possess the '''golden proportions.''' A golden rectangle has sides of length <math>\varphi \times r</math> and <math>1 \times r</math> where <math>r</math> can be any constant. Remarkably, when a square with side length equal to the shorter side of the rectangle is cut off from one side of the golden rectangle, the remaining rectangle also exhibits the golden proportions. This continuing pattern is visible in the golden rectangle below. |
| | | + | :[[Image:Coloredfinalrectangle1.jpg]] |
| | | | |
| | | | |
| | | | |
| | ===Triangles=== | | ===Triangles=== |
| - | [[Image:Final2triangles.jpg|500px]][[Image:Pentagon_final.jpg|300px]] | + | [[Image:1byrrectangle1.jpg|500px]][[Image:Pentagon_final.jpg|300px]] |
| - | | + | |
| - | The golden ratio <math>\varphi</math> is used to construct the golden triangle, an isoceles triangle that has legs of length <math>\varphi</math> and base length of 1. It is above and to the left. Similarly, the golden gnomon has base <math>{\varphi}</math> and legs of length 1. It is shown above and to the right. These triangles can be used to form regular pentagons (pictured above),<balloon title="A pentagram is a five pointed star made with 5 straight strokes">pentagrams,</balloon> and <balloon title="A pentacle is a five pointed star inscribed in a circle.">pentacles</balloon>. | + | |
| - | | + | |
| - | The pentacle below, generated by the golden triangle and the golden gnomon, has many side lengths proportioned in the golden ratio. | + | |
| - | :[[Image:Uprightpentacle2.jpg]] | + | |
| - | :::<math>\frac{{\color{Blue}\mathrm{blue}} }{{\color{Red}\mathrm{red}} } = \frac{{\color{Red}\mathrm{red}} }{{\color{Green}\mathrm{green}} } = \frac{{\color{Green}\mathrm{green}} }{{\color{Magenta}\mathrm{pink}} } = \varphi . </math> | + | |
| - | | + | |
| - | These triangles can be used to form [[Field:Fractals| fractals]] and are one of the only ways to tile a plane using pentagonal symmetry. Two fractal examples are shown below. | + | |
| | | | |
| - | [[Image:Penrose-4.jpg|250px]] [[Image:Penrose-21.jpg|250px]] | + | The golden number, <math>\varphi</math>, is used to construct the '''golden triangle,''' an isoceles triangle that has legs of length <math>\varphi \times r</math> and base length of <math>1 \times r</math> where <math>r</math> can be any constant. It is above and to the left. Similarly, the '''golden gnomon''' has base <math>\varphi \times r</math> and legs of length <math>1 \times r</math>. It is shown above and to the right. These triangles can be used to form regular pentagons (pictured above) and <balloon title="A pentagram is a five pointed star made with 5 straight strokes">pentagrams.</balloon> |
| - | |ImageDesc==Mathematical Representations of the Golden Ratio= | + | |
| | | | |
| | | + | The pentgram below, generated by the golden triangle and the golden gnomon, has many side lengths proportioned in the golden ratio. |
| | | + | :[[Image:Star1.jpg]] |
| | | + | :::<math>\frac{{\color{SkyBlue}\mathrm{blue}} }{{\color{Red}\mathrm{red}} } = \frac{{\color{Red}\mathrm{red}} }{{\color{Green}\mathrm{green}} } = \frac{{\color{Green}\mathrm{green}} }{{\color{Magenta}\mathrm{pink}} } = \varphi . </math> |
| | | | |
| - | ==An Algebraic Representation== | + | These triangles can be used to form [[Field:Fractals| fractals]] and are one of the only ways to tile a plane using '''pentagonal symmetry'''. Pentagonal symmetry is best explained through example. Below, we have two fractal examples of pentagonal symmetry. Images that exhibit pentagonal symmetry have five symmetry axes. This means that we can draw five lines from the image's center, and all resulting divisions are identical. |
| | | | |
| | | + | :[[Image:Penta1.jpg|400px]] |
| | | + | :[[Image:Pent111.jpg|400px]] |
## Revision as of 13:42, 18 July 2012
The Golden Ratio
Fields: Algebra and Geometry
Image Created By: azavez1
Website: The Math Forum
The Golden Ratio
The golden number, often denoted by lowercase Greek letter "phi", is
${\varphi}=\frac{1 + \sqrt{5}}{2} = 1.61803399...$.
The term golden ratio refers to any ratio whose value is phi. The image to the right illustrates dividing and subdividing a rectangle into the golden ratio; the result is fractal-like. This page explores real-world applications of the golden ratio, common misconceptions about it, and multiple derivations of the golden number.
# Basic Description
Does the Mona Lisa exhibit the golden ratio?
The golden number, approximately 1.618, is called golden because many geometric figures involving this ratio are often said to possess special beauty. Be that true or not, the ratio has many beautiful and surprising mathematical properties. The Greeks were aware of the golden ratio, but did not consider it particularly significant with respect to aesthetics. It was not called the "divine" proportion until the 15th century, and was not called "golden" ratio until the 18th century. [1]
Markowsky has determined the above dimensions to be incorrect.
Since then, it has been claimed that the golden ratio is the most aesthetically pleasing ratio, and claimed that this ratio has appeared in architecture and art throughout history. Among the most common such claims are that the Parthenon and Leonardo Da Vinci's Mona Lisa use the golden ratio. Even more esoteric claims propose that the golden ratio can be found in the human facial structure, the behavior of the stock market, and the Great Pyramids.
However, such claims have been criticized in scholarly journals as wishful thinking or sloppy mathematical analysis. Additionally, there is no solid evidence that supports the claim that the golden rectangle is the most aesthetically pleasing rectangle.[2]
### Misconceptions about the Golden Ratio
Many rumors and misconceptions surround the golden ratio. There have been many claims that the golden ratio appears in art and architecture. In reality, many of these claims involve warped images and large margins of error. One claim is that the Great Pyramids exhibit the golden ratio in their construction. This belief is illustrated below.
In his paper, Misconceptions about the Golden Ratio, George Markowsky disputes this claim, arguing that the dimensions assumed in the picture are not anywhere close to being correct. Another belief is that a series of golden rectangles appears in the Mona Lisa. However, the placing of the golden rectangles seems arbitrary. Markowsky also disputes the belief that the human body exhibits the golden ratio. To read more, click here!
#### What do you think?
George Markowsky argues that, like the Mona Lisa, the Parthenon does not exhibit a series of golden rectangles (discussed below). Do you think the Parthenon was designed with the golden ratio in mind or is the image below simply a stretch of the imagination?
[3]
## A Geometric Representation
### The Golden Ratio in a Line Segment
The golden number can be defined using a line segment divided into two sections of lengths a and b. If a and b are appropriately chosen, the ratio of a to b is the same as the ratio of a + b to a and both ratios are equal to $\varphi$. The line segment above (left) exhibits the golden proportion. The line segments above (right) are also examples of the golden ratio. In each case,
$\frac{{\color{Red}\mathrm{red}}+\color{Blue}\mathrm{blue}}{{\color{Blue}\mathrm{blue}} }= \frac{{\color{Blue}\mathrm{blue}} }{{\color{Red}\mathrm{red}} }= \varphi .$
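The defining property above can be checked numerically. The following short Python sketch is an illustrative addition, not part of the original page; the variable names are my own. It splits a segment in the golden proportion and verifies that both ratios equal $\varphi$:

```python
import math

# Golden number from its closed form
phi = (1 + math.sqrt(5)) / 2

# Split a segment in the golden proportion: long piece a, short piece b
b = 1.0
a = phi * b

# Both ratios agree and equal phi
assert math.isclose(a / b, (a + b) / a)
print(a / b)  # 1.618033988749895
```

Because $\varphi^2 = \varphi + 1$, the two ratios agree for any choice of `b`, not just 1.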
### The Golden Rectangle
A golden rectangle is any rectangle where the ratio between the sides is equal to phi. When the side lengths are proportioned in the golden ratio, the rectangle is said to possess the golden proportions. A golden rectangle has sides of length $\varphi \times r$ and $1 \times r$ where $r$ can be any constant. Remarkably, when a square with side length equal to the shorter side of the rectangle is cut off from one side of the golden rectangle, the remaining rectangle also exhibits the golden proportions. This continuing pattern is visible in the golden rectangle below.
### Triangles
The golden number, $\varphi$, is used to construct the golden triangle, an isosceles triangle that has legs of length $\varphi \times r$ and base length of $1 \times r$ where $r$ can be any constant. It is above and to the left. Similarly, the golden gnomon has base $\varphi \times r$ and legs of length $1 \times r$. It is shown above and to the right. These triangles can be used to form regular pentagons (pictured above) and pentagrams.
The pentagram below, generated by the golden triangle and the golden gnomon, has many side lengths proportioned in the golden ratio.
$\frac{{\color{SkyBlue}\mathrm{blue}} }{{\color{Red}\mathrm{red}} } = \frac{{\color{Red}\mathrm{red}} }{{\color{Green}\mathrm{green}} } = \frac{{\color{Green}\mathrm{green}} }{{\color{Magenta}\mathrm{pink}} } = \varphi .$
These triangles can be used to form fractals and are one of the only ways to tile a plane using pentagonal symmetry. Pentagonal symmetry is best explained through example. Below, we have two fractal examples of pentagonal symmetry. Images that exhibit pentagonal symmetry have five symmetry axes. This means that we can draw five lines from the image's center, and all resulting divisions are identical.
# A More Mathematical Explanation
Note: understanding of this explanation requires: Algebra, Geometry
# An Algebraic Derivation of Phi
How can we derive the value of $\varphi$ from its characteristics as a ratio? We may algebraically solve for the ratio ($\varphi$) by observing that the ratio satisfies the following property by definition:

$\frac{a}{b} = \frac{a+b}{a} = \varphi$
Let $r$ denote the ratio :
$r=\frac{a}{b}=\frac{a+b}{a}$.
So
$r=\frac{a+b}{a}=1+\frac{b}{a}$ which can be rewritten as
$1+\cfrac{1}{a/b}=1+\frac{1}{r}$ thus,
$r=1+\frac{1}{r}$
Multiplying both sides by $r$, we get
${r}^2=r+1$
which can be written as:
$r^2 - r - 1 = 0$.
Applying the quadratic formula, $x = \frac{-b \pm \sqrt {b^2-4ac}}{2a}$, which produces the solutions of equations of the form $ax^2+bx+c=0$, we get $r = \frac{1 \pm \sqrt{5}} {2}$.
The ratio must be positive because we cannot have negative line segments or side lengths. Because the ratio has to be a positive value,
$r=\frac{1 + \sqrt{5}}{2} = 1.61803399... =\varphi$.
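As a quick numerical check (a minimal Python sketch, illustrative only; nothing here comes from the original derivation beyond the formula itself), we can compute this root and confirm it satisfies ${r}^2 = r + 1$:

```python
import math

# Positive root of r^2 - r - 1 = 0 via the quadratic formula (a=1, b=-1, c=-1).
phi = (1 + math.sqrt(5)) / 2
print(phi)  # 1.618033988749895

# The defining property phi^2 = phi + 1 holds up to floating-point error.
print(abs(phi**2 - (phi + 1)) < 1e-12)  # True
```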
The golden ratio can also be written as what is called a continued fraction, a fraction of infinite length whose denominator is a quantity plus a fraction, which latter fraction has a similar denominator, and so on. This is done by using recursion.
We have already solved for $\varphi$ using the following equation:
${\varphi}^2-{\varphi}-1=0$.
We can add one to both sides of the equation to get
${\varphi}^2-{\varphi}=1$.
Factoring this gives
$\varphi(\varphi-1)=1$.
Dividing by $\varphi$ gives us
$\varphi -1= \cfrac{1}{\varphi }$.
Adding 1 to both sides gives
$\varphi =1+ \cfrac{1}{\varphi }$.
Substitute in the entire right side of the equation for $\varphi$ in the bottom of the fraction.
$\varphi = 1 + \cfrac{1}{1 + \cfrac{1}{\varphi } }$
Substituting in again,
$\varphi = 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{\varphi}}}$
$\varphi = 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{\cdots}}}$
Continuing this substitution forever yields the infinite expression above, which is the continued fraction representation of $\varphi$.
If we truncate the continued fraction, replacing the innermost $\varphi$ with 1 and evaluating what remains, we produce the ratios between consecutive terms of the Fibonacci sequence.
$\varphi \approx 1 + \cfrac{1}{1} = 2$
$\varphi \approx 1 + \cfrac{1}{1+\cfrac{1}{1}} = 3/2$
$\varphi \approx 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{1} } } = 5/3$
$\varphi \approx 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{1+\cfrac{1}{1}}}} = 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{2}}} =1 + \cfrac{1}{1 + \cfrac{2}{3}} = 8/5$
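These truncations are easy to generate mechanically. The Python sketch below (standard library only, illustrative) iterates $r \mapsto 1 + 1/r$ starting from $r = 1$, using exact rational arithmetic:

```python
from fractions import Fraction

# Start from r_1 = 1 and repeatedly apply the truncation step r -> 1 + 1/r.
r = Fraction(1)
for _ in range(6):
    r = 1 + 1 / r
    print(r)  # 2, 3/2, 5/3, 8/5, 13/8, 21/13
```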
Thus we discover that the golden ratio is approximated in the Fibonacci sequence.
$1,1,2,3,5,8,13,21,34,55,89,144...\,$
| | | |
|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|
| $1/1$ | $=$ | $1$ |
| $2/1$ | $=$ | $2$ |
| $3/2$ | $=$ | $1.5$ |
| $5/3$ | $=$ | $1.66666667...$ |
| $8/5$ | $=$ | $1.6$ |
| $13/8$ | $=$ | $1.625$ |
| $21/13$ | $=$ | $1.61538462...$ |
| $34/21$ | $=$ | $1.61904762...$ |
| $55/34$ | $=$ | $1.61764706...$ |
| $89/55$ | $=$ | $1.61818182...$ |
$\varphi = 1.61803399...\,$
As you go farther along in the Fibonacci sequence, the ratio between consecutive terms approaches the golden ratio. Many real-world applications of the golden ratio are related to the Fibonacci sequence.
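This convergence can be watched directly. The short Python sketch below (illustrative only) prints successive ratios of Fibonacci numbers:

```python
# Ratios of consecutive Fibonacci numbers approach phi = 1.6180339887...
a, b = 1, 1
for _ in range(10):
    a, b = b, a + b
    print(f"{b}/{a} = {b / a:.8f}")
```

The printed values oscillate above and below $\varphi$, narrowing with each step.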
In fact, we can prove that the ratio between terms in the Fibonacci sequence approaches the golden ratio by using mathematical induction.
Since we have already shown that
$\varphi = 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{\cdots}}}$,
we only need to show that each of the terms in the continued fraction is the ratio of Fibonacci numbers as shown above.
First, let
$x_1=1$,
$x_2=1+\frac{1}{1}=1+\frac{1}{x_1}$,
$x_3= 1+\frac{1}{1+\frac{1}{1}}=1+\frac{1}{x_2}$ and so on so that
$x_n=1+\frac{1}{x_{n-1}}$.
These are just the same truncated terms as listed above. Let's also denote the terms of the Fibonacci sequence as
$s_n=s_{n-1}+s_{n-2}$ where $s_1=1$,$s_2=1$,$s_3=2$,$s_4=3$ etc.
We want to show that
$x_n=\frac{s_{n+1}}{s_n}$ for all n.
First, we establish our base case. We see that
$x_1=1=\frac{1}{1}=\frac{s_2}{s_1}$, and so the relationship holds for the base case.
Now we assume that
$x_k=\frac{s_{k+1}}{s_{k}}$ for some $1 \leq k < n$ (This step is the inductive hypothesis). We will show that this implies that
$x_{k+1}=\frac{s_{(k+1)+1}}{s_{k+1}}=\frac{s_{k+2}}{s_{k+1}}$.
By our definition of $x_1, x_2, \ldots, x_n$, we have
$x_{k+1}=1+\frac{1}{x_k}$.
By our inductive hypothesis, this is equivalent to
$x_{k+1}=1+\frac{1}{\frac{s_{k+1}}{s_{k}}}$.
Now we only need to complete some simple algebra to see
$x_{k+1}=1+\frac{s_k}{s_{k+1}}$
$x_{k+1}=\frac{s_{k+1}+s_k}{s_{k+1}}$
Noting the definition of $s_n=s_{n-1}+s_{n-2}$, we see that we have
$x_{k+1}=\frac{s_{k+2}}{s_{k+1}}$
So by the principle of mathematical induction, we have shown that the terms in our continued fraction are represented by ratios of consecutive Fibonacci numbers.
The exact continued fraction is
$x_{\infty} = \lim_{n\rightarrow \infty}\frac{s_{n+1}}{s_n} =\varphi$.
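The induction above can also be spot-checked numerically. Assuming nothing beyond the Python standard library, this sketch verifies $x_n = s_{n+1}/s_n$ for the first several $n$:

```python
from fractions import Fraction

# Fibonacci terms s_1, s_2, ... stored as a 0-indexed list (s[n] is s_{n+1}).
s = [1, 1]
while len(s) < 12:
    s.append(s[-1] + s[-2])

x = Fraction(1)  # x_1
for n in range(1, 11):
    assert x == Fraction(s[n], s[n - 1])  # x_n == s_{n+1} / s_n
    x = 1 + 1 / x  # x_{n+1} = 1 + 1/x_n
print("identity holds for n = 1..10")
```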
## Proof of the Golden Ratio's Irrationality
Remarkably, the golden ratio is irrational, despite the fact that we just proved it is approximated by ratios of Fibonacci numbers. We will use proof by contradiction to show that the golden ratio is irrational.
Suppose $\varphi$ is rational. Then it can be written as fraction in lowest terms $\varphi = b/a$, where a and b are integers.
Our goal is to find a different fraction that is equal to $\varphi$ and is in lower terms. This will be our contradiction that will show that $\varphi$ is irrational.
First note that the definition $\varphi = \frac{b}{a}=\frac{a+b}{b}$ implies that $b > a$: since $a+b>b$, the right-hand fraction is greater than 1, and the two fractions are equal, so $\frac{b}{a}>1$.
Now, since we know
$\frac{b}{a}=\frac{a+b}{b}$
we see that $b^2=a(a+b)$ by cross multiplication. Expanding this expression gives us $b^2=a^2+ab$.
Rearranging this gives us $b^2-ab=a^2$, which is the same as $b(b-a)=a^2$.
Dividing both sides of the equation by a(b-a) gives us
$\frac{b}{a}=\frac{a}{b-a}$.
Since $\varphi=\frac{b}{a}$, this means $\varphi=\frac{a}{b-a}$.
Since we have assumed that a and b are integers, we know that b-a must also be an integer. Furthermore, since $1 < \varphi < 2$, we have $a < b < 2a$, so both $a < b$ and $b-a < a$; hence $\frac{a}{b-a}$ has a strictly smaller numerator and denominator than $\frac{b}{a}$ and must be in lower terms.
Since we have found a fraction of integers that is equal to $\varphi$, but is in lower terms than $\frac{b}{a}$, we have a contradiction: $\frac{b}{a}$ cannot be a fraction of integers in lowest terms. Therefore $\varphi$ cannot be expressed as a fraction of integers and is irrational.
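The contradiction is an infinite descent: each application of $\frac{b}{a}\mapsto\frac{a}{b-a}$ produces a strictly smaller fraction. Applied to the Fibonacci convergent $89/55$ (an illustrative stand-in, since no true integer pair for $\varphi$ exists), the Python sketch below shows the map walking back down the Fibonacci sequence until it bottoms out:

```python
# Descent step from the proof: phi = b/a  ->  phi = a/(b - a).
b, a = 89, 55
while a > 0:
    print(f"{b}/{a}")  # 89/55, 55/34, 34/21, ..., 2/1, 1/1
    b, a = a, b - a
```

A genuine fraction in lowest terms could not shrink forever, which is exactly the contradiction.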
# For More Information
• Markowsky. “Misconceptions about the Golden Ratio.” College Mathematics Journal. Vol 23, No 1 (1992). pp 2-19.
# References
1. ↑ "Golden ratio", Retrieved on 20 June 2012.
2. ↑ "Misconceptions about the Golden Ratio", Retrieved on 24 June 2012.
3. ↑ "Parthenon", Retrieved on 16 May 2012.
http://math.stackexchange.com/questions/114081/how-many-ways-12-persons-may-be-divided-into-three-groups-of-4-persons-each
# How many ways $12$ persons may be divided into three groups of $4$ persons each?
How many ways $12$ persons may be divided into three groups of $4$ persons each?
I think the answer should be $\frac{12!}{(4!)^3}$ but the suggested correct answer is $5775$, could anybody explain where I am going wrong?
-
Interestingly, this gives a proof that $\displaystyle \frac{(a_1 + a_2 + \dots + a_n)!}{a_1!a_2!\dots a_n!}$ is divisible by $\displaystyle n!$. – Aryabhata Feb 27 '12 at 18:48
Ask yourself the following two questions: (1) How many ways are there to divide a group of $12$ people into $2$ volleyball teams of $6$ each, one team to wear blue, the other to wear red? (2) How many ways are there to divide $12$ people into two volleyball teams at a nudist camp? – André Nicolas Feb 27 '12 at 19:02
@Aryabhata Except that $\displaystyle{\frac{(4+3)!}{4!3!}}$ (i.e., $n=2, a_1=4, a_2=3$) isn't divisible by $2!$ - you need interchangability of the pieces! I think the correct statement is that $\displaystyle{\frac{(na)!}{(a!)^n}}$ is always divisible by $n!$. – Steven Stadnicki Feb 27 '12 at 20:12
@StevenStadnicki: Right! Thanks. – Aryabhata Feb 27 '12 at 20:16
## 3 Answers
The answer is $\frac{12!}{(4!)^3\cdot3!}=5775$ because the $3!$ different orders of the three groups do not matter either, so your solution was almost correct.
-
We can also organize the count in a different way. First line up the people, say in alphabetical order, or in student number order, or by height.
The first person in the lineup chooses the $3$ people (from the remaining $11$) who will be on her team. Then the first person in the lineup who was not chosen chooses the $3$ people (from the remaining $7$) who will be on her team. The double-rejects make up the third team.
The first person to choose has $\binom{11}{3}$ choices. For every choice she makes, the second person to choose has $\binom{7}{3}$ choices, for a total of $$\binom{11}{3}\binom{7}{3}.$$
Remark: The lineup is a device to avoid multiple-counting the divisions into teams. The alternate (and structurally nicer) strategy is to do deliberate multiple counting, and take care of that at the end by a suitable division.
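Both counting arguments are easy to confirm by brute force (a Python sketch; the variable names are my own, not from the answers):

```python
from itertools import combinations
from math import comb, factorial

# First answer: 12! / ((4!)^3 * 3!)
print(factorial(12) // (factorial(4) ** 3 * factorial(3)))  # 5775

# Second answer: C(11, 3) * C(7, 3)
print(comb(11, 3) * comb(7, 3))  # 5775

# Direct enumeration: fix person 0's group, then the group of the
# lowest-numbered remaining person; the rest form the third group.
count = 0
for g1 in combinations(range(1, 12), 3):
    rest = sorted(set(range(1, 12)) - set(g1))
    for g2 in combinations(rest[1:], 3):  # rest[0] anchors the second group
        count += 1
print(count)  # 5775
```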
-
Since you're talking about people, order doesn't matter, so you have to add a $3!$ dividing there.
-