url
stringlengths 17
172
| text
stringlengths 44
1.14M
| metadata
stringlengths 820
832
|
---|---|---|
http://mathoverflow.net/questions/105234?sort=oldest
|
Second-order term in first-order logic?
Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Could a function in FOL take functions as arguments? FOL only limits on the order of the individuals being quantified, but if an expression does not involve quantifying over second-order or higher terms, would it still be valid in FOL? Say, $f(g)$ where $f : (A \to B) \to C$ and $g : A \to B$.
So another way to put my question is that does the rule of formation in FOL say that $f(g)$ is valid as long as $g$ is in the domain of $f$, despite the type of $f$? Could $f$ be a 4th order function and would still be valid in FOL?
-
1
No . – Emil Jeřábek Aug 22 at 14:01
3
There is no important syntactic difference between second-order and first-order logic. Any syntax that works in second-order logic would work equally well in a suitable first-order logic. However, I agree this question is better suited for math.stackexchange.org. If you can ask it there, I will write a longer answer. – Carl Mummert Aug 22 at 15:27
7
Please don't close the question. The question is a symptom of logicians' always preferring single-sorted theories over multi-sorted ones. So everyone ends up thinking that functions must necessarily involve second-order logic, which is false. – Andrej Bauer Aug 22 at 16:15
A related question on MSE: math.stackexchange.com/questions/23799/… – Kaveh Aug 22 at 23:18
2 Answers
I think that the spirit of this question, combined with the clarifications in comments, is:
What is it that makes first-order logic "first order"?
Unfortunately, the terms "first order" and "second order" get used to mean various things.
A formal but unsatisfying answer would say that first-order logic is a specific logic defined in, say, Mendelson's textbook, and any other logic is not "first order logic" strictly speaking. This is unsatisfying because we know there are many inessential variations of first-order logic - really there are many first-order logics that share a certain core. The question I quoted asks for a characterization of that core.
One common answer is that any logic in which we intend to have quantifiers over "functions" or "sets" is higher order. This is unsatisfying because, as Andrej Bauer points out, such theories can be syntactically expressed in multi-sorted first-order logic. There are many theories of "second order arithmetic", for example, which allow us to express set and function quantification but which are treated as first-order theories. Unfortunately, the terminology "second order" is established for these theories and cannot be avoided.
Recall that a logic consists of both a syntax and a semantics. The truly defining feature of a first-order logic is the semantics. First-order semantics begins with the notion of a structure (also called a model), as defined in every introductory textbook on first-order logic.
Consider how we would express function quantification in (multi-sorted) first-order logic, as in Andrej's answer. Each structure must interpret two sorts. It uses a set of individuals for the quantifiers over individuals and a separate set of functions for the quantifiers over functions. This set of functions, in an arbitrary structure, might be a proper subset of the collection of all functions on the set of individuals; nothing in the definition of a structure requires otherwise. Indeed some structures will have an infinite set of individuals but a finite set of functions.
Full second order semantics changes the class of allowable structures so that only those whose function set includes all the functions are allowed. This does not affect the syntax in any way, but it deeply changes the semantics. Because fewer structures are being considered, more formulas will be logically valid, and fewer will be satisfiable. Thus there are more categorical theories in these semantics, such as the well known categorical second-order axiomatizations of the natural numbers. Those same axiomatizations are syntactically fine in first-order logic, where the simple difference is that they are no longer categorical.
Thus the key difference between function quantification in multi-sorted first order logic (or type theory) and function quantification in full second-order semantics is not the existence of syntactic quantifier symbols that allow quantification over functions. The difference is in the meaning of those quantifiers, which derives from the way the semantics are defined. In the first-order case, we have little control over the range of quantifiers. In full second-order semantics, once the set of individuals is fixed, the range of the function quantifiers is also fixed. This distinction is only visible at the meta level, when we are studying the logic from the outside and can specify which interpretations are permissible. Nothing in the syntax of the logic tells us what collection of structures will be used to interpret it.
-
Thanks. So do you know why is the common answer to what is a higher-order logic, like you pointed out, is that any logic in which we intend to have quantifiers over "functions" or "sets"? Is it a special case? – kate_r Aug 22 at 20:12
1
Historically, the distinction between first-order logic and higher-order logic was not very well understood, and it took a long time before it was clear. The idea that one could arbitrarily change the semantics of a logic would have been very strange in the early 20th century. Henkin's new proof method for the completeness theorem, which applied to type theory with first order semantics, was not obtained until the 1950s. As with many things, older terminology is still used by convention even though it is not the terminology that we would pick today. – Carl Mummert Aug 22 at 20:51
I see. So, to confirm my understanding, in FOL, we can quantify over variables of type, e.g., $A \to B \to ...$ as long as the individuals of the domain are of type $A \to B \to ...$? If so, is this the reason why ZF is a first-order theory even some axioms quantify over sets, e.g., Axiom of union, i.e. the individuals are sets? – kate_r Aug 22 at 20:57
1
My point is "what you can quantify over" is independent of whether you use FOL or higher order logic. But you should not say "in FOL" because there are many different first order logics, and they have somewhat different syntaxes. In some of them you can directly quantify over functions, in others you cannot. First-order ZFC is somewhat different in that you cannot directly quantify over functions at the level of the logic That is, there is just one kind of universal and one kind of existential quantifier in the logic used for ZFC, while in other FOLs there are multiple kinds of $\exists$. – Carl Mummert Aug 22 at 21:30
You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
While this question is not research-level, I think many mathematicians would not know how to answer it. I propose we keep it.
In usual logic texts first-order logic is done over a single sort, i.e., we assume a fixed universe of discourse. Terms denote elements of the single universe. For example, in the theory of a group the universe is the group, in set-theory it is the class of all sets, etc.
But this need not be so, we can have a first-order logic with many sorts. A typical example is the theory of a module, where the two sorts are the ring and the module. Another example is the theory of a graph, where the two sorts are the vertices and the edges. In classical logic various tricks may be played that allow us to replace the sorts with their disjoint union (define operations arbitrarily when they don't make sense and fix the axioms appropriately). If one studies first-order theories in general this is a very useful trick, which is why logicians always stick to singe-sorted theories. But in particular cases it makes little sense to replace several natural sorts with a single unnatural one.
However, in computer-sciency applications such tricks are unacceptable. Therefore we keep many sorts. In fact we go a step further and organize sorts into a type theory (and call the sorts types). It is common for a type theory to have type constructors that generate infinitely many types.
To answer your question, suppose we want a first-order logic in which we are allowed to speak about functions $A \to B$ as well as functionals $(A \to B) \to C$. Then we work in simple type theory, whose type constructors are the cartesian products $\times$ and the function space $\to$. Only well-typed terms are admitted, and all quantifiers must range over specific types. Thus we can write things like $\forall F : (A \to B) \to (A \to B) . \exists f : A \to B . F(f) = f$. This is still first-order logic on top of a type theory.
Such a logic would become higher-order if we included a type $\Omega$ of truth values and axioms which related formulas and functions mapping into $\Omega$. One would expect a comprehension-style schema which related formulas $\phi(x)$ with $x$ of type $A$ and functions $f : A \to \Omega$. Second-order quantification $\forall F . F(\dots)$ can then be expressed as $\forall f : A \to \Omega . f(\dots)$.
You can read more about this kind of formal systems in Bart Jacobs's book "Categorical Logic and Type Theory". Constructive mathematicians also prefer formalizations of this sort, for example Peter Aczel has been advocating logic-enriched type-theory for a while.
-
Thanks. Just to clarify my understanding: the quantification of variables of type $A \to B$ is valid in FOL because the elements of the domain in question are of type $A \to B$. If so, how come the quantification over $(A \to B) \to (A \to B)$ is still valid in FOL? – kate_r Aug 22 at 17:14
I voted for this earlier, but I feel obliged to say I think my answer is just a more detailed explanation of some ideas expressed in this one. – Carl Mummert Aug 23 at 2:32
@kate_r: The essential thing to graps is that there can be many domains, even infinitely many ones. So you can quantify over any domain you like. In your case the domains would correspond to simple types, so they would be $A \to B$, $(A \to B) \to (A \to B)$, etc., whatever you like. Have a look at $\mathrm{HA}^\omega$, higher-order Heyting arithmetic. It is an example of a first-order theory with arbitrarily complex domains of the form $\mathbb{N} \to \mathbb{N}$, $(\mathbb{N} \to \mathbb{N}) \to \mathbb{N}$, $((\mathbb{N} \to \mathbb{N}) \to \mathbb{N}) \to \mathbb{N}$, etc. – Andrej Bauer Aug 23 at 7:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9383698105812073, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/144788/towers-of-hanoi-are-there-configurations-of-n-disks-that-are-more-than-2n
|
# Towers of Hanoi - are there configurations of $n$ disks that are more than $2^n - 1$ moves apart?
This is an exercise from Chapter 1 of "Concrete Mathematics". It concerns the Towers of Hanoi.
Are there any starting and ending configurations of $n$ disks on three pegs that are more than $2^n - 1$ moves apart, under Lucas's original rules?
My initial guess is no, there are no starting and ending configurations of $n$ disks on three pegs that are more than $2^n - 1$ moves apart.
I will post an answer to this question with my attempt at a solution. I would like to confirm if my solution is correct and complete, or if there is a better approach.
Edit: I added to the answer a second method of solving this problem, based on the comments to the answer.
-
## 1 Answer
I will show it by induction on $n$ (the number of disks). Let $T(n)$ be the number of moves needed to go from an arbitrary configuration to another arbitrary configuration. Let $P(n)$ be the proposition "$T(n)\leq 2^n-1$". That is, $P(n)$ is the proposition "All starting and ending configurations of $n$ disks are at most $2^n-1$ moves apart.". I want to show that $P(n)$ is true for every natural number $n$.
For the base case, $P(0)$ is true, because it is clear that $T(0)=0\leq 2^0-1=0$.
For the inductive hypothesis, let's assume that $P(n-1)$ is true; that is, let's assume that it is true that $T(n-1)\leq 2^{n-1}-1$. Now, I will show that $P(n-1)$ implies $P(n)$.
First method
Suppose I have a configuration of $n-1$ disks. By the inductive hypothesis, we know that $T(n-1)\leq 2^{n-1}-1$. Now, let's consider what will change if we add one more disk (an $n^{th}$ disk), smaller than every other disk in the configuration, so that we have a situation where there are $n$ disks. Since this $n^{th}$ disk is the smallest one, it will always be on top in one of the three pegs, due to the rules of the problem.
The bottom $n-1$ disks will make $T(n-1)$ moves. For each move of the $n-1$ disks, in the worst case, the new smallest disk will be either on top of the disk that wants to move, or on top of the peg to which the disk wants to move. So, the new disk will have to make at most one move for each move of the other $n-1$ disks, that is, the new disk will make at most $T(n-1)$ moves. At last, when the $n-1$ disks are already at their correct positions in the end configuration, the new disk may have to make one more move to reach its correct position in the end configuration.
So, there will be at most $T(n) = T(n-1) + T(n-1) + 1 \leq 2^{n-1}-1+2^{n-1}-1+1 = 2^n - 2 + 1 = 2^n - 1$ moves. Therefore, $T(n) \leq 2^n - 1$, which shows that $P(n)$ is true.
Second method
I can also build the inductive step by considering that the new disk is the largest disk (instead of being the smallest one), as explained in the comments below.
Suppose we have a configuration of $n-1$ disks. By the inductive hypothesis, these $n-1$ disks need $T(n-1) \leq 2^{n-1}-1$ moves to reach any other configuration. Now, let's add an $n^{th}$ disk, which is the largest disk. We have two cases:
(1) If the largest disk doesn't move from the starting configuration to the end configuration, then we can just move the top $n-1$ disks as if there were only $n-1$ disks; so, in this case, it takes at most $2^{n-1} - 1$ moves. Therefore, it is true that $T(n) \leq 2^{n} - 1$. So, $P(n)$ is true in this case.
(2) If the largest disk moves, so we first have to move it to its end configuration. For this, we may have to move the top $n-1$ disks to a peg so that they are out of the way; by the inductive hypothesis, this takes at most $2^{n-1} - 1$ moves. Then, we move the largest disk to its correct position in the end configuration (1 more move). Then, we move the top $n-1$ disks to their correct positions in the end configuration (at most $2^{n - 1} - 1$ moves). The whole procedure takes at most $2^{n-1} - 1 + 1 + 2^{n-1} - 1 = 2^n - 1$ moves. So, $T(n) \leq 2^n - 1$. This shows that $P(n)$ is true in this case.
-
2
It’s easier if you make the new disk the largest one. Then it takes at most $T(n-1)$ moves to move the smallest $n-1$ disks to the peg on which the largest one is not supposed to end up, possibly one move to get the largest disk in place, and at most $T(n-1)$ moves to get the smallest $n-1$ disks to where they belong. However, what you’ve done works. – Brian M. Scott May 13 '12 at 23:39
1
It generally looks good, but it would be simpler to think about the largest disk instead of the smallest. To get from the initial position to any valid configuration, consider the largest disk that needs to be moved. If the largest disk is already in correct position, then the result follow from the $n-1$ case. Otherwise, you must first move all larger disks out of the way, more the largest disk into position, and then move the rest of the disks into position. The first part takes $2^{n-1}-1$ moves, the second takes $1$, and the final takes at most $2^{n-1}-1$ moves by the inductive hypothesis – Arturo Magidin May 13 '12 at 23:39
2
(cont) The key being that in any optimal sequence of moves, the largest disk needs to be moved at most once. – Arturo Magidin May 13 '12 at 23:41
@BrianM.Scott: Thank you for the suggestion, I've actually thought about doing it this way too. – anonymous May 13 '12 at 23:44
@ArturoMagidin: Thank you for the comment, moving the largest disk really seems to give a better idea about why this is an optimal sequence. I've thought about doing it this way too. – anonymous May 13 '12 at 23:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 59, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9466989040374756, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/185600/finding-a-point-on-a-ellipse-so-that-it-has-the-shortest-distance-between-this
|
# Finding a point on a ellipse so that it has the shortest distance between this point and another given point [duplicate]
Possible Duplicate:
Calculating Distance of a Point from an Ellipse Border
Given a point $A = (x_1, y_1)$ and a $2$D ellipse, how could we find a point $B = (x_2, y_2)$ on the ellipse so that it has the shortest distance between point $A$ and $B$?
The point $A$ can be anywhere on the same plane of the ellipse. If possible, please list the final expression of the point $B.$
-
What have you already attempted? Hints: 1. What is the distance between two points? 2. If your point $(x_2,y_2)$ is on the ellipse, what must be true about it (e.g. It must satisfy the equation for the ellipse). – Daryl Aug 22 '12 at 22:05
1
– matt Aug 23 '12 at 0:11
@Jin: You have not told us what are the given data about the ellipse, e.g., whether its axes are parallel to the coordinate axes or not. Maybe the ellipse is given by $5$ of its points; then the solution of your problem looks very unintuitive. – Christian Blatter Oct 22 '12 at 11:20
## marked as duplicate by Rahul Narain, Quixotic, Davide Giraudo, Alexander Gruber, Hagen von EitzenDec 24 '12 at 12:42
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
## 2 Answers
Not the easiest. Let the unknown point $B$ be $(x, y).$ $B$ is on the ellipse so $$(x/a)^2 + (y/b)^2 = 1,$$ i.e. $y = \frac{b}{a}\sqrt{a^2 - x^2}.$ Hence the squared distance between $A = (x_1, y_1)$ and $B$ is $$\begin{eqnarray} d & = & (x-x_1)^2 + (y -y_1)^2 \\ & = & (x-x_1)^2 + \Big(\frac{b}{a}\sqrt{a^2 - x^2} - y_1 \Big)^2. \end{eqnarray}$$ That's a function in $1$ variable, namely $x$, which you can minimize using calculus.
-
What if the closest point is in the bottom half of the ellipse? – Daryl Aug 22 '12 at 22:45
@Daryl how is that different? If there exists a point $(x, y)$ s.t. $(x,y)$ is on the ellipse and the distance between $(x,y)$ to $A$ is minimal, then the above formulation will find it, no? – user2468 Aug 22 '12 at 22:50
1
The problem is that $y=\pm \frac{b}{a}\sqrt{a^2-x^2}$. So if the point $(x_1,y_1)$ is in the bottom half of the domain ($y_1<0)$, you won't find the shortest distance as you are neglecting the bottom half of the ellipse. – Daryl Aug 22 '12 at 23:40
An arbitrary ellipse can be parameterised as $x=h+a\cos t,\, y=k+b\cos t$ $(0\leq t\leq 2\pi)$, which is an ellipse centred at $(h,k)$, with axis lengths $a$ and $b$.
Minimising the square of the distance is an equivalent problem to minimising the distance.
Approach 1:
The point $(x_2,y_2)$ lies on the above parametised ellipse.
Then, $$f=(x_1-h-a\cos t)^2 +(x_2-k-b\sin t)$$ is the distance to any point on the ellipse.
As $d$ is a simple function of $t$ only, the value of $t$ which is minimised when $\dfrac{d f}{d t}=0$.
Approach 2:
The point $p_1=(x_1,y_1)$ can be written as $$p_1=p_2+\lambda n$$ where $p_2=(x_2,y_2)$ is the point on the ellipse and $n$ is the vector normal to the ellipse. Using the above parameterisation for the ellipse, you obtain the system of nonlinear equations $$\begin{bmatrix}x_1\\y_1\end{bmatrix}=\begin{bmatrix}h+a\cos t\\k+b\sin t\end{bmatrix}+\lambda \begin{bmatrix}b\cos t\\a\sin t\end{bmatrix},$$ which has to be solved for $t$ and $\lambda$.
Approach 3:
A third way to formulate this problem is as a constrained optimisation problem, so that this is formulated as $$\min\limits_{x_1,x_2} f = (x_1-x_2)^2+(y_1-y_2)^2$$ subject to $$\frac{(x_2-h)^2}{a^2}+\frac{(y_2-k)^2}{b^2}-1=0.$$
This can be solved using lagrange multipliers to give $$L(x_1,x_2,\lambda)=(x_1-x_2)^2+(y_1-y_2)^2+\lambda\left(\frac{(x_2-h)^2}{a^2}+\frac{(y_2-k)^2}{b^2}-1\right).$$ The optimal solution can be found by solving the system of equations $$\frac{d L}{d x_1}=0,\,\frac{d L}{d x_2}=0,\,\frac{d L}{d \lambda}=0.$$
-
You seem to have missed a rotation. Your "arbitrary" ellipses all have axes parallel to the $x$- and $y$-axes. There are five degrees of freedom in defining an ellipse. You have only given four. For each $(a:b:c:f:g:h) \in \mathbb{RP}^5$ with $h^2-ab > 0,$ there corresponds a unique ellipse given by the equation $ax^2+2hxy+2gx+by^2+2fy+c = 0.$ I agree with looking for critical points of the family of functions $\Delta : \mathbb{R}^2 \times \mathbb{S}^1 \to \mathbb{R}$ where $\Delta((x,y),p)$ is the square of the distance between the ambient point $(x,y)$ and the point $p$ on the ellipse. – Fly by Night Aug 23 '12 at 0:24
@FlybyNight: I did notice that (eventually), but the space can be simply rotated so that the axes of the ellipse and coordinate system coincide. – Daryl Aug 23 '12 at 3:01
@Daryl - I have set this up and obtained equations $2(p_x-q_x) = \lambda (2Ap_x + Bp_y + D), 2(p_y-q_y) = \lambda (Bp_x + 2Cp_y + E), Ap_x^2 + Bp_x p_y + Cp_y^2 + Dp_x + Ep_y + F = 0$ where $(p_x,p_y)$ is a point on the ellipse and $(q_x,q_y)$ is the query point. I have tried solving these using Sage and Matlab Symbolic Toolbox for $p_x$, $p_y$, and $\lambda$, and the expressions are either completely out of control or the system cannot find a solution. Is there a way to actually write the solution in closed form? – David Doria Mar 18 at 19:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 54, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9222856163978577, "perplexity_flag": "head"}
|
http://nrich.maths.org/1175/index
|
### Pebbles
Place four pebbles on the sand in the form of a square. Keep adding as few pebbles as necessary to double the area. How many extra pebbles are added each time?
### It Figures
Suppose we allow ourselves to use three numbers less than 10 and multiply them together. How many different products can you find? How do you know you've got them all?
### Bracelets
Investigate the different shaped bracelets you could make from 18 different spherical beads. How do they compare if you use 24 beads?
# Sets of Numbers
##### Stage: 2 Challenge Level:
How many different sets of numbers with at least four members can you find in the numbers in this box?
For example, one set could be multiples of $4$ {$8, 36 ...$}, another could be odd numbers {$3, 13 ...$}.
The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9460828900337219, "perplexity_flag": "middle"}
|
http://mathoverflow.net/revisions/34830/list
|
## Return to Answer
When Kummer started working on research problems,he tried to solve what became known as "Kummer's problem", i.e., the determination cubic Gauss sums (its cube is easy to compute). Kummer asked Dirichlet to find out whether Jacobi or someone else had already been working on this, and to send him everything written by Jacobi on this subject. Dirichlet organized lecture notes of Jacobi's lectures on number theory from 1836/37 (see also this question), where Jacobi had worked out the quadratic, cubic and biquadratic reciprocity laws using what we now call Gauss and Jacobi sums.
In his first article, Kummer tried to generalize a result due to Jacobi, who had proved (or rather claimed) that primes $5n+1$, $8n+1$, and $12n+1$ split into four primes in the field of 5th, 8th and 12th roots of unity. Kummer's proof of the fact that primes $\ell n+1$ split into $\ell-1$ primes in the field of $\ell$-th roots of unity (with $\ell$ an odd prime) was erroneous, and eventually led to his introduction of ideal numbers.
After Lame (1847) had given is "proof" of FLT in Paris, Liouville had observed that there are gaps related to unique factorization; he then asked his friend Dirichlet in a letter whether he knew that Lame's assumption was valid (Liouville only knew counterexamples for quadratic fields). A few weeks later, Kummer wrote Liouville a letter. In the weeks between, Kummer had looked at FLT and found a proof based on several assumptions, which later turned out to hold for regular primes. Kummer must have looked at FLT before, because in a letter to Kronecker he said that "this time" he quickly found the right approach.
The Paris Prize, as John Stilwell already wrote, did play a role for Kummer, as he confessed in one of his letters to Kronecker that can be found in Kummer's Collected Papers. But mathematically, Kummer attached importance only to the higher reciprocity laws.
Kummer worked out the arithmetic of cyclotomic extensions guided by his desire to find the higher reciprocity laws; notions such as unique factorization into ideal numbers, the ideal class group, units, the Stickelberger relation, Hilbert 90, norm residues and Kummer extensions owe their existence to his work on reciprocity laws. His work on Fermat's Last Theorem is connected to the class number formula and the "plus" class number, and a meticulous investigation of units, in particular Kummer's Lemma, as well as the tools needed for proving it, his differential logarithms, which much later were generalized by Coates and Wiles. Some of the latter topics were helpful to Kummer later when he actually proved his higher reciprocity law.
I'll put
Here's my article on Jacobi and Kummer's ideal numberson the web this afternoon.
1
When Kummer started working on research problems,he tried to solve what became known as "Kummer's problem", i.e., the determination cubic Gauss sums (its cube is easy to compute). Kummer asked Dirichlet to find out whether Jacobi or someone else had already been working on this, and to send him everything written by Jacobi on this subject. Dirichlet organized lecture notes of Jacobi's lectures on number theory from 1836/37 (see also this question), where Jacobi had worked out the quadratic, cubic and biquadratic reciprocity laws using what we now call Gauss and Jacobi sums.
In his first article, Kummer tried to generalize a result due to Jacobi, who had proved (or rather claimed) that primes $5n+1$, $8n+1$, and $12n+1$ split into four primes in the field of 5th, 8th and 12th roots of unity. Kummer's proof of the fact that primes $\ell n+1$ split into $\ell-1$ primes in the field of $\ell$-th roots of unity (with $\ell$ an odd prime) was erroneous, and eventually led to his introduction of ideal numbers.
After Lame (1847) had given is "proof" of FLT in Paris, Liouville had observed that there are gaps related to unique factorization; he then asked his friend Dirichlet in a letter whether he knew that Lame's assumption was valid (Liouville only knew counterexamples for quadratic fields). A few weeks later, Kummer wrote Liouville a letter. In the weeks between, Kummer had looked at FLT and found a proof based on several assumptions, which later turned out to hold for regular primes. Kummer must have looked at FLT before, because in a letter to Kronecker he said that "this time" he quickly found the right approach.
The Paris Prize, as John Stilwell already wrote, did play a role for Kummer, as he confessed in one of his letters to Kronecker that can be found in Kummer's Collected Papers. But mathematically, Kummer attached importance only to the higher reciprocity laws.
Kummer worked out the arithmetic of cyclotomic extensions guided by his desire to find the higher reciprocity laws; notions such as unique factorization into ideal numbers, the ideal class group, units, the Stickelberger relation, Hilbert 90, norm residues and Kummer extensions owe their existence to his work on reciprocity laws. His work on Fermat's Last Theorem is connected to the class number formula and the "plus" class number, and a meticulous investigation of units, in particular Kummer's Lemma, as well as the tools needed for proving it, his differential logarithms, which much later were generalized by Coates and Wiles. Some of the latter topics were helpful to Kummer later when he actually proved his higher reciprocity law.
I'll put my article on Jacobi and Kummer's ideal numbers on the web this afternoon.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9809577465057373, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/97771/motivation-for-solution-to-constructing-a-set-of-1983-distinct-integers-such-tha/97778
|
# Motivation for solution to constructing a set of 1983 distinct integers such that no three are consecutive terms of an arithmetic progression
Problem: Is it possible to choose $1983$ distinct positive integers, all less than or equal to $100,000$, no three of which are consecutive terms of an arithmetic progression? (Source: IMO 1983 Q5)
Solution: We construct a set $T$ containing even more than 1983 integers, all less than $10^5$ such that no three are in arithmetic progression, that is, no three satisfy $x+z=2y$. The set $T$ consists of all positive integers whose base $3$ representations have at most $11$ digits, each of which is either $0$ or $1$ (i.e., no $2$'s). There are $2^{11} -1 > 1983$ of them, and the largest is
$$11111111111_3=1+3^2+3^3+\cdots+3^{10}=88573<10^5$$
Now suppose $x+z=2y$ for some $x,y,z\in T$. The number $2y$, for any $y\in T$, consists only of the digits $0$ and $2$. Hence $x$ and $z$ must match digit for digit, and it follows that $x=z=y$. Hence the set $T$ contains no arithmetic progression of length 3, and the desired selection is possible.
My question now is, how do I gain the thinking process to come up with the idea for tacking the problem that yielded the solution above? How can I deduce or intuitively know to use this particular approach of using base $3$ representations come up with a solution to the problem? It seems to me that it requires a great amount of creativity to end up with a method of approach to tackle the problem like that. Would someone be able to explain the motivation behind the solution?
Any thoughts and insights are greatly appreciated.
-
4
You are quit right that «it requires a great amount of creativity to end up with a method of approach to tackle the problem like that». You gain the ability to do that with practice, exposition to similar/connected ideas, more practice, more exposition to similar/connected ideas, and then more practice, and more exposition to similar/connected ideas. In isolation, most ideas appear to require genius—in reality, very few do. – Mariano Suárez-Alvarez♦ Jan 10 '12 at 3:50
When I see any problem containing a number that could represent a year in the late 20th century, I think: "Contest math." Then I think: "The thought process comes from having solved a lot of other contest math problems." I am being slightly facetious here, but my point is that unless you want to compete in the IMO, it is OK if you can't easily do IMO problems. (Many mathematicians can't.) If you do want to compete, then: practice! and don't feel bad about learning from solutions to old problems that you don't quite "get." Study them and you will acquire the techniques by osmosis. – leslie townes Jan 10 '12 at 4:46
## 3 Answers
Suppose you have come to the point where you know that you are looking for a set of numbers which does not contain a triplet $x$, $y$, $z$ such that $x+z=2y$.
If I give you three numbers, how do you check if they do or do not satisfy that equality? Well, the stupidest way is to simply see if $x$ and $z$ add up to $2y$, of course!
Now, if we pick two numbers and actually try to add them, we immediately notice that there is something screwing with us: all the carrying over from one column to the other. So, to avoid that, we only consider numbers $x$ and $z$ such that when we add them there is no carrying, crossing our fingers so that this condition is not too draconian to leave us with too few candidates... (And hey, we got the opportunity to use the word draconian!)
We also need to compute $2y$, so we may just as well —keeping our fingers crossed— assume that when we compute it there is also no carrying over.
If the digits of $x$ are $x_nx_{n-1}\cdots x_0$, and similarly for $y$ and $z$, at this point, the condition that $x+z=2y$ translates into
$x_i+z_i=2y_i$ for all $i\in\{0,\dots,n\}$.
So, what we want is that if the digits of $x$, $y$ and $z$ satisfy this condition, then in fact $x$, $y$ and $z$ cannot be all different. So, if we only allow the digits of out numbers to come from a set $S\subseteq\{0,1,\dots,9\}$, we want that
for all $a$, $b\in S$, then $a+b<10$
so that there is no carrying over when we compute $x+z$, nor when we compute $2y$, and that
if $a$, $b$, $c\in S$ are such that $a+b=2c$, then in fact $a=b=c$.
(This does look like more than what we really need, but I can't think of what we really need... if it does not work, this is were we need to think more...)
So... a little work will show that we can pick $S=\{0, 1, 3, 4\}$. and then it works. Cool!
So we only have to consider all numbers whose digits are drawn from ${0,1,3,4}$ and which are less than $100,000$. Hmm. We think a bit and see that there are too few of these! Damn. In fact, there are 1025 of them.
Start over.
Well... now we have a little idea: what is this obsession with the number $10$. Really. I never wrapped my head with using $A$ and $B$ and so on as digits, so well, I'll try to use another base, but smaller than $10$.
(Hmm, base $2$, the usual suspect, is not going to help here...)
Random pick: let's do base $6$. Our set $S$ of digits will have to be drawn from ${0,1,2}$, because for $3$ already we have carry over when doubling. Hm. The sets $S$ we can construct have at most $2$ elements; for example, $\{0,2\}$ or $\{1,2\}$. Hmm. Thinking a bit shows there are too few numbers smaller than $100000$ using those $6$adic numbers (we of course prefer $\{0,2\}$ to $\{1,2\}$, because it allows us to write smaller numbers, so more numbers). Damn again.
So... Think a bit more... Using base $5$ is not going to help, because the maximal digit is also $2$... Base $4$... Ok. We can take $S=\{0,1\}$, and work, work, work, there are only 512 numbers below $100000$ using only them. Ok, but base $3$ allows us to use the same digits, and obviously there will be more numbers with only those digits. Hmm. Ooooo. $2048$ of them!
We did it :)
-
1
Why would one think of changing the base? Well, experience. – Mariano Suárez-Alvarez♦ Jan 10 '12 at 4:40
1
Draconian is so pedestrian: you could have used Procrustean! (Really nice answer.) – Brian M. Scott Jan 10 '12 at 5:15
@Brian, (I had to check, but those two do not mean quite the same :) ) – Mariano Suárez-Alvarez♦ Jan 10 '12 at 8:10
Going just on the title (i.e. before I saw the limit to numbers less than 100.000) my immediate reaction was that powers of two would work, so there must be some constraint preventing them from being valid. That's another prompt to think of changing the base. – Peter Taylor Jan 10 '12 at 12:35
Sorry, Mariano, I didn’t mean to suggest that they did, just that either would be appropriate (and that one has perhaps even fewer opportunities to use Procrustean). – Brian M. Scott Jan 11 '12 at 2:12
show 5 more comments
if this is the number line:
````ooooooooooooooooooooooooooooooooo...
````
the first dot represents 1, the next 2, ...
notice there's an arithmetic progression here:
````ooooooooooooooooooooooooooooooooo...
^^^
````
we can initially remove all arithmetic progressions of step 1 by removing every third
````oo oo oo oo oo oo oo oo oo oo oo ...
^^^
````
notice that it doesn't contain any arithmetic progressions of step 2 either! but it does contain an arithmetic progression of size 3:
````oo oo oo oo oo oo oo oo oo oo oo ...
^ ^ ^
^ ^ ^
````
so let's remove the third from each of those
````oo oo oo oo oo oo oo oo ...
````
notice that the arithmetic progressions of size 4, 5, 6, 7 and 8 are missing too! The next one we have is:
````oo oo oo oo oo oo oo oo ...
^ ^ ^
````
so we can remove those..
````oo oo oo oo oo oo ...
````
I think that was a natural way to approach the problem and after actually doing it for a little while the pattern and connection with base 3 numbers is clear. So the next step is to write it out in mathematical symbols and remove all the scaffolding, that's how you end up back at the proof you posted.
-
1
This is how I would have done it if I had started from zero, modulo variation, I think. The equation $x+z=2y$ says exactly that for each pair $x$, $z$ we need to remove the middle point, and after turning around the idea this has to bring up the Cantor set and so on. (There is a difference between the continuous case and the discrete case, which makes it interesting) – Mariano Suárez-Alvarez♦ Jan 10 '12 at 7:43
Maybe it can be motivated by imagining objects other than integers, and wondering for which kinds of objects and for which rules of addition you can construct such a set. Then once you figure this out, you can devise a scheme to map the solution to integers. That said, I probably wouldn't have been able to figure out the answer.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 87, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9490265846252441, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/39227/epsilon-regularity-what-does-it-say-and-where-does-it-come-from/39524
|
Epsilon regularity: what does it say and where does it come from?
Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
The $\varepsilon$-regularity phenomenon shows up in several different contexts. I try to describe it focussing on the harmonic map situation, but I really would like to understand the situation in general. The following is the Schoen-Uhlenbeck $\varepsilon$-regularity lemma, extracted from Tobias H. Colding, William P. Minicozzi II, An excursion into geometric analysis.
Let $N$ be a Riemannian manifold and $B_{r}$ be the ball of radius $r$ centred at the origin in $\mathbf{R}^k$. Then there exists $\varepsilon(k,N)$ such that if $u:B_{r}\in\mathbf{R}^k\rightarrow N$ is an energy minimizing map and $$\frac{\int_{B_{r}}|\nabla u|^2}{r^{k-2}}<\varepsilon,$$ then $u$ is smooth in a neighborhood of $0$ and $$|\nabla u|^2(0)\leq \frac{C}{r}.$$
Thus if a (conformally invariant) rescaling of the energy that $u$ minimizes is small (I suppose $u$ should be in a suitable Sobolev space), then $u$ is automatically smooth in some smaller ball. This rescaling is monotonically increasing thanks to a monotonicity lemma. I am not sure how to interpret the bound on the derivative at zero, though. The $\varepsilon$-regularity lemma quickly implies that the singular set $S$ of $u$ has $(k-2)$-dimensional Hausdorff measure zero.
My questions are:
1. What are the basic ingredients (I suppose I am talking about the properties of the energy functional here) that guarantee that such a lemma holds?
2. What is the meaning of the supremum of the set of all $\varepsilon$ such that the energy bound holds, and how can it be computed?
3. Is there a simple intuitive picture that I am missing that explains the situation?
4. Is there an instance of this phenomenon that predates the Schoen-Uhlenbeck paper?
Many thanks.
-
2
This isn't an answer, but one thing that may be helpful is to not consider the $\epsilon$-regularity lemma for classical solutions $u_i$. In this case the smoothness is not an issue (as you get it "standard elliptic estimates" i.e. Schauder theory) but rather the point is the uniformity (independent of the $u_i$) of the pointwise estimate. In particular, let $u_i$ have uniformly bounded energy. A subsequence will converge weakly to a weak solution $u$. $\epsilon$-regularity quantifies how the sequence can fail to smoothly converge. Namely, energy concentrating on small scales. – Rbega Sep 19 2010 at 1:40
1
I should add another advantage of considering classical solutions is that there are then really slick proofs of $\epsilon$-regularity type theorems. A good example is the proof of the Choi-Schoen theorem or of the smooth version of Allard regularity both of which are (I believe) in Colding and Minicozzi's book in not in the "excursion". These sorts of proofs might may also provide you with some intuition as they strip out all the technicalities and really get at the essence. – Rbega Sep 19 2010 at 1:45
Thanks! Regarding your first comment: is the pointwise estimate you refer to the estimate on the derivative at zero? – hce Sep 19 2010 at 8:37
When you said '$\varepsilon$-regularity quantifies how the sequence can fail to smoothly converge' you reminded me of Tristan Rivière ( arxiv.org/abs/math/0304396 ), who calls the Schoen-Uhlenbeck lemma 'the earliest example of energy quantization in non-linear analysis'. I guess this means that once we know that the conformally invariant energy is less than $\Lambda$, but $\Lambda$ is greater than $\varepsilon'$ (the sup of all $\varepsilon$ for which regularity is guaranteed), then we can expect to have a singular set (I admit this is quite tautological). – hce Sep 19 2010 at 8:39
This is why I find it interesting to understand what exactly $\varepsilon'$ is. – hce Sep 19 2010 at 8:40
show 2 more comments
3 Answers
The way I think of it is to view semilinear PDEs, such as the harmonic map equation, as a contest between the linear portion of the equation ($\Delta u$ in this case) and the nonlinear portions (which, in the case of harmonic maps, are roughly of the shape $|\nabla u|^2$). Intuitively, if the nonlinear part is small compared to the linear part then we expect the linear behaviour to dominate. In the case of harmonic maps, this means that we expect the solutions to behave like solutions to Laplace's equation $\Delta u = 0$, which are known to be regular.
A bit of dimensional analysis then tells us that the condition $\frac{\int_{B_r} |\nabla u|^2}{r^{k-2}} < \varepsilon$ has the right scale-invariance properties to have a chance of making the nonlinear term smaller than the linear term. (To make this rigorous, one of course needs to deploy various harmonic analysis estimates in well-chosen function space norms, such as Sobolev embedding.)
I discuss these heuristics (though more for dispersive equations than for elliptic ones) a bit at
http://terrytao.wordpress.com/2010/04/02/amplitude-frequency-dynamics-for-semilinear-dispersive-equations/
The question of what happens at the critical value of epsilon is an interesting one. Often, the limiting non-regular solutions at that value of epsilon, after rescaling and taking limits, tend to be quite symmetric and smooth, away from a very simple singular set (e.g. a subspace). I don't know the elliptic case too well, but one obvious candidate for such a solution would be a singular 2D harmonic map (such as the map from C -> S^1 given by x -> x/|x|) extended to k dimensions by adding k-2 dummy variables. In the dispersive case, the analogous concept is that of the minimal energy blowup solutions, and these tend to be soliton solutions (so, typically, they obey a time translation invariance symmetry), associated to the ground state solution of the associated time-independent equation.
-
You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
Below is a rather longwinded description of the special case when the singularity is at worst an isolated point. I suspect you know all this already. The magic comes at the very end (see paragraph that starts with "Here's the critical trick"). I don't know if this is the same thing that gives Schoen-Uhlenbeck the extra oomph or not.
There are three applications I know of: Minimal hypersurfaces, self-dual Yang-Mills connections, and Einstein manifolds. The regularity theory described below is used for both a convergence theorem except possibly a finite number of points and a removable singularity theorem. These theorems are then used to establish the so-called bubbling phenomenon. The story below applies to the latter two applications; the details for minimal hypersurfaces are slightly different.
Assume for convenience that we're on a smooth $n$-dimensional complete Riemannian manifold, where $n > 2$. Denote the Laplacian on both functions and tensors by $\Delta = g^{ij}\nabla_i\nabla_j$.
Denote the $L_p$ norm of a function or tensor $u$ with respect to the Riemannian metric by $\|u\|_p$.
Throughout the discussion below we will restrict to a geodesic ball $B(x, r)$ and assume that the following Sobolev inequality holds for a fixed constant $C_S$ and any smooth function $u$ compactly supported in $B$: $$\|\nabla u\|_2 \ge C_S\|u\|_{2n/(n-2)}.$$
First, you consider the scalar elliptic inequality $-\Delta u \le bu$, where $b$ can be viewed as a given potential function. Using Moser iteration, you show that if
$$\|b\|_{q/2}, \|u\|_p < C,$$
where $q > n$, for some $p > 1$ on $B(x,r)$, then there is a bound on $\|u\|_\infty$ on, say, $B(x,r/2)$.
Second, you use Moser iteration to show that if $\|b\|_{n/2}$ is sufficiently small (depends on $C_S$) on $B(x,r)$, then there is a bound for $\|u\|_{q/2}$ for some $q > n$ on $B(x,r/2)$.
Combining the first two shows that if $u$ satisfies $-\Delta u \le cu^2$ and $\|u\|_ {n/2}$ is sufficiently small on $B(x,r)$, then there is a bound on $\|u\|_\infty$ on $B(x,r/2)$.
In each application there is a curvature tensor $F$ that satisfies a PDE of the form $$-\Delta F = Q(F),$$ where $Q$ depends quadratically on $F$. Moreover, there is a convergence theorem when there is a uniform pointwise bound on $F$ (for Einstein manifolds you use the Cheeger-Gromov convergence theorem). Applying the results above to $u = |F|$ using coverings with smaller and smaller balls leads to a convergence theorem when there is a uniform bound on $\|F\|_ {n/2}$ where the convergence can fail at only a finite number of points (where in the limit there is too much of $\|F\|_{n/2}$ for the estimates above to hold).
Now you want to study the limit object near each point singularity. If you keep close track of the dependence on $r$ in the estimates above, the best you can do is a bound on $F$ that blows up like $r^{-2}$, where $r$ is the distance to the singularity. This is not enough to remove the singularity, so you need to use more than the elliptic PDE above.
Here's the critical trick: When doing Moser iteration on $u = |F|$, you use the standard Cauchy-Schwarz inequality to obtain the following pointwise inequality: $$|F\cdot\nabla F| \le |F||\nabla F|$$ But in all of the applications, you have extra information about $F$ and its covariant derivative. In particular, $F$ and/or its covariant derivative have certain symmetries, which allow you to prove a pointwise bound of the form $$|F\cdot\nabla F| \le c|F||\nabla F|,$$ where $c < 1$. This improvement when used with Moser iteration allows you to show that $F$ blows up more slowly than $r^{-2}$. Iterating this improvement leads to a uniform pointwise bound on $F$, which in turn allows the singularity to be removed using a straightforward geometric ODE argument.
The removable singularity theorem allows you to analyze both the limiting object with the bubbles removed as well as the bubbles themselves.
ADDED: I can't resist adding an anecdote to this: Right after I learned the trick in the paragraph above from a paper of Schoen-Simon-Yau, I went to a colleague's office to show it to him. As it happens, Eli Stein was there, and he exclaimed, "But it's in my book!" And indeed it is. You will find it presented very nicely in VII.3.1 "A subharmonic property of the gradient" of Stein's 1970 book, "Singular Integrals and Differentiability Properties of Functions". It is obvious that S-S-Y did not know this or forgot, because their proof is much messier than Stein's.
-
Might just be me but some of your text is coming out as latex and vice-versa. Is anyone else getting that? I can't edit. – Spencer Sep 20 2010 at 20:32
Sorry, but I can't figure out how to fix it. The LaTeX input appears fine to me and in fact compiles fine using LaTeX itself. But for some reason it's not working here. – Deane Yang Sep 20 2010 at 20:46
Bizarrely, the solution seems to be to put an extra space between the underscore and the bracketed n/2 in the affected region. I was inspired by Harald's comments in this meta thread meta.mathoverflow.net/discussion/637/… – jc Sep 21 2010 at 3:26
Many thanks for this answer! I still haven't digested it properly, but I understand the critical trick. I will probably ask for details soon as I still find it difficult to see what's happening geometrically. – hce Sep 21 2010 at 9:34
jc, thank you! I had learned the trick of adding spaces but it never occurred to me to do it there! – Deane Yang Sep 21 2010 at 15:27
show 1 more comment
I can comment on the $\epsilon$-regularity lemma for 4-dimensional Einstein manifolds. Namely, there is an $\epsilon$ depending on the dimension and Sobolev constant so that $\int_{B_x(r)} |Rm|^2 dV_g < \epsilon$ implies that $\sup_{B_x(r / 2)} |Rm|^2 \leq Cr^{-2} \int_{B_x(r)} |Rm|^2 dV_g$.
The key ingredient that makes this lemma work is that for Einstein manifolds, the function $|Rm|$ satisfies an elliptic inequality (it's subharmonic" for some elliptic operator). From there a standard PDE argument using Moser iteration gives the $\epsilon$-regularity. (It's like a non-linear version of the mean value inequality for subharmonic functions in Euclidean space).
-
Thanks! Just to clarify: does the elliptic inequality lead to regularity once the inequality you give is satisfied, or does it lead to said inequality? Also, is there a known computation of the sup of all $\varepsilon$ in this case? – hce Sep 19 2010 at 8:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 89, "mathjax_display_tex": 7, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9282472729682922, "perplexity_flag": "head"}
|
http://www.physicsforums.com/showthread.php?t=184333&page=5
|
Physics Forums
Thread Closed
Page 5 of 13 « First < 2 3 4 5 6 7 8 > Last »
Blog Entries: 19
Recognitions:
Science Advisor
## cat in a box paradox
Zz, as I understood, the paper you mentioned explains objective existence not simply by using decoherence, but by combining decoherence with the Zurek's existential (quantum Darwinism) interpretation of quantum mechanics. This interpretation is not the same as MWI. What I look for is a simple explanation of the existential (quantum Darwinism) interpretation of quantum mechanics.
In the meantime, I have found this:
http://www.advancedphysics.org/forum...ead.php?t=1791
Apparently, I am not the only one who does not understand the Zurek's interpretation of QM.
Maybe we should open a separate thread.
Mentor
Blog Entries: 27
Quote by Demystifier Zz, as I understood, the paper you mentioned explains objective existence not simply by using decoherence, but by combining decoherence with the Zurek's existential (quantum Darwinism) interpretation of quantum mechanics. This interpretation is not the same as MWI. What I look for is a simple explanation of the existential (quantum Darwinism) interpretation of quantum mechanics.
I didn't say that it is the same as MWI. I don't think the authors were trying to do that. However, they have tried to show that by invoking decoherence, you CAN get back the classical "certainty" that we know and love. I thought this was a very good first step, at least, in trying to figure out why our classical world has a definite objectivity, meaning you get a definite ONE outcome when you make a classical measurement. That's what they have tried to show.
Zz.
Quote by Fra Dany, do you have any yet finished papers where your personal ideas are elaborated?
The paper I referred to in my post above is quant-ph/0606121 entitled “On the connection between classical and quantum mechanics”. It will be published in the HAIT JSE special issue devoted to memory of Prof. I.D.Vagner.
The related papers are:
1)physics/0504008 entitled “On the problem of Zitterbewegung of the Dirac electron”, HAIT JSE, 1 (3), 411,(2004);
2)“Quantum mechanics of non-abelian waves I”, Hadronic Journal,6, 801(1983).
The first is the corrected version of Ch. IX and the second is Ch.VIII of my Ph.D thesis entitled “Quantum Mechanics of Non-Abelian Waves”, Tel-Aviv University, 1982, unpublished.
Ch. III – Ch.VII was published as the paper written by L.P.Horwitz and L.C. Biedenharn, Ann. Phys., 157, 432 (1984).
I discuss ideas here at PF. All mentioned papers discuss mathematical results only. They use fairly advanced extensions of the functional analysis.
Not yet finished papers are:
1) On the “eigenschaften” operators in QM; finished, not written;
2) The squeezed states, the coherent states, etc.; perhaps finished, not written;
3) On relativistic QM; not finished, not written.
I understand that you are interesting in the problems of statmech. I do not believe that I will ever consider the description of more than N=3 states.
Regards, Dany.
Recognitions:
Homework Help
Quote by Demystifier But this is not enough for the consistency of the many-world interpretation. Decoherence alone does not explain why only one of the possibilities is seen by the observers. See e.g. http://xxx.lanl.gov/abs/quant-ph/0312059 (Rev. Mod. Phys. 76, 1267-1305 (2004))
I haven't gone through the paper ZapperZ mentioned, but my understanding is roughly as follows. Look at the state after observation:
|happy scientist>|alive cat> + |sad scientist>|dead cat>
If we denote the first term by |A> and the second by |B>, then decoherence says that |A> and |B> are incoherent, which roughly means that for any observable O we might measure, <A|O|B>$\approx$0. But as I described in post 33, this means the system is behaving essentially like a classical probabilistic ensemble, and so the results we get by continuing to apply Schrodinger's equation without collapse is the same as if we did assume collapse, ie, where we assume the cat is in a well-defined classical state, just one which we don't initially know.
In particular, |A> and |B> evolve independently: |happy scientist> evolves into |scientist picking up and hugging the cat>, while |sad scientist> independently evolves into |scientist quietly putting cat into a box>. Yes, a superposition still exists, but the two states in the superposition carry on as if it didn't, essentially because the incoherence of the states means there are no interference effects.
The question still remains: what determines which of the outcomes you experience. Evidently, you experience one, and someone else with an equal claim to be called "you" experiences the other. This is really strange, and I haven't heard any satisfying explanation that incorporates consciousness.
But for now, we can carry on noting that if you were to querry any of the different scientist-copies after he's carried out several quantum experiments, chances are very high that he'll have a memory of a world where the laws of quantum probability were closely followed, so the predictions of the theory are solid (in the probabiliistic sense which is the only sense in which QM can be verified).
Recognitions:
Homework Help
Science Advisor
Quote by StatusX The problem with the Copenhagen approach from a theoretical point of view is that the act of "measurement" is not well-defined. There are various attempts at nailing this concept down, but none that seem obviously correct. The difference with the many worlds view, and the reason I favor it, is that there is no collapse, and so the measurement problem disappears. In fact, that's the only real difference between it and the Copenhagen view: there are no extra assumptions, just one less. From simply denying this process and applying the idea of decoherence (which is not an assumption, but a consequence of QM common to all interpretations), the unitary schrodinger equation alone gives rise to phenonmena macroscopic beings would almost certainly interpret as wavefunction "collapse". That's too nice a fact to ignore.
I am trying to understand the next to last sentence in this paragraph. I thought that the MW interpretation and decoherence were very different beasts. Are you saying that one implies the other? I thought that MW did not make use of decoherence and vice versa.
Thanks for the interesting points.
Recognitions:
Homework Help
Science Advisor
Quote by ZapperZ Er.. where does it say these things occurs without a collapse? Isn't "decoherence", by definition, implied a gazillion interactions (and thus, collapse) of the system?
I am a bit confused by the last statement. Interactions are equivalent to collapse?? I thought that interactions in the context of decogherence meant entanglement of states and that no collapse ever took place. Maybe I missed completely the point?
Mentor
Blog Entries: 27
Quote by nrqed I am a bit confused by the last statement. Interactions are equivalent to collapse?? I thought that interactions in the context of decogherence meant entanglement of states and that no collapse ever took place. Maybe I missed completely the point?
I wanted to say "loss of coherence" after so many interactions with the surrounding, but then I'm only saying what "decoherence" is. I would consider an "interaction" as a "collapse", because such interaction can in fact tell you the state of a system.
Zz.
Recognitions:
Homework Help
Quote by nrqed I am trying to understand the next to last sentence in this paragraph. I thought that the MW interpretation and decoherence were very different beasts. Are you saying that one implies the other? I thought that MW did not make use of decoherence and vice versa.
Here's a quote that explains the gist of it:
Quote by wikipedia However, decoherence by itself may not give a complete solution of the measurement problem, since all components of the wave function still exist in a global superposition, which is explicitly acknowledged in the many-worlds interpretation. All decoherence explains, in this view, is why these coherences are no longer available for inspection by local observers. To present a solution to the measurement problem in most interpretations of quantum mechanics, decoherence must be supplied with some nontrivial interpretational considerations (as for example Wojciech Zurek tends to do in his Existential interpretation). However, according to Everett and DeWitt the many-worlds interpretation can be derived from the formalism alone, in which case no extra interpretational layer is required.
Basically, Everett and DeWitt reason that decoherence alone leads to what "local observers" (eg, one of the copies of the scientist) would interpret as irreversible wavefunction collapse. In other words, you could still enforce collapse, but it would be redundant (except from an ontological point of view, where it removes worlds that aren't practically accessible to us). Although decoherence is now widely accepted as a real effect, the validity of DeWitt's argument that it implies MW is still controvertial.
good post, X. When I was first taught of MW, I was given the impression that the branching of worlds was a very random ad hoc alternative to nondeterministic collapse. Now I wonder whether historically it was first proposed with decoherence already in mind.
Recognitions:
Homework Help
Science Advisor
Quote by Demystifier a) You don't read what I say. So, let me repeat. The cat cannot be both dead and alive, it is a logical contradiction. Still, it can be in a superposition of dead and alive. In this case, it is neither dead nor alive. Sometimes we say for such a state that the cat is "both dead and alive", but it is simply an incorrect (or imprecise) language. b) I say it is in the superposition of head and tail (recall that I am still talking within the 1. paradigm, despite the fact that I actually prefer 2.) By the way, this is my 666th post.
as I am rereading this thread, I want to say that I agree with Demystifier.
This part of the thread was about whether a linear superposition of two states (dead or alive) should be described as "both dead and alive" or "neither dead nor alive". At thi spoint I think that everybody agrees that the most (and maybe only) accurate description is to say that the system is a linear superposition, period. But if one insists on using everyday language, it seems impossible to accurately convey what a quantum linear superposition means. Then it becomes subjective, to a point, what language is used. Still, I personally think that "both dead an alive" is misleading. It would imply that once the measurement is made, and let`s say the outcome is "alive", that the cat "ceased to be dead" since it was both dead and alive before the measurement.
I find the "neither dead nor alive" at the same better and quite unsatisfying.
I would suggest the following as the best description. A cat in the linear superposition of dead and alive is a cat which has the potential of being alive and he potential of being dead.
Just my two cents....
Mentor
Blog Entries: 27
Quote by nrqed as I am rereading this thread, I want to say that I agree with Demystifier. This part of the thread was about whether a linear superposition of two states (dead or alive) should be described as "both dead and alive" or "neither dead nor alive". At thi spoint I think that everybody agrees that the most (and maybe only) accurate description is to say that the system is a linear superposition, period. But if one insists on using everyday language, it seems impossible to accurately convey what a quantum linear superposition means. Then it becomes subjective, to a point, what language is used. Still, I personally think that "both dead an alive" is misleading. It would imply that once the measurement is made, and let`s say the outcome is "alive", that the cat "ceased to be dead" since it was both dead and alive before the measurement. I find the "neither dead nor alive" at the same better and quite unsatisfying. I would suggest the following as the best description. A cat in the linear superposition of dead and alive is a cat which has the potential of being alive and he potential of being dead. Just my two cents....
So an electron that is in an H2 molecule is neither near one H atom, nor the other.
Where is the electron that somehow has formed the bonding or antibonding? It has formed it, but it isn't here nor there!
And you found this to be "better"?
Zz.
Recognitions:
Homework Help
Quote by cesiumfrog When I was first taught of MW, I was given the impression that the branching of worlds was a very random ad hoc alternative to nondeterministic collapse. Now I wonder whether historically it was first proposed with decoherence already in mind.
Yea, until a few months ago I just assumed it was someone getting carried away with their imagination and the weirdness of QM. Then I read some more about it and realized it's actually the simplest interpretation, in terms of number of assumptions, and more or less resolves the measurement problem. I'm surprised it isn't more popular than it is (I believe it's second to the Copenhagen interpretaion, depending on the kind of physicists you ask)
From my years of playing poker and trying to determine my opponents hand via looking for "tells" or ways to figure out if his hand is good or not based on my opponents behavior and the current environment(other information ive gathered such as the cards in my hand and other cards shown or showing). This is indeed the same problem. We can determine if the cat is alive or dead by making simple observations about it's environment whether it being atmospherical or physical. Is the box moving? Is the box shaking? Is the box warm in a particular spot? Is there air in which the cat can breathe?Is the box emitting sound? These are ways to determine if the cat is alive or not. The same thing is, in theory, true for partical movement just we have not yet found these observations or what to look for regarding the particals environment. And another question comes in where if 2 particals were entangled across the universe. Would the entanglements affect other traveling particals and entanglements?
Quote by StatusX Yea, until a few months ago I just assumed it was someone getting carried away with their imagination and the weirdness of QM. Then I read some more about it and realized it's actually the simplest interpretation, in terms of number of assumptions, and more or less resolves the measurement problem. I'm surprised it isn't more popular than it is (I believe it's second to the Copenhagen interpretaion, depending on the kind of physicists you ask)
Did you find out how MWI accounts for the observed probabilities (Born rule)?
Some comments. I read that paper again last night, and while the general idea that any system will attain some level of correlation with the environment, and that there is a selective mutual pressure between environment and the system is right on... ...but like others say, I wouldn't say it solves the collapse as such, beucase OTOH the collapse isn't an issue for me becuse IMO it's simply sort of a bayesian revision due to the limited measurement resolution and finite complexity of memory - I suspect Alan who is a poker will know what I mean - I like the poker analogy too. I see no way around this. Unless you of course reformulate the problem, but the care should be taken because then we might not ask the same question. Also, if we are consider an observer B that observes a system + observer A, then clearly we are working in two different descriptions. Observer A has not use of B:s information. Sure they can communicate, but then we add time. In my thinking (spacetime aside!) one can't transfer arbitrary amounts between two records arbitrarily. I think the information transfer is part of defining time which implies a locality in terms of information. I am eventually working on an explicit formalism for this but it alot of things to do left. Also, I think the assumption that there is strong correlation between the environment and the system in the first place is valid only it they are close to equilibrium - ie that the system is already "stabilized" i the environment. I figure that this is not a valid assuption in the general case. Also if one is to talk about the actual stabilisation process, this takes time, and then the argumentation gets more complicated. Information that is available in the future, is not available now. I see no sense in that argumentation. I think the paper is interesting in a sense but it does not get rid of the collapse. The fact that C may observe the correlation between A and B the system, and sees a resolution to the collapse problem is an observation with the wrong condition. The fact that A sees a collapse, doesn't mean that everybody sees a collapse. I don't see a problem with that at all. I think there is an intrinstic limit due to information capacity, which limits the maximum possible entanglement! and this constraint may impose collapses. I think part of the problem is that all the players have incomplete information, and it's NOT due to flawed or incompetent strategies, it's due to the limiting structures to hold correlation information and due to TIME that correlations are a dynamical thing, if you are thrown into a new environment, then you need some time to equilibrate with the environment, which is btw, mutual. /Fredrik
Quote by Fra Some comments. I think there is an intrinstic limit due to information capacity, which limits the maximum possible entanglement! and this constraint may impose collapses. I think part of the problem is that all the players have incomplete information, and it's NOT due to flawed or incompetent strategies, it's due to the limiting structures to hold correlation information and due to TIME that correlations are a dynamical thing, if you are thrown into a new environment, then you need some time to equilibrate with the environment, which is btw, mutual. /Fredrik
I'm going to continue with the poker analogy because it's less graphic then a dead cat.
When I sit down to a new table, I do have a set strategy and you are right. There is a time factor here. But if A has basic knowledge of environment A. And then A is thrown into environment B(or a new poker table with new people) with the knowledge of evironment A and we are observing a simular situation of a poker game. Then A would have the potential to make correct predictions on the opponents cards more so then when A started at evironment A.
Then when A is introduced to environment C and is also a simular situation of a poker game. Then A would have the knowledge of Evironments A and B. And so on and so forth untill the rules or stakes of the game are changed.
There is a learning curve of player A which could potentially be humans in the future if we can learn more about environments of particals and less about thier actions. This is a plausible solution because we no longer care what the particles are doing, thus we are not limited to just quantums observation problems. I.E double slot experiment.
If you know the cats state, the box has been opened (even if the box remains closed). So the box will always remain closed. Unless you smell something funky which opens the closed box that is still shut.
Thread Closed
Page 5 of 13 « First < 2 3 4 5 6 7 8 > Last »
Thread Tools
| | | |
|-------------------------------------------|------------------------------|---------|
| Similar Threads for: cat in a box paradox | | |
| Thread | Forum | Replies |
| | Quantum Physics | 6 |
| | Special & General Relativity | 3 |
| | General Discussion | 18 |
| | Special & General Relativity | 4 |
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9540933966636658, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/90805/example-of-special-lagrangian-submanifold/90887
|
example of special lagrangian submanifold
Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
are there any examples of a real analytic riemannian manifold that cannot be isometrically embedded as a special lagrangian submanifold of a calabi-yau manifold ?
peter hara
-
Asking for isometry is too much, A better question is if it can be embedded as a smooth manifold. – Mohammad F.Tehrani Mar 10 2012 at 15:23
5 Answers
1. If the question is "Are there examples of compact real-analytic Riemannian manifolds that cannot be isometrically embedded as a special Lagrangian submanifold of a compact Calabi-Yau manifold?", then the answer is "yes".
2. If the question is "Are there known, explicit examples of compact real-analytic Riemannian manifolds that cannot be isometrically embedded as a special Lagrangian submanifold of a compact Calabi-Yau manifold?", then the answer is "probably".
3. If the question is "Are there known, explicit examples of compact real-analytic Riemannian manifolds for which a proof is known that they cannot be isometrically embedded as a special Lagrangian submanifold of a compact Calabi-Yau manifold?", then the answer is "no" (to my knowledge).
For the first question, just note that, already for dimension 2, the space of compact Calabi-Yau surfaces is a finite-dimensional space, and the metrics that can be realized on compact complex curves in such a Calabi-Yau fall into a countable union of finite dimensional families. (Remember that special Lagrangian surfaces in a Calabi-Yau are complex curves in a different Calabi-Yau metric in the canonical $S^2$-family of Calabi-Yau metrics.) Thus, the set of such realizable metrics, even on the $2$-sphere, constitutes a countable union of finite dimensional families. This could never account for all of the real-analytic metrics on the $2$-sphere. Thus, some example exists, though we don't know one explicitly.
For the second question, consider the fact that it is highly unlikely that the induced metric on any complex curve in a Calabi-Yau surface has constant Gaussian curvature. The 'reason' is that most (non-flat) Ricci-flat Kahler metrics contain no complex curves with constant Gaussian curvature. It would be remarkable indeed if one of the Ricci-flat Kahler metrics on a (non-flat) compact 4-manifold had such a curve. In particular, I regard it as highly likely that the standard round metric on the $2$-sphere cannot be isometrically embedded as a complex curve in any compact Calabi-Yau surface.
My answer to the third question is just an affirmation of my ignorance.
A remark about the local story: peter h asked about what I would call the 'local case', i.e., whether a real analytic Riemannian manifold can be isometrically embedded as a special Lagrangian submanifold in some Calabi-Yau, with no assumptions about completeness of the ambient manifold. In particular, he raised the question for surfaces.
Now, in the case of a real-analytic metric on a Riemann surface, the answer would be 'yes', according to a paper in 2000 by D. Kaledin, "Hyperkaehler structures on total spaces of holomorphic cotangent bundles", which is available on the arXive (arXiv:alg-geom/9710026v1). (It's 100 pages, and I don't claim that I have read it, I'm just pointing out that it is there.) The main theorem of this paper is that, given any real-analytic Kahler manifold $M$, there exists a hyperKahler metric on a neighborhood of the $0$-section of the cotangent bundle $T^\ast M$ that is compatible with the natural complex and holomorphic structures on $T^\ast M$ and that induces the original metric on the $0$-section.
When the (real) dimension of $M$ is $2$, this would apply to show that $M$ is isometrically imbedded as a complex curve in a Calabi-Yau (complex) surface, and then one can apply the 'rotation trick' to turn this into a special Lagrangian surface when the ambient $4$-manifold is regarded as a complex surface with respect to one of the orthogonal complex structures. Thus, the case of surfaces would be covered by this theorem.
In fact, this would work in any even dimension when the given real-analytic metric is actually Kahler.
There would remain the question (which I raised in my original paper) of whether every real-analytic metric on $S^4$ can be realized by an embedding as a special Lagrangian submanifold of a $4$-dimensional Calabi-Yau.
-
You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
On the contrary, R. Bryant has shown that any closed oriented real analytic 3-dimensional riemannian manifold is the real locus of an antiholomorphic, isometric involution of a Calabi-Yau 3-fold (see http://arxiv.org/abs/math/9912246).
-
1
Only if you consider noncompact Calabi-Yau manifolds; see page 3. The real question is about compact Calabi-Yau manifolds. – Ben McKay Mar 10 2012 at 12:00
I agree this is the real question, but since the OP didn't mention compactness I assumed he didn't know R. Bryant's result. – BS Mar 10 2012 at 16:19
And I realize that even dimension 3 isn't in the OP (nor completeness of the CY, by the way). – BS Mar 10 2012 at 17:04
I think, eaven in this case you mentioned there counterexamples, but cannot be given explicitely. See the post of Bryant (above), part 1. Am I right?
hapchiu
-
@Robert Bryant: Actually I am considering the question: Are there examples of compact real analytic Riemannian manifolds that cannot be isometrically embedded as special lagrangian submanifolds of a (not necessary compact) Calabi-Yau manifold?
-
maybe in higher dimensions ??? – peter h Mar 10 2012 at 16:23
Do you want complete CY manifold ? – BS Mar 10 2012 at 17:07
or eaven in dimension 2 for the 2-sphere ? – peter h Mar 10 2012 at 17:09
no not necessary, like in bryant's paper like a "germ" around the real manifold. is this possible for the 2-sphere, for example ??? – peter h Mar 10 2012 at 17:11
1
Do not use answer boxes to post comments or updates. Since MO does not work like a forum thread, this does not work well. Instead, you can edit your own question to make updates or to add newer information – Yemon Choi Mar 11 2012 at 8:08
show 1 more comment
Actually I am intrested in the following: Are there examples of compact real analytic Riemannian manifolds that cannot be isometrically embedded as special lagrangian submanifolds of a Calabi-Yau manifold? Here the Calabi-Yau manifold doesn't have to be compact or complete, it should be like a "germ" around the Riemannian manifold (like in Bryant's paper). Are there counterexamples ? What about the two-sphere, that Robert Bryant mentioned ?
-
1
Please don't use the answer boxes for comments – Yemon Choi Mar 11 2012 at 8:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9097846746444702, "perplexity_flag": "head"}
|
http://en.wikipedia.org/wiki/LC_circuit
|
# LC circuit
Linear analog electronic filters
• Constant k filter
• m-derived filter
• General image filters
• Zobel network (constant R) filter
• Lattice filter (all-pass)
• Bridged T delay equaliser (all-pass)
• Composite image filter
• mm'-type filter
Simple filters
• RC filter
• RL filter
• LC filter
• RLC filter
edit
LC circuit diagram
An LC circuit, also called a resonant circuit, tank circuit, or tuned circuit, consists of an inductor, represented by the letter L, and a capacitor, represented by the letter C. When connected together, they can act as an electrical resonator, an electrical analogue of a tuning fork, storing energy oscillating at the circuit's resonant frequency.
LC circuits are used either for generating signals at a particular frequency, or picking out a signal at a particular frequency from a more complex signal. They are key components in many electronic devices, particularly radio equipment, used in circuits such as oscillators, filters, tuners and frequency mixers.
An LC circuit is an idealized model since it assumes there is no dissipation of energy due to resistance. Any practical implementation of an LC circuit will always include loss resulting from small but non-zero resistance within the components and connecting wires. The purpose of an LC circuit is usually to oscillate with minimal damping, so the resistance is made as low as possible. While no practical circuit is without losses, it is nonetheless instructive to study this ideal form of the circuit to gain understanding and physical intuition. For a circuit model incorporating resistance, see RLC circuit.
## Operation
An LC circuit can store electrical energy oscillating at its resonant frequency. A capacitor stores energy in the electric field between its plates, depending on the voltage across it, and an inductor stores energy in its magnetic field, depending on the current through it.
If a charged capacitor is connected across an inductor, charge will start to flow through the inductor, building up a magnetic field around it and reducing the voltage on the capacitor. Eventually all the charge on the capacitor will be gone and the voltage across it will reach zero. However, the current will continue, because inductors resist changes in current. The energy to keep it flowing is extracted from the magnetic field, which will begin to decline. The current will begin to charge the capacitor with a voltage of opposite polarity to its original charge. When the magnetic field is completely dissipated the current will stop and the charge will again be stored in the capacitor, with the opposite polarity as before. Then the cycle will begin again, with the current flowing in the opposite direction through the inductor.
The charge flows back and forth between the plates of the capacitor, through the inductor. The energy oscillates back and forth between the capacitor and the inductor until (if not replenished by power from an external circuit) internal resistance makes the oscillations die out. Its action, known mathematically as a harmonic oscillator, is similar to a pendulum swinging back and forth, or water sloshing back and forth in a tank. For this reason the circuit is also called a tank circuit. The oscillation frequency is determined by the capacitance and inductance values. In typical tuned circuits in electronic equipment the oscillations are very fast, thousands to millions of times per second.
## Resonance effect
The resonance effect occurs when inductive and capacitive reactances are equal in magnitude. The frequency at which this equality holds for the particular circuit is called the resonant frequency. The resonant frequency of the LC circuit is
$\omega_0 = {1 \over \sqrt{LC}}$
where L is the inductance in henries, and C is the capacitance in farads. The angular frequency $\omega_0\,$ has units of radians per second.
The equivalent frequency in units of hertz is
$f_0 = { \omega_0 \over 2 \pi } = {1 \over {2 \pi \sqrt{LC}}}.$
LC circuits are often used as filters; the L/C ratio is one of the factors that determines their "Q" and so selectivity. For a series resonant circuit with a given resistance, the higher the inductance and the lower the capacitance, the narrower the filter bandwidth. For a parallel resonant circuit the opposite applies. Positive feedback around the tuned circuit ("regeneration") can also increase selectivity (see Q multiplier and Regenerative circuit).
Stagger tuning can provide an acceptably wide audio bandwidth, yet good selectivity.
## Applications
The resonance effect of the LC circuit has many important applications in signal processing and communications systems.
1. The most common application of tank circuits is tuning radio transmitters and receivers. For example, when we tune a radio to a particular station, the LC circuits are set at resonance for that particular carrier frequency.
2. A series resonant circuit provides voltage magnification.
3. A parallel resonant circuit provides current magnification.
4. A parallel resonant circuit can be used as load impedance in output circuits of RF amplifiers. Due to high impedance, the gain of amplifier is maximum at resonant frequency.
5. Both parallel and series resonant circuits are used in induction heating.
LC circuits behave as electronic resonators, which are a key component in many applications:
## Time domain solution
### Kirchhoff's laws
By Kirchhoff's voltage law, the voltage across the capacitor, VC, plus the voltage across the inductor, VL must equal zero:
$V _{C} + V_{L} = 0.\,$
Likewise, by Kirchhoff's current law, the current through the capacitor equals the current through the inductor:
$i_{C} = i_{L} .\,$
From the constitutive relations for the circuit elements, we also know that
$V _{L}(t) = L \frac{di_{L}}{dt}\,$
and
$i_{C}(t) = C \frac{dV_{C}}{dt}.\,$
### Differential equation
Rearranging and substituting gives the second order differential equation
$\frac{d ^{2}i(t)}{dt^{2}} + \frac{1}{LC} i(t) = 0.\,$
The parameter ω0, the resonant angular frequency, is defined as:
$\omega_0 = { 1 \over \sqrt{LC} }$
Using this can simplify the differential equation
$\frac{d ^{2}i(t)}{dt^{2}} + \omega_0^ {2} i(t) = 0.\,$
The associated polynomial is
$s^2 + \omega_0^2 = 0$
Thus,
$s = +j \omega_0\,$
or
$s = -j \omega_0\,$
where j is the imaginary unit.
### Solution
Thus, the complete solution to the differential equation is
$i(t) = Ae ^{+j \omega_0 t} + Be ^{-j \omega_0 t}\,$
and can be solved for A and B by considering the initial conditions.
Since the exponential is complex, the solution represents a sinusoidal alternating current.
Since the electric current i is a physical quantity, it must be real-valued. As a result, it can be shown that the constants A and B must be complex conjugates:
$A = B^*$
Now, let
$A = { I_0 \over 2 } e^{j \phi }$
Therefore,
$B = { I_0 \over 2 } e^{ -j \phi }$
Next, we can use Euler's formula to obtain a real sinusoid with amplitude I 0, angular frequency ω0 = (LC)−1/2, and phase angle $\phi$.
Thus, the resulting solution becomes:
$i(t) = I_0 \cos(\omega_0 t + \phi ).\,$
and
$V (t) = L \frac{di}{dt} = -\omega_0 L I_0 \sin(\omega_0 t + \phi ) \,$
### Initial conditions
The initial conditions that would satisfy this result are:
$i() = I_0 \cos( \phi ).\,$
and
$V(t=0) = L \frac{di}{dt}(t=0) = -\omega_0 L I_0 \sin( \phi ) .\,$
## Series LC circuit
Series LC Circuit
In the series configuration of the LC circuit, the inductor L and capacitor C are connected in series, as shown here. The total voltage v across the open terminals is simply the sum of the voltage across the inductor and the voltage across the capacitor. The current i flowing into the positive terminal of the circuit is equal to the current flowing through both the capacitor and the inductor.
$v = v_L + v_C \,$
$i = i_L = i_C \,$
### Resonance
Inductive reactance magnitude ($\scriptstyle X_L\,$) increases as frequency increases while capacitive reactance magnitude ($\scriptstyle X_C\,$) decreases with the increase in frequency. At a particular frequency these two reactances are equal in magnitude but opposite in sign. The frequency at which this happens is the resonant frequency ($\scriptstyle f_0\,$) for the given circuit.
Hence, at resonance:
$X_L = -X_C\,$
${\omega {L}} = {{1} \over {\omega} {C}}\,$
Solving for $\scriptstyle \omega$, we have
$\omega = \omega_0 = { 1 \over \sqrt{LC}}$
which is defined as the resonant angular frequency of the circuit.
Converting angular frequency (in radians per second) into frequency (in hertz), we have
$f_0 = { \omega_0 \over 2 \pi} = {1 \over {2 \pi \sqrt{LC}}}$
In a series configuration, XC and XL cancel each other out. In real, rather than idealised components the current is opposed, mostly by the resistance of the coil windings. Thus, the current supplied to a series resonant circuit is a maximum at resonance.
• In the limit as $\scriptstyle f \to f_0$ current is maximum. Circuit impedance is minimum. In this state a circuit is called an acceptor circuit[citation needed].
• For $\scriptstyle f < f_0$, $\scriptstyle X_L \;\ll\; (-X_C)\,$. Hence circuit is capacitive.
• For $\scriptstyle f > f_0$, $\scriptstyle X_L \;\gg\; (-X_C)\,$. Hence circuit is inductive.
### Impedance
In the series configuration, resonance occurs when the complex electrical impedance of the circuit approaches zero.
First consider the impedance of the series LC circuit. The total impedance is given by the sum of the inductive and capacitive impedances:
$Z = Z_{L} + Z_{C}$
By writing the inductive impedance as ZL = jωL and capacitive impedance as ZC = (jωC)−1 and substituting we have
$Z(\omega) = j \omega L + \frac{1}{j{\omega C}}$ .
Writing this expression under a common denominator gives
$Z(\omega) = j \frac{(\omega^{2} L C - 1)}{\omega C}$ .
Finally, defining the natural angular frequency as
$\omega_0 = { 1 \over \sqrt{LC} }$
the impedance becomes
$Z(\omega) = jL \bigg( { \omega^2 - \omega_0^2 \over \omega } \bigg)$ .
The numerator implies that in the limit as $\omega \to \pm \omega_0$ the total impedance Z will be zero and otherwise non-zero. Therefore the series LC circuit, when connected in series with a load, will act as a band-pass filter having zero impedance at the resonant frequency of the LC circuit.
## Parallel LC circuit
Parallel LC Circuit
In the parallel configuration, the inductor L and capacitor C are connected in parallel, as shown here. The voltage v across the open terminals is equal to both the voltage across the inductor and the voltage across the capacitor. The total current i flowing into the positive terminal of the circuit is equal to the sum of the current flowing through the inductor and the current flowing through the capacitor.
$v = v_L = v_C \,$
$i = i_L + i_C \,$
### Resonance
Let R be the internal resistance of the coil. When XL equals XC, the reactive branch currents are equal and opposite. Hence they cancel out each other to give minimum current in the main line. Since total current is minimum, in this state the total impedance is maximum.
Resonant frequency given by: $f_0 = { \omega_0 \over 2 \pi } = {1 \over {2 \pi \sqrt{LC}}}$.
Note that any reactive branch current is not minimum at resonance, but each is given separately by dividing source voltage (V) by reactance (Z). Hence I=V/Z, as per Ohm's law.
• At f0, line current is minimum. Total impedance is maximum. In this state a circuit is called a rejector circuit[citation needed].
• Below f0, circuit is inductive.
• Above f0,circuit is capacitive.
### Impedance
The same analysis may be applied to the parallel LC circuit. The total impedance is then given by:
$Z = \frac{Z_{L}Z_{C}}{Z_{L} + Z_{C}}$
and after substitution of $\scriptstyle Z_{L}$ and $\scriptstyle Z_{C}$ and simplification, gives
$Z(\omega) = -j \frac{ \omega L}{\omega^{2}LC-1}$
which further simplifies to
$Z(\omega) = -j \bigg({1 \over C } \bigg) \bigg( \frac{ \omega }{\omega^{2} - \omega_0^2 } \bigg)$
where
$\omega_0 = { 1 \over \sqrt{LC}}$
Note that
$\lim_{\omega \to \pm \omega_0 } Z(\omega) = \infty$
but for all other values of $\scriptstyle \omega$ the impedance is finite. The parallel LC circuit connected in series with a load will act as band-stop filter having infinite impedance at the resonant frequency of the LC circuit. The parallel LC circuit connected in parallel with a load will act as band-pass filter.
## History
The first evidence that a capacitor and inductor could produce electrical oscillations was discovered in 1826 by French scientist Felix Savary.[1][2] He found that when a Leyden jar was discharged through a wire wound around an iron needle, sometimes the needle was left magnetized in one direction and sometimes in the opposite direction. He correctly deduced that this was caused by a damped oscillating discharge current in the wire, which reversed the magnetization of the needle back and forth until it was too small to have an effect, leaving the needle magnetized in a random direction. American physicist Joseph Henry repeated Savary's experiment in 1842 and came to the same conclusion, apparently independently.[3][4] British scientist William Thomson (Lord Kelvin) in 1853 showed mathematically that the discharge of a Leyden jar through an inductance should be oscillatory, and derived its resonant frequency.[1][3][4] British radio researcher Oliver Lodge, by discharging a large battery of Leyden jars through a long wire, created a tuned circuit with its resonant frequency in the audio range, which produced a musical tone from the spark when it was discharged.[3] In 1857, German physicist Berend Wilhelm Feddersen photographed the spark produced by a resonant Leyden jar circuit in a rotating mirror, providing visible evidence of the oscillations.[1][3][4] In 1868, Scottish physicist James Clerk Maxwell calculated the effect of applying an alternating current to a circuit with inductance and capacitance, showing that the response is maximum at the resonant frequency.[1] The first example of an electrical resonance curve was published in 1887 by German physicist Heinrich Hertz in his pioneering paper on the discovery of radio waves, showing the length of spark obtainable from his spark-gap LC resonator detectors as a function of frequency.[1]
One of the first demonstrations of resonance between tuned circuits was Lodge's "syntonic jars" experiment around 1889.[1][3] He placed two resonant circuits next to each other, each consisting of a Leyden jar connected to an adjustable one-turn coil with a spark gap. When a high voltage from an induction coil was applied to one tuned circuit, creating sparks and thus oscillating currents, sparks were excited in the other tuned circuit only when the circuits were adjusted to resonance. Lodge and some English scientists preferred the term "syntony" for this effect, but the term "resonance" eventually stuck.[1] The first practical use for LC circuits was in the 1890s in spark-gap radio transmitters to allow the receiver and transmitter to be tuned to the same frequency. The first patent for a radio system that allowed tuning was filed by Lodge in 1897, although the first practical systems were invented in 1900 by Italian radio pioneer Guglielmo Marconi.[1]
## References
1. Blanchard, Julian (October 1941). "The History of Electrical Resonance". Bell System Technical Journal (USA: American Telephone & Telegraph Co.) 20 (4): 415–. Retrieved 2011-03-29.
2. Savary, Felix (1827). "Memoirs sur l'Aimentation". Annales de Chimie et de Physique (Paris: Masson) 34: 5–37.
3.
4. ^ a b c Huurdeman, Anton A. (2003). The worldwide history of telecommunications. USA: Wiley-IEEE. pp. 199–200. ISBN 0-471-20505-2.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 54, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9198830723762512, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/247340/construction-of-a-sphere-bundle
|
# Construction of a sphere bundle
Let $\pi:E\to M$ be a rank $k$ vector bundle over a compact manifold $M$. The usual method to associate a sphere bundle to $E$ is by considering only vectors of length 1 in each fiber of $E$ (after choosing a metric on the bundle). This yields a bundle $S(E)\to M$ with fiber $S^{k-1}$.
My question is: Can we construct a $k$-sphere bundle $C(E)\to M$ from $E$ by looking at the one-point compactification of each fiber of $E$?
If this is indeed possible some details to the construction and references would be appreciated.
I suppose that the zero section of $E\to M$ would induce a section of $C(E)\to M$. This construction is probably related to the construction of the Thom-space, where the one-point compactification of the total space $E$ is considered.
-
1
Yes you can. Look at local trivializations for the vector bundle to make the local trivializations for the new sphere bundle... at some point you might have to use the fact that a linear map between vector spaces extends to a continuous map between the one-point compactifications, but that is easy (a linear map is proper!) – Dylan Wilson Nov 29 '12 at 16:02
2
Not only does the zero section of $E \to M$ lift to a section of $C(E) \to M$, there is also an $\infty$-section into $C(E) \to M$. – Alexander Thumm Nov 29 '12 at 16:04
@Alexander Thanks for this observation. – Dave Hartman Nov 29 '12 at 23:40
1
I'm pretty sure this gives the same answer as if you take the fiberwise quotient of the disk bundle by the sphere bundle. But as long as you're thinking about this, you should look up the Thom space and the Thom isomorphism, if you haven't seen it before. It's very cool -- you can think of a Thom space as a "twisted suspension" of the base space, and the Thom class is then a generalized version of the fundamental class of the sphere, which measures the extent to which your co/homology theory can "see" the twistedness of the original vector bundle. – Aaron Mazel-Gee Dec 10 '12 at 23:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.93424391746521, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/8141/how-to-calculate-a-bezier-curve-with-only-start-and-end-points
|
# How to calculate a Bézier curve with only start and end points?
This animation from Wikipedia shows basically what I want to accomplish, however - I'm hoping to have it flipped around, where it starts progressing more towards the destination and "up" (in this image), and then arcs more directly to the end point. However, I only have access to a starting point and ending point, what I am hoping to do is be able to determine the other points by specifying a "height" (or width, whatever you want to call it), to determine how high the arc actually goes.
Help or direction would be appreciated.
-
1
– Rahul Narain Oct 28 '10 at 4:32
Well, a (cubic) Bézier requires four points, so as it stands, you still have two degrees of freedom for your problem. You might have to think about how to position those other two points to get what you want. – J. M. Oct 28 '10 at 4:37
– muntoo Oct 28 '10 at 5:16
If I'm reading this correctly, he wants a cubic bezier identical to the one in the picture but reflected across the $y$ axis, and scaled vertically (fixed at the start and end points). He wants to be able to have the two other points in the bezier a function of the height (vertical scale) of the curve. – Justin L. Oct 28 '10 at 5:48
## 1 Answer
I had a related problem, where I knew the four points (start, end, two control points) and needed to generate the height (which it turns out is called the Sagitta). Here's my question:
Find sagitta of a cubic Bézier-described arc
My maths isn't strong enough to work it backwards, but you may be able to decode it from one of the very helpful answers there.
-
I suspect that in the past 7 months the OP has either solved his problem or moved on. – Peter Taylor May 23 '11 at 21:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.940176248550415, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/tagged/dimension-theory?sort=unanswered&pagesize=50
|
# Tagged Questions
In general topology, dimension theory studies various notions of dimension defined for topological spaces, for example Lebesgue covering dimension, small and large inductive dimension or Hausdorff dimension. In commutative algebra, dimension can be defined for commutative rings.
1answer
62 views
### Doubts about Hausdorff measure.
I am studying real analysis and I have some problems in understanding properties of Hausdorff measure. Let $\mathcal{E}_\delta$ be collection of subsets of $\mathbb{R}^N$ whose diameter is less then ...
1answer
59 views
### Dimension of graphs (Differential Geometry)
I have a rather basic question about the dimension of a graph mapped by the exponential map. The problem is I can't really visualize it. It starts like that: Let $M$ be a smooth manifold of ...
1answer
77 views
### Injective endomorphism and Hilbert dimension
Let $\mathcal{O}$ be a finitely-generated $K$-algebra where $K$ is a field and let $M$ be a finitely-generated $\mathcal{O}$-module. For every good filtration \$0 = M_0 \subset M_1 \subset M_2 \subset ...
1answer
19 views
### Confusion related to curse of dimensionality in k nearest neighbor
I have this confusion related to curse of dimensionality in k nearest neighbor search. It says that as the number of dimensions are higher I need to cover more space to get the same number of ...
1answer
30 views
### Codimensionality: On Cardinality of Linear Equations
How does the codimension of a subspace give the number of linear equations needed to define the subspace?
1answer
52 views
### Dimensions of Matrices Range (equalities).
I’d like to find range equalities. Considering the following: $$A=B+C \\ A=B.C^T \\ A=[ B^T C^T ]^T \\$$ I would like to find the function $f$ for each equality above. dim( R(A) ) = f( R(B) , ...
1answer
25 views
### Dimension of coefficents in a density equation
The density throughout a composite material is given by $T(x, y, z) = Axy^2 + Bxz^3 + Cy^2z^3,$ where $x$, $y$ and $z$ are the cartesian coordinates of the position inside the material. (a) Find the ...
0answers
327 views
### A short proof for $\dim(R[T])=\dim(R)+1$?
If $R$ is a commutative ring, it is easy to prove $\dim(R[T]) \geq \dim(R)+1$. For noetherian $R$, we have equality. Every proof I'm aware of uses quite a bit of commutative algebra and nontrivial ...
0answers
71 views
### Topological dimension of a countable dense set
I'm reading a (dynamical systems) paper in which topological dimension figures. In my situation, I'm trying to compute the topological dimension of the subset of the $d$-dimensional torus consisting ...
0answers
38 views
### Defining the Rank of a Projective Module
I am trying to understand the definition of rank for a projective module over a noncommutative ring. The definition I am using is: A sufficient condition for the rank of a free module over a ring ...
0answers
123 views
### Canonical $\pi$ dimensional space?
Can we talk about a canonical space of dimension $\pi$? Is there anything like $\mathbb R^\pi$? Have anyone met any fractal of dimension $\pi$?
0answers
51 views
### Buckingham Π-Theorem
I'm about to conduct some experiments concerning a welding process. To prepare for this I wanted to do a dimensional analysis of the process. So I read a lot about the Π-Theorem and I was able to ...
0answers
97 views
### Hilbert (polynomial) dimension and dimension of a support of a module
$\newcommand{\Supp}{\mathrm{Supp}}$ $\newcommand{\Ann}{\mathrm{Ann}}$ Let $X$ be an affine algebraic variety (over a field $K$, can assume it is algebraicaly closed), $M$ a finitely generated ...
0answers
48 views
### Disjointness of stars in a simplicial complex in $\ell_2$
Definitions Let's consider a full $n$-dimensional simplicial complex $C$ in $\ell_2(X)$. By that I mean a set of all functions $f:X \to[0,1]$ such that $\sum_{x\in X}f(x)=1$ and there are at most ...
0answers
11 views
### Cubic interpolation in arbitrary dimension?
Consider a $N$-dimensional space discretized with a regular cubic grid of $n^N$ cubes, each cube containing the value of a function $f$ in its center. How to correctly interpolate $f(x, y, z)$ using ...
0answers
28 views
### How to apply other than trivial dimensions?
I only used natural dimensions so far but I understand there could be Negative dimension with application e.g. dimension -2 definition and usage Non-integer dimension with application e.g. dimension ...
0answers
43 views
### Separation of Euclidean Space
Consider a finite collection $\mathcal{H}$ of hyperplanes of $\mathbb{R}^n$ that have a common line. Given some $A \subseteq \mathbb{R}^n$ that is homeomorphic to a subset of $\bigcup\mathcal{H}$, ...
0answers
21 views
### Density of a multifractal distribution
I am trying to grasp the concept of density of a multifractal. So I start from the simple case of a line. Let's assume I have a uniform distribution of points on a line and I center a cubic box in the ...
0answers
32 views
### Covering dimension of the union
Let $\{{A_n}\}$ be the closed subsets of $X$, such that ${A_n} \subset \operatorname{Int}{A_{n + 1}}$ and $\cup {A_n} = X$, if $A_1$ and all $\operatorname{cl}(A_n-A_{n-1})$ have the covering ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 48, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8904737234115601, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/53118/eigenvalues-of-a-cyclic-symmetric-tridiagonal-matrix-where-m-k-k1-tfrac12-s?answertab=oldest
|
# Eigenvalues of a cyclic symmetric tridiagonal matrix where $M_{k,k+1}=\tfrac12\sqrt{M_{k,k}M_{k+1,k+1}}$
Working on a physics problem, I've encountered some structured cyclic tridiagonal $n\times n$ matrices. They're all of the following form: $$\tiny \begin{bmatrix} \alpha_1 & \frac{\sqrt{\alpha_1\alpha_2}}2 & 0 & \cdots &\cdots &\cdots &\cdots &0 & \frac{\sqrt{\alpha_n\alpha_1}}2 \\ \frac{\sqrt{\alpha_1\alpha_2}}2 & \alpha_2 & \frac{\sqrt{\alpha_2\alpha_3}}2 &0 & \cdots &\cdots &\cdots &\cdots &0 \\ 0 & \ddots&\ddots&\ddots &0 & \cdots &\cdots &\cdots &0 \\ \vdots &0 &\frac{\sqrt{\alpha_{k-2}\alpha_{k-1}}}2 & \alpha_{k-1} & \frac{\sqrt{\alpha_{k-1}\alpha_k}}2 &0 & \cdots& \cdots&\vdots \\ 0& \cdots & 0 &\frac{\sqrt{\alpha_{k-1}\alpha_k}}2 & \alpha_k &\frac{\sqrt{\alpha_k\alpha_{k+1}}}2 & 0 & \cdots & 0\\ \vdots & \cdots & \cdots & 0 &\frac{\sqrt{\alpha_k\alpha_{k+1}}}2 & \alpha_{k+1} &\frac{\sqrt{\alpha_{k+1}\alpha_{k+2}}}2 &0 & \vdots \\ 0 & \cdots& \cdots& \cdots& 0 &\ddots &\ddots &\ddots & 0 \\ 0 & \cdots& \cdots& \cdots& \cdots & 0&\frac{\sqrt{\alpha_{n-2}\alpha_{n-1}}}2 & \alpha_{n-1}&\frac{\sqrt{\alpha_{n-1}\alpha_n}}2\\ \frac{\sqrt{\alpha_n\alpha_1}}2& 0 & \cdots& \cdots& \cdots& \cdots & 0&\frac{\sqrt{\alpha_{n-1}\alpha_n}}2 & \alpha_n \end{bmatrix}$$ i.e. they obey $M_{k,k+1}=M_{k+1,k} = \tfrac12\sqrt{\alpha_k\alpha_{k+1}}$ with $M_{k,k}=\alpha_k$ and $k=n+1$ is remapped to $k=1$.
I am interested in the eigenvalues of such a a matrix, or at least its characteristic polynomial, but I was not able to simplify the problem further than this, even is the geometric means on the second diagonals let me hope there is a solution to this problem.
P.S. One interesting case for me is when $\alpha_k$ is the binomial coefficient $\displaystyle \binom{n}{k}$, but I don't think it simplifies the problem.
-
That would be a cyclic tridiagonal (or periodic tridiagonal) matrix, to use the term of art. – J. M. Jul 22 '11 at 16:53
If all the $\alpha_i$ are the same it becomes very easy since then it is not only cyclic tridiagonal but a circulant matrix. Other than that i think you should just follow J.M.'s lead to find some specialzed algorithm to find the eigenvalues... – Peter Sheldrick Jul 22 '11 at 17:00
@ J. M. Thanks for the official name. I'll edit my question accordingly – Frédéric Grosshans Jul 22 '11 at 17:04
Without the "ears", the usual way is to note that the characteristic polynomials of successive minors form a system of orthogonal polynomials (with the tacit assumption that no off-diagonal element vanishes). I'll get back to you after I comb through the literature... – J. M. Jul 22 '11 at 17:19
## 1 Answer
Unless I've messed up something, this can be factorized as
$$M = \frac{1}{2} C^T C$$
where
$$C=\begin{bmatrix} \beta_1 & \beta_2 & 0 & ... & 0\\ 0 & \beta_2 & \beta_3 & ... & 0\\ & & \cdots & & \\ 0& 0 & \cdots & \beta_{N-1} & \beta_{N}\\ \beta_1& 0 & \cdots & 0 & \beta_{N}\\ \end{bmatrix}$$
and $\beta_i = \sqrt{\alpha_i}$
This at least shows that the matrix is semi positive definite: one of its eigenvalues is zero (if N is even), the rest are non-negative.
-
Thanks for the decomposition. The Semi-positive definiteness is a good news for me, since the eigenvalues of this matrix are supposed to be proportional to probabilities. – Frédéric Grosshans Jul 26 '11 at 15:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.931964635848999, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/79343/is-the-axiom-of-universes-harmless
|
# Is the axiom of universes 'harmless'?
Usually when you start studying category theory you see the usual definition: a category consists of a class $Ob(\mathcal{C})$ of objects, etc.
If you take ZFC to be your system of axioms, then a "class" (a proper one) is something which you can't formally use, since everything in the universe of discourse is a set.
Some people (MacLane? Grothendieck?) were understandably worried about this. Cutting down on the history which I am unqualified to give an accurate account of, there is the definition of Grothendieck universe.
If we add the following axiom of universes to ZFC, then we can get around having to use classes:
Axiom of universes (U): every set is contained in some universe.
Now, it can be proven that the axiom of universes is equivalent to the
Inaccesible cardinal axiom: for every cardinal μ, there is an inaccessible cardinal κ which is strictly larger.
which was proven to be independent of ZFC. Hence we can work in ZFC+U and do category theory with the concern of dealing with proper classes put at rest.
This seems to be now a standard approach to a good foundation of category theory.
My question, to put it informally, is: how innocent is the axiom of universes?
What I mean by this is: how do we know it does not have unexpected consequences which may alter the rest of mathematics? The motivation was to give a good foundation of category theory, but it would be unreasonable to give a great foundation that altered the rest of ordinary mathematics.
To give an example, we now that adding the axiom of choice to ZF has some startling consequences. For example, the Banach-Tarski paradox. How do we know that ZFC+U does not have some equally startling consequences? Why are we at rest with adding this axiom to our foundation of mathematics? Isn't this a rather delicate question? How much do we know about how good is the universe approach? (I would say a foundation for category theory is better than another one if it solves the 'proper classes issue' and it has less impact on the rest of mathematics.)
-
1
Just to comment on your last sentence, that is to say "ZFC+A proper class of inaccessible cardinals" is enough to solve all "proper class" issues. However this is not just "a single inaccessible" assumption, and it is quite heavier than assuming the existence of one or two inaccessible cardinals (however not even close to something like Mahlo cardinals). It does not solve all proper classes issues, especially not in set theory. The Russell paradox tells us not that we are limited by set theory, but rather that we are limited. You can never describe your system from within itself. – Asaf Karagila Nov 5 '11 at 22:28
If you only need one ‘level’ of classes, then NBG may provide a satisfactory solution. Mac Lane only assumes the existence of one universe in Categories for the Working Mathematician. But the truth of the matter is that the working categorist would rather have at least a countable infinity of levels, so that we can talk about such things as the category of all functors $\textbf{CRing} \to \textbf{Set}$ (relevant, for example, in Grothendieck's functor of points approach to algebraic geometry). – Zhen Lin Nov 5 '11 at 22:47
2
If you read French you may want to consult the contribution by Bourbaki to SGA 4.1. – t.b. Nov 5 '11 at 23:16
– Quinn Culver Dec 3 '11 at 17:31
Bruno, I'm not 100% clear about what you are trying to understand with this question. Modern mathematics is a lot more open to paradoxical results (e.g. Banach-Tarski Banach-Tarski) and the parts depending on universes (i.e. mild large cardinals) are not too horrible to endure. It seems that many people have accepted large cardinals, and even stronger axioms than universes. – Asaf Karagila Dec 7 '11 at 23:10
show 4 more comments
## 2 Answers
What I mean by this is: how do we know it does not have unexpected consequences which may alter the rest of mathematics?
I will give a couple remarks, and also link to these MathOverflow discussions:
(1) In set theory, the study of large cardinals (much "larger" than just inaccessible) has been very fruitful. The existence of many of these large cardinals requires the existence of inaccessibles. So set theorists are interested in these large cardinals because of their useful (perhaps "startling") consequences. If there were no interesting consequences, set theorists would find other things to look at.
(2) From the skeptical POV, we don't know what the consequences might be. It could be that ZFC is consistent but ZFC plus the universe axiom is inconsistent. Many people come to feel that they have some intuition that the existence of universes is not inconsistent with ZFC. This belief often comes from thinking about the way that the cumulative hierarchy works. On the other hand, there is a manuscript by Kiselev (link) in which he claims to prove that the existence of even one inaccessible cardinal is inconsistent with ZFC.
We do know that ZFC cannot prove that there is even one inaccessible cardinal. And we know we cannot prove in ZFC that the existence of an inaccessible is consistent, because of limitations coming from the incompleteness theorems. So any argument that inaccessibles are consistent will have to use methods that cannot be formalized in ZFC.
(3) Temporarily adopt a Platonistic perspective, at least for the sake of argument. From this position, each "axiom" is either true or false, but it cannot alter the properties of mathematical objects, which exist separately from the axioms used to study them. Of course we can prove false statements from false axioms, but we can't actually change the objects themselves.
(4) Now temporarily reject Platonism, and think only about formal proofs. Then it will not make any difference to my conception of mathematics if someone else adopts an axiom that I don't accept. I will simply put a * beside all the theorems that use this axiom, and count them as dubious at best. I might even reprove some of the theorems without the new axiom just so I know they are OK. In this way, my personal conception of mathematics would also be unchanged by other people using different axioms.
I think that (3) and (4) start to indicate the way that philosophical issues will enter in when we ask about the effects of different axioms on "mathematics".
(This answer is marked as community wiki, as I already gave a different answer for this question. Please feel welcome to add more links to the list of links above.)
-
## Did you find this question interesting? Try our newsletter
Sign up for our newsletter and get our top new questions delivered to your inbox (see an example).
email address
I will leave it to another user to discuss the exact strength of the universe axioms in set theory. There is a lot to say about that.
The thing that I want to point out is that, for the actual applications of the category-theoretic methods to number theory, such as Fermat's Last Theorem (FLT), it appears that the use of universes can be eliminated. For example, Colin McLarty published an article (ref, preprint) in the Bulletin of Symbolic Logic in 2010 in which he states:
"This paper aims to explain how and why three facts coexist:
1. Universes organize a context for the rather explicit arithmetic calculations proving FLT or other number theory.
2. Universes can be eliminated in favor of ZFC by known devices though this is never actually done (and this remains far stronger than PA).
3. The great proofs in cohomological number theory, such as Wiles [1995] or Deligne [1974], or Faltings [1983], use universes in fact."
The key claim I want to highlight is (2): many people believe that methods using universes are not needed for concrete results such as Wiles' theorem, and in principle the proofs can be rewritten without them. I am in no position to judge the claim but it seems to be accepted by quite a few people who have looked into the matter. There is an open conjecture that Fermat's Last Theorem can be proved in Peano Arithmetic and even in much weaker theories, and at present we have no reason to suspect that FLT cannot be proved in Peano Arithmetic.
This makes the foundational question of universes more interesting: they are used, clearly, but working number theorists know how to avoid them if desired, which leaves a sort of tension. This is the issue McLarty is getting at in his paper.
McLarty's most recent progress announcement indicates he has made even more progress since his 2010 paper.
-
4
– t.b. Nov 5 '11 at 23:56
This answer is very interesting as a comment: since the bounty is coming to an end, I will award it to you. I do not consider it really answers the question, as you point out in the first paragraph yourself, hence I will still leave this question without a checkmark. – Bruno Stonek Dec 10 '11 at 15:55
@Bruno Stonek: Thank you! I will do my best to write an answer to the question separately, in return. I expected someone else would write one, so I hesitated. But I can say something at least. I will write it tomorrow. – Carl Mummert Dec 11 '11 at 2:45
@CarlMummert: Great! I'm looking forward to it :) – Bruno Stonek Dec 11 '11 at 17:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9505066275596619, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/260391/central-limit-theorem-confusion
|
# Central limit theorem confusion
If a bunch of random variables $X_i$ are independently and identically distributed with an exponential distribution, their sum apparently follows a Gamma distribution.
But doesn't the central limit theorem imply that (for $X_i$ of any distribution with mean zero and variance $\sigma^2$), the sum $\sum_{i=1}^n X_i$ will become approximately normally distributed $~N(0,n\sigma^2)$ for large enough $n$ ?
Obviously I am missing something basic, but what's going on? How can the sum of i.i.d. exponential random variables have a Gamma distribution, but also be converging to normality?
-
## 2 Answers
There are several confusions here (I was also very confused when I started learning about that topic :-).)
• Exponential random variables have a non zero mean (and are positive). The quantity you should be looking at, which asymptotically converges in distribution to a normal variable is $$\sqrt{n} \left( \frac{\sum_{i = 1}^n X_i}{n} - \mu \right)$$ The $\sqrt{n}$ was essential here, otherwise the distribution of the average will converge to a point mass at $\mu$. That quantity will converge to $N(0,\sigma^2)$. Both $\mu$ and $\sigma$ will be determined by the parameter of the exponential distribution.
• The central limit theorem is asymptotic. The quantity $\sqrt{n} \left( \frac{\sum_{i = 1}^n X_i}{n} - \mu \right)$ will have a distribution. Let's call it $F_n$. (it is essential to remember that it depends on $n$). $F_n$ in general is not a normal distribution $N(0,\sigma^2)$. The central limit theorem tells us that that distribution gets in a certain sense closer and closer to $N(0,\sigma^2)$ as $n \to \infty$.
-
Yes I was mistaken in talking about a mean of zero, which is impossible for exponentials (actually I was originally motivated to ask this question because I wanted to sum Laplaces, but thinking about exponentials was the first step). But I'm confused by the $\sqrt(n)$ being essential there. That expression is giving a distribution for the sample average. Doesn't the CLT also give you an approximate distribution for the sample sum? Is it wrong to say that the sum of random variables with variance $\sigma^2$ will converge to a normal distribution with variance $n \sigma^2$? – s4027340 Dec 17 '12 at 2:03
@s4027340 The problem with saying that the sum of random variables will converge to a Normal distribution with variance $n\sigma^2$ as $n \rightarrow \infty$ is that $n\sigma^2 \rightarrow \infty$ too, and it doesn't help much to talk about a Normal distribution with infinite variance! It is absolutely correct to say that for any (large-ish) finite $n$, the distribution of the sum is approximately Normal with variance $n\sigma^2$, like you said--it just doesn't make much sense in the limit. – Jonathan Christensen Dec 17 '12 at 2:08
Ah, of course! That makes sense. – s4027340 Dec 17 '12 at 2:09
Good question! The Gamma distribution itself converges to a Normal distribution--exactly the Normal distribution that the Central Limit Theorem says the sum of the Exponential random variables converges to.
Of course, for any finite $n$ the distribution of the sum of Exponentials (the Gamma distribution) is not a normal distribution, since it's bounded below by zero, but as $n$ gets larger it gets closer and closer to a Normal distribution.
Here's a page that discusses the approximation, with some graphs that show how the Gamma distribution looks like a Normal when the first parameter (= the number of Exponential random variables you are summing up) gets large.
-
Is the difference between gamma and normal basically due to the negative part on the normal's left tail? – s4027340 Dec 17 '12 at 1:59
The shape of the Gamma will never be exactly a Normal distribution, even if you ignore the left-tail problem. But the CDF will get closer and closer to a Normal CDF. – Jonathan Christensen Dec 17 '12 at 2:06
Thanks for the link! – s4027340 Dec 17 '12 at 2:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9395833015441895, "perplexity_flag": "head"}
|
http://euler.slu.edu/escher/index.php/The_Three_Geometries
|
# The Three Geometries
### From EscherMath
M.C. Escher, Three Worlds (1955)
## Axioms and the History of Non-Euclidean Geometry
### Euclidean Geometry and History of Non-Euclidean Geometry
In about 300 BCE, Euclid penned the Elements, the basic treatise on geometry for almost two thousand years. Euclid starts of the Elements by giving some 23 definitions. After giving the basic definitions he gives us five “postulates”. The postulates (or axioms) are the assumptions used to define what we now call Euclidean geometry. [1]
The five axioms for Euclidean geometry are:
1. Any two points can be joined by a straight line. (This line is unique given that the points are distinct)
2. Any straight line segment can be extended indefinitely in a straight line.
3. Given any straight line segment, a circle can be drawn having the segment as radius and one endpoint as center.
4. All right angles are congruent.
5. Through a point not on a given straight line, one and only one line can be drawn that never meets the given line.
The fifth postulate is called the parallel postulate. Euclid used a different version of the parallel postulate, and there are several ways one can write the 5th postulate. They are all equivalent and lead to the same geometry.
• "If two lines are drawn which intersect a third in such a way that the sum of the inner angles on one side is less than two right angles, then the two lines inevitably must intersect each other on that side if extended far enough." (Euclid's version)
• "The sum of the angles in a triangle is exactly 180 degrees."
• “given a line L, and a point P not on that line, there is exactly one line through P which is parallel to L”.
The axioms are basic statements about lines, line segments, circles, angles and parallel lines. We need these statements to determine the nature of our geometry.
The fifth postulate, the “parallel postulate”, seemed more complicated and less obvious than the other four, so for many hundreds of years mathematicians attempted to prove it using only the first four postulates as assumptions.
The Greeks already studied spherical trigonometry. Hipparchus (190 BC-120 BC) was a Greek astronemer. hipparchus was known for his work in trigonometry and he may have known some results about spherical triangles. [2] Menelaus of Alexandria (ca 100 AD) worked on spherical geometry and was the first known to publish a treatise on the subject. Menelaus' work was called Sphaerica (3 volumes) and included material on the properties of spherical triangles. [3] Ptolemy (ca 90 - 168 AD) also included some studies of spherical triangles in his work. [4] Hyperbolic geometry, in comparison, took a lot longer to develop.
We saw that the parallel postulate is false for spherical geometry (since there are no parallel geodesics), but this is not helpful since some of the first four are false, too. For example there are many geodesics through a pair of antipodal points.
In fact, the first four postulates (plus the assumption that lines are infinite) imply that given a line and a point not on that line, there is a parallel line as required. The subtle question is: can there be more than one?[5]
In 1733, the Jesuit priest Giovanni Saccheri began by assuming the fifth postulate was false, and attempted (at great length) to derive a statement contradicting the other four. In doing so, he nearly produced the theory of hyperbolic geometry. However, his goal was not to discover new kinds of geometry, but to rule them out, so he concluded his treatise with a rant about the absurdity of everything he had just written.
The great German mathematician Carl Freidrich Gauss apparently believed that a geometry did exist which satisfied Euclid’s first four postulates but not the fifth. However, Gauss never published or discussed this work because he felt his reputation would suffer if he admitted he believed in non-Euclidean geometry. In the early 1800’s, the idea was preposterous.
Generally, Nikolai Ivanovich Lobachevsky is credited with the discovery of the non-Euclidean geometry now known as hyperbolic space. He presented his work in the 1820’s, but even it was not formally published until the 20th century, when Felix Klein and Henri Poincaré put the subject on firm footing.
In our two other geometries, spherical geometry and hyperbolic geometry, we keep the first four axioms and the fifth axiom is the one that changes. It should be noted that even though we keep our statements of the first four axioms, their interpretation might change!
### Spherical Geometry
The five axioms for spherical geometry are:
1. Any two points can be joined by a straight line.
2. Any straight line segment can be extended indefinitely in a straight line.
3. Given any straight line segment, a circle can be drawn having the segment as radius and one endpoint as center.
4. All right angles are congruent.
5. There are NO parallel lines.
How do we interpret the first four axioms on the sphere?
• Lines: What would a “line” be on the sphere? In Euclidean geometry a line segment measures the shortest distance between two points. This is the characteristic we want to carry over to spherical geometry. The shortest distance between two points on a sphere always lies on a great circle. Longitude lines and the equator on a globe are examples of great circles. Note that we can always draw a great circle, which we will from now on call a geodesic, through any two points. We have to be careful here, because unlike in Euclidean geometry this geodesic (“line”) may not be unique. Take for instance the north and South Pole on the globe. There are infinitely many great circles through these two poles. In general, any two points that lie on opposite sides of the sphere, so called antipodal points, can be joined by infinitely many geodesics.
• Line segments: We can extend any line segment, but at some point the line segment will then connect up with itself. A line of infinite length would go around the sphere an infinite amount of times.
• Circles: As we have stated the circle axiom it is true on the sphere. Note that it does not make sense to say that given any center C and any radius R we can draw a circle of radius R centered at C. If we take a radius less than half the circumference of the sphere, then we can draw the circle. If the radius is exactly half the circumference of the sphere, then the circle degenerates into a point. If the radius were greater than half the circumference of the sphere, then we would repeat one of the circles described before. Note that great circles are both geodesics (“lines”) and circles.
• Angles: Right angles are congruent. Think about the intersection of the equator with any longitude. These two geodesics will meet at a right angle.
How can we formulate the 5th postulate?
• No parallel lines: Any two geodesics will intersect in exactly two points. Note that the two intersection points will always be antipodal points.
• Sum of the angles in a triangle: On the sphere the sum of the angles in a triangle is always strictly greater than 180 degrees.
These basic facts really turn the properties of this geometry on its head. We will have to rethink all of our theorems and facts! Here are some examples of the difference between Euclidean and spherical geometry.
In Euclidean geometry an equilateral triangle must be a 60-60-60 triangle. In spherical geometry you can create equilateral triangles with many different angle measures. Take for instance two longitudes that meet at 90 and intersect them with the equator. This gives ride to a 90-90-90 equilateral triangle! If you shrink this triangle just a little bit, you can make an 80-80-80 triangle. If you expand it a bit, you can make a 100-100-100 triangle. As a matter of fact you can make a X-X-X triangle as long as 60 < X < 300.
Note that not having any parallel lines means that parallelograms do not exist. Recall that a parallelogram is a 4-gon that has the property that opposite sides are parallel. In Euclidean geometry this definition is equivalent to the definition that states that a parallelogram is a 4-gon where opposite angles are equal. In spherical geometry these two definitions are not equivalent. There are quadrilaterals of the second type on the sphere.
### Hyperbolic Geometry
The five axioms for hyperbolic geometry are:
1. Any two points can be joined by a straight line.
2. Any straight line segment can be extended indefinitely in a straight line.
3. Given any straight line segment, a circle can be drawn having the segment as radius and one endpoint as center.
4. All right angles are congruent.
5. Through a point not on a given straight line, infinitely many lines can be drawn that never meet the given line.
How do we interpret the first four axioms in hyperbolic space?
We first have to agree on a model of hyperbolic space. We will choose the Poincare Disk Model. We will think of all of hyperbolic space as living inside a disk. Putting an entire infinite world inside a disk will lead to some distortion as you might expect. We think of the center of the disk as being close to Euclidean geometry, but the closer we get to the edge of the disk, the more distorted the picture will become. We have to think of the boundary of the disk as being infinitely far away from the center of the disk. This means that anything we see close to the edge of hyperbolic space will appear much, much smaller than it actually is.
• Lines: In hyperbolic geometry a geodesic line segment measures the shortest distance between two points. There are two types of geodesics in the Poincare Disk Model (PDM). Geodesics will be Euclidean line segments passing through the center of the disk, or semi-circles, which meet the boundary of the disk in right angles.
• Line segments: Any finite piece of a geodesic.
• Circles: Given any center C and any radius R we can draw a circle of radius R centered at C. Hyperbolic circles look just like Euclidean circles, but the center is not located where a Euclidean center would be. The center of the circle will be slightly closed to the boundary of the PDM than it’s Euclidean counterpart.
• Angles: Right angles are congruent.
How do we interpret the 5th postulate?
• Infinitely many parallel lines: Given a line and a point not on the line, we can always draw infinitely many parallel lines through the point. Remember that two lines are parallel if they never meet. Because the geodesics in hyperbolic space include semi-circles, we have a bit more freedom in our choice of geodesic.
The easiest way to see this is to choose a geodesic that is a fairly small semi-circle near the boundary of the PDM. Now think of all the geodesics passing through the center of the PDM. You can draw infinitely many of these straight looking geodesics that never meet the semi-circle, so all of those are parallel to the small semi-circle.
• Sum of the angles in a triangle: On the sphere the sum of the angles in a triangle is always strictly less than 180 degrees.
These basic facts also turn the properties of this geometry on its head. We will have to rethink all of our theorems and facts for hyperbolic geometry too. Here are some examples of the difference between Euclidean and spherical geometry.
In Euclidean geometry an equilateral triangle must be a 60-60-60 triangle. In hyperbolic geometry you can create equilateral triangles with many different angle measures. Take for instance three ideal points on the boundary of the PDM. If we connect these three ideal points by geodesics we create a 0-0-0 equilateral triangle. Moving the vertices into the interior of hyperbolic space will result in equiangular triangles with small angle measures. We will be able to create X-X-X triangles with 0 ≤ X < 60.
Having infinitely many parallel lines means that parallelograms will look different than you expect!
Note that we cannot have squares or rectangles in hyperbolic space, because the sum of the angles of a quadrilateral has to be strictly less than 360.
## The Classification of Regular Tessellations
Two topics we studied - regular tessellations and (non-) Euclidean geometry - are connected in a special way. This is not something that is immediately obvious and you may not have expected such a connection at all. But if we look at all possible regular tessellations using the Schlafli symbol, then we see that every symbol actually corresponds to a regular tessellation and the tessellation is unique (so one symbol corresponds to one tessellation). And these tessellations are either Euclidean, Spherical or Hyperbolic. If we display the symbols in a table as shown below we actually get a highly symmetric diagram.
Recall that the simplest tessellations are the regular tessellations. They are simple, because each involves only a single shape of tile, and that tile has all sides the same length and all angles the same measure. We have studied regular tessellations in three different geometries: Euclidean, spherical, and hyperbolic. In each geometry, the key step to forming regular tessellations was to choose the corner angles of the tile so that multiple tiles could fit together around a vertex. That is, we needed the corner angle to evenly divide 360°.
A regular tessellation is described completely by a pair of numbers - the number of sides on each tile, and the number of tiles meeting at a vertex. The Schlafli symbol for a regular tessellation is just this pair of numbers, written {n,k}. For example, the regular tessellation of the plane by hexagons is written {6,3}, since three hexagons meet at each vertex. There is a regular tessellation for every Schlafli symbol {n,k} (with n and k at least 2). Some are spherical, some are Euclidean, and some are hyperbolic. To classify which {n,k} go with which geometry, we consider angle sums.
Regular Tessellations
$n \backslash k$ 2 3 4 5 6 7
2 S S S S S S
3 S S
Tet
S
Oct
S
Ico
E H
4 S S
Cube
E H H H
5 S S
Dod
H H H H
6 S E H H H H
7 S H H H H H
The table can be extended indefinitely, and this symmetric diagram still has the property that the tessellations along the top and left edge will exist in spherical geometry, and the rest will exist in hyperbolic geometry. The three Euclidean regular tessellations are the only ones possible (as we showed in one of the explorations).
For example {n,k} = {2,8} and for instance {n,k} = {2,10} will appear on the top right side of the diagram and correspond to tessellations by 2-gons, with 8 (resp 10) 2-gons meeting at a vertex. These tessellations can only appear on the sphere, so these are spherical tessellations. Similarly {n,k} = {8,2}, {n,k} = {9,2} and {n,k} = {10,2} will appear in the left hand column and correspond to tessellation by 8-gons (resp 9-gons and 10-gons) and have 2 polygons meet at a vertex. These tessellations can only be realized on the sphere and will there fore be spherical geometries.
## The Shape of the Universe
This section is unfinished.
In two dimensions there are 3 geometries: Euclidean, spherical, and hyperbolic. These are the only geometries possible for 2-dimensional objects, although a proof of this is beyond the scope of this book.
What about in three dimensions, which corresponds to the space we actually live in? It has been shown that in three dimensions there are eight possible geometries. There is a 3-dimensional version of Euclidean geometry, a 3-dimensional version of spherical geometry and a 3-dimensional version of Hyperbolic geometry. There is also a geometry which is a combination of spherical and Euclidean, and a geometry which is a combination of hyperbolic and Euclidean. The three other geometries are a bit more exotic and are harder to describe.
Since all these geometries look the same at small scales, we cannot tell the shape of the space that we live in without studying difficult questions about the universe itself. In particular, there are still two very fundamental questions about the universe that remain unknown:
• Is the universe finite or infinite?
• Do we know which of the geometries describes the shape of the universe?
When we ask if the universe is finite, we are really asking if it closes up like a sphere, or extends infinitely like the plane or hyperbolic space. Another way to ask this question is to think about a rocket traveling through space in a straight line: If the universe is finite, it will eventually wrap around and return. On a 2 dimensional surface, if we travel in a straight path and never return we would be on something like an infinite plane. If we did manage to return, even though we travel in a straight line, then we would have been on something like a sphere. Some scientists believe our universe is more like the 3-dimensional version of the sphere. Our rocket would eventually return to Earth (after an impossibly long time).
What about the geometry of the universe? Euclidean, spherical and hyperbolic geometry are different on small scales. The sum of the angles in a triangle is different, for example. However, for really small triangles in spherical and hyperbolic geometry, the triangles begin to look a lot like their Euclidean cousins. One would have to be able to do very precise measurement to measure the angle defects. We run into a similar problem when trying to measure the geometry of the universe. So far, measurements are not accurate enough or large enough to decide the issue. The universe could be what we call flat (which corresponds to Euclidean) or it could have some small amount of curvature (which could make it have some other geometry).
How do we picture possible 3-dimensional spaces? Think about some computer games where our screen in a square or a rectangle, but if we leave the screen on the right hand side, we re-appear on the left. Similarly if we were to leave the screen at the top, we would show up again at the bottom. This really means that the left is connected to the right and the top to the bottom. I little more thought would show that we were actually playing the game on a torus (a doughnut like shape).
The 3-torus is obtained from the cube by gluing the left side to the right side, top to bottom, and front to back
A dodecahedral space is created by gluing the sides of the dodecahedron in pairs
We can do similar things in 3-space. A so called 3-torus would be a 3-dimensional space made up of a cube but with the understanding that top and bottom, left and right and front and back are connected. Imagine a computer game with a spaceship. If the spaceship exits on the left it will re-appear on the right! This space is Euclidean in nature, and is finite.
The dodecahedron can similarly be used to create 3-dimensional spaces. They will be finite, but in some cases they will have a hyperbolic geometry.
See Geometry Center's page on the Shape of Space for more detail.
## Related Sites
Geometry Games Jeff Weeks’ Topology and Geometry Software. This site includes the torus games, Kali and Kaleidotile.
## Notes
1. ↑ http://aleph0.clarku.edu/~djoyce/java/elements/elements.html
2. ↑ A History of Greek Mathematics by Thomas Little Heath, 2nd Ed. 1981, retrieved from google books
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9386034607887268, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/88243/cubic-residues-modulo-primes/88250
|
## Cubic residues modulo primes
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Let k is integer which is not a cube. If there exists a prime p [respectively infinitely many primes p] congruent to 3 modulo 8 and such that k is not a cube modulo p? Thanks in advance for proof or counterexample!
-
If you do not mind, could you say a little in which context this question arose/why you are interested in this. Some people find it helpful to have such information alongside with the actual question. – quid Feb 11 2012 at 23:18
3
Have you tried applying the Chebotarev density theorem to a suitable extension $\mathbf{Q}(k^{1/3}, \alpha)/\mathbf{Q}$? – Timo Keller Feb 11 2012 at 23:25
1
(Hint: For $p \equiv 3 \pmod{8}$ look at cyclotomic extensions; you also have to adjoin a 3rd root of unity to make the extension Galois.) – Timo Keller Feb 11 2012 at 23:32
## 1 Answer
Your condition $p \equiv 3 \pmod 8$ is unusual, everything is a cubic residue $\pmod 3$ and $\pmod q$ where $q \equiv 2 \pmod 3.$ However, Timo seems to have this well in hand.
This is just a bit of culture. A rational number is a square if and only if it is a square in every $\mathbb Q_p$ for all $p \leq \infty.$
I got a bit frustrated thinking about the analogous statement for cubes, but it works. In GSS it is proved that, if a polynomial in one variable with integer coefficients has prime degree, if it is reducible in all $\mathbb Q_p,$ then it is reducible in $\mathbb Q.$ In particular, the polynomial $x^3 -k$ is irreducible in $\mathbb Q$ if $k$ is not a cube. As the degree is 3, a prime, $x^3 -k$ is irreducible in some $\mathbb Q_p,$ meaning that there is no linear factor and no root, as $3 = 1 + 2.$ Finally, by Hensel's lemma, $k$ is not a cubic residue $\pmod p.$
Language in the article (mentioning Chebotarev density) suggests that the direction I use, prime degree, was known for quite some time, that the news in the article is about composite degree.
The laughter may now commence.
-
Will, the Grunwald-Wang Theorem states that "global $n$-th power" equals "locally everywhere $n$-th power", and usually equals "locally almost-everywhere $n$-th power" (with the exceptions precisely demarcated). – BR Feb 12 2012 at 1:30
@BR, thanks, I found this: en.wikipedia.org/wiki/… – Will Jagy Feb 12 2012 at 2:47
Thanks! I see that by Grunwald-Wang Theorem: if k is not a cube in Q then k is not a cube modulo infinitely many primes p. But what with my extra condition "p is congruent to 3 modulo 8" (or similar congruences modulo power of 2)? Is it breaking nothing? – jan Feb 13 2012 at 9:41
1
@jan, the collective opinion of the experts here seems to be that you ought to know how to do this yourself, if you are engaged in research that requires this. Timo told you how to apply Chebotarev density. If you don't know how to finish that argument (I do not), your best bet is to place a more detailed question at math.stackexchange.com/questions?sort=active in which you say exactly why you care about $3 \pmod 8,$ say what Grunwald-Wang gives and what it does not, but mostly detailing your background. – Will Jagy Feb 13 2012 at 20:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9317722916603088, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/95747?sort=oldest
|
## Polar decomposition in C*-algebras
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
A very nice feature of W*-algebras is the following:
once you have an element $a$ of a W*-algebra $M$, and $a=u|a|$ (the polar decomposition), then $u\in M$.
It seems that it carries over to AW*-algebras without pain. This is simply because (A)W*-algebras have lots of projections, unlike general C*-algebras.
Can one give an abstract characterisation of C*-algebras with the above mentioned property?
-
## 2 Answers
There exist further generalizations but they do not go very far from AW*-algebras. See
http://projecteuclid.org/DPubS/Repository/1.0/Disseminate?view=body&id=pdf_1&handle=euclid.pjm/1102650543 and
http://dmle.cindoc.csic.es/pdf/PUBLICACIONSMATEMATIQUES_1995_39_01_01.pdf
-
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
It appears that there is such a condition for so-called Rickart C*-algebras (to each element a in the algebra there is a selfadjoint idempotent generating the left annihilator of a). This condition is mentioned in the first paragraph of the paper "Polar decomposition in Rickart C*-algebras" by Dmitry Goldstein.
I don't expect there will be a trivial abstract characterization of the property in general.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8791589736938477, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/tags/freezing/hot
|
# Tag Info
## Hot answers tagged freezing
10
### Dangers of using liquid nitrogen
Liquid nitrogen boils when it comes in contact with skin, so small amounts of spatter are no danger at all-- the droplets just bounce off. I regularly pour a liter or so (a bit at a time) out on a lab table when I do liquid nitrogen demos, with no problems or safety gear. The biggest risk from the low temperature is getting it into fabric of some sort, ...
8
### Why does frozen water burst a pipe?
First of all, when you say that trying to crack a pipe is hard work, what you probably mean (in physics terms) is that it takes a large force. But that doesn't necessarily mean that it requires a lot of energy. The energy used in a physical process like that is equal to the force times the distance over which the force is applied, and you don't have to push ...
7
### What is the status of Mpemba effect investigations?
One boring Monday morning in the lab a group of us did the experiment, and to our surprise we found that the hot water (in sealed containers) did freeze faster. On closer examination we discovered that the shelves in our freezer were covered in frost, like I imagine most freezers, and the hot water was melting the frost and creating a good thermal contact ...
7
### Why does a slight drip of water protect pipes from freezing?
Pipes are damaged when ice forms a complete blockage, and the expansion of water trapped by it puts too much pressure on them. Now, ice is a pretty good thermal insulator, so once a little ice forms on the inside of the pipe further freezing proceeds slowly. If the water is flowing there will not be enough time for it to freeze between leaving the ...
6
### Will the water added to an ice piece freeze?
Ice coming from the freezer will typically be around -19 deg. celsius, and can only be stored for a limited time at room temperature. As soon as the ice is heated to 0 deg. or above, the ice will melt into liquid water. Liquid water coming into contact with ice will be cooled, and if cooled below 0 deg. it will also freeze. The answer to your question is ...
5
### What happens to the temperature of the water when you add salt to a bowl of melting ice?
In the absence of salt, the ice and water at 0C are in equilibrium, so unless you add or remove heat nothing changes. However when you add salt it reduces the freezing point of the water. This means the ice and salt water are no longer in equilibrium, and the result is that the ice starts to melt. Melting the ice requires heat. Specifically it requires the ...
4
### Dangers of using liquid nitrogen
You can get localized soft tissue damage rather like a burn from sustained contact with moderate amounts of cryogenic liquids, and large amounts can freeze flesh solid---which is really bad. Small amounts will dance on your skin because of the vapor barrier that develops as they vaporize. Treat cryogenic materials with respect. Think about what you're ...
4
### Dangers of using liquid nitrogen
Liquid nitrogen will not moisten human skin, so short contact with small amount of it should not be too harmful -- it would just float on a evaporated portion of itself like water over very hot pan; yet of course putting a hand into a container with it is not a good idea. From what I have heard, the biggest problem is when you pour it on your shoes, because ...
4
### What kinds of materials contract the most in cold temperatures?
Most materials contract on cooling. The notable exception to the rule are some phase transitions and water. But even ice contracts on cooling. Water expands on cooling only between $0^\circ\text{C}$ and $4^\circ\text{C}$ (including phase transition). This corresponds to the part of the graph below, in which density rises with temperature (note suppressed ...
4
### Why did my frozen water bottle explode when I opened it after it defrosted a bit?
Water is an unusual substance in that it expands when it freezes. Evidently this expansion wasn't enough to burst the bottle in your case, but it left the bottle's contents under pressure. After you'd defrosted it for a while there was, presumably, some ice and some water in the bottle. Because the ice was taking up more volume than it did when it was water, ...
4
### Liquid with freezing point above 0 Celsius that could be use at ice rinks
The answer to this question is "probably not". The reason for this is quite interesting. Ice skates have such low friction because a layer of water forms in between the ice and the blades. In order for this to happen, you need a substance that will turn from solid to liquid when it's compressed, which (according to thermodynamics) is the same thing as ...
4
### Freezing point of water with respect to pressure.
You can have a look at the phase diagram pressure-temperature of water: [Phase diagram taken from Martin Chaplin's webpage, http://www.lsbu.ac.uk/water/phase.html#b\ , under license CC-BY-NC-ND. This webpage is highly recommended, with tons of useful links and articles.] The transition between solid and liquid is the red line separating the blue (solid) ...
3
### Why laundry dry up also in cold/frost?
It is called sublimation. It is how ice cubes disappear in the freezer. Snow and ice sublime, although more slowly, below the melting point temperature. This allows a wet cloth to be hung outdoors in freezing weather and retrieved later in a dry state. I .... Sublimation is the process of transformation directly from the solid phase to the gas ...
3
### Freezing point depression - cooling my drink with the same method as salt on a highway?
Adding salt to water makes it freeze at a lower temperature. This fact is being used in two different ways in the two scenarios you mention. Dissolving sodium chloride in water is slighly endothermic, but this effect is small and to the best of my knowledge isn't important in the drink cooling process. Putting salt on the highway is quite straightforward: ...
3
### Why are snowflakes symmetrical?
When water freezes, you get ice. Ice, like many solid materials, forms a crystalline structure. In the case of water, the crystalline structure may be attributed to the hydrogen bond, a special kind of an attractive interaction. So a big chunk of ice will have a crystalline structure - preferred directions, translational symmetry, and some rotational ...
3
### What kinds of materials contract the most in cold temperatures?
Water is very odd in that it expands when it freezes - almost everything else contracts. I don't know what material has the largest volume change on freezing. But among liquids - organic solvents, with much weaker bonds between molecules than water, tend to have much larger expansivities. There is a very odd material (zirconium tungstate) that shrinks as ...
3
### Thermodynamics of supercooled water
I'm answering my own question. Apparently this is one of those rare cases when the physicist must doubt what he observed -- or what he thought he observed -- and believe the numbers his theory yielded instead. From further experiments I've noticed that the ice tends to form thin plates inside the supercooled water once the crystallization process starts ...
3
### How does watering your plants help protect against freezing?
If the temperature is not much below freezing, the rate of heat transfer from your plants (and particularly from the earth around their roots) is low, if there is a lot of water present, the high heat of fusion means that it will take a long time to actually freeze much of it. So maybe the plant makes it through the night without too much damage. Note that ...
2
### Why does a slight drip of water protect pipes from freezing?
I would say it is a simple case of heat transfer. The new water (from the mains) is above freezing (usually by 10C or more), so the flow is transfering heat from the relatively warm input water. I would also say, that the opening through the end of the pipe might act a bit as a pressure relief valve, i.e. some freezing of the contents of a closed pipe means ...
2
### Thermodynamics of supercooled water
There is a simpler way to do the calculation, though using it also gives me 7% of the water freezing. The heat needed to warm the water from T degrees below zero is simply: $$E = MTC_w$$ where $M$ is the mass of the water, $T$ is the degrees below zero and $C_w$ is the specific heat of water (assumed constant over this range). The heat released when a mass ...
2
### Why are snowflakes symmetrical?
K Libbrecht has a nice paper that answers your question in considerable detail and has some nice pictures-- his homepage: http://www.its.caltech.edu/~atomic/publist/kglpub.htm Scroll down to the article in American Scientist in his publications list "The Formation of Snow Crystals," K. G. Libbrecht, American Scientist 95, 52-59 (2007). View pdf. the pdf is ...
2
### How does the size and load of a freezer affect the rate of freezing of an item?
the set temperature will determine the final, equilibrium temperature, but the rate of cooling is determined by how fast the air inside the freezer can extract the heat from the new (warmer) food. The cold air molecules will heat each time they collide with the food which is at a greater temperature. If the freezer is empty, that molecules will get cold ...
2
### Freeze and “break apart” an object. How?
Yes, that is possible. The usual way to cool down object to this temperature, is by putting it in liquid nitrogen. For an example, consider this movie, where it is done with a tulip. The water inside the object is freezing, which makes it breakable (as you can break ice, but not water).
2
### What kinds of materials contract the most in cold temperatures?
The cheap answer to your question is "a gas," probably most specifically helium since it stays a gas longer than anything else as it gets colder. Your question boils down to how the ratio of two forces changes with temperature. First you have the separation forces that push molecules or atoms apart, and then you have the binding forces that pull the ...
2
### What kinds of materials contract the most in cold temperatures?
EDIT: I misread the question I see you asked what kind of material contracts the most when you cool it. In this regard, hardly anything beats the ideal gas, whose contraction is about .1% per degree at room temperature. If you want a material, consider a bunch of balloons mushed together with drops of glue, or something microscopic equivalent. Materials ...
2
### Freezing point of water with respect to pressure.
If you decrease the pressure, the freezing point of water will increase ever so slightly. From 0° C at 1 atm pressure it will increase up to 0.01° C at 0.006 atm. This is the tripple point of water. At pressures below this, water will never be liquid. It will change directly between solid and gas phase (sublimation). The temperature for this phase change, ...
1
### Why laundry dry up also in cold/frost?
Why laundry dry up also in cold/frost? Probably because, initially, the clothes and the liquid water trapped in the clothes fibres, are both at a temperature well above 0 C. When you have frost, water in the clothes should freeze, And it does, when the temperature of the garment and water trapped within it have eventually reduced to below 0 C ...
1
### What is the status of Mpemba effect investigations?
The explanation is that hot water evaporates, leaving less water to freeze (but see John Rennie's answer for a contact effect which is probably the biggest factor for most of the reports of the effect. Aside from John's effect...) this is the only significant difference, there is no other, despite what you sometimes read. The evaporation effect is not even ...
1
### Cause of sea ice freezing during an upwell event
The bit of The Frozen Planet that mentions this is short on detail, but I would guess they are referring to Antarctic Bottom Water; see also Weddell Sea Bottom Water. It's similar in mechanism to the brinicle you mentioned. Water cooled at the surface becomes very cold and very dense (high salinity) so it sinks and forms a layer on the sea bed of the ...
1
### Why are snowflakes symmetrical?
Not all snowflakes are symmetrical. One can disrupt the symmetry quite easily by introducing impurities or some mechanical artifact. In nature, snowflakes have plenty of time to form and it is more natural for them to form symmetric shapes because of the molecular structure of water. That is, when there is more time for the molecules to move about and ...
Only top voted, non community-wiki answers of a minimum length are eligible
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9431723952293396, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/number-theory/9271-proving-converse-theorem.html
|
# Thread:
1. ## proving the converse theorem
Here is another question that I got stuck with. Please see attachment. I typed the question in " Microsoft word" . If you can't load the attachment , please do let me know.
Thank you very much.
Attached Files
• question.doc (23.5 KB, 123 views)
2. Originally Posted by Jenny20
Here is another question that I got stuck with. Please see attachment. I typed the question in " Microsoft word" . If you can't load the attachment , please do let me know.
Thank you very much.
Did you consider using the Mobius inversion formula?
If $g(n)=\sum_{d|n} f(d)$is multiplicative then,
$f(n)=\sum_{d|n} \mu (n/d) \cdot g(d)$
Now, $g(n)$ is multiplicative function
And $\mu(n)$ (the Mobius mu function) is also multiplicative.
Thus,
$g(n)\cdot \mu(n)$ is also multiplicative.
Thus,
$f(n)$ is multiplicative (because of the summation theorem).
3. Hi perfecthacker,
Thank you very much for your reply. I have never learnt Mobius inversion forumla. I will go through your prove. If I don't understand it, I will come back to you.
4. Hi perfecthacker,
Can you show me the other way ( not by mobius inversion forumla) to prove this question? I don't really get your prove because I dont' quite understand the mobius inversion formula. Thank you very much for your help.
5. Originally Posted by Jenny20
Hi perfecthacker,
Can you show me the other way ( not by mobius inversion forumla) to prove this question? I don't really get your prove because I dont' quite understand the mobius inversion formula. Thank you very much for your help.
I just looked up the proof in my number theory book and they prove it the same way I did. Thus, perhaps this is the standard way.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9461581707000732, "perplexity_flag": "middle"}
|
http://mathforum.org/mathimages/index.php/Induction
|
# Induction
## Basic Method
Just as one domino knocks the other over, each proposition P(k) implies the next, P(k + 1).
Induction is a useful method of mathematical proof ordinarily used for statements that connect to the sequence of natural numbers. The principle of induction first involves proving a claim for a simple case, usually the first one in the sequence. Then, it must be shown that if any case later in the sequence is true, then the next statement in the sequence must also be true. A trail of dominos is a common analogy for the principle of induction. Imagine a long trail of dominos. They are close enough that if the first domino in the line were to fall, the rest would also fall. The "first domino" is the most trivial case, and the fact that one domino will knock over the next domino represents the bulk of the proof: showing that the other cases then follow from the simplest case.[1]
Let's imagine a series of propositions, P(1), P(2), ... P(n). In our analogy, the dominos are each of the propositions. We can first prove P(1) (the first domino falls). Then we prove that P(k) implies P(k + 1) for some number k ≥ 1 (the dominos will knock each other over). Thus, P(1) implies P(2), and P(2) implies P(3), etc. Thus, all of the dominos will fall, and we will have proved all of the propositions. If n is the variable we are using for the induction proof, the process is usually called induction over n.
Here is an outline for an induction proof.
1. State the general proposition P(n).
2. a.State the base case P(1), the first case for which the proposition is true.
b. Prove the base case.
3. a. State the induction hypothesis, which says that "If P(k) is true, then P(k + 1) is true for k ≥ 1." The next two steps will prove this statement.
b. Assume that P(k) is true.
c. Using the truth of P(k), show that P(k + 1) is then true.
4. State that all the cases P(n) have been proved by Induction. The proof is complete!
Since the propositions have to correspond to sequence of natural numbers, the claims that induction can prove always involve integers in some way. Furthermore, induction is most useful when traditional algebraic manipulation is not enough.
It is best to see an example to understand induction more fully.
## Examples
The equation for the sum of the first n integers is a good example, since it is difficult to prove with algebra and involves integers. Here is the proof for the following equation:
$\sum_{i=1}^n i=1+2+3+\dots+n=\frac{n(n+1)}2$
First, we will prove the base case, where n = 1. The sum here is simply 1. In the formula we have
$\frac{1(1+1)}2=\frac{2}2=1$
Thus, the theorem holds for n = 1. Next, we have to show that P(k) implies P(k + 1). Hence, for our induction hypothesis, we assume that case for k ≥ 1 is true.
We will prove the k + 1 case from the k case algebraically. Here is what we are assuming explicitly:
$\sum_{i=1}^k i=\frac{k(k+1)}2$
Now, we will algebraically manipulate the sum for k + 1 into the formula using this fact.
$\sum_{i=1}^{k+1} i=\sum_{i=1}^k i+k+1$
Substituting the for the sum, we now have:
$\sum_{i=1}^{k+1} i$ $=$ $\sum_{i=1}^k i+k+1$
$=$ $\frac{k(k+1)}2+k+1$
$=$ $\frac{k^2+k+2k+2}2$
$=$ $\frac{(k+1)(k+2)}2=\frac{(k+1)((k+1)+1)}2$
This final expression is just the formula with k + 1 where k used to be. Thus, the truth of P(k) implies P(k + 1), and we have proved the theorem by induction.
The distributive property for numbers a, b, and c is commonly written like this:
$a(b+c)=ab+ac$
The sum in parentheses is between two numbers, but mathematicians and math students intuitively know that the distributive property would still hold if the sum was between 3 numbers, 6 numbers, or any number of numbers. This can be proved rigorously using induction.
This is the theorem that we will be proving, called the general distributive property:
$\sum_{i=1}^n ca_i=c\left (\sum_{i=1}^n a_i \right )$
The c and the ai's are all constants. Without summation notation, it looks like this:
$(ca_1+ca_2+\dots+ca_n)=c(a_1+a_2+\dots+a_n)$
Here is the proof that the distributive property applies to larger sums.
We are going to do induction over the variable n, which controls how big the sum is. For the base case, there are two simple options. First, n = 1. When we plug 1 into the formula for n, the formula becomes this simple statement:
$(ca_1)=c(a_1)$
We could also use n = 2 for our base case. In this case, we have
$(ca_1+ca_2)=c(a_1+a_2)$
which we also know is true because this is exactly the distributive property. So either n = 1 or n = 2 could be our base case. Now comes the inductive step. We assume that the theorem holds for n = k ≥ 1 and prove that it holds for n = k + 1.
For the induction hypothesis, here is the case for n = k, which we are assuming is true.
$\sum_{i=1}^k ca_i=(ca_1+ca_2+\dots+ca_k)$
Now consider the case for n = k + 1.
$\sum_{i=1}^{k+1} ca_i=(ca_1+ca_2+\dots+ca_{k+1})$
Now let,
$\sum_{i=1}^k a_i=a_1+a_2+\dots+a_k=b$
Since there is a sum of k numbers within the k + 1 sum, the induction hypothesis holds here, and we can distribute the first k terms of the total sum.
$\sum_{i=1}^{k+1} ca_i$ $=$ $(ca_1+ca_2+\dots+ca_{k+1})$
$=$ $(c(a_1+a_2+\dots+a_k)+ca_{k+1})$
$=$ $(c(b)+ca_{k+1})$
To this expression, we can use the regular distributive property, and then the result is almost evident.
$\sum_{i=1}^{k+1} ca_i$ $=$ $c(b+a_{k+1})$
$=$ $c(a_1+a_2+\dots+a_k+a_{k+1})$
Thus, the induction hypothesis holds true and we have proven the more general distributive property!
## Variants
There are actually multiple types of induction. For example, sometimes it is very difficult to show that P(k) implies P(k + 1) but it is easier to show that P(k) implies P(k + 2). However, if we completed this type of inductive step with only one base case, we would only prove every other proposition, since we are skipping over the k + 1 case. In order to compensate for the propositions we are missing, we would need two base cases, P(1) and P(2). P(1) would imply P(3), P(5), P(7), and so on, while P(2) would imply P(4), P(6), P(8), and so on. Thus we have proven all of the propositions P(n).
The same principle applies if we can show that P(k) implies P(k + 3), P(k + 5), P(k + 475) or P(k + c). We just need more base cases to compensate. If we can show that P(k) implies P(k + c) then we need the base cases to be P(1), P(2), ... P(c). These base cases, coupled with the inductive step, would prove all of the propositions and the induction would be complete.
### Induction on Negative Numbers
Induction can also be completed for negative numbers. In regular induction over positive numbers, we started with the base case, and the inductive step proved all the cases from the k case imply the k + 1 case. The dominos are knocked over in the positive direction. In order to knock the dominos down in the other direction, the inductive step needs to switch directions. In order to do this, the proposition P(k) needs to imply P(k - 1). Now, if the base case is P(c), all of the propositions will be proved for numbers that are less than c. If used in conjunction with induction over positive numbers, this method could be utilized to prove a theorem for all integers.
### Strong Induction
In the inductive step of the more common form, the proposition P(k) is usually shown to imply that P(k + 1). However in the strong form, we are not limited to just using P(k). We show that "P(1), P(2), ... P(k) all together imply P(k + 1)". Logically this has the same result as the common form. First we prove the base case, P(1), and then we prove the strong inductive hypothesis. P(1) implies P(2), then P(1) and P(2) both imply P(3), then P(1), P(2), and P(3) all imply P(4), etc. Thus all of the dominos will fall, and the theorem is proved for all P(n). Since we have many propositions to help us finish the inductive step, this method of induction is given the adjective "strong".
The graph theory proof that the number of vertices in a tree is 1 more than the number of edges uses strong induction. For more, see the Graph Theory page.
## More Examples
Here are some examples that readers can try themselves before seeing the answer!
1.Given that for numbers a, b, and c, the following is true:
$(ab)^c=a^cb^c$
use induction to show that this can be generalized to:
$(a_1a_2a_3\dots a_n)^c=a_1^ca_2^ca_3^c\dots a_n^c$
No peeking!
We can use either n = 1 or n = 2 as the simple base case, just as with the general distributive property proof. Here is the case for n = 1,
$(a_1)^c=a_1^c$
which is definitely true. Here is the case for n = 2:
$(a_1a_2)^c=a_1^ca_2^c$
which is true because it is a given property for the problem.
Now that we have the base case proven, we do the inductive step. We assume the case for n = k ≥ 1 and show that it implies the case for n = k + 1. The case for n = k, which we are now assuming is true, looks like this:
$(a_1a_2a_3\dots a_k)^c=a_1^ca_2^ca_3^c\dots a_k^c$
Now let
$a_1a_2a_3\dots a_k=b$
Now we will use algebraic manipulation and the cases for n = 2 and n = k to demonstrate the case for n = k + 1.
$(a_1a_2a_3\dots a_ka_{k+1})^c=(ba_{k+1})^c$
$(a_1a_2a_3\dots a_ka_{k+1})^c=(b)^c(a_{k+1})^c$
which we know by the case for n = 2. Now we substitute back in the expression for a and use the case for n = k.
$(a_1a_2a_3\dots a_ka_{k+1})^c=(a_1a_2a_3\dots a_k)^c(a_{k+1})^c$
$(a_1a_2a_3\dots a_ka_{k+1})^c=a_1^ca_2^ca_3^c\dots a_k^ca_{k+1}^c$
Now that we have completed the inductive step we have proved the theorem by induction!
2. Prove the following formula for the sum of the cubes.
$\sum_{i=1}^n i^3=1^3+2^3+\dots+n^3=\frac{1}4n^2(n+1)^2$
Interestingly enough, note that:
$\frac{1}4n^2(n+1)^2=\left ( \frac{n(n+1)}2 \right )^2=\left ( \sum_{i=1}^n i \right )^2$
The equation that follows from this fact (shown below) is called Nicomachus's theorem.[2] Proving the sum of cubes formula will amount to proving Nicomachus's theorem.
$\sum_{i=1}^n i^3=\left ( \sum_{i=1}^n i \right )^2$
No peeking!
The general proposition P(n) is stated above. So first, we tackle the base case, where n = 1.
Our sum is simply 13 = 1. In the other half of the equation:
$\frac{1}4(1)^2(1+1)^2=\frac{1}4(1)(4)=1$
Thus we have established the result for the base case n = 1. Now for the inductive step. We must show that if case for n = k is true, then the case for n = k + 1 is true. So let us assume that the case for n = k ≥ 1 is true, namely:
$\sum_{i=1}^k i^3=1^3+2^3+\dots+k^3=\frac{1}4k^2(k+1)^2$
We will use this to imply the case for k + 1. The following process will be similar to proving the formula for the sum of the numbers from 1 to n. Consider the sum of the cubes of 1 to k + 1.
$\sum_{i=1}^{k+1} i^3=1^3+2^3+\dots+k^3+(k+1)^3=\frac{1}4k^2(k+1)^2+(k+1)^3$
Now we will combine the two terms and try to get the formula for the sum of cubes with k + 1 in the place of n. This will require some really fancy FOILing and factoring. In order to avoid that, we will turn this expression into a polynomial. Then we will take what the formula for the k + 1 sum of cubes should be, and show they are the same.
$\sum_{i=1}^{k+1} i^3$ $=$ $\frac{k^2(k+1)^2+4(k+1)^3}4$
$=$ $\frac{k^2(k^2+2k+1)+4(k^3+3k^2+3k+1)}4$
$=$ $\frac{k^4+2k^3+k^2+4k^3+12k^2+12k+4}4$
$=$ $\frac{k^4+6k^3+13k^2+12k+4}4$
So, now here is our polynomial. Now, we will take what the formula for the k + 1 sum should be, and try to transform it into this sane polynomial. If we can, then the formula is thus equal to the sum for k + 1 and we will have our result. To get the formula for k + 1, substitute n for k + 1. We have:
$\frac{(k+1)^2((k+1)+1)^2}4$ $=$ $\frac{(k+1)^2(k+2)^2}4$
$=$ $\frac{(k^2+2k+1)(k^2+4k+4)}4$
$=$ $\frac{k^4+4k^3+4k^2+2k^3+8k^2+8k+k^2+4k+4}4$
$=$ $\frac{k^4+6k^3+13k^2+12k+4}4$
And there it is, the same polynomial! We have shown by assuming the case for n = k the case for n = k + 1:
$\sum_{i=1}^{k+1} i^3=\frac{(k+1)^2((k+1)+1)^2}4$
Thus, the inductive step is complete, and we have proven the theorem for all n by induction. If you want to know more about sums of powers, then consult this page.
3. Show that for any integer n > 0, the number 169n + 6 is divisible by 7.
No peeking!
In this problem, our base case would be n = 1. We have:
$169^1+6=169+6=175$
$\frac{175}7=25$
Thus, the theorem holds for n = 1. Before we move onto the inductive step, we actually require one more fact. Let us consider 169 divided by 7. Since we have to add 6 in order to make a number that is a multiple of 7, we know then that 169 divided by 7 has a remainder of 1. We will apply this reasoning later in the proof.
Now for the inductive step. We have to show that if 169k + 6 is divisible by 7, then 169k + 1 + 6 is divisible by 7. Then, we assume that 169k + 6 is divisible by 7. As before, this also means that we are assuming that 169k divided by 7 has a remainder of 1.
Now let's consider the case for k + 1.
$169^{k+1}+6=169(169^k)+6$
For one moment, let us consider just 169(169k) without the 6.
As we have shown in the base case and assumed in the induction hypothesis, both 169 and 169k have a remainder of 1 when divided by 7. When we multiply two numbers of this kind, we will get another number with a remainder of 1 when divided by 7. Click below to see why.
Let's imagine two numbers that when divided by 7 have a remainder of 1. This means that they are each 1 greater than a multiple of seven. Thus, they both have the following form, where n is a positive integer.
$7n+1$
When we multiply two numbers of this form, the result will be evident. Let a and b be positive integers. In order to simplify the expression, we will use the FOIL method.
$(7a+1)(7b+1)$ $=$ $49ab+7a(1)+7b(1)+(1)(1)$
$=$ $49ab+7(a+b)+1$
$=$ $7(7ab+a+b)+1$
This final expression is a number of the form 7n + 1, where the 7ab + a + b is the n. This means that it is a number that has a remainder of 1 when divided by 7, like the original two numbers! Thus, we have shown that when two numbers that have a remainder of 1 are multiplied, the resulting number has the same property.
Thus the number 169(169k) has a remainder of 1 when divided by 7. As we have shown before, this means that it is 6 less than a multiple of 7, and adding 6 to this number will yield a multiple of 7. Thus, we have established the case for n = k + 1 as a multiple of 7. We have completed the inductive step and proven that numbers of the form 169n + 6 for n > 0 are all divisible by 7 through the induction method.
4. Show that any number n ≥ 12 can be made by a sum of a multiple of 4 and a multiple of 5 (where 0 counts as a multiple of both).
No peeking!
This is a case of induction where we need more than 1 base case. Actually, the proof requires 4 base cases, where n is 12, 13, 14, and 15.
$4(3)+5(0)=12+0=12$
$4(2)+5(1)=8+5=13$
$4(1)+5(2)=4+10=14$
$4(0)+5(3)=0+15=15$
Thus we have established the 4 base cases we need. Now, in the inductive step, the induction hypothesis is "if P(k) is true, then P(k + 4)" is true. So we assume the P(k) case and show that case for k + 4, where k ≥ 12. This method is discussed in the "Variants" section.
So we are explicitly assuming the following, where a and b are nonnegative integers:
$k=4a+5b$
This inductive step isn't too hard. We have the following equation which quickly shows the result.
$k+4=4a+5b+4$
$k+4=4a+4+5b$
$k+4=4(a+1)+5b$
Thus the case for k + 4 has been shown from P(k) and we have shown the result.[3]
## References
1. ↑ Maurer, Stephen B. Discrete Algorithmic Mathematics". A. K. Peters Ltd. Wellesley MA: 2004.
2. ↑ Havil, J. Gamma: Exploring Euler's Constant. Princeton, NJ: Princeton University Press, p. 82, 2003.
3. ↑ http://www.cs.cornell.edu/courses/cs312/2002sp/handouts/induction/induct-examples.html
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 87, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9281150698661804, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/27681/formulas-for-the-liar-paradox/27686
|
## Formulas for the liar paradox
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
How can the liar paradox be expressed concisely in symbols? In which formal languages?
-
## 6 Answers
The Liar is the statement "this sentence is false." It is expressible in any language able to perform self-reference and having a truth predicate. Thus, $L$ is a statement equivalent to $\neg T(L)$.
Goedel proved that the usual formal languages of mathematics, such as the language of arithmetic, are able to perform self-reference in the sense that for any assertion $\varphi(x)$ in the language of arithmetic, there is a sentence $\psi$ such that PA proves $\psi\iff\varphi(\langle\psi\rangle)$, where $\langle\psi\rangle$ denotes the Goedel code of $\psi$. Thus, the sentence $\psi$ asserts that "$\varphi$ holds of me".
Tarski observed that it follows from this that truth is not definable in arithmetic. Specifically, he proved that there can be no first order formula $T(x)$ such that $\psi\iff T(\langle\psi\rangle)$ holds for every sentence $\psi$. The reason is that the formula $\neg T(x)$ must have a fixed point, and so there will be a sentence $\psi$ for which PA proves $\psi\iff\neg T(\langle\psi\rangle)$, which would contradict the assumed property of T. The sentence $\psi$ is excactly the Liar.
Goedel observed that the concept of "provable", in contrast, is expressible, since a statement is provable (in PA) say, if and only if there is a finite sequence of symbols having the form of a proof. Thus, again by the fixed point lemma, there is a sentence $\psi$ such that PA proves $\psi\iff\neg\text{Prov}(\langle\psi\rangle)$. In other words, $\psi$ asserts "I am not provable". This statement is sufficiently close to the Liar paradox statement that one can fruitfully run the analysis, but instead of a contradiction, what one gets is that $\psi$ is true, but unprovable. This is how Goedel proved the Incompleteness Theorem.
-
Minor quibble: I think you want to say $\psi$ is equivalent to $\phi(|\psi|)$, not $\phi(|\phi|)$. – Ketil Tveiten Jun 11 2010 at 14:50
Ketil, thanks very much! I have now corrected this. – Joel David Hamkins Jun 11 2010 at 15:10
Thanks for the clear answer. It's quite articulated, so let me see if I get this straight. Tarsky's theorem states that a truth predicate T cannot be defined for all sentences; were it definable, THEN (and only then) we could show that there exists a sentence \psi (the liar paradox) such that \psi\iff\neg T(\langle\psi\rangle) Let me ask you another question then. Does Goedel theorem state that - there exist sentences which have a definite truth value but are neither provable nor disprovable or only the weaker - there exist sentences which are neither provable nor disprovable ? – tomate Jun 12 2010 at 15:13
Tomate, thanks for accepting my answer. About the question in your comment, every statement has a definite truth value in the standard model of arithmetic, even if we don't know which occurs. What the Incompletness Theorem asserts is that for any formal axiomatic system, there will be statements that are neither provable nor refutable in that system. Such a statement will have a definite truth value, and so there will be true unprovable statements. – Joel David Hamkins Jun 12 2010 at 15:28
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
As Joel David Hamkins said, the standard answer to your question is that formal languages like the first-order language of arithmetic cannot express the liar paradox because they cannot express the predicate "is true" as applied to all its own sentences. Why not? Well, if it could, then we would get a contradiction, following the standard liar-paradoxical reasoning.
However, this is not the end of the story. For example, there is an interesting paper by Saul Kripke, Outline of a theory of truth, J. Philosophy 72 (1975), 690-716, better known among philosophers than among mathematicians, which explains how to define a truth predicate in such a way that the liar paradox can be expressed. The conclusion is just that the liar-paradoxical sentence has an undefined truth value.
-
1
Indeed, and there has been quite a bit of work along similar lines, leading to the subject of Revision Theories of Truth. The basic idea is to introduce a truth predicate T(x). It is unproblematic to apply it to assertions in the base language; the difficulty arises from attempting to apply T to assertions in the language with T. So one can proceed inductively, saying that if $\varphi$ is true at a stage, then $T(\varphi)$ becomes true at the next stage. Kripke in effect seeks fixed points in this procedure, and others have considered more complex rules at limit stages. – Joel David Hamkins Jun 10 2010 at 17:58
@Joel: Your comment reminds me of a remark that Kripke makes in his paper on page 697, that extending his work to transfinite levels lead to "mathematical difficulties that make the the problem highly nontrivial." Do you know what he is talking about and whether he or others have carried out this transfinite extension? – Timothy Chow Jun 10 2010 at 23:35
Oh yes, it has definitely been carried out. For example, see plato.stanford.edu/entries/truth-revision. There are various choices of limit rules used by Herzeberger, Gupta and others (see Philip Welch for some interesting criticism). – Joel David Hamkins Jun 11 2010 at 13:37
Thanks for the interesting reference, I'm going through it. – tomate Jun 13 2010 at 9:57
There is an extensive discussion of this issue in Vicious Circles by Jon Barwise and Lawrence S. Moss.
-
Aladdin M. Yaqūb (1993) The Liar Speaks the Truth, OUP, formalises a very simple language for naturally expressing the liar paradox, consisting of:
1. First-order equational logic which may, but need not be, equipped with constants, functions and relations; together with:
2. A constant for each sentence, which is the "name" of that sentence: i.e., a realisation of the countably infinite set of quoted sentences by the obviously bijective set of constants; this way of doing things avoid the need for any kind of syntactically (at least) second-order Quinean quotation operator from propositions to individuals;
3. Possibly, some constants that are names of certain individuals; and
4. A predicate T, together with a countably infinite set of axioms involving the names from (2) and (3) that are in the obvious bijective correspondence with the T-schema for the whole language we approach.
Yaqūb carried this out in the usual single-sorted first-order logic: I think it is more natural to formulate this in a two-sorted logic, but Yaqūb's handling of his system is concise and elegant, and this discpline shows that no kind of second-orderness, not even Henkin semantics, is required to model Tarski's T-schema, but only an expansion of the universe to include names, an additional predicate, and axioms sufficient to model the T-schema.
It shows, therefore, that an object language can be be its own meta-language without leaving the realm of the straightforwardly first order. As a consequence, the liar paradox exists within this logic, but it can be "tamed" with a family of possible tweaks to the T-schema. Yaqūb argues for one such tweaking that result in the liar paradox being not self-referential, but generating a sequence of formulae each involving one more T predicate. By looking at how the interpretation of these formulae evolve in each model, he classifies each formula of the base language —i.e., the subset of sentences that do not use the T predicate— into one of seven classes, depending on whether the sequence converges on a truth value, or whether they oscillate between values, and if so, in what manner. Paradoxical self-referential sentences are reseolved in a more pleasing manner than Tarski did, by being able to treat them in a unitary formalism that embeds the tower of formulae that are the progressive unwindings within the usual semantics of first-order logic, and without forbidding sentences that talk about themselves.
I found Yaqūb's monograph to be much more readable (I read it in an evening whilst travelling), and his argument much more elegant and compelling than that of Barwise & Etchemendy, and I highly recommend it to anyone who found B&E worth reading.
I would be very interested to read an effort to "intuitionise" Yaqūb's theory, by embedding it in intuitionistic first-order logic in a similarly elegant manner, and using a constructive model theory.
-
The liar paradox could be expressed in Church's original lambda calculus of 1932. Let $F$ be the function
$\lambda x. \sim x(x)$
Then $F(P) = \sim P(P)$ for any function $P$. In particular,
$F(F) = \sim F(F)$
and so $F(F)$ is a sentence that asserts its own falsehood.
-
Short Answer (SA). No Paradox, All Liar.
Bit Longer Answer (BLA). You don't need a theory of truth to say that a statement is false. You need only deny the statement. And if you tell me that a statement, say, Statement 1 (S1), is identical or even just logically equivalent to a statement that S1 is false, then you are telling me a falsehood, at least, according to principles that both of us probably took for granted beforehand.
-
1
-1: Please do not continue to answer questions of mathematical logic with answers that ignore or even flout the subject of mathematical logic. Such answers are not helpful. – Pete L. Clark Jul 14 2010 at 5:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9342353343963623, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/calculus/79112-vector-forces-problem.html
|
# Thread:
1. ## Vector as forces problem
I really need some help with this:
A mass of 15 kg is suspended by two cords from a ceiling. The cords have lengths of 15 cm and 20 cm, and the distance between the points where they are attached on the ceiling is 25 cm. Determine the tension in each of the two cords.
thanks in advance.
2. Originally Posted by Soul to soul
I really need some help with this:
A mass of 15 kg is suspended by two cords from a ceiling. The cords have lengths of 15 cm and 20 cm, and the distance between the points where they are attached on the ceiling is 25 cm. Determine the tension in each of the two cords.
thanks in advance.
Start by recognising that you have a 3-4-5 triangle in there and hence a 90 degree angle at the point where the 15 kg mass is attached ....
Note that Lami's Theorem makes the problem simple.
3. Originally Posted by Soul to soul
I really need some help with this:
A mass of 15 kg is suspended by two cords from a ceiling. The cords have lengths of 15 cm and 20 cm, and the distance between the points where they are attached on the ceiling is 25 cm. Determine the tension in each of the two cords.
thanks in advance.
let A be the point where the 15 cm string is attached to the ceiling.
C be the point where the 20 cm string is attached to the ceiling.
B be the point directly above the hanging mass between A and C.
M is the mass position.
ABM and CBM are right triangles.
AC = 25 cm , AB = x cm , BC = (25-x) cm
using Pythagoras ...
$15^2 - x^2 = 20^2 - (25 - x)^2$
$x = 9$ ... BM = 12 cm
let $T_1$ = tension in the 15 cm string
$T_2$ = tension in the 20 cm string
$g$ = acceleration due to gravity
system is in equilibrium ...
$\sum{F_x} = 0$
$T_1 \cdot \frac{3}{5} = T_2 \cdot \frac{4}{5}$
$\sum{F_y} = 0$
$T_1 \cdot \frac{4}{5} + T_2 \cdot \frac{3}{5} = 15g$
solve the system for $T_1$ and $T_2$
4. Thanks a lot, now I know how to solve it!
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9492801427841187, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/calculus/102665-optimization.html
|
# Thread:
1. ## Optimization
I should know this, but my mind completely blanked out on me, and I am lost...
A buyer wants lots of boxed of the same size with strong, square bases, and no tops. A manufacturer can make 1000 of them for a total of \$36.00, charging 0.3 cents per square inch for the base, and 0.1 cents per square inch for the sides. The buyer wants to know how to choose the dimensions of the box so that it holds as much as possible, i.e., he wants to maximize the volume. Let x denote the edgelength of the base of the box and let y denote the height.
Express the volume V of the box as a function of x alone.
What is the domain of V given the fact that V represents a volume?
Find the value of x in domain V which gives the maximum volume of V and what is the corresponding value of y?
What is the range of V(x) for x in domain V? Explain how this range is related to the maximum value of V.
Ok, so i began to set up an equation and got:
4(.1xy)+.3(x^2)=.036
therefore y=(3.6-.3(x^2))/(.4x)
and after plugging that into the equation V=(x^2)(y)
I got V=(.9x-.75(x^3)) but this is not correct.
Can I get some input on what I need to do correctly, and what I have done wrong?
Thank you in advance.
2. Originally Posted by mjoconn
I should know this, but my mind completely blanked out on me, and I am lost...
A buyer wants lots of boxed of the same size with strong, square bases, and no tops. A manufacturer can make 1000 of them for a total of \$36.00, charging 0.3 cents per square inch for the base, and 0.1 cents per square inch for the sides. The buyer wants to know how to choose the dimensions of the box so that it holds as much as possible, i.e., he wants to maximize the volume. Let x denote the edgelength of the base of the box and let y denote the height.
Express the volume V of the box as a function of x alone.
What is the domain of V given the fact that V represents a volume?
Find the value of x in domain V which gives the maximum volume of V and what is the corresponding value of y?
What is the range of V(x) for x in domain V? Explain how this range is related to the maximum value of V.
Ok, so i began to set up an equation and got:
4(.1xy)+.3(x^2)=.036
therefore y=(3.6-.3(x^2))/(.4x)
and after plugging that into the equation V=(x^2)(y)
I got V=(.9x-.75(x^3)) but this is not correct.
Can I get some input on what I need to do correctly, and what I have done wrong?
Thank you in advance.
one box costs 3.6 cents ... 0.036 would be in dollars
$0.3x^2 + 0.1(4xy) = 3.6$
$3x^2 + 4xy = 36$
$y = \frac{36-3x^2}{4x}$
$V = x^2y$
$V = \frac{36x-3x^3}{4}$
3. Originally Posted by skeeter
one box costs 3.6 cents ... 0.036 would be in dollars
$0.3x^2 + 0.1(4xy) = 3.6$
$3x^2 + 4xy = 36$
$y = \frac{36-3x^2}{4x}$
$V = x^2y$
$V = \frac{36x-3x^3}{4}$
Thank you so much. Would the domain then be (0,infinity)?
4. Originally Posted by mjoconn
Thank you so much. Would the domain then be (0,infinity)?
domain of V(x) ...
$\frac{36x-3x^3}{4} > 0$
$3x(12-x^2) > 0$
$3x(\sqrt{12}-x)(\sqrt{12}+x) > 0$
$0 < x < \sqrt{12}$
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9217010736465454, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/36694/the-form-xy5-axy-and-its-solutions-with-x-y-prime/36839
|
# The form $xy+5=a(x+y)$ and its solutions with $x,y$ prime
In another question I was asking if there are any different $x,y>2$ primes such that $xy+5=a(x+y)$.
Where $a=2^r-1$, and $r>2$.
I was thinking if it is able to find a Pell equation or a similar pattern of $xy+5=a(x+y)$ to say what are and how many integer solutions are there (in particular prime solutions).
Thanks.
-
## 1 Answer
$xy-5=a(x+y)$ can be rewritten as $$(x-a)(y-a)=a^2+5$$ so for any fixed $a$ solving it just amounts to finding all the ways to factor $a^2+5$. So how many solutions depends on the prime factorization of $a^2+5$. I don't think there will be any formula for how many of those solutions have $x$ and $y$ prime.
-
I'm really sorry but I made a change in the question, it is not really different but it should be assumed that the -5 that I wrote is +5. – tomerg May 4 '11 at 16:50
So $xy+5=a(x+y)$ can be rewritten as $$(x-a)(y-a)=a^2-5,$$ right? – Gerry Myerson May 5 '11 at 1:21
true (need to spend letters) – tomerg May 5 '11 at 16:15
So then what more could one do by way of an answer to your question? – Gerry Myerson May 6 '11 at 0:48
If there aren't such prime pairs solve this equation, it seems very interesting. The question here if it may be unique. If when we change -5 into some other -prime will give no prime solutions. In this form of the equation we can say that for each such $a$ there are finitely (but more than 0) solutions in integers and infinitely in the union set. – tomerg May 6 '11 at 6:41
show 5 more comments
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9705150723457336, "perplexity_flag": "head"}
|
http://micromath.wordpress.com/2008/04/07/altitudes-of-a-triangle-and-the-jacobi-identity/?like=1&source=post_flair&_wpnonce=a21ac6eb7c
|
Mathematics under the Microscope
Atomic objects, structures and concepts of mathematics
Posted by: Alexandre Borovik | April 7, 2008
Altitudes of a triangle and the Jacobi identity
It is many years that I know the expression which belongs to Arnold and which sound something like that:
Altitudes of a triangle intersect in one point because of the Jacobi identity.
What is meant here is the defining identity of Lie algebras which is known in undergraduate mathematics mostly as an identity for cross product of vectors in three dimensional space:
$(A \times B)\times C + (B \times C)\times A + (C\times A)\times B = 0$
I even produced a crude computational proof of that link; later Hovik Khudaverdyan showed me a streamlined proof. Finally, I found in the literature a really elegant proof. Interestingly, it is done with the help of spherical geometry and observation that cross product gives a polarity on the real projective plane (that is, on the sphere with identified antipodal points). My conjecture is that a more careful analysis should show that this is the same as a “calculus of reflections” proof originating in Hjelmslev’s paper of 1907 and developed into an impressive theory by Friedrich Bachman.
After all, $\mathbb{R}^3$ with cross product is the Lie algebra of the group $PSO_3(\mathbb{R})$ which preserves the polarity, and reflections are half-turns around axes which could be conveniently identified with the points of projective plane.
As usual, references, further discussion, etc. can be found in my book.
Like this:
Posted in Uncategorized
Responses
1. I prefer to think about 3-space with cross product as the Lie algebra of SU(2) which is homeomorphic and isomorphic to the 3-sphere. The SO(3) bit is just SU(2) double covering the real projective 3-space.
A soccer ball can be used to explain the correspondence between SO(3) and RP(3) t an interested child. [still afraid that the word processors might eat my carrots-^]. Of course, you have to wave your hands that a rigid motion of space has an axis. That is where the soccer ball comes in handy (or footy).
I haven’t tried Arnold’s exercise yet.
By: Scott Carter on April 8, 2008
at 2:06 am
Cancel
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 3, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9428515434265137, "perplexity_flag": "middle"}
|
http://planetmath.org/distribution1
|
# distribution
In the following we will mean $C^{\infty}$ when we say smooth.
###### Definition.
Let $M$ be a smooth manifold of dimension $m$. Let $n\leq m$ and for each $x\in M$, we assign an $n$-dimensional subspace $\Delta_{x}\subset T_{x}(M)$ of the tangent space in such a way that for a neighbourhood $N_{x}\subset M$ of $x$ there exist $n$ linearly independent smooth vector fields $X_{1},\ldots,X_{n}$ such that for any point $y\in N_{x}$, $X_{1}(y),\ldots,X_{n}(y)$ span $\Delta_{y}$. We let $\Delta$ refer to the collection of all the $\Delta_{x}$ for all $x\in M$ and we then call $\Delta$ a distribution of dimension $n$ on $M$, or sometimes a $C^{\infty}$ $n$-plane distribution on $M$. The set of smooth vector fields $\{X_{1},\ldots,X_{n}\}$ is called a local basis of $\Delta$.
Note: The naming is unfortunate here as these distributions have nothing to do with distributions in the sense of analysis. However the naming is in wide use.
###### Definition.
We say that a distribution $\Delta$ on $M$ is involutive if for every point $x\in M$ there exists a local basis $\{X_{1},\ldots,X_{n}\}$ in a neighbourhood of $x$ such that for all $1\leq i,j\leq n$, $[X_{i},X_{j}]$ (the commutator of two vector fields) is in the span of $\{X_{1},\ldots,X_{n}\}$. That is, if $[X_{i},X_{j}]$ is a linear combination of $\{X_{1},\ldots,X_{n}\}$. Normally this is written as $[\Delta,\Delta]\subset\Delta$.
# References
• 1 William M. Boothby. An Introduction to Differentiable Manifolds and Riemannian Geometry, Academic Press, San Diego, California, 2003.
Type of Math Object:
Definition
Major Section:
Reference
Groups audience:
## Mathematics Subject Classification
53-00 General reference works (handbooks, dictionaries, bibliographies, etc.)
## Recent Activity
May 17
new image: sinx_approx.png by jeremyboden
new image: approximation_to_sinx by jeremyboden
new image: approximation_to_sinx by jeremyboden
new question: Solving the word problem for isomorphic groups by mairiwalker
new image: LineDiagrams.jpg by m759
new image: ProjPoints.jpg by m759
new image: AbstrExample3.jpg by m759
new image: four-diamond_figure.jpg by m759
May 16
new problem: Curve fitting using the Exchange Algorithm. by jeremyboden
new question: Undirected graphs and their Chromatic Number by Serchinnho
## Info
Owner: jirka
Added: 2004-11-30 - 17:10
Author(s): jirka
## Versions
(v6) by jirka 2013-03-22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 36, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8523306846618652, "perplexity_flag": "head"}
|
http://mathhelpforum.com/discrete-math/131983-derive-formulas-mathematical-logic.html
|
# Thread:
1. ## derive formulas in mathematical logic
Hey everyone,
Here's the problem:
Show that if $\Gamma \vdash \phi$ and $\Delta,\phi \vdash \psi$, then $\Gamma,\Delta \vdash \psi$
I soooort of know how to start, it's supposed to be like,
We already have a derivation of $\psi$ from $\Gamma$, so start with:
.
.
.
(k) $\phi$
(k+1)
(k+2)
.
.
.
I have a hard time getting this naturally, because to me it feels like I should be able to assume $\Delta$ in line (k+1) and then have $\psi$... but then, that's not right because it doesn't use Modus Ponens or any of the three axioms.
Any help or hints as to how I should be thinking?
2. Originally Posted by sfitz
Hey everyone,
Here's the problem:
Show that if $\Gamma \vdash \phi$ and $\Delta,\phi \vdash \psi$, then $\Gamma,\Delta \vdash \psi$
I soooort of know how to start, it's supposed to be like,
We already have a derivation of $\psi$ from $\Gamma$, so start with:
.
.
.
(k) $\phi$
(k+1)
(k+2)
.
.
.
I have a hard time getting this naturally, because to me it feels like I should be able to assume $\Delta$ in line (k+1) and then have $\psi$... but then, that's not right because it doesn't use Modus Ponens or any of the three axioms.
Any help or hints as to how I should be thinking?
From what you've described (three axioms schemes, and one rule that you've identified as MP), I'd say you might be thinking:
First, an application of the Deduction Theorem, followed by an application of MP.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 16, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9627494812011719, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/57682/natural-coherent-sheaves-on-algebraic-varieties
|
## Natural coherent sheaves on algebraic varieties
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Let X be a smooth projective variety of dimension d over an algebraically closed field k. Can you please list some examples of natural locally free or coherent sheaves that you can always construct on X, regardless whether X has some particular structure or is of general type?
I can only name the two obvious examples: the structural sheave and the sheaf of differentials (and obvious combinations of them, of course, which do not count).
I can believe that these are the only ones for curves, as the genus is the only discrete invariant available.
-
The canonical sheaf and the tangent sheaf count as "combinations"? – Martin Brandenburg Mar 7 2011 at 15:37
1
@Martin: by "combination" vic means something like "any other sheaf derived from these". The canonical is the determinant and the tangent is the dual of the sheaf of differentials. – Sándor Kovács Mar 7 2011 at 16:31
1
You can take sheaves of differential operators $Diff_N$ of finite order $\le N$. It is related to the tangent sheaf $T_X$, but I don't see an obvious functor $T_X\mapsto Diff_N$. But I agree there aren't too many obvious examples. – Donu Arapura Mar 7 2011 at 16:31
1
If you're working over $\mathbb C$ you should be able to construct lots of singular metrics on the line bundles of $X$. Every one of those gives a coherent multiplier ideal sheaf on $X$. (Of course this won't work if you admit the existence of algebraically closed fields different from $\mathbb C$.) – Gunnar Magnusson Mar 7 2011 at 16:53
1
@Sándor: Ah yes, you only get ideal sheafs for pseudo-effective bundles, thanks for that (we can put a singular metric on any line bundle, but their curvature currents need to satisfy a positivity condition to define ideal sheaves - see tinyurl.com/493y6q3 starting with sections 4, 5 and 6). @unknowngoogle: No. Or at least very seldom. I've heard some talk of less singular metrics than others, but nothing widespread. I don't know of any Hermite-Einstein style criterions for singular metrics that would give a canonical choice either. – Gunnar Magnusson Mar 7 2011 at 20:09
show 3 more comments
## 9 Answers
What about the locally free sheaf which is the direct sum of all n-torsion invertible sheaves, for some fixed integer n? Of course on many smooth, projective varieties this is zero. But it is a "canonically defined" locally free sheaf which is sometimes nonzero. And I see no obvious way to build this from the structure sheaf or the cotangent sheaf.
-
- Good point! – Sándor Kovács Mar 7 2011 at 22:27
Hi Jason. That's a good point, but your sheaf is only canonically defined up to isomorphism, right? It seems that the OP has to decide what exactly their question is... – James Borger Mar 8 2011 at 5:45
Thanks Jason, this is also interesting. James: there is no "correct answer" to my question, as every one can take his own interpretation. As for me is concerned, I already learned a lot reading all your answers, but I'll be happy to hear more, of course! – vic Mar 9 2011 at 4:57
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
One formal approach to a problem like this is motivated by the "formal geometry" of Gelfand-Fuchs (which is the subject of other MO questions). We can define a "natural bundle" on a class of spaces to be one associated to a representation of a structure group of the given geometry. This is a common approach in differential geometry, where there's a lot known about natural differential operators etc. But in any case in the current context we can define a natural bundle on smooth n-dimensional varieties as one associated to a representation of the group of changes of coordinates on a formal n-dimensional disc. Every smooth variety has a canonical principal bundle for this group (with fiber over $x\in X$ the variety of isomorphisms between the completion of $X$ at $x$ and the formal disc), and so given a representation of the group we get a vector bundle on any variety of the given dimension.
All of the bundles discussed in the answers above are of this form (including jet bundles, sheaves of differential operators etc). If you take this as a definition of a natural bundle it's easy to prove the conjecture that they're all extensions of powers of bundles of forms: the group of changes of coordinates is an extension of $GL_n$ (the structure group of the tangent bundle) by a pro-unipotent group (changes of coordinate with derivative the identity). Hence all representations have filtrations with associated graded bundles associated to the (frame bundle of the) tangent bundle in the usual sense (ie all the Schur functors of the tangent bundle, like forms).
Edit: it's interesting to note that if we take representations of this group which extend to the Lie algebra of all derivations of formal power series in n variables (ie we have an action of $\partial_x$'s on the module as well) we get "all" natural flat bundles on smooth varieties --- such as the sheaves of all jets or all differential operators.
-
Beautiful answer! +1 – Qfwfq Mar 7 2011 at 17:49
It seems to me that his viewpoint assumes that the construction of the sheaf is local'', e.g. the restriction to an open analytic subset $U$ only depends on $U$. – ABayer Mar 7 2011 at 17:52
Absolutely - it's a universal local (or even formal) construction, based on the fact that we know what smooth varieties look like formally. It would be interesting to see if there are any natural bundles defined in a nonlocal way, eg using an integral kernel not supported on the diagonal. If you're willing to be only quasicoherent you could use things like the structure sheaf of $X\times X\setminus \Delta$ eg -- take your favorite local construction $F$ and assign $x\mapsto H^*(X\setminus x,F)$.. – David Ben-Zvi Mar 7 2011 at 18:17
.. or if we want coherent sheaf, the following seem natural constructions: take again $F$ to be a natural bundle in the local sense (eg forms or jets), fix an integer and consider $x \mapsto H^*(X,F(nx))$ -- more formally, $\pi_{2,*}(\pi_1^*(F)(n\Delta))$ where the $pi_i:X\times X\to X$ are the projection. These seem like "nonlocal natural coherent sheaves".. – David Ben-Zvi Mar 7 2011 at 20:03
I assume you mean s.th. like $p_{2, *}(p_1^*(F) \otimes I_{n\Delta}$ where $I_{n\Delta}$ is the ideal sheaf of the $n$-th infinitesimal neighborhood. But I am not sure a purist would count these as new'' sheaves, e.g. for $n = 1$ we would get the kernel of $H^0(F) \otimes \mathcal O_X \to F$. I imagine vic might count these as a combination of $F$ and $O_X$. – ABayer Mar 7 2011 at 22:07
I'll put out the following possibly bold, possibly totally stupid conjectures. In any case, the statements below are not intended as mathematically rigorous statements, but may have some truth in them. Cheers!
Conjecture 1 Any naturally defined coherent sheaf on all smooth projective varieties is related to the sheaf of differentials via tensor operations and up to torsion.
A more concrete version is
Conjecture 2 Let $\mathscr L_X$ be a naturally defined line bundle on all smooth projective varieties $X$. Then some tensor power (possibly negative or zero) of $\mathscr L_X$ agrees with some tensor power of the canonical bundle.
Remark Jason pointed out that there are naturally defined torsion line bundles and Arend's idea of defining sheaves as push-forwards would produce sheaves supported on proper closed subvarieties. This is the main motivation for the "up to torsion" part of the first conjecture and for taking powers of the natural line bundle and the canonical bundle. Also notice that David Ben-Zvi's construction produces sheaves that satisfy these conjectures and the extensions in David Speyer's answer produce sheaves whose determinants are powers of the canonical sheaf.
To get to Conjecture 1 from Conjecture 2 one could argue the following way. Since the claim is "up to torsion" we can mod out by the torsion and assume that our sheaf is torsion-free. Now since we are on a smooth variety this implies that it is locally free in codimension $2$ and actually, again by the "up to torsion" principle, we may assume that it is reflexive, that is, take the reflexive hull, or in other words, the push-forward of the restriction to the open set where it is a locally free sheaf. In other words, we may perform all tensor operations as if we had locally free sheaves and in particular, the (reflexive hull of the) determinant will be a line bundle. In other words, up to torsion, we obtained a natural line bundle. If that is either the structure sheaf or the canonical bundle, then we're in business.
The reasoning I can offer for Conjecture 2 is the following: If there is a natural line bundle, then we can ask whether it is ample (or its inverse is) and for those varieties that it is we obtain a natural embedding (after taking some power). Once we have this we can look at the corresponding Hilbert schemes and try to construct moduli spaces. For those varieties on which this mysterious line bundle is not ample we can still define a corresponding Kodaira dimension and study Iitaka fibrations and eventually work toward a corresponding classification theory. I don't think any of this has happened except for the version using the canonical sheaf. I believe that suggests that there are no other non-trivial natural line bundles.
-
Is there though any deeper reason as to why people study classification theory in terms of the canonical bundle or is it simply because the canonical bundle is one of the few canonically constructible bundles on a variety? – Frank Mar 7 2011 at 17:18
1
I think the ultimate reason is that that's the only (or at least most obvious) line bundle that exists on every variety. Of course, it helps that it also plays a major role in duality, but one might argue that that is a consequence of being canonical. In other words, if the dualizing sheaf would be a totally different (line) bundle, then we'd have two to play around with (and in some sense, neither would be the canonical choice). – Sándor Kovács Mar 7 2011 at 17:53
-1? Could you explain? – Sándor Kovács Mar 7 2011 at 17:54
Thanks; duality does seems to be a good reason to expect the canonical bundle to be special but still these could all be consequences of making this initial choice. PS: It was not me that downvoted! – Frank Mar 7 2011 at 18:03
@Frank: I did not think it was (the downvote). I noticed accidentally; it happened while I was writing the response to your comment. It's not a big deal, but it would be nice to know the reason. Cheers! – Sándor Kovács Mar 7 2011 at 18:34
show 2 more comments
In characteristic $p > 0$ one can take the Frobenius (iterated) pushforward of the structure sheaf and various sheaves of differentials. These are always locally free by an old result of Kunz. For toric varieties (and related varieties) you can explicitly compute these Frobenius pushforward sheaves, see a paper by Thomsen.
-
1
Thanks, but this is again not what I am looking for: these sheaves arise from the two ones I mention in my question. Besides, I would like the sheaves to really be available in any characteristic, including characteristic zero. – vic Mar 7 2011 at 15:57
You are right, those sheaves are constructed from the sheaves you mentioned. I guess I felt that they were distinct enough to warrant special mention. – Karl Schwede Mar 8 2011 at 2:00
This is really a comment that is too long for the comment box, in an attempt to more clearly define the problem.
I assume that "obvious combinations" includes the result of applying any Schur functor to $T^* X$ and $T_* X$, and any direct sum of such.
Does it include nontrivial extensions of such? Because there are some. For example: Let $\mathcal{I}$ be the ideal sheaf of the diagonal on $X \times X$. Then we have a short exact sequence on $X \times X$: `$$0 \to \mathcal{I}/\mathcal{I}^2 \to \mathcal{O}/\mathcal{I}^2 \to \mathcal{O}/\mathcal{I} \to 0.$$` Pushing this to the first copy of $X$, we have a short exact sequence: `$$0 \to \Omega^1(X) \to A \to \mathcal{O}_X \to 0$$` where $A$ is an extension which is usually nontrivial. (In particular, it is nontrivial if any of the Chern classes of $T^* X$ are nontrivial.) You can play similar games with higher tensor powers of $\mathcal{I}$ and get other canonical nontrivial extensions between tensor powers of $T^* X$.
Let me suggest breaking your question up into several parts, of successively greater optimism. I will leave the word "canonical" undefined for now. For the reasons Karl mentions, all of these are guesses are in characteristic zero.
Guess 1: The only canonical classes in $K_0(X)$ are integer combinations of the Schur functors of $T_* X$ and $T^* X$.
Guess 2: The only canonical classes in $K_0(X)$ which are classes of vector bundles are nonnegative integer combinations of the Schur functors of $T_* X$ and $T^* X$.
Guess 3: Every canonical method of assigning a vector bundle to an algebraic variety has an filtration whose successive quotients are the Schur functors of $T_* X$ and $T^* X$.
Guess 4, somewhat vague: All of the extensions of vector bundles occurring in guess 3 are built from constructions on nilpotent thickenings of the diagonal in $X \times X$.
I don't know whether any of these guesses are true, but they might help focus the discussion.
-
The bundle $A$ is known as the first jets. Similarly, you can take the first jets of any bundle as well as the higher order jets. – Sasha Mar 7 2011 at 16:26
What about some "natural" sheaves given by specifying some property of the stalks of the structure sheaf, such as $\mathcal{I}_{\mathrm{Sing}(X)}$ ? – Qfwfq Mar 7 2011 at 16:47
Ops, sorry, $X$ is assumed to be smooth... – Qfwfq Mar 7 2011 at 16:48
Thanks David, this is very interesting, I'll read more about these non-trivial extensions. Do you recommend any paper? I'd like to understand better these criteria for non-triviality. – vic Mar 9 2011 at 4:54
Given a map $f \colon X \to Y$, there are of course various ways to construct sheaves on $X$ from $f$ - e.g. relative differentials, or any other sheaf obtained from the cotangent complex of $f$, etc.
So any canonically constructed map gives canonically constructed sheaves. An example would be the Albanese map. For varieties of general type, one could also construct sheaves associated to the map to the canonical model (but of course this needs more care, as this is only a rational map).
One could also construct sheaves on $X$ from natural morphisms to $X$, but I don't right away see a way to get a coherent sheaf in that manner. (A cheap way to get a quasi-coherent sheaf would, for example, be to take the push-forward of the structure sheaf of the universal curve of the union of all Kontsevich spaces of stable maps from curves of a fixed genus $g$.)
-
The jet bundle $J_k (X)$ has been mentioned several times. It is NOT a ''combination'' of the tangent bundle, because it cannot be recovered from the tangent bundle (not by bundle techniques, at least). To fix the notation, a point in $J_k (X)$ over $x$ is a germ of a rational function on $X$, and two are equivalent if they coincide up to order $k$ at $x$. There are epimorphisms $J_k (X) \to J_{k-1}$ with kernel $Sym^k (T^{\ast} X)$, and of course $J_1 (X) = T^{\ast}(X)$. Dually, we can consider jet bundles of germs of curves into $X$. These jet bundles give some more credibility to the four guesses that David made.
-
In characteristic $p>0$, you can not only take push-forwards by the Frobenius (see Karl Schwede's answer), but also pull-backs.
Of course pulling the structure sheaf back gives the structure sheaf, but you can pull-back higher rank bundles, e.g. $\Omega_X^i$. The bundles $F^{s*} \Omega^i_X$ have been studied in the paper
Brückmann, P., Müller, D. "Birational invariants in the case of prime characteristic". Manuscripta Math. 95 (1998), no. 4, 413–429. (MR1618186)
As the title suggests, they produce birational invariants generalizing the plurigenera.
-
Do $\mathcal{D}$-modules, such as the sheaf of rings of differential operators $\mathcal{D}_X$ itself, count as combinations?
And the jet bundles $\mathrm{Jet}^k_X$ ?
-
Then, if $Z:=\mathrm{Sing}(X)$, the coherent sheaf of ideals $\mathcal{I}_Z$ is coherent (though not locally free in general), and $Z$ is somewhat "naturally" attached to $X$. – Qfwfq Mar 7 2011 at 16:46
1
Ops, sorry, $X$ was supposed to be smooth. – Qfwfq Mar 7 2011 at 16:48
btw, $\mathcal{D}_X$ is only quasi-coherent.. (isn't it?) – Qfwfq Mar 7 2011 at 16:57
$\mathcal{D}_X$ is coherent for holomorphic operators, if I recall correctly, so I guess by some sort of GAGA stuff it's coherent in the algebraic case as well? – Ketil Tveiten Mar 22 2011 at 14:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 86, "mathjax_display_tex": 0, "mathjax_asciimath": 2, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9357782006263733, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/14560/statistical-approach-to-multinomial-distribution
|
## statistical approach to multinomial distribution
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Suppose a dice with $q$ faces is rolled $N$ times, where $N$ is very big.
We define a multinomial variable $X=(X_1,\ldots,X_q)$ which counts how many times any face is occurred ($X_i$ is the number of occurrence of the $i$-th face).
Suppose we don't know if the dice is fair or not, namely if the probability distribution of the outcome of the dice is uniform or not.
If we know the value of $X$, how can we use it to estimate the probability distribution of the dice?
In particular, let $\epsilon>0$ be fixed. How can we use the value of $X$ to understand if there's a face $i$ such that $|P(\mbox{the dice's outcome is } i) -\frac{1}{q}|>\epsilon$?
Clearly, I expect to use a statistical method, hence my prediction can be wrong, but I would like to esteem my error probability.
ps: in the case $q=2$, it can be made defining the binomial which counts how many times one of the two faces occurs. If $N$ is big, that binomial can be approximated by a gaussian and the gaussian has mean $\frac{N}{2}$ if and only if the dice is fair. Thus we can settle a threshold $T>0$ and say that the dice is fair if $|X-\frac{N}{2}|>T$ and is unfair otherwise. $T$ is choosen according to the minimum value for $\epsilon$ and the error probability can be easily computed with the normal table.
-
## 4 Answers
The conjugate prior of the multinomial distribution is Dirichlet -- it is a distribution over the parameters (the probabilities of outcomes) of the multinomial.
Define
````D = Dirichlet(X + 1)
````
(the 1 represents the non-informative prior belief that all probability vectors are equiprobable.)
And then integrate over the region of the pdf that you're interested in.
I can expand on this answer if you like.
-
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
Use concentration of measure. For 2 dimensions, Hoeffding's inequality. See the appendix of the book on empirical processes by van der Vaart and Wellenr for the multinomial case.
-
The classical approach is to build a Neyman-Pearson style hypothesis test (warning: incredibly ugly mathematics, in desperate need of replacement, but ubiquitous).
Say you rolled your die $N$ times to produce $X$. Let the multinomial distribution have parameters $(p_1, p_2, ..., p_6)$, where $\sum_i p_i = 1$. Then construct a one dimensional measure such as $Q = \| X/N - p \|$, using your favorite $p$-norm. Calculate the probability distribution of $Q$.
Your null hypothesis in this case is $p_i = \frac{1}{6}$ for all $i$. For a test of level of significance $\alpha$ (conventionally 0.05 or 0.01), there is a region $[a,b]$ such that $\int_a^b p(Q = x) dx = 1 - \alpha$. Actually, there are many such, and there are other criteria to choose among them. In your case, invariance might be a good one: you expect the whole problem to be symmetric if you let $Q$ go to $-Q$, in which case the interval should be symmetric about 0, i.e., $[-a,a]$.
For a given value of $Q$ from your data, you do the integral over $[-Q,Q]$ and get $1 - \alpha$. That $\alpha$ is the lowest level of significance at which the observed data will be significant.
As I said, classical hypothesis testing is a very ugly theory. There are other approaches, such as minimax tests which you can construct via Bayes priors, since the set of all Bayes priors contains but is usually not much larger than the set of all admissible statistical procedures.
-
This contains an answer: roughly speaking, the entropy of the observation $x=(x_1,\ldots,x_q)$ based on $n$ rolls, with respect to the uniform distribution is $$h(x)=-\sum_{k=1}^q(x_k/n)\log(qx_k/n),$$ and the associated p-value is $$\exp(-nh(x)).$$ If $x$ is drawn from the uniform distribution, $h(x)$ is of order $1/n$ and $nh(x)$ converges to the $\chi^2$-distribution with $q-1$ degrees of freedom, otherwise $h(x)$ is of order $1$ and measures how far apart the empirical distribution of $x$ and the uniform distribution are.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 48, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9195286631584167, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/121424/picard-group-and-cohomology
|
# Picard group and cohomology
It's an easy but boring exercise (Hartshorne Ex. III.4.5 or Liu 5.2.7) that the group $Pic(X)$ of isomorphism classes of invertible sheaves on a ringed topological space (well, maybe we can restrict to schemes) is isomorphic to $H^{1}(X, \mathcal{O}_X^{*})$, where $\mathcal{O}_X^{*}$ denotes the sheaf whose sections over an open set $U$ are the units in the ring $\mathcal{O}_X(U)$.
The proof that I know (that uses the hint given by Hartshorne) uses heavily Cech cohomology: basically the idea is that given an invertible sheaf $\mathcal{L}$ and an affine open covering $\mathcal{U}=(U_i)$ on which $\mathcal{L}$ is free, we can construct an element in $\check{C}^1(\mathcal{U},\mathcal{O}_X^{*})$ using the restriction of the local isomorphism to the intersections $U_i\cap U_j$. The cocycle condition on triple intersection implies that we have a well defined element in $\check{H}^{1}(X, \mathcal{O}_X^{*})$. Then one proves that the map is an isomorphism of groups.
My question is the following: this approach is not very enlightening. Is there a more intrinsic proof of the isomorphism between $Pic(X)$ and $H^{1}(X, \mathcal{O}_X^{*})$, without Cech cohomology?
-
2
I'm puzzled. The proof via Čech cohomology is the most enlightening way of seeing the isomorphism! There is no a priori reason why a cohomology group computed via injective resolutions or other abstract nonsense should have anything to do with the classification of line bundles. – Zhen Lin Mar 17 '12 at 18:36
3
I agree with @Zhen: thinking about line bundles in the "Čech way" seems pretty valuable. Not that other proofs wouldn't be cool to see. – Dylan Moreland Mar 17 '12 at 18:37
Of course the "Čech way" is pretty valuable. I was just asking for a different approach. – FedeB Mar 18 '12 at 11:57
## 1 Answer
Suppose for simplicity that $X$ is integral. Consider the exact sequence of groups $$1\to O_X^* \to K_X^* \to K_X^*/O_X^* \to 1$$ where $K_X$ is the constant sheaf of rational functions on $X$. Taking the cohomology will give $$K(X)^* \to H^0(X, K_X^*/O_X^*) \to H^1(X, O_X^*) \to H^1(X, K_X^*).$$ Now the last term vanishes because $K_X^*$ is a flasque sheaf, and the cokernel of the left arrow is by definition the group of Cartier divisors on $X$ up to linear equivalence. As $X$ is integral, this cokernel is known to be isomorphic to $\mathrm{Pic}(X)$.
-
Ok, this works in the integral case. But what about the general case? The isomorphism between $H^1(X,\mathcal{O}_X^*)$ and Pic(X) can be defined even in the non integral case, also when you don't have the isomorphism between Cartier divisors and Picard group. – FedeB Mar 18 '12 at 12:10
For example if $X$ is not Noetherian and if you have embedded points, then the canonical map $CaCl(X)\to Pic(X)$ is not surjective. – FedeB Mar 18 '12 at 12:11
@FedeB, the above proof works for Noetherian schemes without embedded points. Unfortunately, in general I don't have an alternative proof to the usual one. – QiL'8 Mar 18 '12 at 20:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9223690032958984, "perplexity_flag": "head"}
|
http://unapologetic.wordpress.com/2009/06/30/dirac-notation-i/?like=1&source=post_flair&_wpnonce=c327cf8c6a
|
# The Unapologetic Mathematician
## Dirac notation I
There’s a really neat notation for inner product spaces invented by Paul Dirac that physicists use all the time. It really brings to the fore the way a both slots of the inner product enter on an equal footing.
First, we have a bracket, which brings together two vectors
$\displaystyle\langle w,v\rangle$
the two sides of the product are almost the same, except that the first slot is antilinear — it takes the complex conjugate of scalar multiples — while the second one is linear. Still, we’ve got one antilinear vector variable, and one linear vector variable, and when we bring them together we get a scalar. The first change we’ll make is just to tweak that comma a bit
$\displaystyle\langle w\vert v\rangle$
Now it doesn’t look as much like a list of variables, but it suggests we pry this bracket apart at the seam
$\displaystyle\langle w\rvert\lvert v\rangle$
We’ve broken up the bracket into a “bra-ket”, composed of a “ket” vector $\lvert v\rangle$ and a “bra” dual vector $\langle w\rvert$ (pause here to let the giggling subside) (seriously, I taught this to middle- and high-schoolers once).
In this notation, we write vectors in $V$ as kets, with some signifier inside the ket symbol. Often this might be the name of the vector, as in $\lvert v\rangle$, but it can be anything that sufficiently identifies the vector. One common choice is to specify a basis that we would usually write $\left\{e_i\right\}$. But the index is sufficient to identify a basis vector, so we might write $\lvert1\rangle$, $\lvert2\rangle$, $\lvert3\rangle$, and so on to denote basis vectors. That is, $\lvert i\rangle=e_i$. We can even extend this idea into tensor products as follows
$\displaystyle e_i\otimes e_j=\lvert i\rangle\otimes\lvert j\rangle=\lvert i,j\rangle$
Just put a list of indices inside the ket, and read it as the tensor product of a list of basis vectors.
Bras work the same way — put anything inside them you want (all right, class…) as long as it specifies a vector. The difference is that the bra $\langle w\rvert$ denotes a vector in the dual space $V^*$. For example, given a basis for $V$, we may write $\langle i\rvert=\epsilon^i$ for a dual basis vector.
Putting a bra and a ket together means the same as evaluating the linear functional specified by the bra at the vector specified by the ket. Or we could remember that we can consider any vector in $V$ to be a linear functional on $V^*$, and read the bra-ket as an evaluation that way. The nice part about Dirac notation is that it doesn’t really privilege either viewpoint — both the bra and the ket enter on an equal footing.
### Like this:
Posted by John Armstrong | Algebra, Linear Algebra
## 14 Comments »
1. But why not simply consider the linear map ev: V* (x) V -> k, where k is the ground field, using simply the notation (a,b) for the nondegenerate pairing (if a \in V, b \in V*)?
Comment by Zygmund | June 30, 2009 | Reply
2. Trivia: “bra” means good in Swedish.
Comment by Å | June 30, 2009 | Reply
3. Å: good to know
Zygmund: there are other things we’re going to want to do, especially if we’ve got an inner product around (like in a Hilbert space for quantum mechanics). Stay tuned.
Comment by | June 30, 2009 | Reply
4. Thank you, John Armstrong. I’m hard at work redlining Draft 4.0 of a QM paper. Question is: what is the topology of the ensemble of theories of QM if we don’t presume that Planck’s contant is a nonnegative real number? You clarify some notational issues that are deeper than the look.
Comment by | June 30, 2009 | Reply
5. Naive question: if putting a bra and ket together does it what it does for two vectors, what is the equivalent for a “3-bra-ket” widget that operates on three vectors? Does it map to a vector triple products, or is there a Jacobi identity, or what?
Comment by | June 30, 2009 | Reply
6. As far as I know there’s no natural three-variable operation analogous to the pairing between a vector space and its dual.
Comment by | June 30, 2009 | Reply
7. Actually, that’s not quite true. In special cases there is such an operation. One in particular is the “triality” between the three eight-dimensional irreducible representations of $\mathfrak{so}(8)$.
Comment by | June 30, 2009 | Reply
8. Thank you, John Armstrong, for that fascinating comment #7. I can’t even claim that fact was “on the tip of my tongue.” But I had a nagging 1/3 memory that there was something beyond duality. This comes from, I think, but please correct me, special features of the group Spin(8), i.e. the double cover of the 8-dimensional rotation group SO(8), arising because the group has an outer automorphism of order three? Wasn’t this discussed on the n-category Cafe or John Baez’s blog?
Comment by | June 30, 2009 | Reply
9. Yes, Baez’ has discussed it in “This Week’s Finds”.
Comment by | June 30, 2009 | Reply
10. [...] Notation II We continue discussing Dirac notation by bringing up the inner product. To this point, our notation applies to any vector space and its [...]
Pingback by | July 1, 2009 | Reply
11. The post says that, for <w, v>, “the first slot is antilinear — it takes the complex conjugate of scalar multiples — while the second one is linear”. This seems to be the opposite of what Wikipedia says for inner product space:
http://en.wikipedia.org/wiki/Inner_product_space#Definition
What am I missing?
Comment by Sig Freud | July 5, 2009 | Reply
12. You’re missing the fact that which slot is which is a choice of convention, and I’ve chosen the other convention than they have. Go back and read my earlier posts.
Comment by | July 5, 2009 | Reply
13. [...] Notation III So we’ve got Dirac notation and it’s nice for inner product spaces, but remember we’re not just interested in [...]
Pingback by | July 6, 2009 | Reply
14. [...] and Bilinear Forms on Inner Product Spaces in Dirac Notation Now, armed with Dirac notation, we can come back and reconsider matrices and forms. For our background, [...]
Pingback by | July 8, 2009 | Reply
« Previous | Next »
## About this weblog
This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”).
I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 20, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9287365078926086, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/math-topics/205065-help-physics.html
|
# Thread:
1. ## help with physics
A 650 kg satellite orbits at a distance from the Earth's center of about 7.2 Earth radii. What gravitational force does the Earth exert on the satellite?
how would I set this up and what equation would I use
2. ## Re: help with physics
Use Newton's law of gravity:
$F=G\frac{m_1m_2}{r^2}$
or find the weight of the satellite at the surface of the Earth, then use the fact that the force due to gravity varies inversely as the square of the distance from the center of the Earth. This method is easiest, as you don't need to use Newton's gravitation constant, or know the mass and radius of the Earth.
3. ## Re: help with physics
What? How do you write that?
4. ## Re: help with physics
Originally Posted by Louisana1
What? How do you write that?
$\frac{650 \cdot 9.8}{7.2^2}$
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9188318252563477, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/7721/what-methods-exist-to-prove-that-a-finitely-presented-group-is-finite/14259
|
## What methods exist to prove that a finitely presented group is finite?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Suppose I have a finitely presented group (or a family of finitely presented groups with some integer parameters), and I'd like to know if the group is finite. What methods exist to find this out? I know that there isn't a general algorithm to determine this, but I'm interested in what plans of attack do exist.
One method that I've used with limited success is trying to identify quotients of the group I start with, hoping to find one that is known to be infinite. Sometimes, though, your finitely presented group doesn't have many normal subgroups; in that case, when you add a relation to get a quotient, you may collapse the group down to something finite.
In fact, there are two big questions here:
1. How do we recognize large finite simple groups? (By "large" I mean that the Todd-Coxeter algorithm takes unreasonably long on this group.) What about large groups that are the extension of finite simple groups by a small number of factors?
2. How do we recognize infinite groups? In particular, how do we recognize infinite simple groups?
(For those who are interested, the groups I am interested in are the symmetry groups of abstract polytopes; these groups are certain nice quotients of string Coxeter groups or their rotation subgroups.)
-
## 9 Answers
The theory of automatic groups may be a help here. There is a nice package written by Derek Holt and his associates called kbmag (available for download here: http://www.warwick.ac.uk/~mareg/download/kbmag2/ ). A previous answer mentioned groebner bases. The KB in kbmag stand for Knuth-Bendix which is a string rewriting algorithm which can be considered to be a non-commutative generalization of groebner bases. There is a book "Word Processing in Groups" by Epstein, Cannon, Levy, Holt, Paterson and Thurston that describes the ideas behind this approach. It's not guaranteed to work (not all groups have an "automatic" presentation) but it is surprisingly effective.
-
4
You may be in luck. It turns out that finitely generated Coxeter groups are automatic: Brink and Howlett (1993). "A finiteness property and an automatic structure for Coxeter groups". Mathematische Annalen – Victor Miller Feb 5 2010 at 15:56
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
If a discrete group is amenable and has Kazhdan's Property (T), then it is finite.
This technique was used by Margulis in his original proof of his Normal Subgroup Theorem. It's since been used in a couple of other Normal Subgroup Theorems, which have been applied to prove simplicity of some infinite groups. See for example "Lattices in products of trees" by Burger and Mozes, and "Simplicity and superrigidity of twin building lattices" by Caprace and Remy.
-
This is sort of a sideways look at your question:
There's software called "Heegaard" by John Berge that takes as input a finite presentation and attempts to find a corresponding Heegaard splitting of a 3-manifold which has that fundamental group. It seems to be fairly effective. There are algorithms to produce triangulations of Heegaard splittings available (Hall and Schleimer for example). So you could take the presentation, find (if possible) the Heegaard splitting, produce the triangulation and then use software like Regina and SnapPea to analyze the geometry of those manifolds. There's a lot of heuristics there and also some serious algorithms. All the links the the various packages and their documentation is available here: http://www.math.uiuc.edu/~nmd/computop/
So for groups that are the fundamental groups of 3-manifolds at least, there's a decent toolkit to play with.
As an example, consider testing to see if a group is trivial. Step 1: Heegaard could get stuck. Step 2: if Heegaard finds a splitting, you triangulate it and pass it to Regina. Step 3: Regina has an algorithm to recognise a triangulated 3-sphere, so it will tell you whether or not your group is trivial.
-
There is no algorithm to tell if a finitely presented group is finite, but in principle there is a procedure which will terminate if your group is finite, and tell you which group it is. You can recursively list all finite groups (e.g. by group tables), and therefore presentations for them. You can recursively perform all Tietze transformations on your group presentation, and check at each stage whether it agrees with one of the finite group presentations you have computed (imagine doing this in parallel or alternating the steps of the two recursive procedures). This will eventually tell you whether your group is finite if it is. But of course this is completely impractical, and I realize this isn't what you want. The uncomputable thing is to prove that a group is not finite.
-
One can also sometimes use Fox calculus, which describes the abelianization of a finite-index normal subgroup of $G$. If this abelianization is infinite, your group is infinite. Johnson's "Presentations of Groups", chapter 12, describes this in detail.
Also see this thread for some examples of other techniques: group-pub
Steve
-
Regarding part 2 of your question - "In particular, how do we recognize infinite simple groups?" - I think the answer is that it depends on which infinite simple group you're looking at! Some famous examples:-
• Higman's original construction of an infinite simple group starts with a group with no non-trivial finite quotients. (Roughly, you construct one of these by building in a pair of conjugate elements which would have to have different orders in a finite quotient.) You then proceed to take the quotient by a maximal proper normal subgroup. The result can't be finite, because that would be a non-trivial finite quotient! (There was some discussion of this here.)
• Thompson's groups T and V contain elements of infinite order!
• Tarski Monsters are infinite because of Sylow's Theorems. Every proper subgroup is of prime order p, so Sylow's Theorems tell you that if it were finite then it would be cyclic, which it isn't by construction.
Do you have a particular reason to think that your groups are simple? What do you know about the kernel of the map from the Coxeter group?
EDIT: Just wanted to emphasize that of course, of the examples listed, only Thompson's Groups happen to be finitely presented. Finitely presented infinite simple groups are pretty special.
-
The groups I'm dealing with right now certainly aren't simple -- each one has at least one known finite quotient -- but I'm interested in the general question anyway. – Gabe Cunningham Dec 4 2009 at 2:48
Right. I assumed from the "simple" flavour of your question that you weren't interested in answering your question by looking for infinite quotient/finite overgroups (which is the usual way of approaching these things). Another possibility is that if your kernel satisfies some sort of "small cancellation" condition then you can sometimes prove that the quotient is infinite. But I've no idea how you make that work on a Coxeter group. – HW Dec 4 2009 at 4:00
I suppose Groebner bases can be used to compute the size of a group with generators and relations, just as they can be used to compute the size of a commutative or noncommutative algebra with generators and relations. This certainly would not work in all cases, but in some simple enough cases it will. In particular, when your group is actually finite, you will eventually discover this with Groebner bases, though the computation time may be impracticable for a human, or even for a computer. When your group is infinite, Groebner bases will sometimes tell you it is, but sometimes they wouldn't.
-
Do you mean some non-commutative version of Groebner bases? To which algebra they belong? – mathreader Dec 4 2009 at 0:18
1
I am not sure that I understand you question. Of course, you need noncommutative Groebner bases, but you don't necessarily need any algebra, though you may think about one if it makes you more comfortable. Just add the inverses of your group generators to your list of generators, and proceed with the standard Groebner basis algorithm based on the Diamond Lemma. Actually, there might be a better way, possibly, with a special notion of a Groebner basis particularly suited for the group case. I would look for it in the literature, starting from Teo Mora's papers. – Leonid Positselski Dec 4 2009 at 0:30
Your approach of finding infinite quotients is certainly a standard one. There is, however, a slight tweeking of it that helps in the event that this approach breaks down - search through some low index subgroups. If any of these have infinite homomorphic images then your group must posses an infinite subgroup and thus must also be infinite. In Magma the command "LowIndexSubgroups" can do this and I suspect somthing similar works in GAP, matlab etc.
As with the other techniques this is not a sure-fire 100% guaranteed method, but it is sometimes useful.
Simplicity of an infinite group is a much harder question to address. Needless to say that if I was a beting man then I would certainly put money on your group not being simple.
-
There is a software called Magnus (http://www.grouptheory.org/magnus) which gives You some insight how it may be implemented. I do not know anything about details, but it has some documentation, and works pretty well;-)
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9440210461616516, "perplexity_flag": "head"}
|
http://mathhelpforum.com/advanced-algebra/123903-there-subgroup-h-a4-such-a4-h-isomorphic-s3.html
|
# Thread:
1. ## Is there a subgroup H of A4 such that A4 / H isomorphic to S3?
Is there a subgroup $H$ of $A_4$ such that the quotient group ${A_4}/H \cong S_3$? I know I could just go through the possibilities, but is there a very quick reason why the answer should be "no"?
2. Originally Posted by Boysilver
Is there a subgroup $H$ of $A_4$ such that the quotient group ${A_4}/H \cong S_3$? I know I could just go through the possibilities, but is there a very quick reason why the answer should be "no"?
As quick as possible: $A_4$ has no normal subgroups of order 2.
Tonio
3. Great, thanks.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9585729241371155, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/108138?sort=newest
|
## How to tell a paradox from a “paradox”?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Russell's paradox showed that naive set theory leads to a contradiction. This was something that was taken seriously and caused a lot of work.
Now, Banach–Tarski paradox is arises from a result that a ball can be decomposed into finite amount of pieces and the pieces can be used to built two identical copies of the decomposed ball. Banach-Tarski paradox is often treated as a "paradox", basicly meaning that, yes, it is counter intuitive but yet there is no problem - mathematics just occasionally is counter intuitive.
To be honest, I have never understood why Banach-Tarski is not a "real" paradox but not being expert of measure theory I chose to accept the common view.
Is there some high level explanation on how to tell a paradox from a "paradox"? What is it that makes a counter intuitive result to a "real mathematical paradox" that we should start worrying about?
-
The point of Banach-Tarski is not just that one ball equals two balls - it is that the disassembly and assembly maps are rigid motions of Euclidean space, and hence there is no non-zero functional that is defined on all subsets and is invariant under the isometry group of Euclidean space. This only works in dimensions 3 and above; there is no B-T paradox IIRC in dimensions 1 or 2. – Yemon Choi Sep 26 at 7:57
3
@Yemon: I think you forgot finite additivity in your list of conditions. `:-)` Anyway, Terry Tao gave a good description of the difference between the dimensions on his blog: terrytao.wordpress.com/2009/01/08/… – Willie Wong Sep 26 at 8:44
@WillieWong (1) quite correct, oops (2) I knew that (non-)amenability was the main issue here but I couldn't recall the precise details in haste – Yemon Choi Sep 26 at 8:55
6
Note that in mathematics, and even in the current language, paradox should refer to the former acception you mentioned, that is, a counter-intuitive fact, something that is contrary to the common opinion, thus a priori not dangerous. For the latter notion, I think you mean antinomy, something that leads to a contradiction. – Pietro Majer Sep 26 at 9:43
1
Voting to close as "not a real question" (not even from a metamathematical or philosophical viewpoint) – Qfwfq Sep 27 at 8:05
show 2 more comments
## 5 Answers
Many paradoxes are first expressed in a semi-formal way, for example "the least number not describable by fewer than eleven words". They are warning signs that lead us to further analysis and can be resolved in different ways:
1. We can just get used to a "paradox" and accept it as "truth", e.g., there are infinite sets of different sizes, or there is a real function which is continuous at irrational arguments and discontinuous at rational arguments. There are famous paradoxes in philosophy which would not be considered paradoxes today, such as Zeno's paradox ("How can an infinite sum of positive numbers be finite? No movemement is possible!") and various arguments from Prime Cause ("How could we have an infinite descending chain of causality? God must exist!").
2. We find the paradox unacceptable and so we need to change something. We might change rules of logic, definitions, or axioms, everything is up in the air.
A paradox which actually proves falsehood, or a statement as well as its negation, is more properly called an inconsistency. An inconsistency is something we can never get used to and so we have to change something. A milder form of paradox is one which does not prove falsehood but just something very counter-intuitive, in which case we have to decide whether to accept it, or admit that our attempt to bring something into the realm of mathematics worked in unexpected ways.
I think this question is about how to tell whether a given "paradox" is of the first or second kind. When should we just "get used" to a paradox and when should we "change things"? In the case of Russell paradox we had no choice but to change something. In the case of Banach-Tarski paradox there is a choice. The accepted view is that we should just get used to it, but there are interesting alterantives which force us to rethink the notion of space. Even though these alternative notions of space are far better suited for probability, measure and randomness than the classical approach, mathematicians are unlikely to adopt them widely out of sheer inertia and historical coincidence. But mathematicians do not like to admit that mathematics is a human activity, and as such subject to sociological and historical trends.
So I suppose my answer is this: when faced with an unacceptable counter-intuitive statement which offers several mathematical resolutions, the choice will be made through social interaction which has some mathematical content, but not as much as we would like to think. Other factors, such as arguments from authority and social intertia will play an important role.
-
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
What you're describing as a "true paradox" is sometimes called an "antimony" and it means an actual logical inconsistency in the underlying theory. The Burali-Forti paradox is another example and it means there can't be a set of all ordinals (the ordinals are a proper class). By contrast, the Banach-Tarski theorem is consistent; it's just counterintuitive. The reason we don't hear much about "true paradoxes" (antimonies) these days is that logicians in the 1920's got the earlier inconsistencies under control, and (in all likelihood) we're not dealing with any actual inconsistent systems today, at least in everyday mathematics.
-
3
Heh -- I think you mean antinomy. "Antimony" is a chemical element. – Todd Trimble Sep 26 at 17:55
antinomy :-) ${}$ – Mariano Suárez-Alvarez Sep 26 at 17:56
1
Antimony/antinomy came up here two years ago - see the comments on the Dick Palais answer to mathoverflow.net/questions/40920/… – Gerry Myerson Sep 27 at 0:25
Although I am not so good with philosophical subtleties, I have always found useful to make a distinction between an antinomy and a paradox. The first leads to a formal contradiction, i.e., a logical inconsistency in your theory (you can prove both a formula and its negation).
The second `merely' defies human intuition, without being a (known) antinomy. Much less worrying (ask Frege :)).
Many just use `paradox' for both things, but I find this highly confusing.
-
+ 1 – Qfwfq Sep 27 at 8:09
Both the Russell paradox and the Banach-Tarski "paradox" show that certain ideas are contradictory. It seems to me that the key difference between the two is that, in Russell's case, the ideas in question had been proposed (by Frege) as axioms for a foundation of mathematics, and they seemed sufficiently basic to be accepted, until the paradox appeared. In the Banach-Tarski case, one of the ideas involved in the contradiction is the idea that one can meaningfully talk about the "volume" of arbitrary sets in $\mathbb R^3$. (Here "meaningfully" is intended to include additivity and invariance under Euclidean motions.) Although that is a very appealing idea intuitively, I'm not aware of anyone's proposing it as an axiom (or even as a conjecture). The development of Lebesgue's measure theory had already shown that the intuition is not reliable and the measurability of general sets is a delicate issue.
-
I don't see how these two a so fundamentally different. Russell's paradox tells us that we have to think more carefully about what a set actually is, and Banach-Tarski tells us that we have to think more carefully about what the measure of a set is and which sets are measurable.
When arriving at a counterintuitive statement, there are two possible conclusions: First, the previous intuition was wrong, and in this case the result is genuinely counterintuitive, and the second possibility is that the definitions or the logic were wrong and need to be changed. Banach-Tarski falls into the second category, because one would not conclude that matter can be created by clever cuts and rearrangements, but rather that one needs a thorough definition of measure.
-
7
But the Russell paradox is a proof that a mathematical theory was inconsistent; whereas the Banach-Tarski paradox is just perplexing for us as human beings. – Asaf Karagila Sep 26 at 11:36
6
@Asaf: If we insisted that all subsets of $\mathbb{R}^3$ all had volume, Banach-Tarski would also become a proof that something is inconsistent. The point of "paradoxes" is not that they are formal inconsistencies (which depends entirely on what you accept as the axioms), but rather that they point to limitations in the logical structure of theories, be it theories of sets, measure or something else. In principle we are at liberty to propose whatever axioms we like, so it is not decided in advance whether a given "paradox" will be considered a proof of inconsistency or just an oddity. – Andrej Bauer Sep 26 at 14:28
2
And so, this is what the question is about: when faced with two apparently paradoxical statements, why do we formulate our mathematical theories so that one is indeed an inconsistency and the other just a really weird theorem? This is not a question within mathematics, but a question about mathematics. – Andrej Bauer Sep 26 at 14:30
2
@Andrej: I agree with that, but the Banach-Tarski paradox came up about 20 years after Vitali proved that it is inconsistent with choice to have all sets measurable. As I said that paradox in the case of Banach-Tarski is that it is inconceivable to us that one ball can be split into five pieces (one of which is a point if I recall correctly) and then reassembled as two balls of the same volume. – Asaf Karagila Sep 26 at 17:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9598267674446106, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/55715?sort=newest
|
## Are there uncountably many essentially inequivalent versions of Mathematics?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Hi everyone,
Disclaimer 1: logic and set theory are a long way from my field, so apologies in advance if I demonstrate extreme ignorance or stupidity, and please correct me if (when?) I write stupid things. But hopefully my basic meaning should be fairly clear to everyone even if I get some details wrong.
Disclaimer 2: I admit this question might be slightly subjective. But I feel it's not too subjective, and is fairly natural and interesting to most mathematicians, out of mere curiosity.
Framework: Throughout, let's assume that standard ZF set theory is consistent, and take it as our basic mathematical foundation. (I don't necessarily think this is best, but I prefer to pin down the discussion).
We all know that Mathematics comes in several distinct flavours: e.g. you can believe or disbelieve the Continuum Hypothesis, and both points of view are (equally?) valid; they are really just matters of opinion. Thus there are at least 2 different versions. Of course we have infinitely many different versions: each number $m=1,2,3,\ldots$ gives a different flavour of Mathematics, given by the axiom $2^{\aleph_0} = \aleph_m$.
Subquestion Does the value of $m$ really matter very much? $2^{\aleph_0} = \aleph_1$ seems a particularly special case; but I find it hard to believe there'd be very much meaningful distinction (in terms of theorems anyone would want to consider) between the axioms $2^{\aleph_0} = \aleph_{103}$ or $2^{\aleph_0} = \aleph_{275}$, for example.
If desired, we could regard these different versions of Mathematics as essentially equivalent (in a rough sense): the axioms all look very similar, given by a single parametrisation. We could also throw in versions with $2^{\aleph_\alpha}$, etc.
Alternatively, we could remove these difficulties completely by not even considering cardinals beyond $\aleph_2$ or $\aleph_3$, say; (or any $\aleph_m$ with finite $m$).
It would be really amusing if we could do the following, for then we would have (at least) $2^{\aleph_0}$ different flavours of Mathematics! (Although I suppose there might be technical difficulties with nonconstructive infinite 0,1 strings...!) We'd have an explicit injective function $f$ from $[0,1]$ into the class of all possible versions of Mathematics!
# Main question
Can we find (or prove the existence of) an infinite sequence of axioms $A_1, A_2, A_3, \ldots$, for which every sequence of true/false assignments is consistent? (e.g. the infinite string 1011001110... would mean that $A_j$ is true for $j=1,3,4,7,8,9,\ldots$ and false for $j=2,5,6,10,\ldots$; we want every string to be consistent).
If so, can it be done with $A_1, A_2, \ldots$ all being essentially different kinds of axioms? [maybe it's stupidly optimistic to hope for this]. Can it be done without ever considering $\aleph_k$ for $k>3$, say (or 4, or any fixed finite number)?
If not, what's a reasonable known lower bound $K$ on the number of $A_1, \ldots, A_K$ which are known to exist, so that we have at least $2^K$ essentially different versions of Mathematics?
-
3
Zen, logic is also very remote from my field, so maybe I am missing something. But how is your main question not answered by Goedel's incompleteness theorems? – Alex Bartel Feb 17 2011 at 10:20
1
Supposing ZFC is consistent and given a finite set of axioms $I_0, I_1, \ldots, I_n$ that are all independent of ZFC, the axioms of ZFC union $\{I_0, I_1, \ldots, I_n\}$ (or the negations of any of the $I_j$) will still be computable because you're only adding a finite set. Consequently, by Goedel's incompleteness theorem, you will have a statement that's independent of the collection, $I_{n+1}$, and can add it (or its negation) to the list. Therefore there will actually be infinitely many computable extensions of ZFC. – Jason Feb 17 2011 at 10:56
I was trying to think of explicit natural examples though. I'll try to think about it more later. – Jason Feb 17 2011 at 10:58
2
Regarding the title question (but perhaps not the main question): I was under the impression that mathematics as practiced is encoded by finite (and perhaps bounded) chunks of information encoding the rules and conventions we follow. In that case, you can only have countably many inequivalent versions of mathematics that can be distinguished in a manner that can be communicated in finite time. – S. Carnahan♦ Feb 17 2011 at 15:38
@Scott: Fixing an enumeration of formulas in a countable language, there are indeed only countably many computably enumerable theories. However, you can also consider continuum many consistent extensions of a computable consistent theory (even though you won't be able to effectively prove all of its theorems). Also, let me remark separately that for my previous comment, each $I_j$ should be independent of ZFC union $\{I_0, I_1, \ldots, I_{j-1}\}$ and not just of ZFC. – Jason Feb 17 2011 at 23:06
show 5 more comments
## 2 Answers
Your question is essentially asking about the structure of the Lindenbaum–Tarski algebra of ZF. Jason gave a concrete example showing that one can embed the free Boolean algebra on countably many generators inside the Lindenbaum–Tarski algebra of ZF. In fact, it can be shown that the Lindenbaum–Tarski algebra of ZF is a countable atomless Boolean algebra. (There is nothing very special about ZF here, one only needs that the theory is consistent, recursively axiomatizable, and that it encodes a sufficient amount of arithmetic.) Since there is only one countable atomless Boolean algebra up to isomorphism, this completely determines the structure of the Lindenbaum–Tarski algebra of ZF.
-
Thanks very much; I'm still thinking about it. I like this answer and also Jason's answer, and can't yet decide which answer to choose, as in the meta discussion: meta.mathoverflow.net/discussion/178/… – Zen Harper Feb 18 2011 at 5:03
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
Main Question:
(1) Yes, let $A_j: 2^{\aleph_j} \neq \aleph_{j+1}$ (i.e., GCH does not hold at $\aleph_j$). We can do this by simultaneously forcing (via a countable product of posets adding Cohen subsets) $2^{\aleph_j} = \aleph_{j+1+s_j}$ where $s_j$ represents the truth value at $j$.
Edited Additions: You can also let $A_j$ be the statement "$\aleph_{j}^{L}$ is a cardinal (in $V$)" (i.e., the $j^{th}$ uncountable cardinal of the constructible universe is a cardinal in the actual universe). In this case, you could simultaneously force over $L$ (via a countable product of posets from $L$ collapsing cardinals) to add a surjection from $\aleph_{j-1}^L$ to $\aleph_{j}^L$ exactly when $s_j = 0$ so that the cardinal $\aleph_{j}^L$ becomes an ordinary ordinal of size $|\aleph_{j-1}^L|$ in the forcing extension. In the case that all of the $s_j$'s are $0$, the first $\aleph_0$ many cardinals of $L$ all become countable ordinals from the perspective of the forcing extension whereas if they're all $1$'s, then we have done trivial forcing and so the forcing extension is $L$.
Now after showing the desired relative consistency results as above, you can note (for your If so part) that you are only considering countable ordinals here from the perspective of most universes. For example, if a certain type of Real exists in your universe, mainly $0^{\sharp}$, then the true $\aleph_1$ will be inaccessible in $L$ and more so all of the $\aleph_j^L$'s for $j \in \mathbb{N}$ will be very puny countable ordinals in the said universe. Of course, this is probably cheating, but I thought I'd mention it anyway.
Also to your subquestion, $2^{\aleph_0} = \aleph_1$ and $2^{\aleph_0} = \aleph_2$ are very meaningful distinctions. But also under ZFC, $2^{\aleph_0}$ needs to be quite large in order to extend the Lebesgue measure to a countably additive measure on the full powerset of $\mathbb{R}$.
François already gave a very nice general answer to your main question so I think I'll leave my answer at that.
-
Thanks very much for your answer! I don't understand much of it yet, but I'll keep thinking... – Zen Harper Feb 18 2011 at 5:05
If you have any follow-up questions, feel free to ask them here. The general idea with collapsing cardinals is that a model of set theory can think that sets are larger than they really are in the "true" universe. For example, if we have a countable ZFC model $M$ for which $m \in M$ implies $m \subseteq M$ (i.e., transitive), then its $\aleph_1$, denoted $\aleph_1^M$, is countable since $\aleph_1^M \subseteq M$ and $M$ is countable. But if we were to add a bijection to this model from $\aleph_0$ into $\aleph_1^M$ through forcing, then the extension would realize that $\aleph_1^M$ is countable. – Jason Feb 20 2011 at 7:22
In the extension, $\aleph_1^M$ is no longer a cardinal because there is a bijection from the smaller $\aleph_0$ into it. So the forcing extension $N$ thinks that $\aleph_1^M < \aleph_1^N$ where $\aleph_1^N$ will be the least uncountable ordinal in $N$. The constructible universe $L$ is an inner model (transitive and contains all ordinals of $V$) and is the minimal inner model in the sense that it is contained in any other inner model of ZF. Assuming certain large cardinal hypotheses, it turns out to be quite small as mentioned above. – Jason Feb 20 2011 at 8:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 79, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9557201266288757, "perplexity_flag": "head"}
|
http://constraints.wordpress.com/2010/08/09/deolalikars-manuscript/
|
# Constraints
## 9 August 2010
### Deolalikar’s manuscript
Filed under: Commentary — András Salamon @ 17:39
Tags: computational complexity, papers
Given the sudden influx of visitors, it seemed apposite to talk about Vinay Deolalikar’s manuscript titled $P \ne NP$ (or in words, “P is not equal to NP”).
(See Richard Lipton’s discussion of the paper for a download link.) After all, this paper seems to be the prime cause of all the attention.
Deolalikar’s work brings to greater attention a bunch of nice techniques that currently aren’t mainstream in computational complexity theory. The main ingredients are finite model theory, random constraint satisfaction problems, and graphical models.
I haven’t yet spent much time reading the manuscript, but it certainly passes the nonsense test: there is a serious attempt to address the main objections from Scott Aaronson’s list of impediments to proving P versus NP. Moreover, the overall approach is along the same lines that several people have been attempting: combine results from the study of random constraint satisfaction problems with techniques from finite model theory. There is a roll call of at least a dozen theoretical computer scientists that I could name here, so the approach is definitely respectable. Finally, I happen to have studied all the ingredients that are used, so although I am no expert, I can at least read the paper.
Given all this, my initial impression is that the technique used is unlikely to work to prove the desired result. The argument seems non-uniform, and proving P to be equal to NP if one is going to argue that P equals NP (in order to obtain a contradiction, as the paper does), then this requires a uniform polynomial bound for the problem under consideration. In other words, for k-SAT to be in P, there must exist a single polynomial $p(x)$ such that every instance of size $n$ can be decided by a deterministic Turing machine in $p(n)$ steps. On my first hasty reading, it seems that the argument is built on a series of polynomials, of increasing degree. If this is the case, then this technique would be insufficient to establish the result sought. Of course, my impression may well be wrong.
I am sceptical, but the paper is definitely worth reading, unlike the vast majority of the ones listed at Gerhard Woeginger’s P-versus-NP page.
Update 20100810: see Deolalikar’s updated paper, the clearinghouse wiki, Antonio E. Porreca’s meta-summary, and Richard Lipton’s update.
Update 20100812: clarified wording of my analysis. The best single place for a technical summary remains the clearinghouse wiki mentioned above. Much of the ongoing technical discussion is happening on Richard Lipton’s blog, see further posts Update on Deolalikar’s Proof that P≠NP and Deolalikar Responds To Issues About His P≠NP Proof, and especially the many insightful comments. For those who really can’t wait, be sure to keep checking recent changes to the clearinghouse wiki.
Update 20100813: I think the last word goes to Terence Tao, in a magisterial summary of consensus about the strategy in Deolalikar’s manuscript (in a soundbite: it doesn’t work and can’t be fixed). The objection of Timothy Gowers is also worth reading.
## 2 Comments »
1. [...] Salamon is also “sceptical“, as “the technique used is unlikely to work to prove the desired result”. [...]
Pingback by — 10 August 2010 @ 16:10
2. [...] ≠ NP) consequences Filed under: Uncategorized — András Salamon @ 20:36 During the discussion about Vinay Deolalikar’s manuscript, many people commented about the consequences that would follow if we knew that P ≠ NP. I’d [...]
Pingback by — 13 August 2010 @ 20:43
RSS feed for comments on this post. TrackBack URI
Theme: Rubric. Blog at WordPress.com.
%d bloggers like this:
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 4, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9165456891059875, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/67434/computing-bochner-integrals-with-values-in-lp-spaces-by-lebesgue-integrals
|
## Computing Bochner integrals with values in L^p-spaces by Lebesgue integrals?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Let $f: \mathbb{R}^n \to L^2(\mathbb{R}^d)$ be a Bochner-integrable function (all measures are the Lebesgue measure). Does then $\int_{\mathbb{R}^n} f(x) d\lambda^n (y) = \int_{\mathbb{R}^n} f(x)(y) d\lambda^n$ hold for $\lambda^d$-almost all $y \in \mathbb{R}^d$? I.e. can one compute such Bochner integrals just by computing ordinary Lebesgue integrals?
-
## 1 Answer
Answer: YES and NO.
YES: In any practical situation you are likely to meet, your formula is correct. You would prove it using Fubini's Theorem, pairing your two sides with an arbitrary $h \in L^2(\mathbb R^d)$ and getting the same answer on both sides. The catch is, you have to be able to apply Fubini.
NO: As stated, it can fail. $f(x) \in L^2(\mathbb R^d)$, so $f(x)$ is an equivalence class. For each $x$, CHOOSE some representative for that class, call it $f(x)(y)$. But now, for fixed $y$ it may fail that $f(x)(y)$ is a measurable function of $x$. Or even if those are all measurable, it may fail that $f(x)(y)$ is measurable in the product measure.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.916093111038208, "perplexity_flag": "head"}
|
http://mathhelpforum.com/statistics/161860-applying-antiderivatives-normally-distributed-sets-data-bell-curve-z-scores.html
|
# Thread:
1. ## Applying antiderivatives to normally distributed sets of data (bell curve/z scores)
Hi,
Me and my G12 teacher are having a debate about a question he marked wrong on one of my tests.
The question was true or false:
Z scores > 4 are undefined. T/F?
I put false, reason being it truly isn't undefined. Z scores can continue on to positive/negative infinity, all of which are defined. Z scores > 4 perhaps are negligable, but most definitely not undefined.
The reason my teacher says it is undefined is because of the way we go about calculating the probability using the z scores. Instead of using the antiderivative (the accurate way to determine percentages from a z score, i would think), we use a table that has z scores and their corrosponding percentages listed. We simply round to the nearest percent.
Since the table we use only goes to a min of -4 and a max of +4, he says that all scores above and below that range are undefined.
The class im taking is a financial math class, which is why we use a simple table rather than figuring out the antiderivative of the normal distribution curve. I hope to show my teacher that the way the table he uses was generated was using calculus, and more importantly z scores above and below -4/+4 are most definitely not undefined.
The problem I've encountered, is I'm a bit rusty since my calculus classes a few years ago, and I can't find the equation of normally distributed data :\.
Through a bit of searching, I found that the normal distribution is a form of the gaussian function.
I got the equation $f(x)=e^(-x^2)$, but the problem is the total area under the curve doesn't come to 1, how can i mold it so it does come to 1, so that when I find the area under a certain section of the curve it will be a percent in decimal form?
any help would be appriciated,
thanks,
Coukapecker
2. That is a z-score at least 5.
3. Originally Posted by Coukapecker
Hi,
Me and my G12 teacher are having a debate about a question he marked wrong on one of my tests.
The question was true or false:
I put false, reason being it truly isn't undefined. Z scores can continue on to positive/negative infinity, all of which are defined. Z scores > 4 perhaps are negligable, but most definitely not undefined.
The reason my teacher says it is undefined is because of the way we go about calculating the probability using the z scores. Instead of using the antiderivative (the accurate way to determine percentages from a z score, i would think), we use a table that has z scores and their corrosponding percentages listed. We simply round to the nearest percent.
Since the table we use only goes to a min of -4 and a max of +4, he says that all scores above and below that range are undefined.
The class im taking is a financial math class, which is why we use a simple table rather than figuring out the antiderivative of the normal distribution curve. I hope to show my teacher that the way the table he uses was generated was using calculus, and more importantly z scores above and below -4/+4 are most definitely not undefined.
The problem I've encountered, is I'm a bit rusty since my calculus classes a few years ago, and I can't find the equation of normally distributed data :\.
Through a bit of searching, I found that the normal distribution is a form of the gaussian function.
I got the equation $f(x)=e^(-x^2)$, but the problem is the total area under the curve doesn't come to 1, how can i mold it so it does come to 1, so that when I find the area under a certain section of the curve it will be a percent in decimal form?
any help would be appriciated,
thanks,
Coukapecker
You are correct, your teacher is wrong. Feel free to refer him/her to this thread to get instruction on the subject.
4. Hmm, yes I thought he was lol.
What I'm asking though is what is the function to create a normal distribution graph? And what would be the antiderivative function to get the corrosponding percentage to the z score?
My graphing calculator has a plethora of functions in it that I can use to get a percentage, so I know its possible to do without a chart, but how exactly do you do it?
5. Originally Posted by Coukapecker
Hmm, yes I thought he was lol.
What I'm asking though is what is the function to create a normal distribution graph? And what would be the antiderivative function to get the corrosponding percentage to the z score?
My graphing calculator has a plethora of functions in it that I can use to get a percentage, so I know its possible to do without a chart, but how exactly do you do it?
If you Google
normal distribution
you will get the function. It cannot be integrated exactly (except in a couple of very special cases) - a numerical procedure for finding the required integral is necessary (most scientific calculators can do this. All graphics and CAS calculators can do it).
6. $\displaystyle \Phi (z) = P(Z \leqslant z) = \int_{ - \infty }^z {\frac{{e^{\frac{{ - x^2 }}{2}} }}{{\sqrt {2\pi } }}dx}$
#### Search Tags
View Tag Cloud
Copyright © 2005-2013 Math Help Forum. All rights reserved.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9587447643280029, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-applied-math/126500-solved-error-function-incomplete-gamma-function.html
|
# Thread:
1. ## [SOLVED] The error function and incomplete gamma function
I think I'm pretty close to getting this but can't quite figure out what do next!
From earlier in the question...
$\textrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int^x_0 e^{-t^2} dt \sim 1 - \frac{e^{-x^2}}{\pi} \sum^{\infty}_{n=0} \Gamma(n + \frac{1}{2})\frac{(-1)^n}{x^{2n+1}}$ (*)
Show that erf(x) can be expressed in terms of the incomplete gamma function, and that the above asymptotic expansion agrees with the result obtained in Problem 6. [The formula $\Gamma(\frac{1}{2} + n)\Gamma(\frac{1}{2} - n) = (-1)^n \pi$ (**) for integer n may be useful.]
So I think I have to show the relation... $\gamma(\frac{1}{2}, x) = \sqrt{\pi} \textrm{erf}(\sqrt{\pi})$ (***) as that's the relation I've seen elsewhere.
The result obtained in question 6 was...
$\gamma(\alpha,x) \sim \Gamma(\alpha) - e^{-x}x^{\alpha - 1} \sum^{\infty}_{n=0}\frac{\Gamma(\alpha)}{\Gamma(\a lpha - n)}\frac{1}{x^n}$
And also...
$\gamma(\alpha,x) \sim \Gamma(\alpha) - e^{-x}x^{\alpha - 1} \sum^{\infty}_{n=0}\frac{(\alpha-1)(\alpha-2)\ldots(\alpha-n)}{x^n}$
Right so intro done!
Now what I've done...
From (*)
$\sqrt{\pi} \textrm{erf}(x) = 2 \int^x_0 e^{-t^2} dt \sim \sqrt{\pi} - \frac{e^{-x^2}}{\sqrt{\pi}} \sum^{\infty}_{n=0} \Gamma(n + \frac{1}{2})\frac{(-1)^n}{x^{2n+1}}$
$= \Gamma(\frac{1}{2}) - \frac{e^{-x^2}}{\sqrt{\pi}} \sum^{\infty}_{n=0} \Gamma(n + \frac{1}{2})\frac{(-1)^n}{x^{2n+1}}$
(This step seems to be showing that $\alpha = \frac{1}{2}$ which corresponds to (***))
Using (**)
$= \Gamma(\frac{1}{2}) - \sqrt{\pi} e^{-x^2}$ $\sum^{\infty}_{n=0} \frac{1}{\Gamma(\frac{1}{2} - n)}\frac{1}{x^{2n+1}}$
$= \Gamma(\frac{1}{2}) - \sqrt{\pi} e^{-x^2} \sum^{\infty}_{n=0} \frac{x^{-(n+1)}}{\Gamma(\frac{1}{2} - n)}\frac{1}{x^n}$
But I'm stuck now. I don't even know if what I've done so far is right to be honest... Any help please!
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9395208358764648, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/56545/what-really-are-superselection-sectors-and-what-are-they-used-for
|
# What really are superselection sectors and what are they used for?
When reading the term superselection sector, I always wrongly thought this must have something to do with supersymmetry ... DON'T laugh at me ... ;-)
But now I have read in this answer, that for example for a free QFT highly excited states , that would need infinite occupation numbers to build them up, and that lie therefore outside the Fock space are said to lie in a (different?) superselection sector. If a state has finite or infinite energy depends on the Hamiltonian, and a finite energy and physically relevant Hilbert space can be obtainend from the inacessible infinite energy states of another Hamiltonian.
This makes me now want to really know what a superselection sector is. What are the key ideas behind the definition of a superselection sector? Are they an underlaying concept to derive quantum field theories with a physical hilbert space that has only finite energy states, or what is their common use in physics?
-
3
Easiest example of superselection rule would be total charge of a system. You can't have a superposition of states which have different charges! – twistor59 Mar 11 at 14:47
1
I could say a bit, but I'm at work at the moment, so don't have a lot of time. Maybe someone else will come up with the goods in the meantime..! – twistor59 Mar 11 at 15:36
3
@twistor59 This is something that has always bugged me (as someone else who doesn't understand this superselection business!). Why can't you have a superposition of states with different charges? Obviously if you start in a charge eigenstate you will stay in one because charge is conserved, and obviously if you have a superposition of different charges decoherence will rapidly take place. But superselection seems to be saying something nontrivial about the initial condition of the universe that seems far from obvious to me. – Michael Brown Mar 11 at 16:44
1
@MichaelBrown you should check out this paper by Aharonov and Susskind, where they explain how to prepare a superposition of a neutron and a proton. I'm sorry it's behind a paywall :(. The point is that superselection rules are equivalent to lacking a reference frame for the conjugate variable. Charge (number) superselection is equivalent to a lack of a phase reference. Of course, constructing such a reference frame is not necessarily practical. This review has a good list of references. – Mark Mitchison Mar 11 at 21:22
1
@MarkMitchison Thanks for the references. I'll read them when I can. :) So to construct a silly example consider a Minkowski universe. Would the total momentum $P$ of the universe be considered a superselection variable? (You can boost it away, but we won't.) Since you lack a reference for absolute position, and it is conserved so you can't interfere states with different values of $P$. – Michael Brown Mar 12 at 0:07
show 7 more comments
## 1 Answer
A superposition sector is a subspace of the Hilbert space ${\mathcal H}_i$ such that the total Hilbert space of the physical system may be described as the direct sum $${\mathcal H} = {\mathcal H}_1 \oplus {\mathcal H}_2 \oplus\cdots \oplus {\mathcal H}_N$$ where $N$ may be finite or infinite such that if the state vector belongs to one of these superselection sectors $$|\psi(t)\rangle\in{\mathcal H}_I,$$ then this property will hold for all times $t$: it is impossible to change the superselection sectors by any local operations or excitations.
An example in the initial comments involved the decomposition of the Hilbert space to superselection sectors ${\mathcal H}_Q$ corresponding to states with different electric charges $Q$. They don't talk to each other. A state with $Q=-7e$ may evolve to states with $Q=-7e$ only. In general, these conservation laws must be generalized to a broader concept, "superselection rules". Each superselection rule may decompose the Hilbert space into finer sectors.
It doesn't mean that one can't write down complex superpositions of states from different sectors. Indeed, the superposition postulate of quantum mechanics guarantees that they're allowed states. In practice, we don't encounter them because the measurement of total $Q$ – the identification of the precise superselection sectors – is something we can always do as parts of our analysis of a system. It means that in practice, we know this information and we may consider $|\psi\rangle$ to be an element of one particular superselection sector. It will stay in the same sector forever.
In quantum field theory and string theory, the term "superselection sector" has still the same general meaning but it is usually used for different parts of the Hilbert space of the theory – that describes the whole spacetime – which can't be reached from each other because one would need an infinite energy to do so, an infinite work to "rebuild" the spacetime. Typically, different superselection sectors are defined by different conditions of spacetime fields at infinity, in the asymptotic region.
For example, the vacuum that looks like $AdS_5\times S^5$ ground state of type IIB string theory is a state in the string theory's Hilbert space. One may add local excitations to it, gravitons, dilatons ;-), and so on, but that will keep us in the same superselection sector. The flat vacuum $M^{11}$ of M-theory is a state in string theory's Hilbert space, too. There are processes and dualities that relate the vacua, and so on. However, it is not possible to rebuild the spacetime of the $AdS$ type to the spacetime of the $M^{11}$ time by any local excitations. So if you live in one of the worlds, you may assume that you will never live in the other.
Different asymptotic values of the dilaton ;-) or any other scalar field (moduli...) or any other field that is meaningful to be given a vev define different superselection sectors. This notion applies to quantum field theories and string theory, too. In particular, when we discuss string theory and its landscape, each element of the landscape (a minimum of the potential in the complicated landscape) defines a background, a vacuum, and the whole (small) Hilbert space including this vacuum state and all the local, finite-energy excitations is a superselection sector of string theory. So using the notorious example, the F-theory flux vacua contain $10^{500}$ superselection sectors of string theory.
In the case of quantum field theory, we usually have a definition of the theory that applies to all superselection sectors. A special feature of string theory is that some of its definitions are only good for one superselection sector or a subset of superselection sectors. This is the statement that is sometimes misleadingly formulated by saying that "string theory isn't background-independent". Physics of string theory is demonstrably background-independent, there is only one string theory and the different backgrounds (and therefore the associated superselection sectors – the empty background with all allowed local, finite-energy excitations upon it) are clearly solutions to the same equations of the whole string theory. We just don't have a definition that would make this feature of string theory manifest and it is not known whether it exists.
-
Thanks a lot Lumo for these very nice explanations, reading this answer saves my (otherwise not so stellar) day :-) – Dilaton Mar 11 at 19:29
... and I feel really touched by some of your cool explanations about the superselection sectors in string theory, ha ha ... :-D ;-P – Dilaton Mar 11 at 19:37
"It doesn't mean that one can't write down complex superpositions of states from different sectors. Indeed, the superposition postulate of quantum mechanics guarantees that they're allowed states. In practice, we don't encounter them" - That's what I needed. I've always heard of superselection as a rule on what superpositions you are allowed to make, which never sat well with me. This helps. Thanks. :) – Michael Brown Mar 11 at 23:57
1
It was a pleasure, Michael and Dilaton. We don't encounter the mixture because it's always the easiest measurement to determine in which superselection sector the particle is. So it's an eigenstate of the "which sector" operator, e.g. $Q$, and it stays an eigenstate i.e. in the sector at all times. – Luboš Motl Mar 12 at 5:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9303215146064758, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/94179?sort=newest
|
Intermediate extension of a Prikry-Silver extension?
Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Prikry-Silver forcing $\mathbb{V}$ (sometimes just Silver forcing) is the forcing notion consisting of all partial functions $p:\omega\rightarrow 2$ with co-infinite domain. In "Combinatorics on ideals and forcing with trees" Marcia Groszek mentions (without proof) that a Prikry-Silver real has minimal real degree, but not minimal degree.
That the Prikry-Silver real $r$ has minimal real degree means that whenever $s$ is a real in $V[r]$ that doesn't belong to $V$ we have $V[r]=V[s]$. A proof of this can be extracted from some more general results in the seminally named "Combinatorics on ideals and forcing" by Serge Grigorieff.
That $r$ doesn't have minimal degree means that there is some object $A\in V[r]$ for which $V[A]$ is different from both $V$ and $V[r]$. Can anyone point me to a proof of this fact?
-
1 Answer
I don't know a reference for this, but the $A$ that you are looking for is the collection of all domains of conditions in the Silver generic filter.
Equivalently, you can consider the set of all complements of domains of conditions in the filter. This is a non-principal ultrafilter on $\omega$.
Silver forcing can be considered as an iteration of the following two forcing notions: First you force with $\mathcal P(\omega)/fin$, which gives you a nonprincipal ultrafilter on $\omega$, and then you force with Grigorieff forcing with respect to that ultrafilter. (Grigorieff forcing is like Silver forcing, only that the complements of the domains are in the filter, not just infinite.) The first forcing, $\mathcal P(\omega)/fin$, is $\sigma$-closed and therefore doesn't add any reals at all. The second forcing does all the adding of reals.
The extension is not minimal since you force with an iteration. I am leaving out the details, but it shouldn't be hard to show that the iteration that I am talking about is equvalent to Silver forcing. (Consider instead of $\mathcal P(\omega)/fin$ the equivalent p.o. of infinite subsets of $\omega$. Map each Silver condition $p$ to the pair $(\omega\setminus\mbox{dom}(p),p)$. This should be a dense embedding of Silver forcing into the iteration.)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9436572194099426, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/13741/addition-theorem-polynomials/14070
|
## addition-theorem polynomials
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Suppose a function f(u) identically satisfies an equation of the form G{f(u+v),f(u),f(v)}=0 for all u and v and u+v in its domain. Here G(Z,X,Y) is a non vanishing polynomial in the three variables with constant coefficients. Then one says that f admits an ALGEBRAIC ADDITION THEOREM. IF f(u) is cos(u), then
$G(Z,X,Y)=Z^2-2XYZ+X^2+Y^2-1,$
while, if f(u) is the Weierstrass p-function with invariants g_2 and g_3, then
$G(Z,X,Y)=16(X+Y+Z)^2(X-Y)^2 -8(X+Y+Z){4(X^3+Y^3)-g_2(X+Y)-2g_3} +4(X^2+4XY+4Y^2-g_2)^2$
Here is the question: Characterize those polynomials G(Z,X,Y) which express an algebraic addition theorem.
-
Just to clarify, do you want the domain and codomain of $f$ to be the complex numbers? Also, what sorts of coefficients of $G$ are we allowed? – Pace Nielsen Feb 1 2010 at 21:40
take the case that f is a meromorphic function and the coefficients of G are complex constants – Mark B Villarino Feb 1 2010 at 21:41
Thanks. I need one more clarification. Let $f$ be the zero function. Would it be correct to state that every nonzero polynomial $G$ with zero constant term expresses an algebraic addition theorem for $f$? – Pace Nielsen Feb 1 2010 at 21:49
yes, although, of course, the question is meant to deal with non trivial meromorphic functions. For example, it is obvious that G should be symmetric in X and Y and homogeneous. But, the degree of homogeneity is related to how many times f takes on a particular value, and that can be complicated...for example if it takes on a particular value n times, the degree of G in Z is n^2. – Mark B Villarino Feb 1 2010 at 22:05
1
It might be slightly nicer to ask for polynomials such that u+v+w=0 implies G(f(u), f(v), f(w)) = 0, since now you have symmetry in all three variables. At least, I'm reasonably certain this version is equivalent. – Qiaochu Yuan Feb 1 2010 at 22:42
show 2 more comments
## 4 Answers
The examples listed in David Speyer's answer are all of them. This is equivalent to say that all one dimensional algebraic groups are isomorphic to the additive group, the multiplicative group or an elliptic curve. A proof in the language of "algebraic addition theorems" is given in the old book of H. Hancock, Lectures on the theory of elliptic functions, Ch. XXI.
-
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
Here is a very basic comment no one has made yet: If $f(u)$ is a rational function of $u$, then there will be some nonzero polynomial $G$ such that $G(f(u), f(v), f(u+v))=0$. That's because $\mathbb{C}(u, v, u+v)$ has transcendence degree $2$ over $\mathbb{C}$.
The same argument applies if $f$ is a rational function of $e^u$, or if $f$ is a rational function of $\wp(u)$ and $\wp'(u)$, where $\wp$ is the Weierstrauss $\wp$-function.
Can we show that every example is of one of these forms?
-
I took the liberty of \wp-ifying your answer. – Mariano Suárez-Alvarez Feb 2 2010 at 1:42
Thanks for the help! – David Speyer Feb 2 2010 at 2:01
It is a famous theorem of Weierstrass that the only meromorphic functions admitting an algebraic addition theorem are rational functions, or rational functions of the exponential function, or elliptic functions. What has NOT been answered is: given a polynomial G(Z,X,Y), in the three variables X,Y,Z, is it an addition-theorem polynomial? Which formal characteristics of G characterize it as such a polynomial? As far as I know, this question has never been investigated.
-
G=0 will be a rational surface if the group is the additive or multiplicative group, whereas G=0 will be covered by the abelian surface $E \times E$ if the group is the elliptic curve $E$. The surface will also have the symmetry as in Qiaochu Yuan's comment. Beyond that it's not clear what can be said and maybe there is no simple characterization. This is a very old topic so, if there was a simple answer, it would likely be known. Any specific reason for your interest? – Felipe Voloch Feb 2 2010 at 18:19
The specific reason for my interest is that the problem is simply stated, concrete, interesting in itself, and unanswered. Moreover, I am preparing an expository paper on this beautiful classical theory since the presentations in Hancock, and in Forsyth, have definite mistakes and errors, quite apart from being misleading and difuse. Koebe (in his thesis) and Forsyth prove certain properties of G, for example the formula for its degree in Z, but leave the question, there. Since a century has passed, one would hope that fresh insights might lead to further results. This website is ideal. – Mark B Villarino Feb 2 2010 at 19:07
If X' means the derivative with respect to u and Y' that wrsp y, etc., then one condition is: Elimination of Z between G=0 and $X'\frac{\partial G}{\partial Y}=Y'\frac{\partial G}{\partial X}$ leads to only a single equation between X and X' for all values of Y and Y' (see Forsyth, page 357) – Mark B Villarino Feb 3 2010 at 22:33
If X' means the derivative with respect to u and Y' that wrsp y, etc., then one condition is: Elimination of Z between G=0 and $X'\frac{\partial G}{\partial Y}=Y'\frac{\partial G}{\partial X}$ leads to only a single equation between X and X' for all values of Y and Y' (see Forsyth, page 357) – Mark B Villarino 0 secs ago
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9292653799057007, "perplexity_flag": "middle"}
|
http://www.univ-rouen.fr/LMRS/GT/programme12.html
|
# Programme
24 et 25 mai 2012 - Université de Rouen.
Le but de ces rencontres est de présenter des résultats récents et de discuter des questions nouvelles et ouvertes sur les systèmes de particules et la mécanique statistique.
## Jeudi 24 mai
11h00 - 12h00 : Stefan Grosskinsky - Condensation in zero-range processes and related models. (Partie I)
12h00 - 13h30 : Déjeuner
13h30 - 14h30 : Stefan Grosskinsky - Condensation in zero-range processes and related models. (Partie II)
14h30 - 15h10 : Alexandre Gaudillière - TBA.
15h10 - 15h30 : Pause
15h30 - 17h30 : Michalis Loulakis - Large Deviations and Subexponential Random Variables with Applications to Condensating Zero Range Processes.
## Vendredi 25 mai
09h00 - 11h00 : Claudio Landim - Metastability of reversible condensed zero range processes on a finite set.
11h00 - 11h20 : Pause
11h20 - 12h00 : Krishnamurthi Ravishankar - Ergodicity and Percolation for Variants of One-dimensional Voter Models.
12h00 - 13h30 : Déjeuner
13h30 - 15h30 : Ines Armendariz - Scaling limit of the condensate dynamics in the zero-range process.
15h30 - 15h50 : Pause
15h50 - 16h10 : Marios Stamatakis - Variational Characterization of Generalized Relative Entropy Functionals and Static Large Devations for the Empirical Embeddings of the Zero Range Process at Equilibrium without Full Exponential Moments.
16h10 - 16h50 : Milton Jara - The formation of the condensate on a metastable zero-range process.
## Mini-cours
Ines Armendariz (Buenos Aires) Scaling limit of the condensate dynamics in the zero-range process. We consider the zero-range process on the one dimensional torus with L sites and N particles, in the supercritical regime when N/L exceeds the critical density. It is known that in the stationary state, as L goes to infinity, the excess particles accumulate at a randomly chosen position, forming the condensate. We now show that, at the right scale, this condensates moves according to a Lévy process on the rescaled torus, with rates determined by the jump distribution of the original zero range process. For instance, for nearest neighbour probabilities, the limiting Lévy process will have rates inversely proportional to the jump length.
Joint work with Stefan Grosskinsky and Michalis Loulakis.
Stefan Grosskinsky (Warwick) Condensation in zero-range processes and related models. Zero-range processes or more general mass transport models can exhibit a condensation transition, where a finite fraction of all particles condenses on a single lattice site if the total density exceeds a critical value. This phenomenon can result from spatial inhomogeneities, an effective attraction between the particles and also from size-dependence in the jump rates. We give an introduction to the most basic results for the stationary measures to characterize the condensation transition, and describe also connections to the classical framework of the equivalence of ensembles in statistical mechanics. Zero-range processes will be the main example, but we will also mention other models such as inclusion processes or models with continuous state space such as the Brownian energy process.
This includes joint work with Herbert Spohn, Gunter Schuetz, Paul Chleboun, Frank Redig and Kiamars Vafayi.
Claudio Landim (Rouen et Rio de Janeiro) Metastability of reversible condensed zero range processes on a finite set. Let $r: S\times S\to \mathbb R_+$ be the jump rates of an irreducible random walk on a finite set $S$, reversible with respect to some probability measure $m$. For $\alpha >1$, let $g: \mathbb N\to \mathbb R_+$ be given by $g(0)=0$, $g(1)=1$, $g(k) = (k/k-1)^\alpha$, $k\ge 2$.
Consider a zero range process on $S$ in which a particle jumps from a site $x$, occupied by $k$ particles, to a site $y$ at rate $g(k) r(x,y).$ Let $N$ stand for the total number of particles. In the stationary state, as $N\uparrow\infty$, all particles but a finite number accumulate on one single site. We show in this article that in the time scale $N^{1+\alpha}$ the site which concentrates almost all particles evolves as a random walk on $S$ whose transition rates are proportional to the capacities of the underlying random walk.
Michalis Loulakis (National Technical University of Athens & ACMAC Heraklion) Large Deviations and Subexponential Random Variables with Applications to Condensating Zero Range Processes. We will prove a Gibbs Conditioning Principle analogue for subexponential random variables, and we use this result in the context of Zero Range Processes to explore the bulk fluctuations and the fluctuations of the size of the condensate in equilibrium, as well as the onset of condensation as we move from subcritical to supercritical densities.
Joint work with Ines Armendariz and Stefan Grosskinsky.
## Conférences
Alexandre Gaudillière (Université de Provence, Marseille) TBA
TBA
Milton Jara (Rio de Janeiro) The formation of the condensate on a metastable zero-range process. Let us consider a zero-range process with decreasing rates on a finite graph. Our setup is the same considered by Beltrán-Landim. Let N be a scaling parameter, which will be sent to infinity, and let us put initially N particles at each site. We prove that in a diffusive time scaling, the normalized number of particles on each site converges to a system of diffusions that shares some similarities with Bessel processes of negative dimension. In particular, each time a site is emptied, it remains empty (on the observed time window). The correlation among the diffusions on each site are given in terms of harmonic properties of the subjacent random walk on the graph.
Work in progress, joint with J. Beltrán.
Krishnamurthi Ravishankar (New Paltz, USA) Ergodicity and Percolation for Variants of One-dimensional Voter Models. We study variants of one-dimensional $q$-color voter models in discrete time. In addition to the usual voter model transitions in which a color is chosen from the left or right neighbor of a site there are two types of noisy transitions. One is bulk nucleation where a new random color is chosen. The other is boundary nucleation where a random color is chosen only if the two neighbors have distinct colors. We prove under a variety of conditions on $q$ and the magnitudes of the two noise parameters that the system is ergodic, i.e., there is convergence to a unique invariant distribution. The methods are percolation-based using the graphical structure of the model which consists of coalescing random walks combined with branching (boundary nucleation) and dying (bulk nucleation).
Joint work with Y. Mohylevskyy and C.M. Newman.
Marios Stamatakis (University of Crete, Greece) Variational Characterization of Generalized Relative Entropy Functionals and Static Large Devations for the Empirical Embeddings of the Zero Range Process at Equilibrium without Full Exponential Moments. It is well known that the empirical embeddings of the Zero Range Process satisfy the static large deviations principle at any equilibrium of full exponential moments. We prove a variational characterization of generalized relative entropy functionals allowing any lower semicontinuous convex function in place of u → u log u, in order to generalize the static large deviations principle for the empirical embeddings of the Zero Range Process in the absence of full exponential moments and in particular in the case of finite critical density.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8091182708740234, "perplexity_flag": "middle"}
|
http://rjlipton.wordpress.com/2011/05/06/navigating-cities-and-understanding-proofs/
|
## a personal view of the theory of computation
tags: cities, Proofs, structure
by
How proofs are like cities
Pierre L’Enfant was not a theorist, but a designer, and in particular the designer of the physical layout of Washington D.C.
Today I want to talk about why some proofs are hard to understand, not hard to discover.
What do proofs have to do with cities? At first glance they seem to have little in common, but I believe that understanding proofs and navigating cities have a great deal in common. I think that the reason some proofs can be hard to follow is related to why some cities are hard to navigate.
Washington, as in the US capital, has always been difficult for me to get around, with due respect to L’Enfant. I sometimes feel lost in the capital. Its main streets run in a diagonal pattern which is beautiful, but does not help me form a good mental spatial map. I also almost never drive a car around the city: I walk, take the Metro, or take a taxi. The latter have me let someone else do the navigating for me, which means that I can close my eyes and still get where I want to go. This is very pleasant and easy on me, yet does not make me learn the ins and outs of the streets of D.C.
I think when we read a proof it is like trying to navigate around a city. The layout is critical if we are to be successful in understanding the proof. Also we need to actively interactive with the proof. If you just sit back in a “taxi” and are driven around the proof, say at a talk or lecture, then you are much less likely to really understand the proof.
Let’s turn to discuss some connections between the understanding of cities and proofs. Perhaps these comments will help us both to write better proofs and to help us to better read proofs.
Cities And Proofs
${\bullet}$ Standard Layout: For all its size and immense density of people and vehicles and buildings, Manhattan is relatively easy to navigate around. The streets, to a first approximation, are laid out on a rectangular grid: so 41${^{st}}$ street is north of 33${^{rd}}$. The avenues run north and south and are numbered west to east—okay not exactly but close—for example, Sixth Avenue is also the Avenue of the Americas and Ninth is Columbus Avenue. So figuring out a rough location is pretty straightforward, especially if you stay away from the lower part of Manhattan.
What does this mean for a proof? We should, as much as possible, layout our proofs in a standard “grid” manner. Definitions, lemma’s, and proofs should be in as standard a layout as possible. Sometimes this is hard or even impossible, but proofs with “diagonal” structure are much harder to understand and should be avoided.
Stated another way, with a mixed metaphor: Traffic flow in a proof should be topologically sorted. You might have to refer back to check the source of the next logic step, how Theorem 5 depends on Lemmas 2 and 3, but you should never have to cycle around, where to prove Lemma 3 you need to understand part of the proof of Theorem 5. Another way to say this: the understanding of a lemma should never need to be “deferred” to a later result. When such “deferrence” happens, you have what Jacques Derrida called différance, which might be fine in literary theory, but shouldn’t be in math. This last reference is due to Ken, I defer to him on this, and simply add: keep the structure as simple as possible.
${\bullet}$ Good Guide Books: Any city is easier to navigate if their are well written guide books. Ones that explain how to get around, how to locate major landmarks, and how to navigate.
What does this mean for a proof? We should supply a guidebook with our proofs. If a proof is very short or simple, then a guide may be unnecessary. However, for a proof of any complexity having a good overview that explains where you are, where you are going, and how all fits together is invaluable. Terence Tao is a master at this: even some of his deepest theorems have an overview that at least allows you to get your bearings. Here is an overview from one of his papers.
There are three major ingredients. The first is Szemerédi’s theorem, which asserts that any subset of the integers of positive density contains progressions of arbitrary length. The second, which is the main new ingredient of this paper, is a certain transference principle. This allows us to deduce from Szemerédi’s theorem that any subset of a sufficiently pseudorandom set (or measure) of positive relative density contains progressions of arbitrary length. The third ingredient is a recent result of Goldston and Yildirim, which we reproduce here. Using this, one may place (a large fraction of) the primes inside a pseudorandom set of “almost primes” (or more precisely, a pseudorandom measure concentrated on almost primes) with positive relative density.
${\bullet}$ Districts: Many cities, even huge ones, have districts or neighborhoods. This modular structure is immensely useful in navigating around. Even Manhattan, for example, has neighborhoods: TriBeCa, West Village, Washington Heights to name just three out of dozens.
What does this mean for a proof? We should structure our proofs to have a neighborhood structure. Proofs that are modular often can be read much more easily. The modules can be understood separately and that helps the reader greatly. Often this modular structure is based on the use of powerful abstraction, which used properly can be very reader friendly. Used properly a highly abstract level proof can be quite clean, simple, and very understandable. Or used poorly the abstract level may hide the key issues that are being argued, and make the proof very hard to understand.
$\displaystyle \S$
The next two are things to be avoided.
${\bullet}$ Many side streets: Cities with lots of . dead-end streets, with numbering schemes that jump in unexpected ways, in naming schemes that are strange can be very hard to navigate.
What does this mean for a proof? We should avoid proofs with many side steps and many parts that are unneeded. I believe that side streets in proofs correspond to many cases. The more cases the more complex a proof is, usually. The reason is that we can leave a case out, or argue that case (iii) follows as case (ii), which is false. Lots of cases is a symptom that there is complexity, and that is usually an enemy of understanding.
Cases cannot always be avoided. In some areas of mathematics, for example finite group theory, special cases abound. One is quite likely to see a theorem like this:
Theorem: Every group of even order that does not have ${\dots}$ as a subgroup and has no elements of order ${p^{2}}$ where ${p \in \{3,5,17\}}$ satisfies ${\dots}$.
${\bullet}$ Crazy Structure: Some cities have endearing, but nutty structures. Pittsburgh where CMU is located, is famous for having the property: you may be able to see where you want to get to, but there is no way to get there. The many hills, rivers, and bridges make this happen. You can see where you want to be, but there seems to be no road that leads you there. Of course, there is a way to get there, the way is just hard to discover, and may involve you heading initially away from where you are going. It can be very confusing and makes for difficult navigation.
In Cleveland there was a road that self-intersects—crosses itself—at least when I was an undergraduate there. Really. Many cities also have roads that change names for no obvious reason, or become one-way at certain times of the day. All of this makes getting around a challenge. Atlanta, where I live, has many roads that are named
$\displaystyle \mathrm{Peachtree \ } X$
where ${X}$ is a modifier like: way, street, place, and so on.
What does this mean for a proof? We should be careful to avoid crazy structure. We should use logical names, keep notation from changing for no reason. We must avoid circular reasoning, even reasoning that is not circular but appears to be will be quite hard to follow.
A classic example of this are inductive arguments, which are essential to many areas of mathematics. But an inductive argument can be tricky. There is always the danger that the argument is circular or has some other defects. Be very careful writing them and reading them: be sure the base case is handled properly and that you understand what is being “inducted” on. Some proofs, especially advanced ones, may use a complex measure that is being used to do the induction on. Be sure to follow these carefully.
Open Problems
If you are writing or reading a proof look for the above examples to help you, and understand the parts that make the proof hard. Avoid taking proof “taxis” if you really want to understand a proof.
An open problem: classic proof theory measures complexity by size: longer is harder. But this seems to be a bit naive. Perhaps there is a more reasonable measure of proof complexity. The same applies to much of complexity theory in general: a bigger circuit is more complex, a longer computation is more complex, and so on. Is this the best we can do?
### Like this:
from → People, Proofs
16 Comments leave one →
1. May 6, 2011 9:15 am
I feel like your analogy about topological sorting is probably the best intuition about how we can measure proof complexity in general. Consider a proof as a graph where the the lemmas/theorems are vertices and the references between them are edges. We could refine this even further by considering a directed dependency graph between the actual arguments, rather than just the theorems, however this would be harder to do systematically.
Once we have a proof represented as a graph, we can do some interesting things like considering the traversal represented by the linear arrangement of the proof in text form. How many vertices does the linear arrangement traverse? If we minimize the number of traversed vertices (basically, using a topological sort) then the amount of mental context-switching is reduced (since long stretches of the proof will all be interrelated, and thus the mental “locality of reference” is better). In principle, reducing mental context-switching should make the proof easier to grok, though that’s obviously just a hunch.
2. May 6, 2011 9:49 am
To your “open problem” point: if we could find a “semantic” way to assess the complexity of a proof separate from its “size”, I think we would be most of the way to solving the major complexity class inclusion problems.
3. May 6, 2011 10:29 am
I kept having this problem that I wanted to go top-down or bottom. I tended to see the proof as a tree structure, branching as we get deeper into the details. No doubt, that’s a programmers way of looking at it, but does that make it harder for a mathematician to understand what I am trying to say? How does one unroll that perception?
Paul.
4. Peter Floderus
May 6, 2011 11:26 am
This was one of the, pehaps not best, but at least important posts here. Messy and unnecessarily complex proofs are the bane of mathematics. A city planner that creates a city that is unaccessible and confusing has failed in the same way that a mathematician that creates a proof that is unaccessible and confusing has. Thanks for the post.
• May 6, 2011 2:35 pm
Please let me second Peter Floderus’ opinion that this is a good post on a difficult-but-central topic. One topic that might be mentioned is the crucial importance of well-crafted definitions. Michael Spivak’s well-regarded textbook Calculus on Manifolds (1965) offers this meditation on the role of definitions:
“There are good reasons why theorems should all be easy and the definitions hard … Definitions serve a twofold purpose: they are rigorous replacements for vague notions, and machinery for elegant proofs … Stokes’ Theorem shares three important attributes with many fully evolved major theorems: (1) It is trivial. (2) It is trivial because the terms appearing in it have been properly defined. (3) It has significant consequences.”
The proof that Spivak finally presents (on page 102) of Stokes’ Theorem is short in length (two pages) and it is logically trivial (a reasonably obvious sequence of elementary manipulations) … and yet it has taken Spivak 101 prior pages to define the elements of the proof, and motivate the geometric logic of the proof.
So what might be the computational complexity of Spivak’s proof? Obviously we need … better definitions!
• Allen Knutson
May 7, 2011 7:16 am
I’ve tried to figure out why this is (that the work should be in the definitions rather than the theorems). I think it’s because we hope to have the definitions done, and to now produce lots of theorems, so we want them to be easy.
Here’s a matter of taste: should one define “An X is something that has A, B, and C” and a theorem that says “Actually you only need to check A”, or say “An X is something that has A” and a theorem that says “Xs also have B and C”?
I prefer the first, but I can’t articulate why. I remember seeing the definition “A rectangle is a parallelogram with a right angle” and thinking I much preferred “is a parallelogram with all right angles”.
• May 7, 2011 2:59 pm
Allen, these are deep issues indeed, relating to the interlocking roles of definitions, theorems, and proofs!
For example, Gauss is justly famed for his Theorema Egregium of 1827, which first proved that geometric curvature was intrinsic. Similarly, Riemann is justly famed for his extension of the emph{Theorema Egregium} to manifolds of arbitrary dimension. And yet the geometric definitions essential to the Theorema Egregium clearly state too (decades earlier) in Bowditch’s New American Practical Navigator of 1807 [etext, page 100]. Throughout the 19th century, Bowditch—as Bowditch’s text was eponymously known both then and now—contained the most accurate and carefully validated lunar tables in the world; it is very likely that Gauss (as a surveyor) and Riemann (as Gauss’ student) both knew Bowditch well.
We see that our modern understanding of geometry came to us via an archtypal trajectory: [1] the practical applications, physical intuitions, and essential definitions came first (Bowditch), [2] then come the creative abstraction of fundamental theorems (Gauss), [3] and finally came the refinement of the definitions, the extension to deeper theorems, and the enormous simplification of proof technologies (Riemann and his innumerable modern successors).
Only after this slow succession was complete could pedagogic geniuses (like Spivak) write their great mathematical texts. And so it is not surprising that throughout mathematical history, great texts have generally focused most of their attention not on theorems, and not on proofs, but rather on stating good definitions. This is because (from the Spivakian point-of-view) theorems and proofs are not ends in themselves, but rather, serve mainly to motivate the good definitions that are the true central concern of mathematics.
It follows that students of mathematics, science, and engineering should heed Spivak’s highest-rated Amazon reviewer:
Spivak knows you learned calculus the wrong way and he devotes the first three chapters to setting things right. Along the way he clears all the confusion …
This reviewer’s advice should lead every student to wonder (complexity theory students in particular): “Am I learning my subject the right way? That is, are my textbooks teaching me good definitions? Meaning, definitions that make rigorous-yet-useful theorems trivial to prove?”
With regard to algebra, geometry, and dynamics, at the undergraduate level the Spivakian answer is “Definitely not!” Fortunately, good texts covering algebra, geometry, and dynamics are available beginning at the first-year graduate level (Spivak’s text being one of them). And needless to say, these texts begin by teaching students better definitions, and it is is notable that most of the theorems that students learn at the graduate level are trivial in Spivak’s wholesome sense.
With regard to complexity theory and quantum dynamics (and their beautiful progeny, quantum complexity theory) the news is less cheerful. We have not found obviously good definitions—as we know because useful theorems are not trivial to prove—and so perhaps it is simply impossible (at present) to write good texts about complexity theory and quantum dynamics. And this would explain why students find these topics so baffling!
Therefore, if we could ask just one question of 22nd century complexity theorists and quantum physicists, perhaps we should not ask “What theorems have been proved?” or even “What proof technologies are popular?” but rather we should ask the Spvakian question “What fundamental definitions are associated to the complexity class ${P}$ and to the state-space of quantum dynamics?”
We should all hope that these fundamental definitions will not have their present simple forms (which perhaps are too simple?) but rather these definitions will be incrementally refined throughout the coming decades of the 21st century, to the point that the theorems and proofs associated to problems that today are perceived as both fundamental and exceedingly difficult (like the separation of complexity classes and the simulation of quantum dynamical processes) will become “trivial” in the good Spivakian sense … definitions that are so well-crafted as to carry us most of the way to the understanding we seek.
More broadly, we can hope that future STEM students will grasp that key theorems mostly are trivial and key proofs mostly are simple … that the creativity and vitality of the STEM enterprise thus resides largely in its definitions … and thereby these future students will understand much more than we do now, and be inspired and emboldened to create the global enterprises that a planet of ${10^{10}}$ people so urgently requires.
The preceding amounts to a definition-centric, broadly Spivakian roadmap for the 21st century STEM enterprise, and yet (obviously) it is neither feasible nor desirable that everyone embrace Spivak-style roadmaps. After all, as Mark Twain’s Puddn’head Wilson said:
It were not best that we should all think alike; it is difference of opinion that makes horse-races.
It is these diverse mathematical perspectives that (for me) makes weblogs like Gödel’s Lost Letter and P=NP so thought-provokingly enjoyable.
• May 8, 2011 12:04 pm
Wow, that’s a great comment. Thanks for that.
• May 8, 2011 9:53 pm
Oh boy … a fan post!
Delta, here is another terrific quote (IMHO) relating to definitions, proofs and theorems, from the preface to John Lee’s Introduction to Smooth Manifolds (a text that is known to graduate students as “Smooth Introduction to Manifolds”):
Over the past few centuries, mathematicians have developed a wondrous collection of conceptual machines designed to enable us to peer ever more deeply into the invisible world of geometry in higher dimensions. Once their operation is mastered, these powerful machines enable us to think geometrically about the ${6}$-dimensional zero set of a polynomial in four complex variables, or the ${10}$-dimensional manifold of ${5\times5}$ orthogonal matrices, as easily as we think about the familiar ${2}$-dimensional sphere in ${R^3}$.
The price we pay for this power, however, is that the machines are built out of layer upon layer of abstract structure. Starting with the familiar raw materials of Euclidean spaces, linear algebra, and multivariable calculus, one must progress through topological spaces, smooth atlases, tangent bundles, cotangent bundles, immersed and embedded submanifolds, tensors, Riemannian metrics, differential forms, vector fields, flows, foliations, Lie derivatives, Lie groups, Lie algebras, and more, just to get to the point where one can even think about studying specialized applications of manifold theory such as gauge theory or symplectic topology.
The reader’s main job […] is to absorb all the definitions and learn to think about familiar objects in new ways. It is the bane of this subject that there are so many definitions that must be piled on top of one another before anything interesting can be said, much less proved.
All of this seems very remote from the concerns of practicing engineers, until one appreciates that the very example that Lee cites as mathematically arcane—“the ${6}$-dimensional zero set of a polynomial in four complex variables”—is precisely the state-space of two interacting (unentangled) spin-${1/2}$ particles, which is a topic of immense theoretical interest and practical importance.
So it is natural to ask, “How would classic texts like Slichter’s Principles of Magnetic Resonance and Nielsen and Chuang’s Quantum Computation and Quantum Information read, if their ideas were expressed using Lee’s “smooth” mathematical machinery?” And it is a trivial exercise (in Spivak’s exacting sense of the word) to prepare such a translation.
In the same spirit, with regard to complexity theory, what I would most like to read is a definition-centric book in the same “smooth” spirit as John Lee’s, nominally titled Smooth Introduction to Computational Complexity, that similarly would facilitate a Spivak-style translation of (say) Oded Goldreich’s (terrific!) textbook P, NP, and NP-completeness, into terms that would render the proofs of its complexity-theoretic theorems similarly “trivial” to the theorems and proofs in Lee’s book. And it seems to me that the complexity-theoretic definitions of P, NP, and NP-completeness might well require some adjustments and deepening, for this goal to be achievable (certainly adjustments and deepening were required of algebra, geometry and dynamics, so why not complexity theory too?)
The desire for Spivak-style definitions and Lee-style “smooth” expositions is, I think, not confined to complexity theory and/or quantum information theory, but rather is widespread throughout the STEM community nowadays. As Scott Aaronson recently posted, “My real hope is that we’ll learn something new someday that changes the entire terms of the debate”.
What Michael Spivak’s and John Lee’s texts both teach us—and the history of math and science affirms—is that the process of learning “something new that changes the entire terms of the debate” very often begins with learning (and accepting) new definitions … the theorems and proofs come afterwards.
5. May 6, 2011 2:26 pm
The Tao summary is really a marvelous piece of writing.
• Allen Knutson
May 7, 2011 11:29 am
I’d very much like to point out that Tao won the Levi Conant prize for mathematical writing (in the Notices of the AMS).
6. May 6, 2011 4:12 pm
Some cities have radial-circle planning, for example, Moscow. Is it crazy structure? Perhaps, not. The radial-circle structure may be useful when you have many independent modules and one central module that linked with others.
7. May 6, 2011 4:44 pm
Thank you, Dick, for this guide to navigating and planning proofs. Hopefully it will make us all better proof-writers.
I think the part about having a good guide-book has an important counter-point, which I encountered today (among many other times): a relatively simple proof should often be presented with little introduction, just as we need no guidebook for merely walking down the street. While writing today, I realized I was trying explain the subtleties of a proof in advance, while simply handling them on arrival is much more clear.
Imagine if, walking down the sidewalk, you encountered a sign declaring: “Caution: this sidewalk will veer slightly to the right, almost two-thirds into a minor incline, but not before you pass the largest pebble on the route.” There’s a level where advanced warning is just confusing.
8. May 8, 2011 12:10 am
I like your post as usual, particularly
“I think when we read a proof it is like trying to navigate around a city. The layout is critical if we are to be successful in understanding the proof. Also we need to actively interactive with the proof. If you just sit back in a “taxi” and are driven around the proof, say at a talk or lecture, then you are much less likely to really understand the proof. ”
It is indeed true that if the proof structure is look like a “rectangular grid city” it would be enough to navigate the roads of the city by a two tuple (x,y) where x is the street number and y is the direction (north, south, east, west). But there is another way of proof structure (particularly in algorithmic proofs) which is independent from the structural complexity of the map of city (represent road map of the proof). We embed our navigational path (search structure) onto the streets of the existing city (grid, diagonal, triangular etc.) and find our way from, say point A to point B by only using the navigational path. For example the navigational path chosen in the new proof of the four color theorem is a “spiral path” within the maximal planar graphs (the cities). In this way we would not go from one place to the other by using the shortest path (easiest proof structure) but always select the spiral segment from point A and point B (longer but safe and guaranteed path). Furthermore by using spiral path we can only investigate small number of cases that might create trouble to our proof. Further views have been given in:
1. VISUALIZATION OF THE FOUR COLOR THEOREM
http://neu-tr.academia.edu/IbrahimCahit/Papers/423920/VISUALIZATION_OF_THE_FOUR_COLOR_THEOREM
2. On the Algorithmic Proofs of the Four Color Theorem
http://neu-tr.academia.edu/IbrahimCahit/Papers/539878/On_the_Algorithmic_Proofs_of_the_Four_Color_Theorem
• June 4, 2011 3:59 pm
Second paper has been cited in the recent book “The Proof Is in the Pudding: The Changing Nature of Mathematical Proof” by Steven G. Krantz:
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 23, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9479038119316101, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/105951/finite-dimensional-mountain-pass-lemma/105963
|
## Finite dimensional “Mountain Pass Lemma”
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Question Does anyone know of a good reference which I can cite for the finite dimensional version of Mountain Pass Lemma?
Motivation I am writing a paper and found myself using the following result:
Let $f$ be a proper smooth real-valued function on $\mathbf{R}^3$ such that $f(0) = 0$, $f|_{B_1(0)} \geq 0$, $f|_{\partial B_1(0)} \geq 1$ and $\exists p \in {\partial B_2(0)}$ such that $f(p) = 0$. Then $\exists q\in \mathbf{R}^3 \setminus B_1(0)$ such that $f'(q) = 0$ and $f(q) \geq 1$.
For the time being I referred to Ambrosetti and Rabinowitz's JFA article for the mountain pass lemma, but citing a Banach space version for a finite-dimensional Euclidean space application gives me a funny feeling. (Also, if feels like such a result could in principle be found in not-so-advanced undergraduate textbooks...)
-
1
I colleague in grad school some years ago had exactly the same difficulty. Alas, he also used Ambrosetti and Rabinowitz. – Marc Chamberland Aug 30 at 15:14
2
Only somewhat related: Mike Usher has an interesting article about a converse of this (in finite dimensions) arxiv.org/abs/1207.0889 – Sam Lisi Aug 31 at 7:07
## 4 Answers
My book An Invitation to Morse Theory, 2nd Edition, Springer Verlag 2011 describes the finite dimensional Mountain Pass Lemma in Example 2.53. There I work on a compact manifold, but the compactness of the manifold can be substituted by a properness assumption on the function. In the same section I explain a more general Min-Max principle (Thm. 2.51) and in Example 2.53 I explain how this implies the Mountain Pass Lemma.
-
@Liviu: a small technical question. In your book you assume that $f$ is $C^\infty$, do you have any idea about lower regularity? The classical MPT is of course proven for $C^1$. But in your proof of the deformation lemma you use ODE existence to construct the flow, which usually uses Picard (I'm not sure if Peano would be able to give you that the flow generates a homeo/diffeomorphism) and requires $C^2$ or $C^{1,1}$ in your function. – Willie Wong Aug 31 at 14:20
As you correctly pointed out $C^{1,1}$ suffices. All one needs is that the gradient vector field be locally Lipschitz to invoke existence results. – Liviu Nicolaescu Aug 31 at 14:32
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
For historical interest: A friend pointed me to the book
• Youssef Jabri, The Mountain Pass Theorem: Variants, Generalizations and Some Applications, CUP
which asserts that one of the earliest known published version of the finite dimensional mountain pass theorem was due to
• Richard Courant, Dirichlet's Principle, Conformal Mapping, and Minimal Surfaces, Interscience
published originally in 1950. The version stated and proven by Courant does not, technically speaking, imply the result I stated in the question text (the points $0$ and $p$ are assumed to be local minima of the function $f$). But a simple modification of the deformation lemma (for example, as in Liviu's book that he mentioned) would do.
-
This seems to be in L Evans's PDE book, section 8.5
-
I have stumbled across Richard Palais' (co-author Chuu-lian Terng) Critical Point Theory and Submanifold Geometry (Springer Lecture Notes in Math 1353). This is an awesome book!
The "Mountain Pass Lemma" for finite-dimensional manifolds is presented as Theorem 9.2.7 (pg189).
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9058512449264526, "perplexity_flag": "middle"}
|
http://en.wikipedia.org/wiki/Trilateration
|
Trilateration
Figure 1. The plane z = 0, showing the three sphere centers, P1, P2, and P3; their x,y-coordinates; and the three sphere radii, r1, r2, and r3. The two intersections of the three sphere surfaces are directly in front and directly behind the point designated intersections in the z = 0 plane.
In geometry, trilateration is the process of determining absolute or relative locations of points by measurement of distances, using the geometry of circles, spheres or triangles.[1][2][3][4] In addition to its interest as a geometric problem, trilateration does have practical applications in surveying and navigation, including global positioning systems (GPS). In contrast to triangulation it does not involve the measurement of angles.
In two-dimensional geometry, it is known that if a point lies on two curves such as the boundaries of two circles then the circle centers and the two radii provide sufficient information to narrow the possible locations down to two. Additional information may narrow the possibilities down to one unique location.
In three-dimensional geometry, when it is known that a point lies on three surfaces such as the surfaces of three spheres then the centers of the three spheres along with their radii provide sufficient information to narrow the possible locations down to no more than two. If it is known that the point lies on the surface of a fourth sphere then knowledge of this sphere's center along with its radius is sufficient to determine the one unique location.
This article describes a method for determining the intersections of three sphere surfaces given the centers and radii of the three spheres.
Derivation
The intersections of the surfaces of three spheres is found by formulating the equations for the three sphere surfaces and then solving the three equations for the three unknowns, x, y, and z. To simplify the calculations, the equations are formulated so that the centers of the spheres are on the z = 0 plane. Also the formulation is such that one center is at the origin, and one other is on the x-axis. It is possible to formulate the equations in this manner since any three non-colinear points lie on a unique plane. After finding the solution it can be transformed back to the original three dimensional Cartesian coordinate system.
$r_1^2=x^2+y^2+z^2 \,$
$r_2^2=(x-d)^2+y^2+z^2 \,$
$r_3^2=(x-i)^2+(y-j)^2+z^2 \,$
We need to find a point located at (x, y, z) that satisfies all three equations.
First we subtract the second equation from the first and solve for x:
$x=\frac{r_1^2-r_2^2+d^2}{2d}.$
We assume that the first two spheres intersect in more than one point, that is that
$d-r_1 < r_2 < d+r_1. \,$
In this case substituting the equation for x back into the equation for the first sphere produces the equation for a circle, the solution to the intersection of the first two spheres:
$y^2+z^2=r_1^2-\frac{(r_1^2-r_2^2+d^2)^2}{4d^2}.$
Substituting $z^2=r_1^2-x^2-y^2$ into the formula for the third sphere and solving for y there results:
$y=\frac{r_1^2-r_3^2-x^2+(x-i)^2+j^2}{2j}=\frac{r_1^2-r_3^2+i^2+j^2}{2j}-\frac{i}{j}x.$
Now that we have the x- and y-coordinates of the solution point, we can simply rearrange the formula for the first sphere to find the z-coordinate:
$z=\pm \sqrt{r_1^2-x^2-y^2}.$
Now we have the solution to all three points x, y and z. Because z is expressed as the positive or negative square root, it is possible for there to be zero, one or two solutions to the problem.
This last part can be visualized as taking the circle found from intersecting the first and second sphere and intersecting that with the third sphere. If that circle falls entirely outside or inside of the sphere, z is equal to the square root of a negative number: no real solution exists. If that circle touches the sphere on exactly one point, z is equal to zero. If that circle touches the surface of the sphere at two points, then z is equal to plus or minus the square root of a positive number.
Preliminary and final computations
The Derivation section pointed out that the coordinate system in which the sphere centers are designated must be such that (1) all three centers are in the plane z = 0, (2) the sphere center, P1, is at the origin, and (3) the sphere center, P2, is on the x-axis. In general the problem will not be given in a form such that these requirements are met.
This problem can be overcome as described below where the points, P1, P2, and P3 are treated as vectors from the origin where indicated. P1, P2, and P3 are of course expressed in the original coordinate system.
$\hat e_x = \frac{ P2 - P1 }{ \| P2 - P1 \| }$ is the unit vector in the direction from P1 to P2.
$i = \hat e_x \cdot ( P3 - P1 )$ is the signed magnitude of the x component, in the figure 1 coordinate system, of the vector from P1 to P3.
$\hat e_y = \frac{ P3 - P1 - i \; \hat e_x}{ \| P3 - P1 - i \; \hat e_x \| }$ is the unit vector in the y direction. Note that the points P1, P2, and P3 are all in the z = 0 plane of the figure 1 coordinate system.
The third basis unit vector is $\hat e_z = \hat e_x \times \hat e_y$. Therefore,
$d = \| P2 - P1 \|$ the distance between the centers P1 and P2 and
$j = \hat e_y \cdot ( P3 - P1 )$ is the signed magnitude of the y component, in the figure 1 coordinate system, of the vector from P1 to P3.
Using $i, \; d$ and $j$ as computed above, solve for x, y and z as described in the Derivation section. Then
$\vec p_{1,2} = P1 + x \ \hat e_x + y \ \hat e_y \ \pm \ z \ \hat e_z$
gives the points in the original coordinate system since $\hat e_x, \; \hat e_y$ and $\hat e_z$, the basis unit vectors, are expressed in the original coordinate system.
See also
• Euclidean distance
• Multilateration – position estimation using measurements of time difference of arrival at (or from) three or more sites.
• Resection (orientation)
• Triangulation
• Global positioning system
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 20, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9248148798942566, "perplexity_flag": "head"}
|
http://www.math.uah.edu/stat/urn/Secretary.html
|
$$\newcommand{\P}{\mathbb{P}}$$ $$\newcommand{\E}{\mathbb{E}}$$ $$\newcommand{\R}{\mathbb{R}}$$ $$\newcommand{\N}{\mathbb{N}}$$ $$\newcommand{\bs}{\boldsymbol}$$ $$\newcommand{\var}{\text{var}}$$ $$\newcommand{\cov}{\text{cov}}$$ $$\newcommand{\cor}{\text{cor}}$$
## 9. The Secretary Problem
In this section we will study a nice problem known variously as the secretary problem or the marriage problem. It is simple to state and not difficult to solve, but the solution is interesting and a bit surprising. Also, the problem serves as a nice introduction to the general area of statistical decision making.
#### Statement of the Problem
As always, we must start with a clear statement of the problem. We have $$n$$ candidates (perhaps applicants for a job or possible marriage partners). Here are our assumptions:
• The candidates are totally ordered from best to worst with no ties.
• The candidates arrive sequentially in random order.
• We can only determine the relative ranks of the candidates as they arrive. We cannot observe the absolute ranks.
• Our goal is choose the very best candidate; no one less will do. The second best candidate is of no more value to us than the worst candidate.
• Once a candidate is rejected, she is gone forever and cannot be recalled.
• The number of candidates $$n$$ is known to us.
The assumptions, of course, are not entirely reasonable in real applications. The last assumption, for example, that $$n$$ is known, is more appropriate for the secretary interpretation than for the marriage interpretation.
What is an optimal strategy? What is the probability of success with this strategy? What happens to the strategy and the probability of success as $$n$$ increases? In particular, when $$n$$ is large, is there any reasonable hope of finding the best candidate?
#### Strategies
Play the secretary game several times with $$n = 10$$ candidates. See if you can find a good strategy just by trial and error.
After playing the secretary game a few times, it should be clear that the only reasonable type of strategy is to let a certain number $$k - 1$$ of the candidates go by, and then select the first candidate we see who is better than all of the previous candidates (if she exists). If she does not exist (that is, if no candidate better than all previous candidates appears), we will agree to accept the last candidate, even though this means failure. The parameter $$k$$ must be between 1 and $$n$$; if $$k = 1$$, we select the first candidate; if $$k = n$$, we select the last candidate; for any other value of $$k$$, the selected candidate is random, distributed on $$\{k, k + 1, \ldots, n\}$$. We will refer to this let $$k$$ go by strategy as strategy $$k$$.
Thus, we need to compute the probability of success $$p_n(k)$$ using strategy $$k$$ with $$n$$ candidates. Then we can maximize the probability over $$k$$ to find the optimal strategy, and then take the limit over $$n$$ to study the asymptotic behavior.
#### Analysis
First, let's do some basic computations.
For the case $$n = 3$$, list the 6 permutations of $$\{1, 2, 3\}$$ and verify the probabilities in the table below. Note that $$k = 2$$ is optimal.
| | $$k$$ | $$p_3(k)$$ |
|-----------------|-----------------|-----------------|
| 1 | 2 | 3 |
| $$\frac{2}{6}$$ | $$\frac{3}{6}$$ | $$\frac{2}{6}$$ |
In the secretary experiment, set the number of candidates to $$n = 3$$. Run the experiment 1000 times with each strategy $$k \in \{1, 2, 3\}$$
For the case $$n = 4$$, list the 24 permutations of $$\{1, 2, 3, 4\}$$ and verify the probabilities in the table below. Note that $$k = 2$$ is optimal.
| | | $$k$$ | $$p_4(k)$$ |
|------------------|-------------------|-------------------|------------------|
| 1 | 2 | 3 | 4 |
| $$\frac{6}{24}$$ | $$\frac{11}{24}$$ | $$\frac{10}{24}$$ | $$\frac{6}{24}$$ |
In the secretary experiment, set the number of candidates to $$n = 4$$. Run the experiment 1000 times with each strategy $$k \in \{1, 2, 3, 4\}$$
For the case $$n = 5$$, list the 120 permutations of $$\{1, 2, 3, 4, 5\}$$ and verify the probabilities in the table below. Note that $$k = 3$$ is optimal.
| | | | $$k$$ | $$p_5(k)$$ |
|--------------------|--------------------|--------------------|--------------------|--------------------|
| 1 | 2 | 3 | 4 | 5 |
| $$\frac{24}{120}$$ | $$\frac{50}{120}$$ | $$\frac{52}{120}$$ | $$\frac{42}{120}$$ | $$\frac{24}{120}$$ |
In the secretary experiment, set the number of candidates to $$n = 5$$. Run the experiment 1000 times with each strategy $$k \in \{1, 2, 3, 4, 5\}$$
Well, clearly we don't want to keep doing this. Let's see if we can find a general analysis. With $$n$$ candidates, let $$X_n$$ denote the number (arrival order) of the best candidate, and let $$S_{n,k}$$ denote the event of success for strategy $$k$$ (we select the best candidate).
$$X_n$$ is uniformly distributed on $$\{1, 2, \ldots, n\}$$.
Proof:
This follows since the candidates arrive in random order.
Next we will compute the conditional probability of success given the arrival order of the best candidate.
For $$n \in \N_+$$ and $$k \in \{2, 3, \ldots, n\}$$,
$\P(S_{n,k} \mid X_n = j) = \begin{cases} 0, & j \in \{1, 2, \ldots, k-1\} \\ \frac{k-1}{j-1}, & j \in \{k, k + 1, \ldots, n\} \end{cases}$
Proof:
For the first case, note that if the arrival number of the best candidate is $$j \lt k$$, then strategy $$k$$ will certainly fail. For the second cases, note that if the arrival order of the best candidate is $$j \ge k$$, then strategy $$k$$ will succeed if and only if one of the first $$k - 1$$ candidates (the ones that are automatically rejected) is the best among the first $$j - 1$$
The two cases are illustrated below. The large dot indicates the best candidate. Red dots indicate candidates that are rejected out of hand, while blue dots indicate candidates that are considered.
Now we can compute the probability of success with strategy $$k$$.
For $$n \in \N_+$$
$p_n(k) = \P(S_{n,k}) = \begin{cases} \frac{1}{n}, & k = 1 \\ \frac{k - 1}{n} \sum_{j=k}^n \frac{1}{j - 1}, & k \in \{2, 3, \ldots, n\} \end{cases}$
Proof:
When $$k = 1$$ we simply select the first candidate. This candidate will be the best one with probability $$1 / n$$. The result for $$k \in \{2, 3, \ldots, n\}$$ follows from Exercises 8 and 9, by conditioning on $$X_n$$.
$\P(S_{n,k}) = \sum_{j=1}^n \P(X_n = j) \P(S_{n,k} \mid X_n = j) = \sum_{j=k}^n \frac{1}{n} \frac{k - 1}{j - 1}$
Values of the function $$p_n$$ can be computed by hand for small $$n$$ and by a computer algebra system for moderate $$n$$. The graph of $$p_{100}$$ is shown below. Note the concave downward shape of the graph and the optimal value of $$k$$, which turns out to be 38. The optimal probability is about 0.37104.
The optimal strategy $$k_n$$ that maximizes $$k \mapsto p_n(k)$$, the ratio $$k_n / n$$, and the optimal probability $$p_n(k_n)$$ of finding the best candidate, as functions of $$n \in \{3, 4, \dots, 20\}$$ are given in the following table:
Candidates $$n$$ Optimal strategy $$k_n$$ Ratio $$k_n / n$$ Optimal probability $$p_n(k_n)$$
3 2 0.6667 0.5000
4 2 0.5000 0.4853
5 3 0.6000 0.4333
6 3 0.5000 0.4278
7 3 0.4286 0.4143
8 4 0.5000 0.4098
9 4 0.4444 0.4060
10 4 0.4000 0.3987
11 5 0.4545 0.3984
12 5 0.4167 0.3955
13 6 0.4615 0.3923
14 6 0.4286 0.3917
15 6 0.4000 0.3894
16 7 0.4375 0.3881
17 7 0.4118 0.3873
18 7 0.3889 0.3854
19 8 0.4211 0.3850
20 8 0.4000 0.3842
Apparently, as we might expect, the optimal strategy $$k_n$$ increases and the optimal probability $$p_n(k_n)$$ decreases as $$n \to \infty$$. On the other hand, it's encouraging, and a bit surprising, that the optimal probability does not appear to be decreasing to 0. It's perhaps least clear what's going on with the ratio. Graphical displays of some of the information in the table may help:
Could it be that the ratio $$k_n / n$$ and the probability $$p_n(k_n)$$ are both converging, and moreover, are converging to the same number? First let's try to establish rigorously some of the trends observed in the table.
The success probability $$p_n$$ satisfies
$p_n(k - 1) \lt p_n(k) \text{ if and only if } \sum_{j=k}^n \frac{1}{j-1} \gt 1$
It follows that for each $$n \in \N_+$$, the function $$p_n$$ at first increases and then decreases. The maximum value of $$p_n$$ occurs at the largest $$k$$ with $$\sum_{j=k}^n \frac{1}{j - 1} \gt 1$$. This is the optimal strategy with $$n$$ candidates, which we have denoted by $$k_n$$.
As $$n$$ increases, $$k_n$$ increases and the optimal probability $$p_n(k_n)$$ decreases.
#### Asymptotic Analysis
We are naturally interested in the asymptotic behavior of the function $$p_n$$, and the optimal strategy as $$n \to \infty$$. The key is recognizing $$p_n$$ as a Riemann sum for a simple integral. (Riemann sums, of course, are named for Georg Riemann.)
If $$k(n)$$ depends on $$n$$ and $$k(n) / n \to x \in (0, 1)$$ as $$n \to \infty$$ then $$p_n[k(n)] \to -x \, \ln(x)$$as $$n \to \infty$$.
Proof:
We give an argument that is not completely rigorous, but captures the general ideas. First note that
$p_n(k) = \frac{k-1}{n} \sum_{j=k}^n \frac{1}{n} \frac{n}{j-1}$
We recognize the sum above as the left Riemann sum for the the function $$f(t) = \frac{1}{t}$$ corresponding to the partition of the interval $$\left[\frac{k-1}{n}, 1\right]$$ into $$(n - k) + 1$$ subintervals of length $$\frac{1}{n}$$ each: $$\left(\frac{k-1}{n}, \frac{k}{n}, \ldots, \frac{n-1}{n}, 1\right)$$. It follows that
$p_n(k) \approx -\frac{k-1}{n} \ln\left(\frac{k-1}{n}\right)$
If $$k / n \to x \in (0, 1)$$ as $$n \to \infty$$ then the expression on the right converges to $$-x \, \ln(x)$$ as $$n \to \infty$$.
The graph below shows the true probabilities $$p_n(k)$$ and the limiting values $$-\frac{k}{n} \, \ln\left(\frac{k}{n}\right)$$ as a function of $$k$$ with $$n = 100$$.
For the optimal strategy $$k_n$$, there exists $$x_0 \in (0, 1)$$ such that $$k_n / n \to x_0$$ as $$n \to \infty$$. Thus, $$x_0 \in (0, 1)$$ is the limiting proportion of the candidates that we reject out of hand. Moreover, $$x_0$$ maximizes $$x \mapsto -x \, \ln(x)$$ on $$(0, 1)$$.
The maximum value of $$-x \ln(x)$$ occurs at $$x_0 = 1 / e$$ and the maximum value is also $$1 / e$$.
Thus, the magic number $$1 / e \approx 0.37104$$ occurs twice in the problem. For large $$n$$:
• Our approximate optimal strategy is to reject out of hand the first 37% of the candidates and then select the first candidate (if she appears) that is better than all of the previous candidates.
• Our probability of finding the best candidate is about 0.37.
The article "Who Solved the Secretary Problem?" by Tom Ferguson has an interesting historical discussion of the problem, including speculation that Johannes Kepler may have used the optimal strategy to choose his second wife. The article also discusses several generalizations of the problem.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 146, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9125377535820007, "perplexity_flag": "head"}
|
http://mathoverflow.net/revisions/89601/list
|
## Return to Answer
2 Added details. ; added 1 characters in body
As long as connected groups of isometries are concerned, Grassmann manifolds are symmetric spaces, so the identity component of its isometry group is $G$ in its symmetric presentation $G/H$ ($G$ connected) as a homogeneous space, namely, $SO(n)$ for $n$ odd and $SO(n)/\mathbf Z_2$ for $n$ even in the real case, and $PU(n)=SU(n)/\mathbf Z_n$ in the complex case. (Note that $U(n)$ acts on the left on the Grassmannian with a $U(1)$-kernel (its center), so the effectivized group is the projectivization $PU(n)$. Moreover the center of $U(n)$ meets $SU(n)\subset U(n)$ along its center, which consists of $\omega I$ where $\omega$ is an $n$-th root of unit.)
Further, Cartan described the full isometry groups of symmetric spaces, and an explicit result is easy to figure out in the case of Grassmann manifolds. I do not remember now, but you can find Cartan's description in the book of O. Loos on symmetric spaces, the second volume. I tend to agree with Ryan when he writes that in the case of Grassmann manifolds, the full isometry group should be $G\times N_G(H)$.
About Stiefel manifolds: with the metric you describe, they are normal homogeneous spaces $G/H$, i. e. have the metric induced from a bi-invariant Riemannian metric on $G$. There is a recent paper by S. Reggiani with a very effective way of computing the identity component of isometry groups of normal homogeneous spaces in here.
Since Stiefel manifolds fiber over Grassmann manifolds,
Added: I think it shouldn't be very hard to use this fiber bundle to figure out their looked up Loos, "Symmetric spaces, II", Theorem 4.4 and the ensuing Table 10 on page 156 for the full isometry group of the real and complex Grassmannians. If I understand correctly, indeed in the case of complex Grassmannians
$SU(n)/S(U(p)\times U(n-p))$, every isometry comes from left multiplication by elements from $SU(n)$ except for two cases: an isometry induced by complex conjugation; and mapping a $p$-plane to its orthogonal complement in case $n=2p\geq4$. In the case of real unoriented Grassmannians $SO(n)/S(O(p)\times O(n-p))$, every isometry comes from left multiplication by an element of $O(n)$ except for: mapping a $p$-plane to its orthogonal complement in case $n=2p\geq4$; the symmetric group $S_3$ in case $n=2p=8$, coming from outer automorphisms of $\mathfrak{so}(8)$.
1
As long as connected groups of isometries are concerned, Grassmann manifolds are symmetric spaces, so the identity component of its isometry group is $G$ in its symmetric presentation $G/H$ ($G$ connected) as a homogeneous space, namely, $SO(n)$ in the real case and $PU(n)=SU(n)/\mathbf Z_n$ in the complex case. (Note that $U(n)$ acts on the left on the Grassmannian with a $U(1)$-kernel (its center), so the effectivized group is the projectivization $PU(n)$. Moreover the center of $U(n)$ meets $SU(n)\subset U(n)$ along its center, which consists of $\omega I$ where $\omega$ is an $n$-th root of unit.)
Further, Cartan described the full isometry groups of symmetric spaces, and an explicit result is easy to figure out in the case of Grassmann manifolds. I do not remember now, but you can find Cartan's description in the book of O. Loos on symmetric spaces, the second volume. I tend to agree with Ryan when he writes that in the case of Grassmann manifolds, the full isometry group should be $G\times N_G(H)$.
About Stiefel manifolds: with the metric you describe, they are normal homogeneous spaces $G/H$, i. e. have the metric induced from a bi-invariant Riemannian metric on $G$. There is a recent paper by S. Reggiani with a very effective way of computing the identity component of isometry groups of normal homogeneous spaces in here.
Since Stiefel manifolds fiber over Grassmann manifolds, I think it shouldn't be very hard to use this fiber bundle to figure out their full isometry group.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9304521679878235, "perplexity_flag": "head"}
|
http://scicomp.stackexchange.com/questions/4708/approximating-and-visualizing-basins-of-attraction
|
Approximating and visualizing basins of attraction
I am working on estimating the position and orientation (pose) of a model (rigid object) from its silhouette in an image. For this, I have constructed an error measure between the model in its pose and the silhouette, which looks roughly like:
$$\epsilon ( \bar{x} ) = \sum_{\forall i} \| f(\bar{x}, m_i) - s_i \|^2$$
where $\bar{x}$ is a six-dimensional vector describing the 3D translation and rotations as
$$f( \bar{x}, p ) = R_{\bar{x}} \cdot p + t_{\bar{x}}$$
Ordinarily, this could be nonlinear least squares, however there is a catch: An assignment needs to be made between model-points $m_i$ and silhouette points $c_i$, which complicates the evaluation of the error measure.
I am approaching the problem as a general nonlinear optimization problem. I already know that this error measure is continous, but not continously differentiable due to the aforementioned assignment. I do have gradient information however, but this does not take the assignment into account and therefore is not completely accurate.
The question: Is there a method which can calculate/approximate and visualize the basins of attractions in this six-dimensional space?
If this is absolutely not feasible, is there a method which can calculate/approximate the number of local minima within a "bounded" region?
-
2 Answers
Visualizing 6 dimensional domains is simply not easy. Unless of course, your uber-dimensional monitor is back form the repairman. Getting parts from the future is never a quick thing to do however, so mine languishes in a back room with my busted Holodeck.
Kidding aside, visualizing a basin in 6-d really is not easy. Even computing the limits of a basin of attraction will be difficult. The curse of dimensionality hounds you.
Ok, even in a lower number of dimensions, identifying the boundaries of such a basin requires solving MANY optimization problems. After all, a basin of attraction need not be a convex set. It need not be connected. And, since an optimizer, starting from distinct starting values will yield results that are still distinct, you must now do some clustering, testing that the multiple solutions truly are the same.
There are other issues of course. Suppose I ask to minimize the function (x-y)^2 in the (x,y) plane? Clearly any point on the line y=x is a solution, and all are equally good. But clustering will have problems here, as they will on any such degeneracies, and identifying degeneracies in 6-d is not always trivial.
Finally, you ask about identifying the NUMBER of local minimizers in any bounded region. The is too is quite difficult for a general black box problem. The field of global optimization has been working on problems like this for many years, though I don't think they can give you any hard, easy to compute answers in general.
-
You could use a nonsmooth solver such as solvopt or ralg, run it with many random starting values and limited number of function evaluations, and make a cluster of the resulting approximate solutions.
This will give you an idea of the basins of attraction. Since randomness is involved, there are no guarantees, but I do not think that rigorous methods (branch and bound) would be efficient in your context.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9465574622154236, "perplexity_flag": "middle"}
|
http://quant.stackexchange.com/questions/1707/how-do-i-find-the-most-diversified-portfolio-or-least-correlated-subset-of-sto?answertab=votes
|
# How do I find the most diversified portfolio, or least correlated subset, of stocks?
I have a trading system that chooses top 10 stocks in Nasdaq 100 ranked on relative strength and some other factors. However, I'd like to take positions in only 5 of these 10 stocks based on how minimally correlated these are to others for diversification effect. How do I resolve this? I do have the correlation/covariance matrices computed. Literature seems to indicate applying weights to reduce correlations but I felt there should be a simpler solution. That said, the stocks doesn't need to be equally weighted if it is easier to compute these weights.
A computationally easier solution is preferred even if it is not completely accurate since I need to implement this in Amibroker trading software.
-
Weighing by 1-correlation is already a pretty simple method. Could you elaborate on why that's difficult? – chrisaycock♦ Aug 18 '11 at 2:38
## 2 Answers
One simple method, based on the principles of mean-variance optimization, is to set the weights proportional to the product of the inverse of the covariance matrix and a vector of standard deviations. This implicitly assumes that the normalized expected return of each stock is equal. If you wish, you can take only the top 5 weights and set the others to zero. The actual problem you face, of selecting just 5 stocks, can be solved rigorously with an optimizer, but since it is not a quadratic program, may be difficult to solve.
Update
A more sophisticated but very interesting additional possibility is to find the "Maximum Diversification Portfolio (MDP)", as defined in Toward Maximum Diversification (free version, hat tip vonjd). The MDP is defined as the portfolio that maximizes the Diversification Ratio (DR), which in turn is defined as the ratio of the portfolio’s weighted average volatility to its overall volatility. A follow-up paper investigates the properties of this portfolio. From the paper:
This measure [DR] embodies the very nature of diversification whereby the volatility of a long-only portfolio of assets is less than or equal to the weighted sum of the assets' volatilities. As such, the DR of a long-only portfolio is greater than or equal to one, and equals unity for a single asset portfolio. Consider for example an equal-weighted portfolio of two independent assets with the same volatility: its DR is equal to $\sqrt{2}$, and to $\sqrt{N}$ for $N$ independent assets.
$DR(\bf{w})=\frac {\sum_i{\it{w}_i\sigma_i}} {\sigma(\bf{w})}$
-
You could probably use DEOptim() R package to solve this complex objective function. Along the lines suggested, I would add a constraint for the max number of assets. Also, you can include in the objective function a vector corresponding to the expected returns of each stock. Since you are optimizing only over 10 stocks the algorithm would converge rapidly. – Quant Guy Aug 24 '11 at 22:43
@QuantGuy I haven't tried this, but the paper suggests that a long-only MDP will typically have much fewer assets than the total available, so a cardinality constraint may not be necessary here. – Tal Fishman Aug 25 '11 at 1:44
@TalFishman I've looked at replicating TOBAM's results in the past, and you're absolutely correct: an unconstrained MDP portfolio on the R1K universe typically holds 50-200 names. – michaelv2 Mar 21 '12 at 14:32
1
– vonjd May 27 '12 at 17:46
The problem of the selecting the best portfolio (according to some risk measure) with a limited number of assets can be formulated as a mixed integer linear or quadratic program and is reviewed in the recent paper "Portfolio selection problems in practice: a comparison between linear and quadratic optimization models". It can be solved for reasonable sizes by several of the best optimizers like CPLEX or XPRESS. However, in the case of 5 stocks out of 10 there are only 252 possible possible different subsets (namely 10 choose 5) and they could be all exaustively explored with repect to the risk measure of preference by any personal computer.
-
Hi Fabio, welcome to quant.SE and thanks for contributing the link to your paper. I hope you will stay to contribute to the site more broadly. – Tal Fishman Sep 28 '11 at 0:24
1
Hi Tal, thanks for the welcome. Indeed, this site looks actractive to me. I work more on the theoretical side, but I would like to share ideas also with people working on the practical side and this site seems to be a good place for it. – Fabio Sep 28 '11 at 9:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9426578879356384, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/51855/energy-in-orbit-of-satellites-around-the-earth-lost/51861
|
# Energy in orbit of satellites around the earth lost?
If the total mechanical energy in a satellite's orbit (assuming circular) is greater when it is closer to the earth, and hence smaller when it is farther from the earth, then we can say that as the moon drifts from the earth, the moon loses energy in translational speed and gravitational potential energy. If only those two are taken into consideration, then there is a net energy loss from the moon.
I had first thought that the energy a satellite has increases as it goes on a larger orbit, but I ran some numbers and it didn't appear so. If I went wrong somewhere, please someone, correct me. Here are my numbers:
For a geostationary satellite (r = 42 164 km, v = 11 068 km/s, m = 1 kg), its total energy is PE + KE. PE = mgh, but g = 0.22416 m/s^2. The result is PE = 9 451.650 kJ, KE = 4 726.582 kJ
For a satellite at r = 45 000 km , m = 1kg, then v = sqrt(GM/r) = 2 976.06 km/s. g at that height is g = 0.19680 The result is PE = 8 856.094 kJ, KE = 4 428.047 kJ
At the larger orbit, both PE and KE are lower than if it was at a lower orbit. Is this right?
Now, the earth slows down its rotation, which allows the moon to go into a larger orbit by conservation of angular momentum. Since the moon goes into a larger orbit, it loses energy. But, since the spin of the earth has slowed down, it also loses energy. Moreover, the moon is still tidally locked with the earth, so its rotational speed isn't increasing.
All in all, there seems to be an energy loss that's going on. How is this being compensated? Is it in the translational speed of the moon (so that the moon is actually moving faster than it should be to maintain a stable orbit)? That seems reasonable - there could be an increase in translational and rotational speed to compensate for the energy loss, maintaining the moon to be tidally locked.
But that's just me. What really happens? How does the energy transfer occur, and are there mathematical equations describing this exchange?
-
## 1 Answer
It appears you made a few mistakes.
The formula $E_P = mgh$ is only an approximation for objects near the ground. The more complete formula is
$E_P = -\frac{\mu m}{r}$
where $\mu = 398600.44$ is Earth's standard gravitational parameter, and $r$ is the distance between the object and the Earth's center of gravity.
Especially note the negative sign; this has to do with the definition of potential energy in the context of orbits. This is where I think you went wrong.
Also, where did you find $V = 11.068$ km/s for a geostationary orbit? That looks more like an escape speed than a normal orbital speed...Indeed, if you look up the altitude for a geostationary orbit you see that it is $35768$ km above the equator. That means the total pathlength traversed by the satellite in one stellar day is
$2\pi \cdot ( 35768+R_E) \approx 264,811$km
making the speed
$264,811 \mathrm{\ km} / 86164 \mathrm{\ seconds} \approx 3.07 \mathrm{\ km/s}$
so much much slower than the ~11 km/s you stated. Lumping all this together:
$E_P^{GEO} \approx -\frac{398600.44}{42164} = -9.45$ kJ/kg
$E_K^{GEO} \approx \frac{3.07^2}{2}$ = 4.71 \$ kJ/kg
$E_{tot}^{GEO} = 4.71-9.45 = -4.74$ kJ/kg
while for the other orbit
$E_P^{alt} \approx -\frac{398600.44}{45000} = -8.86$ kJ/kg
$E_K^{alt} \approx \frac{2.98^2}{2}$ = 4.44 \$ kJ/kg
$E_{tot}^{alt} = 4.44-8.86 = -4.42$ kJ/kg
which is indeed higher than the GEO orbit.
This makes sense -- you need to input a lot more energy to let anything escape from Earth's gravity than, say, an apple falling to the ground (which is also in an "orbit", albeit one far closer to the Earth, and not exactly on an escape trajectory).
If what you say would be true, everything would simply fall up and escape the Earth. There are a few experiments that will show that that is not actually what happens :)
With regard to your statement about the moon: the moon is indeed slowly escaping from the Earth. The mechanism here is that the Moon is gaining orbital speed at the expense of Earth's rotational momentum, through tidal interaction.
Roughly translated: as Earth's rotation slows down, the Moon speeds up, making the Moon progress farther away from the Earth, towards a lower speed.
The total energy in that higher orbit is higher, because the drop in speed is disproportionally small in relation to the gain in potential energy. Eventually, after a few million years of repeating the above, the moon will have gained enough energy to escape the Earth and orbit the Sun on its own.
-
Thank you. The numbers I got were counterintuitive, so I asked. I laughed when you pointed out that things would fall up if what I said was right - I didn't think about that. Also, I think I didn't put the correct units there, too. It should be 11 km/h, but apparently I didn't use the incorrect value for getting the kinetic energy. The KE I got was 4 726.582, which uses v = 3.07 and not v ~ 11. – markovchain Jan 22 at 10:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9568694829940796, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/120044/convergence-of-dirichlet-forms/120045
|
## Convergence of Dirichlet Forms
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
If a sequence of Dirichlet forms convergence to 0, then what about the diffusion processes associated with these Dirichlet forms? Do the finite dimensional distributions of them converges weakly? and what are the limits?
-
What do you mean exactly by "finite dimensional distribution of the diffusion processes" associated with the Dirichlet forms? Perhaps the limit (for $t\to \infty$) of the associated semigroups? – Delio Mugnolo Jan 27 at 19:36
The sequence of Dirichlet forms depend on a parameter, So the convergence of the associated diffusion processes are about this parameter not time $t\rightarrow\infty$. – John Young Jan 27 at 22:00
Yes, this is clear to me. Say, if the forms are $(a_n)_{n\in \mathbb N}$, then I am talking about convergence of the associated semigroups $((e^{-ta_n})_{t\ge 0})_{n\in \mathbb N}$. My question was - and is -: what is the "finite dimensional distribution" of the diffusion process (governed by $(e^{-ta_n})_{t\ge 0}$, say)? – Delio Mugnolo Jan 27 at 23:10
yes, That is what I mean. – John Young Jan 28 at 20:38
## 1 Answer
Kato shows in §VI.3 of his book "Perturbation theory for linear operators" that in particular if a sequence of Dirichlet forms (or, more generally, of bounded closed sesquilinear forms) converges to a limiting form $a$ (in a certain quite natural sense), then the associated operators converge to the operator associated with $a$ in the norm resolvent sense. This in turn implies norm convergence of the associated semigroups, e.g. because of the representation of the semigroups via the backward Euler scheme applied to the resolvents. I guess this answers your question about the diffusion processes associated with the forms.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9082462787628174, "perplexity_flag": "head"}
|
http://mathhelpforum.com/algebra/136718-use-quadractic-regression-equation-predict-population.html
|
# Thread:
1. ## Use a quadractic regression equation to predict population.
I was given the population of Las Vegas for every 10 years starting at 1900 and ending in 1990. From these 10 data points I created a table and Graph with my Ti-83.
I found the quadratic regression equation. P =population t=time since 1900
P(t)=53.4804t^2-2217.5496t+12820.6454
I also found the x intercepts and the vertex.
X=34.5202
X=6.9445
The vertex
X=20.7324
Y=12820.6454
Now this is the part where I am stuck. My teacher wants me use to use the above information to determine when will (was) the population of Las Vegas be 1 million people, when was (will) the population be zero? He also said some answers might not be possible with my data(Las Vegas pop)
So I guess that I need to substitute 1 million into P of my equation and solve for t. Is this right ? If so how would I go about that? or is it something else?
I appreciate any help and let me know if you need to me to clarify anything...I'm new.
2. Originally Posted by ghostbuster
So I guess that I need to substitute 1 million into P of my equation and solve for t. Is this right ? If so how would I go about that? or is it something else?
Yep, solve as a normal quadratic $t = \frac{-b\pm\sqrt{b^2-4ac}}{2a}$
Originally Posted by ghostbuster
He also said some answers might not be possible with my data(Las Vegas pop)
This will be for $P,t <0$
3. Yep, solve as a normal quadratic
Could you give me a little more info as how to do that Where would the 1,000,000 go?
4. $P(t)=53.4804t^2-2217.5496t+12820.6454$
$1000000=53.4804t^2-2217.5496t+12820.6454$
$1000000=53.4804t^2-2218t+12821$
$0=53t^2-2218t-987179$
Now $a = 53, b= 2218 , c= -987179$
5. So I put these into
and this will give the year that the population will be at 1 million?
Thanks for the help....I've haven't been in a math class for a couple of years.
6. Originally Posted by ghostbuster
So I put these into
and this will give the year that the population will be at 1 million?
yep....
7. when was (will) the population be zero?
]
is this possible?...do I just do the same thing but put in a zero instead of 1 million?
8. Originally Posted by ghostbuster
]
is this possible?...do I just do the same thing but put in a zero instead of 1 million?
Could be 0 if gambling became prohibited!
Otherwise yep put 0 instead of 1 million.
Note that if $b^2-4ac < 0$ there will be no solutions.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9259374737739563, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/38266/spectral-galerkin-method-for-a-semi-linear-parabolic-pde
|
## Spectral Galerkin method for a semi-linear parabolic PDE
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
I'm trying to understand how to apply the Galerkin method to $u_t - \Delta u = u^3$. I understand how to obtain all of the a-priori estimates using Sobolev embeddings and such but my question concerns the actual discretization procedure where we project onto the finite dimensional subspace spanned by the eigenfunctions of $-\Delta$.
In the linear case we may simply set $u_N = \sum \phi_n c_n$ and plug this in to obtain a set of $N$ O.D.Es which we then show satisfy the same energy bounds. In the non-linear case though we may not just substitute directly because of the $u^3$ term. How can this be dealt with? Is there perhaps a better way to do the approximation?
Addition: In this example if we let $u_N = \sum \phi_n c_n^N(t)$ then when we plug this into the weak form of our PDE we obtain and then choose our test function to be one of the basis elements $w_k$ we obtain $d/dt c_k^N(t) + \sum_{i=1}^n e^{ki}(t) c_k^N(t) = \int (\sum_{n=1}^N c_n^N(t) \phi_n)^3w_k$, for some coefficients $e^{ki}(t)$. My question is, how do I deal with the integral on the right? I would like to be able to solve this ODE and then say I have a solution.
-
1
There is no easy way. You have to honestly open the parentheses, compute the integrals, and deal with the resulting trilinear form if you pursue this approach. – fedja Sep 10 2010 at 23:47
## 1 Answer
The solution of this problem will blow up in finite times, which can be understood from the ODE your posted. In fact, the nonlinear term in the right hand of the ODE is not globally lipchitz, you can only get a local solution by Picard existence theorem. To see that, one solves the following ODE $$y' = y^3, y(0)=1$$ you will the solution is $y(t) = -2(t - \sqrt[3]{2})^{-2}$ blow up as $t \rightarrow \sqrt[3]{2}$.
-
Will all solutions blow up no matter what (nontrivial) initial and boundary conditions you have? – timur Dec 1 2011 at 1:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9125550985336304, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/105866?sort=oldest
|
## weakening naive comprehension to avoid the paradoxes
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Weakening the axiom of naive comprehension has not been a popular way of escaping from the set-theoretic paradoxes because no consistent weakenings seem to be particularly well motivated or even to lead to understandable models. At any rate, that is so of the most famous consistent (well, probably consistent) weakening, New Foundations. Nonetheless, it could be illuminating to understand the partial order of consistent subtheories of naive set theory. My question would be: what is known about it? Unfortunately, however, that question seems ill-posed in that an arbitrary axiom $\psi$ can be coded as comprehension for $(\psi \land x\neq x)\lor (\lnot\psi \land x\notin x)$. But can anything interesting be said?
Besides NF, I am aware of just one type of set theory that is (almost) naturally thought of as arrived at by admitting only a subset of all possible instances of naive comprehension. These are the so-called positive set theories. Unfortunately, I know nothing about them except that they admit a universal set (like NF), and they apparently do require some extra axioms that are not naturally expressed as instances of comprehension. In particular, according to Wikipedia, the theory known as `$GPK^{+}_{\infty}$` requires the axiom of infinity, the empty set axiom (!), and an axiom scheme of "closure" giving, for each formula $\phi$ with one free variable, the intersection of all the sets that contain every $x$ such that $\phi(x)$. This seems to me arguably in the spirit of restricting naive comprehension because comprehension is still the main set construction principle, and in particular there is no need for powerset or replacement. Are there other "natural" examples of set theories that can be thought of as arrived at by weakening naive comprehension? Perhaps ones that don't admit a universal set?
Even non-effective examples (examples where the set of instances of comprehension that is admitted is not computably enumerable) might be interesting. Also, there might be interesting ways of weakening naive comprehension that are different from simply restricting the allowed instances of the schema. For instance, maybe some set of disjunctions of instances of naive comprehension is interesting, or maybe it is interesting to consider an axiom that would only guarantee the existence of a set that is in some sense "close" to the class of objects satisfying $\phi$.
The context in which this question came up is that I was trying to explain Russell's paradox to someone, and their reaction was, well you should just throw out the instance of the comprehension axiom that leads to paradox. Of course, throwing out literally that one instance won't restore consistency. But pointing out a flaw in any particular proposal someone with this attitude toward the paradoxes might propose wouldn't show that some more sophisticated proposal might succeed. I was hoping for some sort of general argument that, say, proceeding in this way inevitably leads to a system that is either like NF or like positive set theory (if it is not inconsistent or extremely weak). (What else could be wrong with $x\notin x$ except that it is unstratified or that it involves negation?) Or at least an argument that you won't get an extension of ZF by any natural weakening procedure would be nice! Both NF and positive set theory, if I understand the situation aright, could serve as a foundation for mathematics, but both are less intuitive and convenient than ZFC, and it is sort of an article of faith for set theorists that any alternative to ZFC we might ever find is either somehow worse than ZFC or not deeply different from it, yes?
-
Probably not a fix you intend to consider, but you can also escape Russell's paradox by keeping full comprehension, and instead adopting a non-classical logic as your deductive framework. I tend to find this approach not very interesting, but it certainly can yield a non-trivial set theory which is very different from ZFC, NF, positive set theory, etc. – Noah S Aug 29 at 18:16
When you write "This seems to me arguably in the spirit of restricting naive comprehension because comprehension is still the main set construction principle, and in particular there is no need for powerset or replacement.", you seem to imply that replacement and power set are NOT instances of naive comprehension. If so, please explain what you mean by "naive comprehension". If not, then "ZF minus Foundation and Extensionality" is a natural weakening of comprehension, guided by "limitation of size". Except that I think that any "natural" set theory needs some version of extensionality. – Goldstern Aug 29 at 23:27
@Noah: Russell's paradox still occurs with naïve comprehension even in very weak systems: we are basically given $R \notin R \leftrightarrow R \in R$, and to deduce $\bot$ we just need the deduction rules for $\land$ and $\to$. – Zhen Lin Aug 30 at 11:36
@Zhen Lin: What you wrote depends sensitively on the details of the propositional deduction rules. Specifically, the obvious deduction of $\bot$ won't work in a system like Girard's linear logic. The problem is that $\phi\to(\phi\to\psi)$ doesn't give you $\phi\to\psi$ in such systems. (I vaguely recall that an attempt was made to use linear logic to circumvent Russell's paradox, but the resulting set theory was terribly weak; there may have been better attempts since then but I don't recall seeing any.) – Andreas Blass Sep 16 at 1:27
## 1 Answer
I'm not clear on why you don't regard ZFC as an example. You say:
Weakening the axiom of naive comprehension has not been a popular way of escaping from the set-theoretic paradoxes because no consistent weakenings seem to be particularly well motivated or even to lead to understandable models.
But it seems to me that the ZFC axioms of set theory result essentially from a weakening of naive comprehension, are highly popular, are well motivated and seem to avoid the paradoxes while having an abundance of understandable models.
In particular, the ZFC axiom of separation is the result of weakening the naive comprehension axiom to the assertion that for any property $\phi$ and any set $A$, the collection $\{ \ x\ \mid\ x\in A\text{ and }\phi(x)\ \}$ forms a set. And one can similarly view the replacement axiom as an instance or weakening of naive comprehension, asserting of any set $A$ and property $\phi$ that the collection $\{\ x\ \mid\ \exists a\in A\ x\text{ is unique such that }\phi(x,a)\ \}$ forms a set.
Furthermore, the ZFC formulation of set theory seems to be very well motivated by the iterative conception of set, where one views the class of all sets being formed in a well-founded cumulative hierarchy formed in stages, in which the elements of a set are constructed at earlier stages than the set itself, and the stages continue in an endless transfinite progression. In essence, one must construct the elements of a set before constructing the set itself. On this philosophical view of how sets are formed, there is ample support for the separation and replacement axioms, and essentially none for the naive comprehension axiom (since there seems in general no reason to suppose all the $x$ with the desired property exist by some stage).
-
That was my initial reaction upon reading the question as well, but I think the point that kimtown is only after theories all of whose axioms (besides extensionality) are instances of comprehension. So ZFC in its entirety doesn't make the cut, and one would have to do without foundation, infinity and choice. (Though I suppose infinity could be recovered as part of the theory while respecting kimtown's criterion by adding as axioms the finitely many instances of stratified comprehension which Specker used in his proof that NF refutes choice.) – Ed Dean Aug 29 at 20:11
1
The power set, union and pairing axioms seem to be instances of naive comprehension, as well as the infinity axiom, since it asserts the existence of { n ∣ n is a finite ordinal}. So this means that ZF-Foundation is obtainable as a weakening of naive comprehension. The standard discussion of foundation shows that every model of ZF-Foundation has a model of ZF as its well-founded part and so we arrive at ZF in this way (and even at ZFC by going to the inner models). – Joel David Hamkins Aug 29 at 20:24
Just to be clear, I wasn't suggesting that ZFC is in any way lacking for motivation, or that it doesn't originate from a weakening of naive comprehension, and I agree wholeheartedly that the lines you quote from the OP are mistaken as written. I meant no more and no less in my comment than that part of ZFC (Choice and Foundation, please excuse the mention of Infinity) isn't axiomatized as it seems the OP intends, and so I thought ZF-Foundation (rather than ZFC) would speak more directly to what concerns the OP. – Ed Dean Aug 29 at 21:29
It is true that the ZFC axioms other than foundation and choice are instances of comprehension in a more straightforward way than the way in which every sentence in the language of set theory is, and I should have noticed that and pointed it out. But it is perverse to think of ZF-foundation as a weakening of naive set theory when in fact it is, as you say, an expression of the iterative conception of set, which is different conception of set. You cannot naturally arrive at ZF-foundation by removing problematic instances of comprehension. – kimtown Aug 29 at 21:46
1
I would say that it is more natural; whatever pleasant thoughts we might have had about the naive conception were largely abandoned once we realized it was contradictory. Why should we cling to a mistaken conception we know is wrong? – Joel David Hamkins Aug 29 at 23:36
show 3 more comments
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9636586308479309, "perplexity_flag": "middle"}
|
http://quant.stackexchange.com/questions/3687/sanity-check-how-to-price-callables
|
# Sanity check - How to price callables
This question is meant as a sanity check whether i got the workflow right for pricing callable bonds. If anyone finds a mistake, or has a suggestion, please answer.
The workflow is:
1. For every call date calculate:
• The probability that the bond is called
• The plain vanilla price of the bond as if it had a maturity to the call date
2. Calculate the weighted average price of the bond with the following code
`
````# Assume a callable with call dates t = 1...T-1 and normal maturity T
# CallProps - Vec of Probabilities that the bond is called at times t
# FullPrices - Vec of prices of a bond if it had maturity at t, T.
NoCallProps = 1-CallProps
CumNoCallProps = c(1,cumprod(NoCallProps))
WeightedPrice = 0
for(i in 1:length(FullPrices))
{
WeightedPrice = WeightedPrice + CumNoCallProps[i] * (CallProps[i] * FullPrices[i])
}
````
`
The call propabilities are calculated by Monte Carlo:
• take the current yield and simulate rate development between now and the call date with a CIR process (taken from the MATLAB library and adapted to R)
• compare the yield at the call date with the coupon of the bond, and call, if the yield is lower than the coupon
• Calculate the average of the calls for the number of replications.
-
The code above has no commenting and the variables are not well defined. You should at a minimum be using pseudocode if you want any feedback. This algo seems to be missing the recursive dynamic I describe but since the variables are not defined no one can evaluate this. – Quant Guy Jun 30 '12 at 15:04
## 1 Answer
You have the right intuition but the approach is not quite right.
The issuer has the right to call back the bond at a pre-defined call price. So your decision criterion is "call when the value of the bond >= contractual call price". We are comparing prices in the decision rule, not the YTM of the callable bond with the coupon of the bond.
Note that typically the call price is above par value (reflecting a call premium).
So you need to value the bond under various interest rate scenarios according to your Monte Carlo simulation. After you simulate your interest rate paths, you will also need to use a recursive backward induction algorithm to value the callable bond at each node in a binomial tree. Take a weighted average of bond prices along each interest rate path to arrive at the value of the bond (first starting at the terminal nodes at maturity then working to back to the present day) remembering to use the discount rate prevailing at that point in time. Also, at any node you assign the call value in lieu of an otherwise option-free bond value wherever the option-free bond value is greater than the callable price (since these are the cases where it is rational for the issuer to call the bond). This is depicted in Node(D,D) below.
This is best visualized by a binomial tree:
Some examples are in the attached paper by Frank Fabozzi.
-
Thanks for the reply. Seems I have taken a shortcut assuming a call price of 100 in every case - should I be able to expect the users that they will be able to provide the call price (this is for a software to calculate Solvency II risks in the standard formula). Theoretically I am using a truncated tree, in which i either have a terminal node in each date, if the bond is called with probability $p_t$, or the bond is not called, and i get to the next node with $1-p_t$ – Owe Jessen Jul 2 '12 at 8:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9164083003997803, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/254623/is-there-a-way-to-determine-if-half-of-an-even-number-will-be-odd-or-even
|
# Is there a way to determine if half of an even number will be odd or even?
For example 100 is even and 100/2= 50 is also even
But 30 is also even but 30/2=15 is odd
Now let's say I have a number as large as 10^10000000000...
I want to know how many steps are involved in cutting this number in half. When the number is even, I divide it by 2. When it's odd, I subtract 1. I continue this process until I hit 0.
However, when the number is too high, I can't actually manipulate it directly (elaboration: too big to write out, and too big to fit into memory on a computer), so I am curious if there's a way for me to do this by just knowing the even/odd attributes along the chain.
I hope this makes sense!
Example: If n=100, I have the following chain
100, 50, 25, 24, 12, 6, 3, 2, 1, 0
Which is a total of 9 "splitting steps" (10 if you count the original number) And the following parities
Even, Even, Odd, Even, Even, Even, Odd, Even, Odd, Even
I am asking if, given n=10, there is a way to get this parity chain
-
## 2 Answers
Yes you can get the chain for $n=10$, but for large numbers it is not easy.
Write your number $n$ in binary. What you are doing is the following:
• If the last digit is 0, you erase it.
• If the last digit is 1, you make it a zero.
For $n=10$ in binary you have $n=1010$. Thus your string is
$$1010 \to 101 \to 100 \to 10 \to 1 \to 0$$
In total, the number of digits is exactly the number of digits in binary (which is exactly $\log_2 n$ rounded up) plus the number of 1 in the binary representation
-
If the number is too large to write out in decimal, how can I possibly write it out in binary, though? – user51819 Dec 9 '12 at 16:17
@user51819 That is exactly what I meant by not easy for large numbers. Anyhow, it is easy to see that finding a method to solve this problem is equivalent to writing the number in binary; so if you cannot write the number in binary you cannot solve the problem ;) – N. S. Dec 9 '12 at 16:19
Is there a way to move along the binary of the number without writing it all out at one time? – user51819 Dec 9 '12 at 16:20
@user51819 Yes, but it is either the process you described, or you can do the following: Find the largest $m$ such that $2^m \leq n$ and subtract $n-2^m$. Repeat....A better approach would be to pick some $0 < < 2^m < < n$ and find the quotient and reminder when $n$ is divided by $2^m$, $q$ gives you the first many digits, $r$ gives the last few... You can repeat.... – N. S. Dec 9 '12 at 16:25
Do you have an example using, say, n=10^100? How do I know the 2^m or the quotient if I can't actually manipulate the full decimal value of n directly? (in this case I can with that level of n but for the sake of experiment assume I can't) – user51819 Dec 9 '12 at 16:27
show 1 more comment
Your sequence corresponds quirte directly with the binary representation of the original number: Starting form the least significant binary digit, each $0$ corresponds to "even, dide by two" and each $1$ corresponds to "odd, subtract one; even, divide by two". For your example $n=100$, which is $1100100$ in binary we thus obtain
Even, divide by two; even, divide by two; odd, subtract one, then even, divide by two; even, divide by two; even, divide by two; odd, subtract one, then even, divide by two; odd, subtract one. Finally zero is even.
Thus determining your even/odd sequence is equivalent to determinig the binary representation of the given number.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9290661215782166, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/314598/perfect-secrecy-encryption
|
# Perfect Secrecy, Encryption
An encryption scheme $(\mathrm{Gen},\mathrm{Enc},\mathrm{Dec})$ over a message space $M$ is perfectly secret if and only if for every probability distribution over $M$, every message $m\in M$, and every ciphertext $c\in C$, $$\mathrm{Pr}[C=c\mid M=m]=\mathrm{Pr}[C=c].$$
Please can you help me prove this?
-
## 1 Answer
Hint: What is your definition of "perfectly secret"? What this says is that knowing the message $m$ gives you no information about the cyphertext. A perfect encryption scheme says that knowing the encrypted text gives you no information about the message. These look like inverses.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7987262606620789, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/72484/lack-of-homeomorphism-between-compact-space-and-non-hausdorff-space/72489
|
Lack of homeomorphism between compact space and non-Hausdorff space
Show that a continuous bijection $f : X \to Y$ with $X$ compact and $Y$ Hausdorff is a homeomorphism. Give an example to show that such a continuous bijection is not necessarily a homeomorphism if $Y$ is not assumed to be Hausdorff.
I'm having some trouble with the counterexample.
Would an $X$ interval in $\mathbb R$ and a $Y = S^1$ work?
-
1
$S^1$ is Hausdorff! – Grumpy Parsnip Oct 14 '11 at 1:05
3 Answers
No, because $\mathbb{S}^1$ is Hausdorff (it is a subspace of the Hausdorff space $\mathbb{R}^2$).
Let $X=\{a,b\}$ with the discrete topology, i.e. the open sets of $X$ are $\varnothing$, $\{a\}$, $\{b\}$, and $\{a,b\}$, and let $Y=\{c,d\}$ with the trivial topology, i.e. the open sets of $Y$ are $\varnothing$ and $\{c,d\}$.
$X$ is compact (because it is finite), and $Y$ is not Hausdorff because the points $c$ and $d$ cannot be separated by disjoint open sets.
Let the map $f:X\to Y$ be defined by $f(a)=c$, $f(b)=d$. Then $f$ is a continuous bijection, but not a homeomorphism.
-
Here is a simple example. Let $X$ be an interval with the standard topology, and let $Y$ be the same set with the coarse topology (only two open sets).
-
To show that f is a homeo. is to show that it takes open sets into open sets. Let $G$ be open in $X$. Then $F = X - G$ is closed and therefore compact (closed subspace of a compact space.) Since f is continuous, $f(F)$ is compact in $Y$. Since $Y$ is $T_2$ we have that $f(F)$ is closed in $Y$.
Then $Y - f(F) = f(G)\$ is open in Y. therefore $f^{-1}$ is continuous
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 41, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9477531313896179, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/22470/wind-vs-air-resistance?answertab=oldest
|
# wind vs air resistance
I'm wondering which offers more resistance: pulling an object at some speed through air, or holding the object steady against wind at the same speed.
I think initially people would think same resistance.
Then I thought that the air that is flowing has probably been compressed under it's own speed (or rather, in order to get to it's speed), and therefore offers more resistance. Also, the wind may be colder (thus denser) if it's the wind I'm used to, but disregarding temperature influence, do you think that my theory is correct? Can we quantify the resistance gained by windspeed?
Thanks in advance.
-
Wind isn't colder. It only feels colder due to evaporation. IMO, the answer to this question depends upon the method of generation of wind due to the pressure issue. – Manishearth♦ Mar 17 '12 at 5:05
Ah true. I'm thinking of the usual method of wind generation. – user420667 Mar 17 '12 at 6:28
## 1 Answer
One might say, to be specific: a perfect wind with constant density $\rho$, pressure $P$, velocity $v$ produces the same effect on a body at rest, as on a body, which is moving with velocity $v$ in a still medium of pressure $P$ and density $\rho$. Should you want to specify any other physical parameters, they should be taken the same for the media in both cases.
The statement is guaranteed to be true by galilean invarince. You see a body moving with velocity $v$ in a medium, you start to move yourself with velocity $v$, and you see the body standing still and a wind blowing with velocity $v$. The same physics - just a different reference frame.
-
Is it also true for turbulent flow? In the direction normal to the movement of the object, I can imagine that the turbulent fluctuations are different. Is this what people working with wind tunnels neglect? – Bernhard Mar 17 '12 at 16:24
It is only about switching of the reference frames. If your wind is not perfect, and would possess turbulences itself without the body being there, then it is not the same case as a body moving through a still medium. In other words, the wind should be such that if you were moving along with it, it would look for you like a still medium. – Alexey Bobrick Mar 17 '12 at 16:51
So in practice, the windtunnel assumption is not completely correct. As there is not such thing as perfect wind, due to the non-linearity of the process. – Bernhard Mar 17 '12 at 18:04
Perfectly speaking, it is not the same. However, it is pretty easy to calculate out the wind in an empty windtunnel(or measure), and have a control over your conditions. – Alexey Bobrick Mar 17 '12 at 18:44
1
You need two thermodynamic variables to completely specify a thermodynamic state for a given composition. Pressure and density give you, say, the temperature, internal energy and whatever else you would wish. – Alexey Bobrick Mar 17 '12 at 22:32
show 2 more comments
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9474058151245117, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/algebra/32025-geometric-progression.html
|
# Thread:
1. ## Geometric progression
Just a quick question;
How do I show that (1+2+...+n)^2 = 1^3+2^3+...+n^3 ?
2. Originally Posted by weasley74
Just a quick question;
How do I show that (1+2+...+n)^2 = 1^3+2^3+...+n^3 ?
The sum on the right can be shoen to be a quartic in $n$ (construct the
difference table and it becomes obviouse).
Which quartic can be established by fitting the first few terms to the
quartic.
The sum on the left is: $\left(\frac{n(n+1)}{2}\right)^2$.
Expand and show the required equality.
RonL
3. Originally Posted by weasley74
Just a quick question;
How do I show that (1+2+...+n)^2 = 1^3+2^3+...+n^3 ?
Second method from the department of incredibly elegant but indirect proofs.
The expression on the left is obviously a quartic in n, since the table of
second differences of what is inside the square is constant it is a quadratic in
n and so its square is a quartic in n.
As explained before what is on the right is also a quartic.
To show that two quartics are identical you need only show that they are
equal at five distinct points. So compute the first 5 values of what is on the
left and the corresponding values for what is on the right and if the
corresponding terms are equal the identity is proven.
RonL
4. Originally Posted by CaptainBlack
Second method from the department of incredibly elegant but indirect proofs.
The expression on the left is obviously a quatic in n, since the table of
second differences of what is inside the square is constant it is a quadratic in
n and so its square is a quartic in n.
As explained before what is on the right is also a quartic.
To show that two quartics are identical you need only show that they are
equal at five distinct points. So comopute the first 5 values of what is on the
left and the corresponding values for what is on the right and if the
corresponding terms are equal the identity is proven.
RonL
5. So I just have to show that the five first values are equal? it sounds a bit too easy..
6. Originally Posted by weasley74
So I just have to show that the five first values are equal? it sounds a bit too easy..
Well, if you want something more "logic" oriented, you could always do a proof using Mathematical Induction.
-Dan
7. Originally Posted by topsquark
Well, if you want something more "logic" oriented, you could always do a proof using Mathematical Induction.
-Dan
and how do I do that? =)
8. Originally Posted by topsquark
Well, if you want something more "logic" oriented, you could always do a proof using Mathematical Induction.
-Dan
I was just about to post method 3, Mathematical Induction (which I can
confirm does work quite nicely, but is not as elegant as method 2).
RonL
9. Originally Posted by weasley74
Just a quick question;
How do I show that (1+2+...+n)^2 = 1^3+2^3+...+n^3 ?
Let n = 1.
$(1)^2 = 1^3$? True.
So assume the theorem is true for some n = k. Then we need to prove that it is true for n = k + 1.
Our assumption is that
$(1 + 2 + ~...~ + k)^2 = 1^3 + 2^3 + ~...~ + k^3$
What we wish to prove is
$(1 + 2 + ~...~ + k + (k + 1))^2 = 1^3 + 2^3 +~...~+k^3 + (k + 1)^3$
Now,
$(1 + 2 + ~...~ + k + (k + 1))^2 = (1 + 2 + ~...~ + k)^2 + (k + 1)^2 + 2(1 \cdot (k + 1) + 2 \cdot (k + 1) + ~...~ k(k + 1))$
$= (1^3 + 2^3 + ~...~ + k^3) + (k + 1)^2 + 2(1 \cdot (k + 1) + 2 \cdot (k + 1) + ~...~ k(k + 1))$
according to our assumption. So plugging this back into what we need to prove, we see that we need to show
$= (1^3 + 2^3 + ~...~ + k^3) + (k + 1)^2 + 2(1 \cdot (k + 1) + 2 \cdot (k + 1) + ~...~ k(k + 1))$ $= 1^3 + 2^3 +~...~+k^3 + (k + 1)^3$
or
$(k + 1)^2 + 2(1 \cdot (k + 1) + 2 \cdot (k + 1) + ~...~ k(k + 1)) = (k + 1)^3$
So let's work a bit more on that left hand side.
$(k + 1)^2 + 2(1 \cdot (k + 1) + 2 \cdot (k + 1) + ~...~ k(k + 1))$
$= (k^2 + 2k + 1) + 2(k + 1)(1 + 2 + ~...~ + k)$
and the sum of the first k numbers is
$= (k^2 + 2k + 1) + 2(k + 1) \frac{k(k + 1)}{2}$
$= (k^2 + 2k + 1) + k(k + 1)^2$
$= k^2 + 2k + 1 + k^3 + 2k^2 + k$
$= k^3 + 3k^2 + 3k + 1$
$= (k + 1)^3$
as we needed.
So the theorem is true for n = 1 and thus for all integers by induction.
-Dan
10. Originally Posted by CaptainBlack
I was just about to post method 3, Mathematical Induction (which I can
confirm does work quite nicely, but is not as elegant as method 2).
RonL
Agreed. And Mathematical Induction here is a bit tedious.
-Dan
11. Originally Posted by weasley74
and how do I do that? =)
First you should know how induction works:
You have a proposition about a netural number n, such as your identity.
Show it holds for a base case, typical n=1 or n=0.
The if you assume that it hold for some k, show this implies it holds for k+1
Then you are done the general proposition is proven for all n greater than or
equal to the base case.
Checking the base case is trivial in this case as for n=1 both sides of the
equality are 1.
I will let you have a go at proving the induction step its not difficult.
RonL
12. Originally Posted by weasley74
So I just have to show that the five first values are equal? it sounds a bit too easy..
That part is easy, the key part of the argument is identifying that both
sides are quartics.
RonL
13. Originally Posted by topsquark
Agreed. And Mathematical Induction here is a bit tedious.
-Dan
On my a5 note pad its half a side of paper!
RonL
#### Search Tags
View Tag Cloud
Copyright © 2005-2013 Math Help Forum. All rights reserved.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.941696047782898, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/27462/why-is-there-no-theta-angle-topological-term-for-the-weak-interactions
|
# Why is there no theta-angle (topological term) for the weak interactions?
Why is there no analog for $\Theta_\text{QCD}$ for the weak interaction? Is this topological term generated? If not, why not? Is this related to the fact that $SU(2)_L$ is broken?
-
Good question, and looking forward to the answers if any. ;-) – Luboš Motl Jan 26 '12 at 6:57
## 1 Answer
In the presence of massless chiral fermions, a $\theta$ term in can be rotated away by an appropriate chiral transformation of the fermion fields, because due to the chiral anomaly, this transformation induces a contribution to the fermion path integral measure proportional to the $\theta$ term Lagrangian.
$$\psi_L \rightarrow e^{i\alpha }\psi_L$$
$${\mathcal D}\psi_L {\mathcal D}\overline{\psi_L}\rightarrow {\mathcal D} \psi_L {\mathcal D}\overline{\psi_L} \exp\left(\frac{i\alpha g N_f}{64 \pi^2}\int F \wedge F\right)$$
So the transformation changes $\theta$ by $C \alpha g N_f$ ($g$ is the coupling constant, $N_f$ the number of flavors).
The gluons have the same coupling to the right and left handed quarks, and a chiral rotation does not leave the mass matrix invariant. Thus the QCD $\theta$ term cannot be rotated away.
The $SU(2)_L$ fields however, are coupled to the left handed components of the fermions only, thus both the left and right handed components can be rotated with the same angle, rotating away the $\theta$ term without altering the mass matrix.
-
Nice, would you add one or two formulae? What is the parameter of the transformation (and which one) to remove the $\theta\cdot F\wedge F$ term? And a related question: is there some simple way to add some chiral couplings of new fermions to $SU(3)_{color}$ to solve the strong CP-problem? – Luboš Motl Jan 26 '12 at 13:56
1
@Luboš I am not an expert, from reading only, I think that your suggestion is quite close to one solution to the strong CP problem assuming the mass of the u-quark is exactly zero,though not widely accepted. – David Bar Moshe Jan 26 '12 at 14:55
Thanks a lot, David! – Luboš Motl Jan 26 '12 at 16:33
What about the Yukawa couplings? You absorb the phase into the Higgs? Or into right handed fermions? – Thomas Jan 26 '12 at 20:10
@Thomas: To the right handed fermions. They are not coupled to the gauge fields so their transformation does not change the path integral measure – David Bar Moshe Jan 27 '12 at 3:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8823062777519226, "perplexity_flag": "middle"}
|
http://mathematica.stackexchange.com/questions/tagged/number-theory?sort=votes&pagesize=15
|
# Tagged Questions
Questions on the number-theoretic functionality of Mathematica.
4answers
764 views
### Factorisation diagrams
Here is a way to visualize the factorisation of natural numbers. How do we get this or a similar kind of output using Mathematica? See the list of images generated for number from 1 to 36:
2answers
3k views
### What is so special about Prime?
When we try to evaluate Prime on big numbers (e.g. 10^13) we encounter the following issue : ...
1answer
2k views
### Finding long strings of identical digits in transcendental numbers
Introduction Describing the three main streams of present-day mathematical philosophy (formalism, Platonism and intuitionism) in a well-known book, The Emperor's New Mind, R. Penrose says: ...it ...
6answers
768 views
### efficient way to count the number of zeros at the (right) end of a very large number
If I want to count the number of zeros at the (right) end of a large number, like $12345!$, I can use something like: Length[Last[Split[IntegerDigits[12345!]]]] ...
4answers
504 views
### Why does Mathematica claim there is no even prime?
I wonder if this is a bug, or if I'm misunderstanding something: Exists[n, EvenQ[n] && PrimeQ[n]] // Resolve (* ==> False *) So if I interpret this ...
2answers
288 views
### Why does iterating Prime in reverse order require much more time?
Say I would like to display the $10$ greatest primes that are less than $10^5$. I could do the following: ...
3answers
315 views
### How to know if a number is the square of a rational?
I'm pretty new with Mathematica and I was looking for a way to know whether a number is a square of a rational. I thought of Head[Sqrt[myNumber]] == Rational ...
6answers
395 views
### How to find lattice points on a line segment?
How do I find points on the line segment joining {-4, 11} and {16, -1} whose coordinates are positive integers?
5answers
792 views
### Fastest square number test
What is the fastest possible square number test in Mathematica 7, both for machine size and big integers? I presume in version 8 the fastest will be a dedicated C LibraryLink function.
1answer
306 views
### FiniteFields package is very slow. Any fast substitute for Mathematica?
I want to compute the inverse of matrix, say with dimensions $100 \times 100$, defined over a large finite field extension such as $GF(2^{120})$. I am using the package FiniteFields, but Mathematica's ...
1answer
71 views
### ToNumberField won't recognize Root[…] as explicit algebraic number
In Mathematica 9.0.1, it appears that ToNumberField will not always recognize a Root object as an explicit algebraic number. ...
1answer
239 views
### What is the confidence limit on this convergence?
When I run this, Product[n^MoebiusMu[n],{n,1,Infinity}] I get $\frac{1}{4 \pi^{2}}$ Over on Math Overflow they are saying it shouldn't happen. So, how do ...
1answer
148 views
### Testing for primality in quadratic rings?
Testing for primality in $\mathbb{Z}[\sqrt{-1}]$ in Mathematica is easy: PrimeQ[n, GaussianIntegers -> True] But how can I test for primality in, say, ...
1answer
94 views
### Implementing Remainder Tree
I want to implement Remainder Tree based on this. With the answers on SE I've come up with: ...
0answers
250 views
### Does Mathematica use the Elliptic Curve Method (ECM) in FactorInteger[]?
I'm not a mathematician, and I'm not even going to pretend that I understand anything of the ECM. But I know it can be a fast method for factorization. I benchmarked the factorization of ...
3answers
142 views
### How could I implement the equivalent of NextPrime
I would like to know what an implementation of the function NextPrime would look like if it were implemented in Mathematica's core language.
0answers
78 views
### Doing computations in a modulo ring
I need to perform some computations in a modulo ring, like Mod[Subfactorial[n], m] Mod[Binomial[n, k], m] However, this is obviously much too slow for large ...
1answer
282 views
### Evaluate continued fraction
Mathematica has the ContinuedFraction[] function to give the continued fraction expansion of a rational (or approximation of a real) number. I'm interested in the ...
1answer
271 views
### Which DirichletCharacter is KroneckerSymbol?
If $d$ is a fundamental discriminant, KroneckerSymbol[d,n] is a Dirichlet character modulo $|d|$. Which one is it? If $d>0$ is a prime $\equiv 1\bmod 4$, then ...
3answers
214 views
### Implementing the Farey sequence efficiently
There is of course the silly implementation: FareySequence[n_] := Union[Flatten[Table[j/i, {i, 1, n}, {j, 0, i}]]] However, there are numerous properties and ...
0answers
192 views
### Parallel PowerMod
Is there anyway to parallelize the PowerMod function? Here is my Left-To-Right modular exponentation: ...
2answers
189 views
### Plotting Chebyshev's theta function $\vartheta(x)$
The function I would like to plot is defined as $\sum\limits_{p\leq x}\log p.$ The following gives me I think a plot of the points of interest, but the function is defined for all $x > 0$ and so ...
3answers
50 views
### Why do these two different zetas produce the same value?
Zeta[-13] == Zeta[-1] == -1/12 Why do these two different zetas produce the same value?
3answers
247 views
### Generating pairs of additive and multiplicative factors for integers
Given an integer $n$, I want two lists: a) the set of pairs of the divsors $a,b$ into exactly two factors $n=a\cdot b$, b) the set of pairs $a,b$ of two summands $n=a+b$. The code I came up ...
1answer
115 views
### Another MoebiusMu question
When I evaluate the Mertens function to infinity: NSum[MoebiusMu[k], {k, 1, \[Infinity]}] I get -1, but I expected to get -2. I wanted to modify the ...
1answer
55 views
### Generating a list of all factorizations
What is the best way to generate a list of all factorizations of some number $n$? I'm quite new to Mathematica so this might be obvious. I have been trying some basic stuff with ...
0answers
150 views
### Faster GCD Implementation
Is there any chance to write a faster GCD than the built-in one in Mathematica? @Mr.Wizard has written one in this question (although it's not for this purpose) which is 6 times slower on a 100k ...
0answers
85 views
### PowersRepresentations Algorithm
I'm trying to understand the mathematics behind counting the number of representations of a positive integer by $n$ distinct $k$th powers, i.e. I would really like to know how to do the Mathematica ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8884667754173279, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/calculus/2844-volume-convergence.html
|
# Thread:
1. ## Volume/Convergence
Hello,
A forewarning, this problem does take a lot of time. So thanks in advance for the help.
The problem is as follows:
"Let f(x) = [12(x^4 + 1)]/[2x^4 + 3x^2 + 2]. Consider the solid S formed by revolving the region between f(x) and the line y = 6 about the line y = 6.
Decide whether the solid S has finite volume. Then, if it is finite, approximate it accurate to within .01. If it infinite, find a reasonable interval over which the volume is 100 or more."
So, I started out by graphing it. I graphed f(x) and y=6. It appears there is a horizontal asymptote at y=6 as the graph goes out to infinity and negative infinity on each side. I'm used to doing problems around the x axis, using the p-test for when p > 1 it converges, and <= it diverges. Revolving it about y=6 also may change the result (as 1/x diverges, but when revolved around the x axis, converges).
Thanks.
2. Originally Posted by AfterShock
Hello,
A forewarning, this problem does take a lot of time. So thanks in advance for the help.
The problem is as follows:
"Let f(x) = [12(x^4 + 1)]/[2x^4 + 3x^2 + 2]. Consider the solid S formed by revolving the region between f(x) and the line y = 6 about the line y = 6.
Decide whether the solid S has finite volume. Then, if it is finite, approximate it accurate to within .01. If it infinite, find a reasonable interval over which the volume is 100 or more."
So, I started out by graphing it. I graphed f(x) and y=6. It appears there is a horizontal asymptote at y=6 as the graph goes out to infinity and negative infinity on each side. I'm used to doing problems around the x axis, using the p-test for when p > 1 it converges, and <= it diverges. Revolving it about y=6 also may change the result (as 1/x diverges, but when revolved around the x axis, converges).
Thanks.
Translate the function so that y=6 is moved so that it now coincides with the
x-axis, that is consider the function:
$<br /> g(x)=f(x)-6<br />$
RonL
3. Thanks Captain Jack. Are you able to help me solve the problem? It took your suggestion into consideration:
int(f(x) - 6) [that is, just the integral; now I have to see whether rotating it around the x-axis will create a finite volue, as (1/x^2) does, or an infinite (1/x) with a finite area (Gabrielle's Trumpet)].
So, I'm not sure of which test to go about doing this problem: integral, comparison, ratio, n-th term test, etc. The integral of [f(x) - 6] is:
-9*sqrt(2)[(7*ln(2x^2 - sqrt(2)*x + 2) - 7ln(2x^2 + sqrt(2) + 2 + ... couple of arctangents, etc
So, I don't think I want to use the integral test. Well, first off, I wouldn't want to use it on f(x) - 6; I would want to find cross-sections of the graph.
Therefore: Pi*r^2; so in my case, taking the integral of Pi*[f(x) - 6]^2 will give me the volume, with the limits of integral being...0 to infinity? Or, negative infinity to infinity?
Another question I have: since the integral test, ratio test, etc applies to only non-negative decreasing monotone functions (or so I thought), how would this apply to this problem, in order to find convergence. Maybe a comparison would be easier, but how would I apply that to finding the volume. I'm not sure. I'm quite lost right now.
I think I have a lot of flaws in my thinking. But I needed to make some progress any way, so I tried giving it another shot.
Thank you.
4. Originally Posted by AfterShock
Thanks Captain Jack. Are you able to help me solve the problem? It took your suggestion into consideration:
int(f(x) - 6) [that is, just the integral; now I have to see whether rotating it around the x-axis will create a finite volue, as (1/x^2) does, or an infinite (1/x) with a finite area (Gabrielle's Trumpet)].
I suggest you consider:
$<br /> g(x)=f(x)-6<br />$,
you are then interested in the integral of $g(x)$.
Now:
$<br /> g(x)=\frac{12(x^4+1)}{2x^4+3x^2+2}-6=\frac{-18x^2}{2x^4+3x^2+2}<br />$
So when $|x|$ is large $g(x)\sim-9/x^2$, which
should be sufficient to answer your question.
RonL
5. $<br /> g(x)=\frac{12(x^4+1)}{2x^4+3x^2+2}-6=\frac{-18x^2-12}{2x^4+3x^2+2}<br />$
Where in the world did you get the -12 from? I agree they are the same, if you take out the "-12".
So now I rotate Pi*g(x)^2 around the x-axis? What test do I use to determine convergence and such?
6. Originally Posted by AfterShock
$<br /> g(x)=\frac{12(x^4+1)}{2x^4+3x^2+2}-6=\frac{-18x^2-12}{2x^4+3x^2+2}<br />$
Where in the world did you get the -12 from? I agree they are the same, if you take out the "-12".
An error - you should always check what you see here.
So now I rotate Pi*g(x)^2 around the x-axis? What test do I use to determine convergence and such?
I don't recall the names of these tests. It is clear that
for $|x|>N$ for large enough $N$ that there exists a constant $K>0$
such that:
$g(x)^2<K/x^4$,
and as $\int_{-N}^N g(x)^2 dx$ is finite and $\int_{N}^{\infty} K/x^4 dx$ and $\int_{-\infty}^{-N} K/x^4 dx$ are both
convergent the integral for the volume converges.
RonL
7. I've lost you on this part:
|x| > N for large enough N that there exists a constant K > 0 such that,
[g(x)^2]/[k/x^4]
I understand the part after that, and I see how the comparison would work. I just think I need more algebra in there to convince me that that is indeed true. Knowing k/x^4 is larger and convergent, would then definitely mean g(x) converges, too. Assuming all of that is correct, I went on with approximating it accurate to within .01.
Ok so we know g(x) <= int(1/x^4, x, 10, infinity). Now, we can make the integral to which we compared the original (and know is bigger) less than .01, therefore the original will be within .01, too.
So, int(1/x^4, x, 10, infinity) = int(1/x^4, x, 10, N) + int(1/x^4, x, N, infinity). The latter is the tail.
We want an N such that int(1/x^4, x, N, infinity) <= .01
lim(as b--> infinity)[int(1/x^4), x, n, b] = -1/(3x^3)|(from n to b) which is equal to:
0 - (-1/(3n^3)) = 1/(3n^3)
So 1/(3n^3) <= .01 .... n <= 3.2183.... use n = 4.
But I run into a problem... if that worked out correctly, wouldn't that be saying that:
int(g(x), x, 10, 4) is within .01 ... I think I have basically got it but am missing minor detail.
8. Disregard the second part; I did not square g(x) and I should have realized it didn't make sense because I tried to use a test for a non-negative integer series on an alternating series!
Let me go through this a final time so I can get this problem done!
g(x) = (-18x^2)/(24x^4 + 3x^2 + 2)
g(x)^2 = (324x^4)/(2x^4 + 3x^2 + 2)^2
So, now I have Pi * int((324x^4)/(2x^4 + 3x^2 + 2)^2, x, 1, infinity) <=
int(k/x^4, x, 1, infinity) <= .01/Pi
In which case, if I followed you correctly, why not just use k = 81, since that's what it is?
Ok, so we're using int(81/x^4, 1, infinity) <= .01/Pi to make Pi*g(x)^2 also within .01. [Note: I am not sure if I am using the right limits of integration...can't use 0 since that's undefined].
int(81/x^4, 1, infinity) = int(81/x^4, 1, N) + int(81/x^4, N, infinity). I will focus on the second part (the tail).
int(81/x^4, N, infinity) <= .01/Pi
Take the lim as b --> infinity from N to b, and we get,
-27/x^3 evaluated from N to b is 0 - (-27/n^3) = 27/n^3 <= .01/Pi
Solving for N I get: 20.3941...so I will use 21 for N. Therefore, if everything is right, I get this as a conclusion:
The volume is finite, and int(g(x)^2, x, 1, 21) will approximate the volume to within .01.
Anyone see any flaws?
9. Originally Posted by AfterShock
Ok, so we're using int(81/x^4, 1, infinity) <= .01/Pi to make Pi*g(x)^2 also within .01. [Note: I am not sure if I am using the right limits of integration...can't use 0 since that's undefined].
You want the area under the two tails to be <=0.01/pi as this bound the
error in the truncated integral, so you are looking for an N such that:
2*int(81/x^4, N, infinity) <= 0.01/pi
int(81/x^4, 1, infinity) = int(81/x^4, 1, N) + int(81/x^4, N, infinity). I will focus on the second part (the tail).
int(81/x^4, N, infinity) <= .01/Pi
Take the lim as b --> infinity from N to b, and we get,
-27/x^3 evaluated from N to b is 0 - (-27/n^3) = 27/n^3 <= .01/Pi
Solving for N I get: 20.3941...so I will use 21 for N. Therefore, if everything is right, I get this as a conclusion:
The volume is finite, and int(g(x)^2, x, 1, 21) will approximate the volume to within .01.
Anyone see any flaws?
Some flaws (as noted) but nice work and the right approach.
If you have the facilities, you might want to check this numerically after
you have recomputed N. (It might also be an idea to work with an error
bound of 0.005/pi rather than 0.01/pi to allow for any possible problems when
rounding to the nearest 0.01).
RonL
10. The infamous volume problem! I am doing something wrong, because when I check to see what the volume is by maple, I get conflicting results.
First of all, how did you come up with this comparison:
"I don't recall the names of these tests. It is clear that |X| > N for large enough N that there exists a constant K > 0
such that:
g(x)^2 < K/x^4"
I am missing something, but how can I make it clear to someone that that is indeed true. Perhaps a little algebra?
I am missing some link in my process.
Let me briefly go through some of my thoughts again. And thanks, especially to CaptainBlack for helping with this problem!
I'll try not to repeat the obvious steps.
So, I have g(x)^2 = (324x^4)/(2x^4 + 3x^2 + 2)^2
I am interested in finding the volume when revolving g(x) by the x-axis...it will be in the form Pi*int(g(x)^2, x = -infinity...infinity). I already calculated
g(x)^2 as shown above. The answer MAPLE gets for the Volume is:
[81*Pi^2*sqrt(14)]/49 ... which = 61.0454. So that's a reference point.
Now, I am comparing it to Pi*int(81/x^4, x = -infinity...infinity). And we concluded that that is bigger than the integral we wanted to calculate. We want this new integral to be <= .01. However, we can divide our integral by Pi to get rid of it, and thus we want our error to be <= .01/Pi.
I don't really get this part...you said I want the area of the TWO tails to be <= .01/Pi, so I want N such that
2*int(81/x^4, x = -infinity...infinity) <= .01/Pi
Would it be FOUR tails?
This is where it gets tricky taking the integral, since my limits of integration are from -infinity...infinity. So since its symmetric along the y axis, just find from 0...infinity and then double it? So then I can see two tails only with which to deal.
2*int(81/x^4, x = 1...infinity) =
2*int(81/x^4, x = 1...N) + 2*int(81/x^4, x = N...infinity). I will focus on the tail, that is the last part (the from N to infinity one).
So I take the lim as b --> infinity from N to b, and we get,
-27/x^3 evaluated from N to b is 0 - (-27/N^3) = 27/N^3 <= .01/Pi
But I have to remember to multiply by 2, so it's 2*[27/N^3] <= .01/Pi
Solving for N, I get N = 25.695. Then, multiply by 2 again to get the volume from -infinity...1
77.085.... that's not within .01...ahhhh im doing something wrong.
Frustrating problem!
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.936267614364624, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/differential-geometry/138857-polynomial-approximations.html
|
# Thread:
1. ## Polynomial Approximations
The following is a worked example:
In the following, for example
$\frac{(g,p_0)}{(p_0,p_0)}p_0 = \frac{(-1)}{2}.1$
What I don't understand is that why they got $(g,p_0)=(-1)$.
Since $p_0 = 1$
And $(g,p_0) = \int^1_{-1} 1.g(x)dx$
But what do we need to substitute for $g$? Because this function is piece-wise, how do I know which one of the two functions I have to put into the formula?
2. Well, what are the definitions of $(g, p_0)$, $(g, p_1)$, etc.?
3. Originally Posted by HallsofIvy
Well, what are the definitions of $(g, p_0)$, $(g, p_1)$, etc.?
Those are inner products. Since $g$ and $p_0$ are continious functions on the given interval then their inner product is defined by
$(g,p_0)=\int^1_{-1}g(x)p_0 dx$
I belive that's the definition. The resulting polynomial we get in this problem is called "Legendre's polynomial", but unfortunently there is not much about it in my textbook.
4. For this:
$(g,p_0) = \int^1_{-1} 1.g(x)dx$
What should I substitute for "g(x)"? The g(x) is defined as a piece wise function. Do I need to substitute "-1" or "2x-1"? (apparently neither of them give the correct answer)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9516854286193848, "perplexity_flag": "middle"}
|
http://cs.stackexchange.com/questions/9246/construct-a-context-free-grammar-for-a-given-set-of-words
|
# Construct a context-free grammar for a given set of words
I have seen a few years back a nice and simple algorithm that, given a (finite) set of words in some alphabet, builds a context-free grammar for a language including these words and in some sense "natural" (e.g., the grammar doesn't produce all words in the alphabet). The algorithm is very simple, it has something like 3--4 rules for grammar transformation attempted on each new word. Any help in finding it would be appreciated.
-
What you want to do is learn an (infinite?) language after having seen a finite sample, with or without (too much) overgeneralisation. That is a hard task. What have you read about this? (If you only want a grammar for exactly that finite set, the answer is trivial.) – Raphael♦ Jan 28 at 10:32
1
So what you're looking for is a (simple) algorithm that performs context-free grammatical inference (CFGI). You can try searching those keywords on google scholar or something else. A quick search returned this review chapter from a Ph.D. thesis. Maybe you'll find what you're looking for in there, or at least pointers to steer your search. – Khaur Jan 28 at 11:15
Given $w_1,\ldots,w_n$, how about the grammar with the rules $S \to w_i$? – Yuval Filmus Jan 28 at 14:38
@YuvalFilmus That would be a simple way to build a grammar, but the grammar wouldn't be very natural, would it? – Khaur Jan 28 at 14:43
Thank you for the pointers. "Learning" and "inference" seem to be the right terms. To clarify, I am not interested in this topic in general, but just in this particular algorithm that I remember to be strikingly simple (in the striking contrast with the papers on grammatical inference). I thought it might be well-known, but maybe the algorithm is applicable only in some narrow case or has some other restriction. Sorry for being so vague, I'll try to remember more details. – nikita Jan 28 at 15:31
show 4 more comments
## 1 Answer
I think you might be referring to Sequitur.
Edit It has been suggested by other commenters that I leave more information for posterity. Fair point.
Sequitur is an algorithm by Craig Neville-Manning and Ian Witten (of Managing Gigabytes fame). It's linear time in the size of the input sequences (although so is the memory usage), and satisfies the twin properties of parsimony (no redundant rules are derived) and utility (every rule is useful).
However, it can't (IIRC) discover arbitrary nesting structure. So a prototypical expression grammar, where an expression can contain an expression, is too much for it. But it will discover word boundaries in English text, and repeat regions in DNA. It's also useful for finding dictionaries for data compression (which is one of Witten's major research interests).
-
Welcome to cs.stackexchange! It is generally advised to add some description/details in your answer instead of just a link - links could break. It would be nice if you could add some overview of what the algorithm does and how. That will substantially improve your answer. – Paresh Jan 29 at 8:17
This does not provide an answer to the question. To critique or request clarification from an author, leave a comment below their post - you can always comment on your own posts, and once you have sufficient reputation you will be able to comment on any post. – AJed Jan 29 at 14:15
Yes, this is exactly it! Thank you. – nikita Jan 29 at 18:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9393959641456604, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/56245/graphs-embedded-on-fractals/56284
|
## Graphs Embedded on Fractals
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
It is fairly well know when graphs can be embedded on various surfaces. Also, it is not hard to see that any graph can be embedded in 3-dimensional space. Has anyone ever studied the embeddability of graphs on various fractals? If so, are there any interesting results?
For example, I made some quick deductions about graphs on Sierpinski's Gasket. No graph that has vertices of degree greater than 4 can be embedded on Sierpinski's Gasket. Also, I conjecture that one can not embed $K_4$ into Sierpinski's Gasket either.
Also, one might ask questions like, "Is it true that one can embed any graph on any space with Hausdorff dimension that is greater than or equal to 3?"
These are just some curiosities that came to me and a friend of mine earlier today, and I am curious if the Math Overflow community knows anything about such things.
-
1
For questions of homeomorphic embedding, the topological dimension should be used, not the Hausdorff dimension, since the Hausdorff dimension is not a homeomorphic invariant. – Gerald Edgar Feb 23 2011 at 14:35
## 3 Answers
Offhand I don't think Hausdorff dimension greater than or equal to 3 is strong enough for embedding any graph. If one considers the analogue of the Sierpinski gasket in 7 dimensions, this fractal has Hausdorff dimension 3, but it doesn't seem that any graph with a node of valence greater than 14 could be embeddable by essentially the same reasoning as you used for embedding graphs in the ordinary Sierpinski gasket.
Upon further thought, one can embed $K_4$ in the Sierpinski gasket since an embedding need not necessarily take straight lines to straight lines. In fact, one can embed it such that only one of the edges is not straight, so by extending this idea, it may still be possible to embed any (?) planar graph of valence < 5 in the Sierpinski gasket.
To see the embedding of $K_4$ I have in mind, let the three vertices of the triangle be (1,0,0), (0,1,0), and (0,0,1). The four vertices of the $K_4$ be $A = (\frac{1}{2},\frac{1}{2},0)$, $B = (\frac{1}{4},\frac{3}{4},0)$, $C = (\frac{1}{4},\frac{1}{2},\frac{1}{4})$, and $D = (0,\frac{3}{4},\frac{1}{4})$. The arcs $AB$, $AC$, $BC$, $BD$, and $CD$ are the straight line subsets of the gasket, while the arc $AD$ goes from $A$ to $(\frac{1}{2},0,\frac{1}{2})$ to $(0,\frac{1}{2},\frac{1}{2})$ to $D$.
-
No non-planar graph should embed, since the gasket may be realised as sitting on the plane – Nick Loughlin Feb 22 2011 at 11:38
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
You cannot expect Hausdorff dimension alone to be the right notion for graph embeddability, since it is too metric and not topological enough.
To be more precise, consider snowflaking: for any metric space $(X,d)$ and any $\alpha\in(0,1)$, the function $d^\alpha$ defines a distance. Moreover, the Hausdorff dimension of $(X,d^\alpha)$ is $\alpha^{-1}$ times the Hausdorff dimension of $(X,d)$. So for example, you can metrize the Cantor set or the line so as to give them arbitrarily high Hausdorff dimension.
You could restrict to length space to rule out Cantor sets and snowflakes spaces (they do not have any non-constant rectifiable curve), but I tend to think that any result in this direction will "factor through topology", by which I mean that one proves that the Hausdorff dimension bound on the length space imposes some topological property that in turns provides embeddability.
-
There's also the issue of projections onto dust-like sets. For instance, the Cartesian product of some totally-disconnected dust-set and the unit-square will only ever admit planar-graph embeddings. The question is perhaps better-posed in terms of connected self-affine sets with some non-snowflake-like conditions imposed (I'm not familiar with the definition of a "snowflake space"). – Nick Loughlin Feb 22 2011 at 11:27
The Sierpinski gasket is not a good example for this because of the bound you saw on the degree of the graph. I'd venture to say that this is because the gasket falls into a class of fractals called post-critically finite. It is the fact that in pcf fractals when level n cells intersect there are a uniformly bounded number of them. You might be interested in looking at finitely ramified but not post-critically finite fractals. See this for some nice pictures of the Diamond fractal. Since the number of level n-cells intersecting is unbounded in n you can embed graphs without a bound on degree.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9239451885223389, "perplexity_flag": "head"}
|
http://mathoverflow.net/revisions/91425/list
|
4 edited title
# statementaboutThe word problem inbraidgroups
3 edited tags
2 added a ^
Hi,
i have read a statement from Sossinsky and Prasolov' s book "Knots, Lİnks, Braids and 3-Manifold", it says that two reduced word represent isotopic braids if and only if they have the same reduced form, page 54. My claim is that this statement is not true: take $b_2b_1b_2^{-1}b_3^{-1}b_3^{-1}$ and $b_3^{-1}b_2b_3b_2b_1b_2^{-1}b_2^{-1}b_3{-1}b_2^{-1}$. b_3^{-1}b_2b_3b_2b_1b_2^{-1}b_2^{-1}b_3^{-1}b_2^{-1}\$. They represent isotopic braids but they are not the same. What is the correction of this statement?
zati
1
# statement about word problem
Hi,
i have read a statement from Sossinsky and Prasolov' s book "Knots, Lİnks, Braids and 3-Manifold", it says that two reduced word represent isotopic braids if and only if they have the same reduced form, page 54. My claim is that this statement is not true: take $b_2b_1b_2^{-1}b_3^{-1}b_3^{-1}$ and $b_3^{-1}b_2b_3b_2b_1b_2^{-1}b_2^{-1}b_3{-1}b_2^{-1}$. They represent isotopic braids but they are not the same. What is the correction of this statement?
zati
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9232901334762573, "perplexity_flag": "middle"}
|
http://nrich.maths.org/2661&part=
|
### Greetings
From a group of any 4 students in a class of 30, each has exchanged Christmas cards with the other three. Show that some students have exchanged cards with all the other students in the class. How many such students are there?
### Writ Large
Suppose you had to begin the never ending task of writing out the natural numbers: 1, 2, 3, 4, 5.... and so on. What would be the 1000th digit you would write down.
### Euromaths
How many ways can you write the word EUROMATHS by starting at the top left hand corner and taking the next letter by stepping one step down or one step to the right in a 5x5 array?
# Consecutive Seven
##### Stage: 3 Challenge Level:
Start with the set of the twenty-one numbers $0$ - $20$.
Can you arrange these numbers into seven subsets each of three numbers so that when the numbers in each are added together, they make seven consecutive numbers?
For example, one subset might be $\{2, 7, 16\}$
$2 + 7 + 16 = 25$
another might be $\{4, 5, 17\}$
$4 + 5 + 17 = 26$
As $25$ and $26$ are consecutive numbers these sets are the kind of thing that you need.
[Remember that consecutive numbers are numbers which follow each other when you are counting, for example, $4$, $5$, $6$, $7$ or $19$, $20$, $21$, $22$, $23$.]
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.944043755531311, "perplexity_flag": "middle"}
|
http://en.m.wikibooks.org/wiki/PBASIC_Programming/RCTIME
|
# PBASIC Programming/RCTIME
## Analog Circuits
Many times, the devices that we connect to our BasicStamp will be regular analog devices, not digital devices. The difference is that instead of sending out ones or zeros, we need to send out a voltage value. As an example, if we wanted to send the value 4, we could either send a digital signal (100), or we could send an analog signal (+4V).
↑Jump back a section
## Resistors and Capacitors
A resistor is a device that slows down the flow of electric current. We have already seen Ohm's law before, but we will repeat it here:
$v = ir$
r is the relationship between the voltage v and and the current i. The value of a resistor is measured in units of Ohms (Ω).
A capacitor is a special type of device that stores energy. When you put a voltage across a capacitor, it fills with energy. When you take the voltage away, the capacitor releases that energy. For an example, a flash bulb in a camera uses a capacitor to store lots of energy until you press the picture button. When you press the button, the capacitor releases the energy very quickly, and the camera makes a bright flash. Without the capacitor the batteries would never be able to produce enough energy so quickly.
The ability of a capacitor to store energy is measured in units called Farads. Most capacitors that will be used in small applications have very small values, such as 1 millifarad or less.
### RC Circuits
An electric circuit that contains both resistors and capacitors is known as an RC circuit. A capacitor does not charge or discharge it's energy instantly. There is a special value, known as the time constant that determines the amount of time the capacitor takes to charge. The time constant is dependant on the value of the capacitor and the value of the resistor.
### Time Constants
The time constant of a circuit is calculated as:
$\tau = rC$
Where τ is the time constant, in seconds. r is the resistance, in Ohms, and C is the capacitance, in Farads. It takes approximately 5 time constants for an RC circuit to completely charge or discharge energy.
### Measuring a Time Constant
The BasicStamp can measure the time constant directly, without first needing to measure the resistance and the capacitance of the circuit. To do this, the BasicStamp outputs 5 volts into the circuit, long enough for the circuit to charge. Then, the BasicStamp turns the output port into an input port, allowing the circuit to discharge energy. When the amount of energy reaches zero, it has been 5 time constants. The BasicStamp divides the amount of time by 5 and returns the time constant value. Keep in mind that this is an approximate process, and it will fail for very large time constants (because the circuit will not have enough time to charge completely).
↑Jump back a section
## RCTIME
The RCTIME function measures the time constant of a circuit attached to one of the ports. The other end of the circuit should be attached to ground. It will return a value for the time constant in milliseconds. To read the time constant on port 11, we would write:
```MyByte VAR Byte
RCTIME 11, MyByte
```
↑Jump back a section
## Special Resistors
There are several types of resistors that we can use in our circuits to do different tasks.
### Potentiometers
A potentiometer, or more simply a "pot" is a resistor with variable resistance. Potentiometers typically have some sort of knob or dial that can be turned to alter the resistance. A good example of a potentiometer is a volume knob in a stereo. When the knob is turned up, the resistance decreases, more current flows, and the sound becomes louder. When the knob is turned down, the resistance increases, less current flows, and the sound becomes softer. Another example is a dimmer switch for a light bulb. The more dim the light, the higher the resistance in the switch must be.
If we have a circuit with a potentiometer, we know what C is, and we can measure the time constant, so we can calculate what the resistance must be. If we know what the maximum and minimum resistance is for the potentiometer, we can calculate how much the knob is turned, and in some cases we can even calculate the exact angle that the knob is pointing at.
Potentiometers can appear as knobs, but they can also be attached to wheels to measure the amount that a wheel has rotated. They can also be available as slide switches.
### Thermistors
Thermistors are resistors whose resistance changes with temperature. These are commonly used in electric thermometers. By measuring the RC time of a thermistor circuit, we can calculate the temperature.
### Photoresistors
A photoresistor is a resistor whose resistance changes depending on how much light there is. By measureing the time constant of the photoresistor, we can determine how much light is shined on it. An example of this is an automatic night-light, which turns on when it detects the light is off.
↑Jump back a section
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9198299646377563, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/algebra/19123-remainder-theorem.html
|
# Thread:
1. ## Remainder theorem
Hi all,
I have the fraction:
$(x^4 + 3x^2 - 4)/(x^2 + 1)$
We have been asked to express in mixed number form using i) long division and ii) using the remainder theorem.
Have done the long division bit and got a remainder of -6.
I am not sure how to use the remainder theorem with the x^2 + 1 as if you use f(-1) you get the remainder 0. How do i go about using the remainder theorem with a squared x?
Thanks!
2. Originally Posted by steve@thecostins.co.uk
Hi all,
I have the fraction:
$(x^4 + 3x^2 - 4)/(x^2 + 1)$
We have been asked to express in mixed number form using i) long division and ii) using the remainder theorem.
Have done the long division bit and got a remainder of -6.
I am not sure how to use the remainder theorem with the x^2 + 1 as if you use f(-1) you get the remainder 0. How do i go about using the remainder theorem with a squared x?
Thanks!
Technically you can't use $x^2 + 1$ with synthetic division. However if you set $y = x^2$ then your problem becomes to divide $y^2 + 3y - 4$ by $y + 1$ which can be done by synthetic division.
-Dan
3. ## Remainder theorem
Thanks! I remember doing things like that before. But usually you have to work with y when you've finished.
Do i have to do anything to y when i'm done? Otherwise they might has well just asked me work out the quadratic divided by x + 1.
Does that make any sense?
Many thanks
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9446160197257996, "perplexity_flag": "middle"}
|
http://en.m.wikibooks.org/wiki/Understanding_Darcs/Patch_theory
|
# Understanding Darcs/Patch theory
## Math and computer science nerds only
(The occasional physicist will be tolerated)
Casual users be warned, the stuff you're about to read is not for the faint of heart! If you're a day-to-day darcs user, you probably do not need to read anything from this page on. However, if you are interested in learning how darcs really works, we invite you to roll up your sleeves, and follow us in this guided tour of the growing Theory of Patches.
↑Jump back a section
## What is the theory of patches?
The darcs patch formalism is the underlying "math" which helps us understand how darcs should behave when exchanging patches between repositories. It is implemented in the darcs engine as data structures for representing sequences of patches and Haskell functions equivalent to the operations in the formalism. This section is addressed at two audiences: curious onlookers and people wanting to participate in the development of the darcs core. My aim is to help you understand the intuitions behind all this math, so that you can get up to speed with current conflictors research as fast as possible and start making contributions. You should note that I myself am only starting to learn about patch theory and conflictors, so there may be mistakes ahead.
↑Jump back a section
## Why all this math?
One difference between centralized and distributed version control systems is that "merging" is something that we do practically all the time, so it is doubly important that we get merging right. Turning the problem of version control into a math problem has two effects: it lets us abstract all of the irrelevant implementation details away, and it forces us to make sure that whatever techniques we come up with are fundamentally sound, that they do not fall apart when things get increasingly complicated. Unfortunately, math can be difficult for people who do not make use of it on a regular basis, so what we attempt to do in this manual is to ease you into the math through the use of concrete, illustrated examples.
A word of caution though, "getting merging right" does not necessarily consist of having clever behaviour with respect to conflicts. We will begin by focusing on successful, non-conflicting merges and move on to the darcs approach to handling conflicts.
↑Jump back a section
## Context, patches and changes
Let us begin with a little shopping. Arjan is working to build a shopping list for the upcoming darcs hackathon. As we speak, his repository contains a single file s_list with the contents
```1 apples
2 bananas
3 cookies
4 rice
```
Note:the numbers you see are just line numbers; they are not part of the file contents
As we will see in this and other examples in this book, we will often need to assign a name to the state of the repository. We call this name a context. For example, we can say that Arjan's repository is a context $o$, defined by there being a file s_list with the contents mentioned above.
Arjan makes a modification which consists of adding a line in s_list. His new file looks like this:
```1 apples
2 bananas
3 beer
4 cookies
5 rice
```
When Arjan records this change (adding beer), we produce a patch which tells us not only what contents Arjan added ("beer") but where he added them, namely to line 3 of s_list. We can say that in his repository, we have moved from context $o$ to context $a$ via a patch A. We can write this using a compact notation like ${}^oA^a$ or using the graphical representation below:
↑Jump back a section
## Sequential patches
Starting from this context, Arjan might decide to make further changes. His new changes would be patches that apply to the context of the previous patches. So if Arjan makes a new patch $B$ on top of this, it would take us from context $a$ to some new context $b$. The next patch would take us from this context to yet another new context $c$, and so on and so forth. Patches which apply on top of each other like this are called sequential patches. We write them in left to right order as in the table below, either representing the contexts explicitly or leaving them out for brevity:
with context sans context (shorthand)
${}^oA^a$ $A$
${}^oA^aB^b$ $AB$
${}^oA^aB^bC^c$ $ABC$
All darcs repositories are simply sequences of patches as above; however, when performing a complex operation such as an undo or exchanging patches with another user, it becomes absolutely essential that we have some mechanism for rearranging patches and putting them in different orders. Darcs patch theory is essentially about giving a precise definition to the ways in which patches and patch-trees can be manipulated and transformed while maintaining the coherency of the repository.
↑Jump back a section
## Inverses
Let's return to the example from the beginning of this module. Arjan has just added beer to our hackathon shopping list, but in a sudden fit of indecisiveness, he reconsiders that thought and wants to undo his change. In our example, this might consist of firing up his text editor and remove the offending line from the shopping list. But what if his changes were complex and hard to keep track of? The better thing to do would be to let darcs figure it out by itself. Darcs does this by computing an inverse patch, that is, a patch which makes the exact opposite change of some other patch:
The inverse of patch $P$ is $P^{-1}$, which is the patch for which the composition $PP^{-1}$ makes no changes to the context and for which the inverse of the inverse is the original patch.
So above, we said that Arjan has created a patch $A$ which adds beer to the shopping list, passing from context $o$ to $a$, or more compactly, ${}^{o}A^{a}$. Now we are going to create the inverse patch $A^{-1}$, which removes beer from the shopping list and brings us back to context $o$. In the compact context-patch notation, we would write this as ${}^{o}A^{a}{A^{-1}}^{o}$. Graphically, we would represent the situation like this:
Patch inverses may seem trivial, but as we will see later on in this module, they are a fundamental operation and absolutely crucial to make some of the fancier stuff -- like merging -- work correctly. One of the rules we impose in darcs is that every patch must have an inverse. These rules are what we call patch properties. A patch property tells us things which must be true about a patch in order for darcs to work. People often like to dream up new kinds of patches to extend darcs's functionality, and defining these patch properties is how we know that their new patch types will behave properly under darcs. The first of these properties is dead simple:
Patch property: Every patch must have an inverse
↑Jump back a section
## Commutation
Arjan was lucky to realise that he wanted to undo his change as quickly as he did. But what happens if he was a little slower to realise his mistake? What if he makes some other changes before realising that he wants to undo the first change? Is it possible to undo his first change without undoing all the subsequent changes? It sometimes is, but to do so, we need to define an operation called commutation.
Consider a variant of the example above. As usual, Arjan adds beer to the shopping list. Next, he decides to add some pasta on line 5 of the file:
The question is how darcs should behave if Arjan now decides that he does not want beer on the shopping list after all. Arjan simply wants to remove the patch that adds the beer, without touching the one which adds pasta. The problem is that darcs repositories are simple, stupid sequences of patches. We can't just remove the beer patch, because then there would no longer be a context for the pasta patch! Arjan's first patch $A$ takes us to context $a$ like so: ${}^{o}A^{a}$, and his second patch takes us to context $b$, notably starting from the initial context $a$: ${}^{a}B^{b}$. Removing patch $A$ would be pulling the rug out from under patch $B$. The trick behind this is to somehow change the order of patches $A$ and $B$. This is precisely what commutation is for:
The commutation of patches $X$ and $Y$ is represented by $XY \leftrightarrow {Y_1}{X_1}$. $X_1$ and $Y_1$ are intended to perform the same change as $X$ and $Y$
### Why not keep our old patches?
To understand commutation, you should understand why we cannot keep our original patches, but are forced to rely on evil step sisters instead. It helps to work with a concrete example such as the beer and pasta one above. While we could write the sequence $AB$ to represent adding beer and then pasta, simply writing $BA$ for pasta and then beer would be a very foolish thing to do.
Put it this way: what would happen if we applied $B$ before $A$? We add pasta to line 5 of the file:
```1 apples
2 bananas
3 cookies
4 rice
5 pasta
```
Does something seem amiss to you? We continue by adding beer to line 3. If you pay attention to the contents of the end result, you might notice that the order of our list is subtly wrong. Compare the two lists to see why:
$BA$ (wrong!) $AB$ (right)
```1 apples
2 bananas
3 beer
4 cookies
5 rice
6 pasta
```
```1 apples
2 bananas
3 beer
4 cookies
5 pasta
6 rice
```
It might not matter here because it is only a shopping list, but imagine that it was your PhD thesis, or your computer program to end world hunger. The error is all the more alarming because it is subtle and hard to pick out with the human eye.
The problem is one of context, specifically speaking, the context between $A$ and $B$. In order for instructions like "add pasta to line 5 of s_list" to make any sense, they have to be in the correct context. Fortunately, commutation is easy to do, it produces two new patches $B_1$ and $A_1$ which perform the same change as $A$ and $B$ but with a different context in between.
Exercises
Patch $A_1$ is identical to $A$. It adds "beer" to line 3 of the shopping list. But what should patch $B_1$ do?
One more important detail to note though! We said earlier that getting the context right is the motivation behind commutation -- we can't simply apply patches $AB$ in a different order, $BA$ because that would get the context all wrong. But context does not have any effect on whether A and B can commute (or how they should commute). This is strictly a local affair. Conversely, the commutation of A and B does not have any effect either on the global context: the sequences $AB$ and $B_1A_1$ (where the latter is the commutation of the former) start from the same context and end in the same context.
### The complex undo revisited
Now that we know what the commutation operation does, let's see how we can use it to undo a patch that is buried under some other patch. The first thing we do is commute Arjan's beer and pasta patches. This gives us an alternate route to the same context. But notice the small difference between $B$ and $B_1$!
The purpose of commuting the patches is essentially to push patch $A$ on to end of the list, so that we could simply apply its inverse. Only here, it is not the inverse of $A$ that we want, but the inverse of its evil step sister $A_1$. This is what applying that inverse does: it walks us back to the context $b1$, as if we had only applied the pasta patch, but not the beer one.
And now the undo is complete. To sum up, when the patch we want to undo is buried under some other patch, we use commutation to squeeze it to the end of the patch sequence, and then compute the inverse of the commuted patch. For the more sequentially minded, this is what the general scheme looks like:
Exercises
Imagine the opposite scenario: Arjan had started by adding pasta to the list, and then followed up with the beer.
1. If there was no commutation, what concretely would happen if he tried to remove the pasta patch, and not the beer patch?
2. Work out how this undo would work using commutation. Pay attention to the line numbers.
### Commutation and patches
Every time we define a type of patch, we have to define how it commutes with other patches. Most of time, it is very straightforward. When commuting two hunk patches, for instance, we simply adjust their line offset. For instance, we want to put something on line 3 of the file, but if we use patch $Y$ to insert a single line before that, what used to be line 3 now becomes line 4! So patch $X_1$ inserts the line "x" into line 4, much like $X$ inserts it into line 3.
Some patches cannot be commuted. For example, you can't commute the addition of a file with adding contents to it. But for now, we focus on patches which can commute.
↑Jump back a section
## Merging
Note: this might be a good place to take a break. We are moving on to a new topic and new (but similar) examples
We have presented two fundamental darcs operations: patch inverse and patch commutation. It turns out these two operations are almost all that we need to perform a darcs merge.
Arjan and Ganesh are working together to build a shopping list for the upcoming darcs hackathon. Arjan initialises the repository and adds a file s_list with the contents
```1 apples
2 bananas
3 cookies
4 rice
```
He then records his changes, and Ganesh performs a `darcs get` to obtain an identical copy of his repository. Notice that Arjan and Ganesh are starting from the same context
Arjan makes a modification which consists of adding a line in s_list. His new file looks like this:
```1 apples
2 bananas
3 beer
4 cookies
5 rice
```
Arjan's patch brings him to a new context $a$:
Now, in his repository, Ganesh also makes a modification; he decides that s_list is a little hard to decipher and renames the file to shopping. Remember, at this point, Ganesh has not seen Arjan's modifications. He's still starting from the original context $o$, and has moved a new context $b$, via his patch $B$:
### Parallel patches
At this point in time, Ganesh decides that it would be useful if he got a copy of Arjan's changes. Roughly speaking we would like to pull Arjan's patch A into Ganesh's repository B. But, there is a major problem! Namely, Arjan's patch takes us from context $o$ to context $a$. Pulling it into Ganesh's repository would involve trying to apply it to context $b$, which we simply do not know how to do. Put another way: Arjan's patch tells us to add a line to file s_list; however, in Ganesh's repository, s_list no longer exists, as it has been moved to shopping. How are we supposed to know that Arjan's change (adding the line "beer") is supposed to apply to the new file shopping instead?
Arjan and Ganesh's patches start from the same context o and diverge to different contexts a and b. We say that their patches are parallel to each other, and write it as $A \vee B$. In trying to pull patches from Arjan's repository, we are trying to merge these two patches. The basic approach is to convert the parallel patches into the sequential patches $BA_1$, such that $A_1$ does essentially the same change as $A$ does, but within the context of b. We want to produce the situation ${}^o{B^b}{A_1^{c}}$
### Performing the merge
Converting Arjan and Ganesh's parallel patches into sequential ones requires little more than the inverse and commutation operations that we described earlier in this module:
1. So we're starting out with just Ganesh's patch. In context notation, we are at ${}^{o}{B}^{b}$
2. We calculate the inverse patch $B^{-1}$. The sequence $BB^{-1}$ consists of moving s_list to shopping and then back again. We've walked our way back to the original context: ${}^{o}{B}^{b}{B^{-1}}^{o}$
3. Now we can apply Arjan's patch without worries: ${}^{o}{B}^{b}{B^{-1}}^{o}A^{a}$, but the result does not look very interesting, because we've basically got the same thing Arjan has now, not a merge.
4. All we need to do is commute the last two patches, ${B}^{-1}{A}$, to get a new pair of patches ${A_1}{B_1}^{-1}$. Still, the end result doesn't seem to look very interesting since it results in exactly the same state as the last step: ${}^{o}{B}^{b}{A_1}^{c}{{B_1}^{-1}}^{a}$
5. However, one crucial difference is that the second to last patch produces just the state we're looking for! All we now have to do to get at it is to ditch the ${B_1}^{-1}$ patch, which is only serving to undo Ganesh's precious work anyway. That is to say, by simply determining how to produce an $A_1$ which will commute with $B$, we have determined the version of $A$ which will update Ganesh's repository.
The end result of all this is that we have the patch we're looking for, $A_1$ and a successful merge.
### Merging is symmetric
Merging is symmetric
Concretely, we've talked about Ganesh pulling Arjan's patch into his repository, so what about the other way around? Arjan pulling Ganesh's patch into his repository would work the same exact way, only that he is looking for a commuted version of Ganesh's patch $B_1$ that would apply to his repository. If Ganesh can pull Arjan's patch in, then Arjan can pull Ganesh's one too, and the result would be exactly the same:
The result of a merge of two patches $A$ and $B$ is one of two patches $A_1$ and $B_1$, which satisfy the relationship $A \vee B \Longrightarrow B{A_1} \leftrightarrow A{B_1}$
The merge definition describes what should happen when you combine two parallel patches into a patch sequence. The built-in symmetry is essential for darcs because a darcs repository is defined entirely by its patches. Put another way,
To be written
### The commutation with inverse property
The definition of a merge tells us what we want merging to look like. How did we know how to actually perform that merge? The answer comes out of the following property of commutation and inverse: if you can commute the inverse of a patch $A^{-1}$ with some other patch $B$, then you can also commute the patch itself against $B_1$.
$B{A_1} \leftrightarrow A{B_1}$ if and only if ${B_1}{A_1}^{-1} \leftrightarrow {A^{-1}}B$, provided both commutations succeed.
Note how the left hand side of this property exactly matches the relationship demanded by the definition of a merge. To see why this all works,
To be written
↑Jump back a section
## Definitions and properties
definition of inverse $AA^{-1}$ has no effect
inverse of an inverse $(A^{-1})^{-1} = A$
inverse composition property $(AB)^{-1} = B^{-1}A^{-1}$
definition of commutation $AB \leftrightarrow {B_1}{A_1}$
definition of a merge $A \vee B \Longrightarrow B{A_1} \leftrightarrow A{B_1}$
commutation with inverse property $B{A_1} \leftrightarrow A{B_1}$ if and only if ${B_1}{A_1}^{-1} \leftrightarrow {A^{-1}}B$
Next Page: More patch theory | Previous Page: Undoing mistakes
Home: Understanding Darcs
↑Jump back a section
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 112, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9585074186325073, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/32757?sort=votes
|
## Tractability of forcing-invariant statements under large cardinals
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
It is usual to mention theorems of the kind:
Th. Assume there is a proper class of Woodin cardinals, $\mathbb{P}$ is a partial order and $G \subseteq \mathbb{P}$ is V-generic, then $V \models \phi \iff V[G] \models \phi$
where $\phi$ is some set theoretic statement (like "the Strong Omega Conjecture holds"), as some sort of evidence that $\phi$ is a statement less intractable than other statements like CH which are not focing-invariant.
My question is, in wich sense are these statements more tractable? What kind of "empirical evidence" gives support to the hope that they can be decided by large cardinal axioms?
-
I added the forcing and large-cardinals tags. – Joel David Hamkins Jul 21 2010 at 15:34
## 3 Answers
Typically, generic absoluteness is a consequence of a stronger property, that in many cases is really the goal one is after. To explain this stronger property, let me begin by reviewing two important absoluteness results.
1) The first is Mostowski's Absoluteness. Suppose $\phi$ is $\Sigma^1_1(a)$, where $a\in\omega^\omega$. This means that $\phi(x)$ for $x\in\omega^\omega$, asserts
$∃y\in\omega^\omega\forall n\in \omega R(x\upharpoonright n, y\upharpoonright n,a\upharpoonright n)$,
where $R$ is a predicate recursive in $a$. These statements are very absolute: Suppose that $M$ is a well-founded model of a decent fragment of ZF, and that $a,x\in M$. Then $\phi(x)$ holds iff $M\models \phi(x)$.
In particular, whether or not $\phi(x)$ holds cannot be changed by forcing, or by passing to an inner or outer model. Note that $M$ could be countable. In fact, only needs to be an $\omega$-model; well-foundedness is not necessary.
This is how the result is usually stated. What is going on is the following:
Suppose that $T$ is a tree (in the descriptive set theoretic sense) and that $T\in M$. Then $T$ is ill-founded iff $M\models T$ is ill-founded.
In particular, $T$ could be the tree associated to $\phi$. This is the tree of all $(s,t)$ such that $s,t$ are finite sequences of the same length $l$, and $\forall n\le l$ $R(s\upharpoonright l,t\upharpoonright n,a\upharpoonright n)$. So $T$ is the tree of attempts to verify $\phi$: $\phi(x)$ holds iff (there is a $y$ such that for all $n$, $(x\upharpoonright n,y\upharpoonright n)\in T$) iff the tree $T_x$ is ill-founded. Recall that $T_x$ consists of all $t$ such that, if $l$ is the length of $t$, then $(x\upharpoonright n,t\upharpoonright n)\in T$ for all $n\le l$.
The point is that $T$ is a very simple object. As soon as $T,x$ are present, $T_x$ can be built, and the result of the construction of $T_x$ is the same whether it is performed in $V$ or in $M$. Since well-foundedness is absolute, whether or not $T_x$ is ill-founded is the same whether we check this in $M$ or in $V$. Of course, $T_x$ is ill-founded iff $\phi(x)$ holds.
The moral is that the truth of $\Sigma^1_1$ statements is certified by trees. And I think that this is saying that in a strong sense, $\Sigma^1_1$ statements are very tractable. All we need to check their validity is a very easy to build tree and, once we have it, the tree is our certificate of truth or falsehood, this cannot be altered.
Recall that proofs in first-order logic can be described by means of certain finite trees. If something is provable, the tree is a very robust certificate. This is a natural weakening of that idea.
Of course one could argue that, if a $\Sigma^1_1$ statement is not provable, then in fact it may be very hard to establish its truth value, so tractability is not clear. Sure. But, independently of whether or not one can prove something or other, the certificate establishing this truth value is present from the beginning. One does not need to worry that this truth value may change by changing the model one is investigating.
2) The second, and best known, absoluteness result, is Shoenfield's absoluteness. Suppose $\phi$ is $\Sigma^1_2(a)$. This means that $\phi(x)$ holds iff
$\exists y\forall z\exists n$ $R(y\upharpoonright n,z\upharpoonright n,x\upharpoonright n,a\upharpoonright n)$,
where $R$ is recursive in $a$. Let $M$ be any transitive model of a decent fragment of ZFC, and suppose that $\omega_1\subset M$ and $a,x\in M$. Then $\phi(x)$ holds iff $M\models\phi(x)$.
This is again a very strong absoluteness statement. Again, if one manages to show the consistency of $\phi(x)$ by, for example, passing to an inner model or a forcing extension, then in fact one has proved its truth.
Again, one could say that if $\phi$ is not provable, then it is in fact not very tractable at all. But the point is that to investigate $\phi$, one can use any tools whatsoever. One only needs to worry about its consistency, for example, and one can make use of any combinatorial statements that one could add to the universe by forcing.
Just as in the previous case, $\Sigma^1_2$ statements can be certified by trees. The tree associated to $\phi$ is more complicated than in the previous case, and it is now a tree of $\omega\times\omega_1$. (Jech's and Kanamori's books explain carefully its construction.) Again, the tree is very absolute: As soon as we have $a$ and all the countable ordinals at our disposal, the tree can be constructed. (One comparing two models $M\subset V$, we mean all the countable ordinals of $V$, even if $M$'s version of $\omega_1$ is smaller.)
3) Generic absoluteness of a statement $\phi$ is typically a consequence of the existence of absolutely complemented trees associated to $\phi$. In fact, all generic absoluteness results I'm aware of are established by proving that there are such trees ("conditional" generic absoluteness results, such as only for proper forcings, or only in the presence of additional axioms, are somewhat different). This is a direct generalization of the situations above.
To define the key concept, recall that if $A$ is a tree of $\omega\times X$, then the projection $p[A]$ of $A$ is the set of all $x\in\omega^\omega$ such that $\exists f\in X^\omega\forall n\in\omega\,(x\upharpoonright n,f\upharpoonright n)\in A$.
Two (proper class) trees $A$ and $B$ on $\omega\times ORD$ are absolutely complemented iff:
1. $p[A]\cap p[B]=\emptyset$ and $p[A]\cup p[B]=\omega^\omega$.
2. Item 1 holds in all generic extensions.
A statement $\phi$ admits such a pair iff, in addition,
1. In any forcing extension, $\phi(x)$ iff $x\in p[A]$.
The idea is that this is a precise, formal, definable approximation to the intuitive statement one would actually like, namely, that there are such trees describing $\phi$ that have this complementary'' behavior in all outer models. First-order logic limits ourselves to considering forcing extensions.
Let me point out that $\Sigma^1_1$ and $\Sigma^1_2$ statements admit absolutely complemented pairs, so the existence of such a pair is a natural (far reaching) generalization of the two cases above.
Once we accept large cardinals, we can show that much larger classes than $\Sigma^1_2$ admit absolutely complemented trees. For example, any projective statement does. Once again, the point is that as soon as we have the large cardinals and real parameters in our universe, we have the trees, and the trees certify in unambiguous forcing-unchangeable terms, whether the statements hold at any given real. It is in this sense that we consider these statements more tractable.
Here is a rough sketch of an example I particularly like, due to Martin-Solovay (for measurables) and Woodin (in its optimal form). For details, see my paper with Ralf Schindler, projective well-orderings of the reals,'' Arch. Math. Logic (2006) 45:783–793:
$V$ is closed under sharps iff $V$ is $\Sigma^1_3$-absolute. $(*)$
The right hand side means that for any $\Sigma^1_3$ statement $\phi$ (so now we have three alternations of quantifiers) and any two step forcing ${\mathbb P}∗\dot{\mathbb Q}$, for any $G$, ${\mathbb P}$-generic over $V$, any $H$, ${\mathbb Q}$-generic over $V[G]$, and for any real $x$ in $V[G]$, we have $$V[G]\models\phi(x)\Longleftrightarrow V[G][H]\models\phi(x).$$
The left hand side of $(*)$ is a weakening of "there is a proper class of measurable cardinals", which is how the statement is usually presented.
The proof of the implication from left to right in $(*)$ goes by building a tree of attempts to find a witness to the negation of a $\Sigma^1_2$ statement. The goal is that if such a witness can be added by forcing, then in fact we can find one in the ground model. If there is a forcing adding a witness, there is a countable transitive model where this is the case. Essentially, the tree gives the construction of such a model, bit by bit, and if we have a branch, then we have such a model.
So: If there is a witness added in a forcing extension, the tree will have there a branch. So it is illfounded. By absoluteness of well-foundedness, the tree has a branch in $V$. The sharps are used to stretch'' the model so that we can use Shoenfield absoluteness, and conclude that there must be a witness in $V$.
4) Projective absoluteness, a consequence of large cardinals, is established by showing the existence of absolutely complemented trees for any projective statement. The theory of universally Baire sets originates with this concept, and the closely related notion of homogeneously Suslin trees. All of this is also connected to determinacy. Once again, to drive the point home: Generic absoluteness is not the goal. The goal is the existence of the pair of trees. Once we have them, we have a strong certificate of truth or falsehood of a statement. I do not know if one is to accept the search for such trees as a more tractable problem than the original statement whose pair of trees we are now searching for. But it certainly says that consistency of the statement, using large cardinals or any combinatorial tools whatsoever, is enough to have a proof of the statement. This seems much more hopeful and generous an approach than if only proofs in the usual sense are allowed. The existence of these trees for projective statements is what I meant in a comment by large cardinals settle second order arithmetic.'' Put yet another way: If you show, for example, that a projective statement is not 'decidable' (in the presence of large cardinals), meaning that it is consistent and so is its negation, then you have either actually showed that certain large cardinals are inconsistent, or you have found a way of changing the truth of arithmetic statements, and both of these options are much more significant events than the proof of whatever the projective statement you were interested in was. More likely than not, the truth value of the statement will be uncovered at some point, and you know there is no ambiguity as of what it would be, since the witnessing trees are already present in the universe.
(In spite of its length, I am not completely happy with this answer, but I would need to get much more technical to expand on the many interesting points that your question raises. Hopefully there is some food for thought here. For nice references to some of the issues I mention here, Woodin's article in the Notices is a good place to start, and Steel's paper on the derived model theorem has much of the details.)
-
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
There are two ingredients in your question: First, statements invariant under forcing and second, large cardinals. The existence of certain large cardinals is a strengthening of ZFC. Somewhat surprisingly, the known large cardinal axioms are essentially linearly ordered, i.e., given two such axioms, the consistency of one implies the consistency of the other.
Most set theorists believe that these large cardinal axioms are consistent.
Hence there is a natural direction to strengthen ZFC, namely by adding large cardinal axioms. If we want to do mathematics, why not work in the strongest possible theory? Hence we assume that large cardinals exist (like a proper class of Woodins).
Why are forcing invariant statements less intractable than others? Well, simply because there is some hope that the statement can be decided in ZFC, or in ZFC + large cardinals, which we believe in.
Moreover, if $\phi$ is invariant under forcing, we can actually use forcing to prove or disprove $\phi$.
An example is the Baumgartner-Hajnal partition theorem, a Ramsey like statement. In the original proof it is shown that the statement is forcing invariant and follows from Martin's Axiom. Since Martin's Axiom can be forced over every set-theoretic universe, this shows that the Baumgartner-Hajnal statement is actually true in every model of set theory.
Now, of course there may be statements that are invariant under forcing, in particular statements about the natural numbers such as the Riemann hypothesis or P=NP, that might not be decidable in ZFC + large cardinals.
We currently have no method to prove independence results over ZFC other than forcing, inner models and consistency strength (the existence of a large cardinal cannot be proved in ZFC, because from the existence of the L.C. it follows that ZFC is consistent, but by the second incompleteness theorem this cannot be proved in ZFC (unless, of course, ZFC is inconsistent)).
If we are confronted with a statement that is forcing invariant, yet not decided by ZFC + LC, this statement is actually less tractable than others, because we currently may not have any way of proving its independence over ZFC + LC.
-
I see. Forcing-invariant statements fall on the side of statements "potentially" provable from ZFC + LC. Surely this could be what is meant by "more tractable". But as you already mention, this is would be a bit misleading, since they are also "potentially" unprovable from ZFC + LC, with the aggravating circumstance that in that case we wouldn't have ways to produce any models for them. – Marc Alcobé García Jul 21 2010 at 13:13
(Continuing from my last comment) I guess there must be some "empirical evidence" that forcing-invariant (generically absolute) statements are with high probability decidable by ZFC + LC. This would make a lot more sense. Maybe I should restate my question and ask for concrete instances of this empirical evidence. – Marc Alcobé García Jul 21 2010 at 13:14
Marc, there is a great deal of evidence. Large cardinals settle second order arithmetic, for example. – Andres Caicedo Jul 21 2010 at 20:41
Andres, what do you mean by that? No consistent theory can settle even first order arithmetic. – Joel David Hamkins Jul 22 2010 at 1:02
By a result of Woodin, if the Omega Conjecture holds (and so we have a proper class of Woodins), one can force an axiom that decides in Omega-logic the theory of `$H(\omega_2)$`. I don't know if this is what Andrés has in mind, but I can hardly see the link between this kind of results and the optimism for the decidability of the Omega Conjecture. – Marc Alcobé García Jul 22 2010 at 7:30
show 1 more comment
We have numerous statements that are invariant by set forcing, but which are independent of ZFC and even of ZFC + large cardinals. Thus, I dispute the premise of your question. Examples include:
• Eventual GCH. This is the assertion that the GCH holds for all sufficiently large cardinals. This is forcing invariant, by set forcing (see remarks below), since any given forcing notion can affect the continuum function only for cardinals of size less than the size of the forcing. But it is independent of ZFC, since it is implied by GCH and it is easy to use class forcing to produce models with unbounded violations of GCH.
• Eventual non-GCH, or eventual some-other-GCH-pattern. Similarly, we can arrange other GCH patterns just as easily, and as long as the statement is only that the pattern holds eventually, then it will again be independent of ZFC for the same reason.
• The previous examples are also independent of ZFC+ large cardinals, since we can control the GCH pattern while preserving most of the standard large cardinal notions.
• Any kind of eventual statement, about a feature that can be controlled locally by forcing. For example, assertions about the eventual pattern of the existence of $\kappa$-Souslin trees or $\Diamond_\kappa^*$ and so on. These can be controlled by forcing in a class iteration to achieve unbounded patterns, but set forcing can only affect things locally. So the statement that the eventual pattern is such-and-such will be invariant by set-forcing.
• Assertions that there are a proper class of such-and-such large cardinals. These are forcing invariant, because by the Levy-Solovay theorem, the large cardinals above the size of any forcing will be preserved. Thus, the assertion that there is a proper class of inaccessible cardinals, or Woodin cardinals or supercompact cardinals, etc. are all forcing invariant, but we cannot hope to settle these assertions in ZFC, or even with ZFC + very strong large cardinal axioms not of this particular form. For example, it is consistent (from a suitable hypothesis) with a supercompact cardinal (or any other standard large cardinal notion) that there is not a proper class of inaccessible cardinals. It is consistent with an almost huge cardinal that there is a proper class of supercomapct cardinals, and also that there is not (assuming a suitable LC hypothesis).
• The failure of the Ground Axiom. The Ground Axiom, which I introduced with Jonas Reitz, is the assertion that the universe is not a set-forcing extension of any inner model. Despite its second-order nature, it is actually first-order expressible. GA is true in L and forceable over any model of ZFC by class forcing, but once it fails, then of course it remains false in any set forcing extension. So $\neg GA$ is upwards forcing invariant by set forcing. And again, we get the independence here not just over ZFC, but over ZFC + large cardinals.
• There are other similar examples concerning the existence of bedrock models, that is, ground models of the universe that are not themselves forcing extensions of any inner model.
All of these statements are forcing-invariant in the sense you mention, but none of them are settled either way by large cardinal axioms.
Set forcing vs. class forcing. It is important in your example and all my examples above that we are speaking of forcing invariance by set forcing, that is, when the partial order is a set, rather than a proper class. Your example theorem, for example, is true for set forcing, but not for class forcing. In particular, the statement that there is a proper class of Woodin cardinals is itself destroyable by class forcing: one can perform the coding-the-universe forcing that obtains $V[G]=L[x]$ for a real $x$, and there are no Woodin cardinals in $L[x]$.
So you are only talking about forcing invariance for set forcing to begin with. Furthermore, the assertion that "$\varphi$ is set forcing invariant" is first-order expressible for set forcing, but not for class forcing. Thus, it is difficult to formalize or even express any general theory about forcing invariance by class forcing, although one can adopt ad hoc methods for particular statements.
-
1
I was aware about forcing invariance in my question being referred to set forcing. What I did not know is that there where statements invariant by set forcing, provably "independent of ZFC and even of ZFC + LC". In the light of your response now I guess there is some reason to expect the statement about the Strong Omega Conjecture to be decidable from ZFC + LC, but the theorem alone is not enough to justify that optimism. Am I right? – Marc Alcobé García Jul 21 2010 at 20:08
1
That would be closer to my view. I also have heard people make an argument along the lines you suggest, that forcing invariance might be evidence that we could hope to settle the question from ZFC + LC, but in light of the eventual-GCH type examples, I'm not sure how compelling this case is. – Joel David Hamkins Jul 22 2010 at 1:15
One could still try to argue that statements like the Strong Omega Conjecture are not of the eventual-GCH type... – Marc Alcobé García Jul 22 2010 at 6:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 130, "mathjax_display_tex": 1, "mathjax_asciimath": 4, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9525980353355408, "perplexity_flag": "head"}
|
http://mathematica.stackexchange.com/questions/tagged/precision
|
# Tagged Questions
The precision tag has no wiki summary.
1answer
85 views
### Output precision
I've solved some equations using FindRoot and then computed some values. Now when I print the output, I only get a certain precision {{0.01, 496.983, 61.3147, ...
2answers
73 views
### Problem with working precision
I have tried to resolve the problem of the following link How can I solve precision problem I can tell the problem described in that link shortly here, It's no mater how many precision is there after ...
0answers
23 views
### What is the precision of this number? [duplicate]
In a matrix evaluation, I got the following number. 2.8330574963868513 I understand that the number after shows the precision of the output number. But this ...
1answer
65 views
### Padding plot ticks with zeros on the right
I would like to know how can I make a real-valued plot tick pad with a zero to the right of the decimal point on integer values. This is what I have: ...
1answer
70 views
### Numerical Error with Large Matrices
I am writing a Finite Element Analysis program in Mathematica. The code involves handling a large matrix with large entries. I get an error when I try to use Mathematica's "LinearSolve" to solve a ...
5answers
409 views
### Why is this Mandelbrot set's implementation infeasible: takes a massive amount of time to do? [closed]
The Mandelbrot set is defined by complex numbers such as $z=z^2+c$ where $z_0=0$ for the initial point and $c\in\mathbb C$. The numbers grow very fast in the iteration. ...
2answers
136 views
### Cannot Get Numerical Results to Match
I try this numerical summation (in two parts) ...
1answer
99 views
### How to find derivative of a numerical solution, where precision is ambiguous?
I am trying to take the derivative of a numerical solution. I am concerned that the way I'm doing this may be problematic due to numerical error; I think there must be a better way but I'm not very ...
2answers
185 views
### Problem with setting working precision in NIntegrate
I want to obtain a good numerical approximation (up to 10 decimal place would be ok for me) to an integral: $$\int^{\infty}_{0} f(r)r^2dr$$ I am using the function $f(r)$, which is related to the ...
1answer
141 views
### Export image data precision
I'm sure there is a very easy way to control this, but I have been searching for an hour now without luck. I plot a function and export it to get a table that I can use in TikZ. ...
1answer
136 views
### ParallelTable and Precision
I'm using ParallelTable[] to calculate a function over a range of my parameters , ($\omega,\ell$). This seems to be working well (in terms of speed increase) except ...
3answers
359 views
### Strategies to solve an oscillatory integrand only known numerically
I have an integrand that looks like this: the details of computation are complicated but I only know the integrand numerically (I use NDSolve to solve second ...
1answer
148 views
### Confused by (apparent) inconsistent precision
$$e^{\pi \sqrt{163}} \approx 262537412640768743.99999999999925$$ E^(Pi Sqrt[163.0]) N[E^(Pi Sqrt[163.0]), 35] NumberForm[E^(Pi Sqrt[163.]), 35] returns ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8918234705924988, "perplexity_flag": "middle"}
|
http://www.reference.com/browse/wiki/Choked_flow
|
Definitions
# Choked flow
Choked flow of a fluid is a fluid dynamic condition caused by the Venturi effect. When a flowing fluid at a certain pressure and temperature flows through a restriction (such as the hole in an orifice plate or a valve in a pipe) into a lower pressure environment, under the conservation of mass the fluid velocity must increase for initially subsonic upstream conditions as it flows through the smaller cross-sectional area of the restriction. At the same time, the Venturi effect causes the pressure to decrease. Choked flow is a limiting condition which occurs when the mass flux will not increase with a further decrease in the downstream pressure environment.
For homogeneous fluids, the physical point at which the choking occurs for adiabatic conditions is when the exit plane velocity is at sonic conditions or at a Mach number of 1. It is most important to note that the mass flow rate can still be increased by increasing the upstream stagnation pressure, or by decreasing the upstream stagnation temperature.
The choked flow of gases is useful in many engineering applications because the mass flow rate is independent of the downstream pressure, depending only on the temperature and pressure on the upstream side of the restriction. Under choked conditions, valves and calibrated orifice plates can be used to produce a particular mass flow rate.
If the fluid is a liquid, a different type of limiting condition (also known as choked flow) occurs when the Venturi effect acting on the liquid flow through the restriction decreases the liquid pressure to below that of the liquid vapor pressure at the prevailing liquid temperature. At that point, the liquid will partially flash into bubbles of vapor and the subsequent collapse of the bubbles causes cavitation. Cavitation is quite noisy and can be sufficiently violent to physically damage valves, pipes and associated equipment. In effect, the vapor bubble formation in the restriction limits the flow from increasing any further.
## Mass flow rate of a gas at choked conditions
All gases flow from upstream higher stagnation pressure sources to downstream lower pressure sources. There are several situations in which choked flow occurs, such as: change of cross section (as in a convergent-divergent nozzle or flow through an orifice plate), Fanno flow, isothermal flow and Rayleigh flow.
### Choking in change of cross section flow
Assuming ideal gas behavior, steady state choked flow occurs when the ratio of the absolute upstream pressure to the absolute downstream pressure is equal to or greater than [(k + 1 ) / 2 ] k / (k - 1 ), where k is the specific heat ratio of the gas (sometimes called the isentropic expansion factor and sometimes denoted as $gamma$ ).
For many gases, k ranges from about 1.09 to about 1.41, and therefore [(k + 1 ) / 2 ] k / (k - 1 ) ranges from 1.7 to about 1.9 ... which means that choked flow usually occurs when the absolute source vessel pressure is at least 1.7 to 1.9 times as high as the absolute downstream pressure.
When the gas velocity is choked, the equation for the mass flow rate in SI metric units is:
$Q;=;C;A;sqrt\left\{;k;rho;P;bigg\left(frac\left\{2\right\}\left\{k+1\right\}bigg\right)^\left\{\left(k+1\right)/\left(k-1\right)\right\}\right\}$
where the terms are defined in the table below. If the density ρ is not known directly, then it is useful to eliminate it using the Ideal gas law corrected for the real gas compressibility:
$Q;=;C;A;P;sqrt\left\{bigg\left(frac\left\{;,k;M\right\}\left\{Z;R;T\right\}bigg\right)bigg\left(frac\left\{2\right\}\left\{k+1\right\}bigg\right)^\left\{\left(k+1\right)/\left(k-1\right)\right\}\right\}$
so that the mass flow rate is primarily dependent on the cross-sectional area A of the hole and the supply pressure P, and only weakly dependent on the temperature T. The rate does not depend on the downstream pressure at all. All other terms are constants that depend only on the composition of the material in the flow.
where:
Q = mass flow rate, kg/s
C = discharge coefficient, dimensionless (usually about 0.72)
A = discharge hole cross-sectional area, m²
k = cp/cv of the gas
cp = specific heat of the gas at constant pressure
cv = specific heat of the gas at constant volume
$rho$ = real gas density at P and T, kg/m³
P = absolute upstream stagnation pressure, Pa
M = the gas molecular mass, kg/kmole (also known as the molecular weight)
R = Universal gas law constant = 8314.5 (N·m) / (kmole·K)
T = absolute gas temperature, K
Z = the gas compressibility factor at P and T, dimensionless
The above equations calculate the steady state mass flow rate for the stagnation pressure and temperature existing in the upstream pressure source.
If the gas is being released from a closed high-pressure vessel, the above steady state equations may be used to approximate the initial mass flow rate. Subsequently, the mass flow rate will decrease during the discharge as the source vessel empties and the pressure in the vessel decreases. Calculating the flow rate versus time since the initiation of the discharge is much more complicated, but more accurate. Two equivalent methods for performing such calculations are explained and compared online.
The technical literature can be very confusing because many authors fail to explain whether they are using the universal gas law constant R which applies to any ideal gas or whether they are using the gas law constant Rs which only applies to a specific individual gas. The relationship between the two constants is Rs = R / M.
Notes:
• The above equations are for a real gas.
• For a monatomic ideal gas, Z = 1 and ρ is the ideal gas density.
• kmole = 1000 moles = 1000 gram-moles = kilogram-mole
## Thin-plate orifices
The flow of real gases through thin-plate orifices never becomes fully choked. The mass flow rate through the orifice continues to increase as the downstream pressure is lowered to a perfect vacuum, though the mass flow rate increases slowly as the downstream pressure is reduced below the critical pressure. Cunningham (1951) first drew attention to the fact that choked flow will not occur across a standard, thin, square-edged orifice.
## Minimum pressure ratio required for choked flow to occur
The minimum pressure ratios required for choked conditions to occur (when some typical industrial gases are flowing) are presented in Table 1. The ratios were obtained using the criteria that choked flow occurs when the ratio of the absolute upstream pressure to the absolute downstream pressure is equal to or greater than [(k + 1 ) / 2 ] k / (k - 1 ) , where k is the specific heat ratio of the gas. The minimum pressure ratio may be understood as the ratio between the upstream pressure and the pressure at the nozzle throat when the gas is traveling at Mach 1; if the upstream pressure is too low compared to the downstream pressure, sonic flow cannot occur at the throat.
Table 1
Gas k = cp/cv Minimum
Pu/Pd
required for
choked flow
Hydrogen 1.410 1.899
Methane 1.307 1.837
Propane 1.131 1.729
Butane 1.096 1.708
Ammonia 1.310 1.838
Chlorine 1.355 1.866
Sulfur dioxide 1.290 1.826
Carbon monoxide 1.404 1.895
Notes:
• Pu = absolute upstream gas pressure
• Pd = absolute downstream gas pressure
• k values obtained from:
1. Perry, Robert H. and Green, Don W. (1984). Perry's Chemical Engineers' Handbook. 6th Edition, McGraw-Hill Company. ISBN 0-07-049479-7.
2. Phillips Petroleum Company (1962). Reference Data For Hydrocarbons And Petro-Sulfur Compounds. Second Printing, Phillips Petroleum Company.
## See also
• Accidental release source terms includes mass flow rate equations for non-choked gas flows as well.
• Orifice plate includes derivation of non-choked gas flow equation.
• Laval nozzles are Venturi tubes that produce supersonic gas velocities as the tube and the gas are first constricted and then the tube and gas are expanded beyond the choke plane.
• Rocket engine nozzles discusses how to calculate the exit velocity from nozzles used in rocket engines.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9085137844085693, "perplexity_flag": "middle"}
|
http://www.physicsforums.com/showthread.php?p=2520451
|
Physics Forums
## Comparing HADCRU, GISS and UAH Temperature Records
I loaded the monthly temperature anomalies from HADCRU, GISS and UAH into a single Excel spreadsheet to make it easier to compare them. The result look like the attached image below:
I thought the spreadsheet might be handy for other people, so I've attached it as a zipped Excel file. It also includes the CO2 data from Mauna Loa. I've included links to where I got the data from. What you see in the image is a 12 month moving average and a linear trend for each data set. All three datasets start in December 1978, since that is when the UAH dataset starts.
It is interesting to see how much you can manipulate the linear trend by changing the start date for the graph. In Excel, right click on the graph and click on Select Data. In the Select Data Source dialog go to th Chart Data Range where it says ='Combine'!$A$1:$D$374. Change \$1 to whatever row you want to start with.
Attached Thumbnails
Attached Files
UAH-GISS-HADCRU-CO2.zip (59.8 KB, 24 views)
PhysOrg.com earth sciences news on PhysOrg.com >> NASA sees Cyclone Mahasen hit Bangladesh>> Satellite sees Tropical Storm Alvin's life end quickly>> The genome sequence of Tibetan antelope sheds new light on high-altitude adaptation
Blog Entries: 9
Recognitions:
Science Advisor
Quote by joelupchurch I loaded the monthly temperature anomalies from HADCRU, GISS and UAH into a single Excel spreadsheet to make it easier to compare them. The result look like the attached image below:
That's a handy tool. I'd like to extend it a bit, to do a bit more automatic calculation and also some simple error bounds. I'd just use the normal regression errors without adjusting for autocorrelation, which would be too much work. I may have a shot and post the result.
It's worth noting that GISS and HADCRU are slightly different algorithms for measuring the same basic quantity, but that UAH is actually measuring a different quantity entirely. The former two are surface anomalies; the latter is an anomaly for the lower troposphere. The UAH estimate is often compared with a similar product from RSS (Remote Sensing Systems).
Having these in a spreadsheet is very handy for doing all kinds of things. If you just want a plot you can also try Wood for Trees for an online tool that allows all kinds of plot generation; and also the RSS validation website for comparing RSS and UAH. But like you, I like having my own spreadsheets.
Cheers -- sylas
PS. Just so people new to this know... the plots are "anomalies", or a difference from some mean value. However, each series uses a different baseline for this difference. This is not a problem, because the absolute value of an anomaly is pretty irrelevant. What matters are the comparisons between one year and another within the same series, or the trends. Shifting a whole anomaly series up, or down, doesn't have any significance, so there's nothing unusual about one plot seeming to be above or below another. The difference in the trends is significant, however, and represents a real difference between the different datasets.
Blog Entries: 9
Recognitions:
Science Advisor
Quote by sylas That's a handy tool. I'd like to extend it a bit, to do a bit more automatic calculation and also some simple error bounds. I'd just use the normal regression errors without adjusting for autocorrelation, which would be too much work. I may have a shot and post the result.
Done.
I have made a new speadsheet based on Joel's work. Unfortunately, I had to delete a lot of stuff to keep within the size limits for an attachment. So all the charts are gone, and there's only only sheet, now called "Regression", which compares the three datasets. I've updated the name with a "-v3", and removed the "-CO2" from the name as well.
All the calculations of expected trends are based on doing the regression in the worksheet, rather than relying on the numbers from the trend line in a graph. This means we can also calculate the standard errors on trend.
The sheet is protected. It has two green cells, which are the only ones where you can enter data without unprotecting the sheet. (Feel free to unprotect and modify some more!)
You can enter confidence limits (currently 95%) and a date in the future (currently 2100).
The sheet estimates the gain in temperature from the end of the data (2009.8333) up to the given date (2100) which is very close to what was given originally using the slopes transcribed from the graph. It also calculates the standard errors (uses Excel's builtin "linest" function to do linear regression).
It should be easy to combine this new functionality into the previous sheet.
There are two very important caveats with using these estimates.
We are extending a trend far beyond the end of data
This is not usually a useful thing to do, unless you have some very good reason to think there really is a linear trend that will be continued all the way to 2100. Of course, climate is not that simple.
An actual physics based estimate would need a "scenario" for climate forcings, and make estimates based on that. Exploring the scenarios, and the physics for applying them to climate, is another topic, and I don't propose to divert this thread into evaluating such projections.
But we should remember the limited relevance of projecting an estimate of linear trend.
It can be a useful thing to do, if you recognize the limits of our estimates here. For example, you could compare a linear projection against a physical model, and figure out whether the model is expecting trend to increase, or decrease, as the centuury continues.
We are assuming variation is random noise
The error bounds for simple regression analysis assume that the data is some unknown linear trend plus random noise above and below the trend. However, these time series obviously have strong "autocorrelation", which means that if one month is above the trend, then the next month is more likely to be above the trend as well. The months are not independent of each other, but follow other more complex short term cycles. Given this, the actual errors on trend are substantially larger than calculated from the simple regression model.
Even so, at least by giving some error bounds we get a bit more of an idea of how trend is only approximate.
With these caveats in mind:
The 95% confidence limits on trend in degrees per decade for the three datasets are:
• 0.158 +/- 0.014 (for Hadcrut)
• 0.179 +/- 0.020 (for GISS)
• 0.127 +/- 0.020 (for UAH)
As before, remember that UAH is actually measuring changes in the troposphere, where the other two are measuring changes at the surface, and so UAH is not directly comparable to the other two. Hadcrut trend estimates are a little bit smaller than GISS, because this dataset does not cover quite as much of the globe; and so misses the strong warming at present in the Arctic.
If we assume a simple underlying linear trend all the way up to 2100, we obtain the following 95% confidence limits for expected temperature gain from the end of the data up to 2100:
• 1.42 +/- 0.15 {+/- 0.29} (for Hadcrut)
• 1.61 +/- 0.21 {+/- 0.40} (for GISS)
• 1.15 +/- 0.22 {+/- 0.41} (for UAH)
Note that these are estimates for the climatology at that time; temperatures in a given month will tend to range above and below the climatology. This can be estimated also (although it is not in the sheet) and I have provided those bounds within the curly brackets.
The revised spreadsheet itself is an attachment.
A spreadsheet like this can be a useful tool to explore various ideas. There are all kinds of ways something like this can be extended to see other aspects of the data.
I am not entirely sure how our guidelines apply in a case like this; but no strong claims are being made, and I believe a lot can be learned by exploring data yourself; so I'm building on Joel's contribution to show some aspects of data analysis.
Cheers -- sylas
Attached Files
UAH-GISS-HADCRU-v3.xls (59.5 KB, 21 views)
## Comparing HADCRU, GISS and UAH Temperature Records
Quote by sylas That's a handy tool. I'd like to extend it a bit, to do a bit more automatic calculation and also some simple error bounds. I'd just use the normal regression errors without adjusting for autocorrelation, which would be too much work. I may have a shot and post the result. It's worth noting that GISS and HADCRU are slightly different algorithms for measuring the same basic quantity, but that UAH is actually measuring a different quantity entirely. The former two are surface anomalies; the latter is an anomaly for the lower troposphere. The UAH estimate is often compared with a similar product from RSS (Remote Sensing Systems). Having these in a spreadsheet is very handy for doing all kinds of things. If you just want a plot you can also try Wood for Trees for an online tool that allows all kinds of plot generation; and also the RSS validation website for comparing RSS and UAH. But like you, I like having my own spreadsheets. Cheers -- sylas PS. Just so people new to this know... the plots are "anomalies", or a difference from some mean value. However, each series uses a different baseline for this difference. This is not a problem, because the absolute value of an anomaly is pretty irrelevant. What matters are the comparisons between one year and another within the same series, or the trends. Shifting a whole anomaly series up, or down, doesn't have any significance, so there's nothing unusual about one plot seeming to be above or below another. The difference in the trends is significant, however, and represents a real difference between the different datasets.
I've tried Wood For Trees. The graphs it generates are ugly, but the raw data function is useful since it has the data in a form easy to put in Excel. Converting the data in GISS to an Excel spreadsheet was a pain, because they have each month in a separate column.
You can produce nice readable graphs in Excel, but you need to override the defaults which seem to be intended to produce Powerpoint chartjunk. You should also go to the add-ins and install the analysis toolkit and the solver. They don't install by default.
One tip. In Windows you can right click on a file and click on Send To and select Compress (zipped) folder. Excel workbooks compress quite nicely. I think I will add the RSS data and put up an updated spreadsheet when the December numbers are available.
Thread Tools
| | | |
|-------------------------------------------------------------------------|-----------------|---------|
| Similar Threads for: Comparing HADCRU, GISS and UAH Temperature Records | | |
| Thread | Forum | Replies |
| | General Physics | 2 |
| | Current Events | 30 |
| | Career Guidance | 4 |
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9323965311050415, "perplexity_flag": "middle"}
|
http://en.wikipedia.org/wiki/Preferential_attachment
|
# Preferential attachment
A preferential attachment process is any of a class of processes in which some quantity, typically some form of wealth or credit, is distributed among a number of individuals or objects according to how much they already have, so that those who are already wealthy receive more than those who are not. "Preferential attachment" is only the most recent of many names that have been given to such processes. They are also referred to under the names "Yule process", "cumulative advantage", "the rich get richer", and, less correctly, the "Matthew effect". It is related to Gibrat's law. The principal reason for scientific interest in preferential attachment is that it can, under suitable circumstances, generate power law distributions.
## Definition
A preferential attachment process is a stochastic urn process, meaning a process in which discrete units of wealth, usually called "balls", are added in a random or partly random fashion to a set of objects or containers, usually called "urns". A preferential attachment process is an urn process in which additional balls are added continuously to the system and are distributed among the urns as an increasing function of the number of balls the urns already have. In the most commonly studied examples, the number of urns also increases continuously, although this is not a necessary condition for preferential attachment and examples have been studied with constant or even decreasing numbers of urns.
A classic example of a preferential attachment process is the growth in the number of species per genus in some higher taxon of biotic organisms.[1] New genera ("urns") are added to a taxon whenever a newly appearing species is considered sufficiently different from its predecessors that it does not belong in any of the current genera. New species ("balls") are added as old ones speciate (i.e., split in two) and, assuming that new species belong to the same genus as their parent (except for those that start new genera), the probability that a species is added to a new genus will be proportional to the number of species the genus already has. This process, first studied by Yule, is a linear preferential attachment process, since the rate at which genera accrue new species is linear in the number they already have.
Linear preferential attachment processes in which the number of urns increases are known to produce a distribution of balls over the urns following the so-called Yule distribution. In the most general form of the process, balls are added to the system at an overall rate of m new balls for each new urn. Each newly created urn starts out with k0 balls and further balls are added to urns at a rate proportional to the number k that they already have plus a constant a > −k0. With these definitions, the fraction P(k) of urns having k balls in the limit of long time is given by[2]
$P(k) = {\mathrm{B}(k+a,\gamma)\over\mathrm{B}(k_0+a,\gamma-1)},$
for k ≥ k0 (and zero otherwise), where B(x, y) is the Euler beta function:
$\mathrm{B}(x,y) = {\Gamma(x)\Gamma(y)\over\Gamma(x+y)},$
with Γ(x) being the standard gamma function, and
$\gamma = 2 + {k_0 + a\over m}.$
The beta function behaves asymptotically as B(x, y) ~ x−y for large x and fixed y, which implies that for large values of k we have
$P(k) \propto k^{-\gamma}.$
In other words, the preferential attachment process generates a "long-tailed" distribution following a Pareto distribution or power law in its tail. This is the primary reason for the historical interest in preferential attachment: the species distribution and many other phenomena are observed empirically to follow power laws and the preferential attachment process is a leading candidate mechanism to explain this behavior. Preferential attachment is considered a possible candidate for, among other things, the distribution of the sizes of cities,[3] the wealth of extremely wealthy individuals,[3] the number of citations received by learned publications,[4] and the number of links to pages on the World Wide Web.[5]
The general model described here includes many other specific models as special cases. In the species/genus example above, for instance, each genus starts out with a single species (k0 = 1) and gains new species in direct proportion to the number it already has (a = 0), and hence P(k) = B(k, γ)/B(k0, γ − 1) with γ = 2 + 1/m. Similarly the Price model for scientific citations[4] corresponds to the case k0 = 0, a = 1 and the widely studied Barabási-Albert model[5] corresponds to k0 = m, a = 0.
Preferential attachment is sometimes referred to as the Matthew effect, but the two are not precisely equivalent. The Matthew effect, first discussed by Robert Merton,[6] is named for a passage in the biblical Gospel of Matthew: "For everyone who has will be given more, and he will have an abundance. Whoever does not have, even what he has will be taken from him." (Matthew 25:29, New International Version.) The preferential attachment process does not incorporate the taking away part. An urn process that includes both the giving and the taking away would produce a log-normal distribution rather than a power law[citation needed]. This point may be moot, however, since the scientific insight behind the Matthew effect is in any case entirely different. Qualitatively it is intended to describe not a mechanical multiplicative effect like preferential attachment but a specific human behavior in which people are more likely to give credit to the famous than to the little known. The classic example of the Matthew effect is a scientific discovery made simultaneously by two different people, one well known and the other little known. It is claimed that under these circumstances people tend more often to credit the discovery to the well-known scientist. Thus the real-world phenomenon the Matthew effect is intended to describe is quite distinct from (though certainly related to) preferential attachment.
## History
The first rigorous consideration of preferential attachment seems to be that of Yule in 1925, who used it to explain the power-law distribution of the number of species per genus of flowering plants.[1] The process is sometimes called a "Yule process" in his honor. Yule was able to show that the process gave rise to a distribution with a power-law tail, but the details of his proof are, by today's standards, contorted and difficult, since the modern tools of stochastic process theory did not yet exist and he was forced to use more cumbersome methods of proof.
Most modern treatments of preferential attachment make use of the master equation method, whose use in this context was pioneered by Simon in 1955, in work on the distribution of sizes of cities and other phenomena.[3]
The first application of preferential attachment to learned citations was given by Price in 1976.[4] (He referred to the process as a "cumulative advantage" process.) His was also the first application of the process to the growth of a network, producing what would now be called a scale-free network. It is in the context of network growth that the process is most frequently studied today. Price also promoted preferential attachment as a possible explanation for power laws in many other phenomena, including Lotka's law of scientific productivity and Bradford's law of journal use.
The application of preferential attachment to the growth of the World Wide Web was proposed by Barabási and Albert in 1999.[5] Barabási and Albert also coined the name "preferential attachment" by which the process is best known today and suggested that the process might apply to the growth of other networks as well.
## References
1. ^ a b Yule, G. U. (1925). "A Mathematical Theory of Evolution, based on the Conclusions of Dr. J. C. Willis, F.R.S". Philosophical Transactions of the Royal Society of London, Ser. B 213 (402–410): 21–87. doi:10.1098/rstb.1925.0002.
2. Newman, M. E. J. (2005). "Power laws, Pareto distributions and Zipf's law". Contemporary Physics 46 (5): 323–351. arXiv:cond-mat/0412004. doi:10.1080/00107510500052444.
3. ^ a b c Simon, H. A. (1955). "On a class of skew distribution functions". Biometrika 42 (3–4): 425–440. doi:10.1093/biomet/42.3-4.425.
4. ^ a b c Price, D. J. de S. (1976). "A general theory of bibliometric and other cumulative advantage processes". J. Amer. Soc. Inform. Sci. 27 (5): 292–306. doi:10.1002/asi.4630270505.
5. ^ a b c Barabási, A.-L.; R. Albert (1999). "Emergence of scaling in random networks". Science 286 (5439): 509–512. arXiv:cond-mat/9910332. doi:10.1126/science.286.5439.509. PMID 10521342.
6. Merton, Robert K. (1968). "The Matthew effect in science". Science 159 (3810): 56–63. doi:10.1126/science.159.3810.56. PMID 17737466.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9397395849227905, "perplexity_flag": "middle"}
|
http://medlibrary.org/medwiki/Nevanlinna_theory
|
# Nevanlinna theory
Welcome to MedLibrary.org. For best results, we recommend beginning with the navigation links at the top of the page, which can guide you through our collection of over 14,000 medication labels and package inserts. For additional information on other topics which are not covered by our database of medications, just enter your topic in the search box below:
In the mathematical field of complex analysis, Nevanlinna theory is part of the theory of meromorphic functions. It was introduced by Nevanlinna (1925). In the opinion of Hermann Weyl,[1] appearance of Nevanlinna's paper "has been one of the few great mathematical events of our century". The question addressed by the theory is to describe when a meromorphic function on the complex plane, or more generally some complex manifold, to some other manifold is necessarily constant. A fundamental tool is to find ways of measuring the rate of growth of such a function.
Other main contributors in the first half of the 20th century were Lars Ahlfors, André Bloch, Henri Cartan, Edward Collingwood, Otto Frostman, Frithiof Nevanlinna, Henrik Selberg, Tatsujiro Shimizu, Oswald Teichmüller, and Georges Valiron. In its original form, Nevanlinna theory deals with meromorphic functions of one complex variable defined in a disc |z| < R or in the whole complex plane (R = ∞). Subsequent generalizations extended Nevanlinna theory to algebroid functions, holomorphic curves, holomorphic maps between complex manifolds of arbitrary dimension, quasiregular maps and minimal surfaces.
This article describes mainly the classical version for meromorphic functions of one variable, with emphasis on functions meromorphic in the complex plane. General references for this theory are Goldberg & Ostrovskii,[2] Hayman[3] and Lang (1987).
## Nevanlinna characteristic
### Nevanlinna's original definition
Let f be a meromorphic function. For every r ≥ 0, let n(r,f) be the number of poles, counting multiplicity, of the meromorphic function f in the disc |z| ≤ r. Then define the Nevanlinna counting function by
$N(r,f) = \int\limits_0^r\left( n(t,f) - n(0,f) \right)\dfrac{dt}{t} + n(0,f)\log r.\,$
This quantity measures the growth of the number of poles in the discs |z| ≤ r, as r increases.
Let log+x = max(log x, 0). Then the proximity function is defined by
$m(r,f)=\frac{1}{2\pi}\int_{0}^{2\pi}\log^+ \left| f(re^{i\theta})\right| d\theta. \,$
Finally, define the Nevanlinna characteristic by
$T(r,f) = m(r,f) + N(r,f).\,$
### Ahlfors–Shimizu version
A second method of defining the Nevanlinna characteristic is based on the formula
$\int_0^r\frac{dt}{t}\left(\frac{1}{\pi}\int_{|z|\leq t}\frac{|f'|^2}{(1+|f|^2)^2}dm\right)=T(r,f)+O(1), \,$
where dm is the area element in the plane. The expression in the left hand side is called the Ahlfors–Shimizu characteristic. The bounded term O(1) is not important in most questions.
The geometric meaning of the Ahlfors—Shimizu characteristic is the following. The inner integral dm is the spherical area of the image of the disc |z| ≤ t, counting multiplicity (that is, the parts of the Riemann sphere covered k times are counted k times). This area is divided by π which is the area of the whole Riemann sphere. The result can be interpreted as the average number of sheets in the covering of the Riemann sphere by the disc |z| < t. Then this average covering number is integrated with respect to t with weight 1/t.
### Properties
The role of the characteristic function in the theory of meromorphic functions in the plane is similar to that of
$\log M(r, f) = \log \max_{|z|\leq r} |f(z)| \,$
in the theory of entire functions. In fact, it is possible to directly compare T(r,f) and M(r,f) for an entire function:
$T(r,f) \leq \log^+ M(r,f) \,$
and
$\log M(r,f) \leq \left(\dfrac{R+r}{R-r}\right)T(R,f),\,$
for any R > r.
If f is a rational function of degree d, then T(r,f) ~ d log r; in fact, T(r,f) = O(log r) if and only if f is a rational function.
The order of a meromorphic function is defined by
$\rho(f) = \limsup_{r \rightarrow \infty} \dfrac{\log^+ T(r,f)}{\log r}.$
Functions of finite order constitute an important subclass which was much studied.
When R < ∞, characteristic can be bounded. Functions in a disc with bounded characteristic, also known as functions of bounded type, are exactly the ratios of bounded analytic functions.
## First fundamental theorem
Let a ∈ C, and define
$\quad N(r,a,f) = N\left(r,\dfrac{1}{f-a}\right), \quad m(r,a,f) = m\left(r,\dfrac{1}{f-a}\right).\,$
For a = ∞, we set N(r,∞,f) = N(r,f), m(r,∞,f) = m(r,f).
The First Fundamental Theorem of Nevanlinna theory states that for every a in the Riemann sphere,
$T(r,f) = N(r,a,f)+m(r,a,f) + O(1),\,$
where the bounded term O(1) may depend on f and a.[4] For non-constant meromorphic functions in the plane, T(r, f) tends to infinity as r tends to infinity, so the First Fundamental Theorem says that the sum N(r,a,f) + m(r,a,f), tends to infinity at the rate which is independent of a. The first Fundamental theorem is a simple consequence of Jensen's formula.
The characteristic function has the following properties of the degree:
$\begin{array}{lcl} T(r,fg)&\leq&T(r,f)+T(r,g)+O(1),\\ T(r,f+g)&\leq& T(r,f)+T(r,g)+O(1),\\ T(r,1/f)&=&T(r,f)+O(1),\\ T(r,f^m)&=&mT(r,f)+O(1), \, \end{array}$
where m is a natural number. The bounded term O(1) is negligible when T(r,f) tends to infinity. These algebraic properties are easily obtained from Nevanlinna's definition and Jensen's formula.
## Second fundamental theorem
We define N(r, f) in the same way as N(r,f) but without taking multiplicity into account (i.e. we only count the number of distinct poles). Then N1(r,f) is defined as the Nevanlinna counting function of critical points of f, that is
$N_1(r,f) = 2N(r,f) - N(r,f') + N\left(r,\dfrac{1}{f'}\right) = N(r,f) + \overline{N}(r,f) + N\left(r,\dfrac{1}{f'}\right).\,$
The Second Fundamental theorem says that for every k distinct values aj on the Riemann sphere, we have
$\sum_{j=1}^k m(r,a_j,f) \leq 2 T(r,f) - N_1(r,f) + S(r,f). \,$
This implies
$(k-2)T(r,f) \leq \sum_{j=1}^k \overline{N}(r,a_j,f) + S(r,f),\,$
where S(r,f) is a "small error term".
For functions meromorphic in the plane, S(r,f) = o(T(r,f)), outside a set of finite length i.e. the error term is small in comparison with the characteristic for "most" values of r. Much better estimates of the error term are known, but Andre Bloch conjectured and Hayman proved that one cannot dispose of an exceptional set.
This theorem is called the Second Fundamental Theorem of Nevanlinna Theory, and it allows to give an upper bound for the characteristic function in terms of N(r,a). For example, if f is a transcendental entire function, using the Second Fundamental theorem with k = 3 and a3 = ∞, we obtain that f takes every value infinitely often, with at most two exceptions, proving Picard's Theorem.
As many other important theorems, the Second Main Theorem has several different proofs. The original proof of Nevanlinna was based on the so-called Lemma on the logarithmic derivative, which says that m(r,f'/f) = S(r,f). Similar proof also applies to many multi-dimensional generalizations. There are also differential-geometric proofs which relate it to the Gauss–Bonnet theorem. The Second Fundamental Theorem can also be derived from the metric-topological theory of Ahlfors, which can be considered as an extension of the Riemann–Hurwitz formula to the coverings of infinite degree.
The proofs of Nevanlinna and Ahlfors indicate that the constant 2 in the Second Fundamental Theorem is related to the Euler characteristic of the Riemann sphere. However, there is a very different explanations of this 2, based on a deep analogy with number theory discovered by Charles Osgood and Paul Vojta. According to this analogy, 2 is the exponent in the Thue–Siegel–Roth theorem. On this analogy with number theory we refer to the survey of Lang (1997) and the book by Min Ru (2001).
## Defect relation
This is one of the main corollaries from the Second Fundamental Theorem. The defect of a meromorphic function at the point a is defined by the formula
$\delta(a,f)=\liminf_{r \rightarrow \infty}\frac{m(r,a,f)}{T(r,f)} = 1 - \limsup_{r \rightarrow \infty} \dfrac{N(r,a,f)}{T(r,f)}. \,$
By the First Fundamental Theorem, 0 ≤ δ(a,f) ≤ 1, if T(r,f) tends to infinity (which is always the case for non-constant functions meromorphic in the plane). The points a for which δ(a,f) > 0 are called deficient values. The Second Fundamental Theorem implies that the set of deficient values of a function meromorphic in the plane is at most countable and the following relation holds:
$\sum_{a}\delta(a,f)\leq 2, \,$
where the summation is over all deficient values.[5] This can be considered as a generalization of Picard's theorem. Many other Picard-type theorems can be derived from the Second Fundamental Theorem.
As another corollary from the Second Fundamental Theorem, one can obtain that
$T(r,f')\leq 2 T(r,f)+S(r,f),\,$
which generalizes the fact that a rational function of degree d has 2d − 2 < 2d critical points.
## Applications
Nevanlinna theory is useful in all questions where transcendental meromorphic functions arise, like analytic theory of differential and functional equations[6][7] holomorphic dynamics, minimal surfaces, and complex hyperbolic geometry, which deals with generalizations of Picard's theorem to higher dimensions.[8]
## Further development
A substantial part of the research in functions of one complex variable in the 20th century was focused on Nevanlinna theory. One direction of this research was to find out whether the main conclusions of Nevanlinna theory are best possible. For example, the Inverse Problem of Nevanlinna theory consists in constructing meromorphic functions with pre-assigned deficiencies at given points. This was solved by David Drasin in 1975. Another direction was concentrated on the study of various subclasses of the class of all meromorphic functions in the plane. The most important subclass consists of functions of finite order. It turns out that for this class, deficiencies are subject to several restrictions, in addition to the defect relation (Norair Arakelyan, David Drasin, Albert Edrei, Alexandre Eremenko, Wolfgang Fuchs, Anatolii Goldberg, Walter Hayman, Joseph Miles, Daniel Shea, Oswald Teichmüller, Alan Weitsman and others).
Henri Cartan, Joachim and Hermann Weyl[1] and Lars Ahlfors extended Nevanlinna theory to holomorphic curves. This extension is the main tool of Complex Hyperbolic Geometry.[9] Intensive research in the classical one-dimensional theory still continues.[10]
## References
1. ^ a b H. Weyl (1943). Meromorphic functions and analytic curves. Princeton University Press. p. 8.
2. Goldberg, A.; Ostrovskii, I. (2008). Distribution of values of meromorphic functions. American Mathematical Society.
3. Hayman, W. (1964). Meromorphic functions. Oxford University press.
4. Ru (2001) p.5
5. Ru (2001) p.61
6. Ilpo Laine (1993). Nevanlinna theory and complex differential equations. Berlin: Walter de Gruyter.
7. Eremenko, A. (1982). "Meromorphic solutons of algebraic differential equations". Russian Math. Surv. 37 (4): 61–95. doi:10.1070/RM1982v037n04ABEH003967.
8. Lang (1987) p.39
9. Lang (1987) ch.VII
10. A. Eremenko and J. Langley (2008).Meromorphic functions of one complex variable. A survey, appeared as appendix to Goldberg, A.; Ostrovskii, I. (2008). Distribution of values of meromorphic functions. American Mathematical Society.
• Lang, Serge (1987). Introduction to complex hyperbolic spaces. New York: Springer-Verlag. ISBN 0-387-96447-9 [Amazon-US | Amazon-UK]. Zbl 0628.32001.
• Lang, Serge (1997). Survey of Diophantine geometry. Springer-Verlag. pp. 192–204. ISBN 3-540-61223-8 [Amazon-US | Amazon-UK]. Zbl 0869.11051.
• Nevanlinna, Rolf (1925), "Zur Theorie der Meromorphen Funktionen", (Springer Netherlands) 46: 1–99, doi:10.1007/BF02543858, ISSN 0001-5962
• Nevanlinna, Rolf (1970) [1936], Analytic functions, Die Grundlehren der mathematischen Wissenschaften 162, Berlin, New York: Springer-Verlag, MR0279280
• Ru, Min (2001). Nevanlinna Theory and Its Relation to Diophantine Approximation. World Scientific Publishing. ISBN 981-02-4402-9 [Amazon-US | Amazon-UK].
## Further reading
• Bombieri, Enrico; Gubler, Walter (2006). "13. Nevanlinna Theory". Heights in Diophantine Geometry. New Mathematical Monographs 4. Cambridge University Press. pp. 444–478. ISBN 978-0-521-71229-3 [Amazon-US | Amazon-UK].
• Vojta, Paul (1987). Diophantine Approximations and Value Distribution Theory. Lecture Notes in Mathematics 1239. Springer-Verlag. ISBN 978-3-540-17551-3 [Amazon-US | Amazon-UK].
Content in this section is authored by an open community of volunteers and is not produced by, reviewed by, or in any way affiliated with MedLibrary.org. Licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License, using material from the Wikipedia article on "Nevanlinna theory", available in its original form here:
http://en.wikipedia.org/w/index.php?title=Nevanlinna_theory
• ## Finding More
You are currently browsing the the MedLibrary.org general encyclopedia supplement. To return to our medication library, please select from the menu above or use our search box at the top of the page. In addition to our search facility, alphabetical listings and a date list can help you find every medication in our library.
• ## Questions or Comments?
If you have a question or comment about material specifically within the site’s encyclopedia supplement, we encourage you to read and follow the original source URL given near the bottom of each article. You may also get in touch directly with the original material provider.
• ## About
This site is provided for educational and informational purposes only and is not intended as a substitute for the advice of a medical doctor, nurse, nurse practitioner or other qualified health professional.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 17, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8364000916481018, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/tagged/diffeomorphism-invariance
|
# Tagged Questions
The diffeomorphism-invariance tag has no wiki summary.
2answers
346 views
### Gauge invariance and diffeomorphism invariance in Chern-Simons theory
I have studied Chern-Simons (CS) theory somewhat and I am puzzled by the question of how diff. and gauge invariance in CS theory are related, e.g. in $SU(2)$ CS theory. In particular, I would like to ...
1answer
215 views
### failing to see the conundrum in the Einstein hole argument
I've been reading about the Einstein hole argument, and i fail to understand what makes active diffeomorphisms "special" compared to passive diffeomorphismsm also known as good old coordinate ...
1answer
318 views
### argument about fallacy of diff(M) being a gauge group for general relativity
I want to outline a solid argument (or bulletpoints) to show how weak is the idea of diff(M) being the gauge group of general relativity. basically i have these points that in my view are very solid ...
0answers
249 views
### composition of space expansion and movement as a gauge invariance
suppose i have a space-time where we have one point-like object* which we will call movement space probe or $\mathbf{M}_{A}$ for short, and it will be moving with constant velocity $V^A_{\mu}$ in ...
2answers
288 views
### Diff(M) and requirements on GR observables
This question is kind of inspired in this one: Diff(M) as a gauge group and local observables in theories with gravity The conundrum i'm trying to understand is how is derived the (quite) ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9430925250053406, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/30654/can-you-demystify-the-power-law?answertab=oldest
|
Can you demystify the Power Law?
How would you describe the Power Law in simple words? The Wikipedia entry is too long and verbose. I would like to understand the concept of the power law and how and why it shows up everywhere.
For example, a recent Economist article Cry havoc! And let slip the maths of war, shows that terrorist attacks follow an inverted power law curve.
-
2
– Qiaochu Yuan Apr 3 '11 at 17:37
3 Answers
The power law just is the equation $f(x)=ax^k$ (the Wikipedia article allows some deviation.) If $k \lt 0$ it says that big events get less likely than small events and how fast. It shows up many places in science, and finding the exponent is often a clue to the underlying laws. It also shows up in the real world. This paper discusses finding it in statistics on the internet. Of course, real world data doesn't fit a power law exactly, and how close is "close enough" may be in the eye of the beholder. It is also known as "Zipf's Law" if you want to search around for places it is found.
-
Just to expand on Ross Millikan's answer. Power-laws are important and pop up all the time because they obey scale-invariance. Scale-invariance is the property that a system behaves the same when all length-scales are multiplied with a common factor. System show scale invariance if there is no characteristic length scale associated which, e.g., happens at phase transitions.
-
As Ross Millikan points out, a function $p(x)$ obeys a power law if
$$p(x) \propto x^\alpha, \ \ \alpha \in \mathbb{R}$$
There are many real-world examples of power law functions. For example, Newtonian gravity obeys a power law ($F = G \frac{m_1 m_2}{r^2}$, where $\alpha=2$ in this case), Coulombs's law of electrostatic potential ($F = \frac{Q_1 Q_2}{4 \pi r^2 \varepsilon_0}$, again $\alpha=2$), critical exponents of phase transitions near the critical point ($C \propto |T_c - T|^\alpha$), earthquake magnitude vs. frequency (this is why we measure earthquakes on a Ricther-scale ) and the length of the coastline of Britain ( $L(G) = M G^{1-D}$, where $1-D$ is the exponent in the power law), to name just a few.
There are many ways to generate power law distributions. Reed and Hughes show they can be generated from killed exponential processes, Simkin and Roychowdhury give an overview of how power laws have been rediscovered many times and Brian Hayes gives a nice article on 'Fat-Tails'. See Mitzenmacher for a review of other generative models to create power law distributions.
Of the ways to generate power law distributions, I believe the one that is most descriptive as to why power laws are so pervasive is the stability of sums of identical and independently distributed random variables. If $X_k$ are identical and independently distributed random variables, then their sum, $S$, will converge to a stable distribution:
$$S_n = \sum_{k=0}^{n-1} X_k$$
Under suitable conditions, the random variable has power law tails, i.e.:
$$\Pr\{ S > x \} \propto x^{-\alpha}, \ \ x \to \infty$$
alternatively we can talk about it's probability density function (abusing notation a bit):
$$\Pr\{ S = x \} \propto x^{-(\alpha+1) }, \ \ x \to \infty$$
for some $\alpha \in (0, 2]$. The only exception is when the random variables $X_k$ have finite second moment (or finite variance if you prefer), in which case $S$ converges to a Normal distribution (which is, incidentally, also a stable distribution).
The class of distributions that are stable are called Levy $\alpha$-stable. See Nolan's first chapter on his upcoming book for an introduction.
Incidentally, this is also why we often see power laws with exponent in the range of $(1,3]$. As $\alpha \to 1$, the distribution is no longer renormalizable (i.e. $\int_{-\infty}^{\infty} x^{-\alpha} dx \to \infty$ as $\alpha \to 1$)) whereas when $\alpha \ge 3$ the distribution has finite variance and the conditions for it being a Normal distribution apply.
The stability is why I believe power laws are so prevalent in nature: They are the limiting distributions of random processes, irrelevant of the fine grained details, when random processes are combined. Be it terrorist attacks, word distribution, galaxy cluster size, neuron connectivity or friend networks, under suitably weak conditions, if these are the result of the addition of some underlying processes, they will converge to a power law distribution as it is the limiting, stable, distribution.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9205387830734253, "perplexity_flag": "head"}
|
http://mathhelpforum.com/calculus/93958-sum-periodic-functions.html
|
# Thread:
1. ## Sum of periodic functions
This theorem was inspired to me by this thread. I have found a proof of it but I'll post it later; I want to see if somebody can find another one!
Let $m(x), n(x)$ be continuous real-valued functions having periods $p,q$. Show that $m(x)+n(x)$ is periodic if and only if $p,q$ are linearly dependent over $\mathbb{Q}$.
2. Originally Posted by Bruno J.
This theorem was inspired to me by this thread. I have found a proof of it but I'll post it later; I want to see if somebody can find another one!
Let $m(x), n(x)$ be continuous real-valued functions having periods $p,q$. Show that $m(x)+n(x)$ is periodic if and only if $p,q$ are linearly dependent over $\mathbb{Q}$.
If $m(x) + n(x)$ has period T, then
$T=ap$
$T=bq$
where a,b are natural numbers.
hence, $ap=bq$
or,
$p=\frac{b}{a}q$
now, what is meant by linear independence here?
3. How can you be sure that the period of the sum is a multiple of both $p,q$? You need to show that this is true; it's the hard part of the problem.
If $x_1,...,x_n$ are linearly independent over $K$, then whenever $k_1x_1+...+k_nx_n=0$ with $k_1,...,k_n \in K$ we must have $k_1=...=k_n=0$. If they are linearly dependent then they are not linearly independent. In your post, $p-\frac{b}{a}q=0$ is a linear dependence relation over the rationals.
But you have to show that we must have $T=ap=bq$ for some integers $a,b$.
4. For a,b integers:
m(x + ap) + n(x + bq) = m(x) + n(x) (1)
Therefore ap=bq defines all periods of the sum function.
If the sum is periodic, p = b/a * q, where b/a obviously belongs to Q.
Alas c1*p + c2*q = (c1 * b/a + c2) * q, for c1=-a/b and c2=1 (both € Q) it is 0, that is if the sum is periodic p,q are linearly dependent over Q (2)
If p,q are linearly dependent over Q, p = -c2/c1 * q for c1,c2 rational numbers, then expressing c1,c2 as c1N/c1D and c2N/c2D, where c1N,c1D,c2N,c2D are integers, we get c1N*c2D*p = -c2N*c1D*q.
Taking (1) and making a=c1N*c2D and b=-c2N*c1D, ab=pq, so If p,q are linearly independent over Q, the sum function is periodic (3).
m(x) + n(x) is periodic If(3) and only if(2) p,q are linearly independent over Q.
(Sorry for the ugly proof)
5. For a,b integers:
m(x + ap) + n(x + bq) = m(x) + n(x) (1)
Ok so far.
Therefore ap=bq defines all periods of the sum function.
How so? If $T$ is the period of $m(x)+n(x)$, then $m(x)+n(x)=m(x+T)+n(x+T)$; but that does NOT imply $m(x)=m(x+T)$ and $n(x)=n(x+T)$. You are making a huge leap here.
If you do not use the continuity of $m,n$ your proof is certainly flawed because we can construct non-continuous $m,n$ whose sum is periodic, but whose periods are linearly independent over $\mathbb{Q}$.
6. Originally Posted by Bruno J.
Ok so far.
How so? If $T$ is the period of $m(x)+n(x)$, then $m(x)+n(x)=m(x+T)+n(x+T)$; but that does NOT imply $m(x)=m(x+T)$ and $n(x)=n(x+T)$. You are making a huge leap here.
If you do not use the continuity of $m,n$ your proof is certainly flawed because we can construct non-continuous $m,n$ whose sum is periodic, but whose periods are linearly independent over $\mathbb{Q}$.
Ups I'll try it later...
7. Bump. Nobody has solved this one yet... last call! I'll give the solution soon.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 40, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9367193579673767, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/users/5530/tarek?tab=activity&sort=comments
|
# Tarek
reputation
19
bio
website
location Germany
age
member for 1 year, 7 months
seen 7 hours ago
profile views 54
A PhD student in Germany. physics.reality@gmail.com
| | | bio | visits | | |
|-----------|----------------|---------|----------|-------------|------------------|
| | 149 reputation | website | | member for | 1 year, 7 months |
| 19 badges | location | Germany | seen | 7 hours ago | |
# 15 Comments
| | | |
|-------|---------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| May1 | comment | English translation of Helmholtz' paper: “On the Physical Significance of the Principle of Least Action”Looking at the contents of this book, it does not contain the paper in the question. |
| Mar13 | comment | Hamiltonian of a simple graph$\Sigma_z=S_1^z+S_2^z$ The first matrix you wrote for $\Sigma_z$ is consistent with the definitions for$S_1^z$ and $S_2^z$. What is the problem in $S_1^z |ZZ\rangle=-0.5|ZZ\rangle$ ? Isn't this expected? |
| Feb12 | comment | Nuclear Magnetic Resonance (NMR) Conceptual Questions |
| Jan6 | comment | Simulating the evolution of a wavepacket through a crystal lattice@hwlau I am not an expert in either of them. For sufficiently small systems, direct evolution with a simple 4th order Runge Kutta algorithm is sufficient. This amounts to using a truncated Taylor expansion of the time evolution operator. The Hamiltonian is represented as a sparse matrix. |
| Dec31 | comment | Simulating the evolution of a wavepacket through a crystal latticeHave you thought of using numerical algorithms, such as tDMRG, TEBD ? |
| Oct26 | comment | What is the spin rotation operator for spin > 1/2?Of course I am asking about the analogous formula for the expansion of the exponential in terms of cosines and sines not about the spin matrix ! |
| Oct5 | comment | Scale invariance symmetry as a simple argument in an electrostatics problemThis is a valid proof, but I doubt that it is the scale invariance symmetry meant by Prof. Preskill. He wrote: "Actually, this problem can also be solved by a symmetry argument, though the symmetry used is less obvious than rotational invariance. Readers may enjoy constructing this symmetry argument, which (in keeping with the theme of this post) I find more elegant than the argument using concentric rings suggested by your hint." Please read the discussion in the post in the question. |
| Sep12 | comment | Driving a solution of optical isomer molecules with the resonant frequencyI am indeed interested in the simulation details of this. If you wish, you can send to [my email]. In order to understand the theory behind this, we should first know how the chiral states are stabilized in the first place. One famous stabilization mechanism is [decoherence] (prl.aps.org/abstract/PRL/v103/i2/e023202). It seems to me that the predictions of this paper can be tested based on the setup in the question. |
| Sep12 | comment | Driving a solution of optical isomer molecules with the resonant frequencyAren't $\psi_{L/R}$ the chiral states which are not eigenstates of the Hamiltonian, (and hence the paradox of Hund), while $\psi_\pm$ are the true eigenstates (which are not degenerate)? |
| Sep12 | comment | Driving a solution of optical isomer molecules with the resonant frequencyThe left and right-handed states are in fact degenerate. The two level system should be thought of as composed of the eigenstates of the hamiltonian/parity operator. Each of these eigenstates is either a sysmmetric or anti-symmetric superposition of the chirality states (left and right handed). |
| Jul29 | comment | How are quantum phenomena in atoms and molecules protected against decoherence?Is there a proof that the pointer states are energy eigenstates? |
| Jul12 | comment | Are there devices which convert thermal energy to electric energy?I am seeking an effect which does not require temperature gradient. Can a device be built which captures "flying" phonons and convert them to electric energy without the need for a colder surface? I am aware that such an idea may violate the 2nd law of thermodynamics, but I wish to know from a technological perspective why this is (not) feasible. |
| Apr20 | comment | Partially filled orbitals and strongly correlated electronsI think you are using the term screening here in a different sense than the usual one, where electrons closer to the nucleus screen the positive nuclear charge from electrons at a larger distance, and hence those later electrons feels a smaller nuclear charge. |
| Apr18 | comment | Partially filled orbitals and strongly correlated electrons |
| Feb10 | comment | Electrodynamics textbook that emphasizes applicationsThanks Chris. I didn't mean engineering applications like transmission lines and waveguides, but simply applications from everyday life for the curious students. |
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.904602587223053, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/tagged/torque+newtonian-mechanics
|
# Tagged Questions
2answers
63 views
### How do objects change their axis of rotation?
If I hold a pencil at its end and spin it, throwing it upwards, it will spin about its end, but will soon start spinning around its center. How is this? I would draw the following torque diagram for ...
1answer
31 views
### Find the bending moment of a pole attached to a moving block
I'm having trouble with the following problem. What I've done so far: x-y is the usual coordinate system. $a=\frac{F}{m}=\frac{800}{60}$ and the y component of this is $a_y=a\sin{60^\circ}$. To ...
1answer
58 views
### Direction of the torque
In each one of the following figures there's a pole of length $1.2 \text{m}$ and there's a force $\vec F = 150 \text{N}$ acting on it. Determine the torque that is created by the force relative ...
1answer
111 views
### Calculating the acceleration of a car
I'm trying to calculate the maximum acceleration a car can achieve with the current gear ratio. I am ignoring drag forces and friction to keep it simple. I'm doing this by: calculating the torque ...
2answers
90 views
### Thrust center in space
I have this dilemma: Suppose you have a space ship somewhere in deep space, where there is no drag force or substantial gravity. If the ship has a single engine situated in such a way that the center ...
1answer
179 views
### Is angular momentum always conserved in the absence of an external torque?
Consider either the angular momentum of the earth around the sun or equivalently swinging a ball horizontally on a string. I know that with respect to the point of rotation of the swinging ball, ...
4answers
313 views
### Difference between torque and moment
What is the difference between torque and moment? I would like to see mathematical definitions for both quantities. I also do not prefer definitions like "It is the tendancy..../It is a measure of ...
0answers
242 views
### Neglecting friction on a pulley?
So, this is how the problem looks: http://www.aplusphysics.com/courses/honors/dynamics/images/Atwood%20Problem.png Plus, the pulley is suspended on a cord at its center and hanging from the ceiling. ...
1answer
194 views
### Calculation of torque for motor used in 4 wheel robot [duplicate]
Possible Duplicate: Torque Required For a Motor to Move an Object on Wheels? I want to build a 4 wheel robot. the maximum weight of the robot is approximately 100 kg. and the radius of my ...
1answer
47 views
### If a cart hits a wall, does the weight of it affect how it moves, when the center of gravity is constant?
I have a model that represents a bicycle (a wood block with wheels), and I'm balancing the center of gravity so it's the same as a real bike. However, when the center of mass is kept constant, does ...
1answer
89 views
### Does mass concentration affect the torque induced by a force?
If you had two bodies with the same weight but one having mass concentrated more in the center, while the other had most mass concentrated on the outside, but both had the same center of mass and ...
1answer
141 views
### How can the torque a bicycle experiences be calculated based on the center of gravity, weight and a force?
The position center of gravity of a bicycle and its rider is known, and the distance from it to the point of contact of the front wheel with the ground, in terms of horizontal and vertical distance (x ...
3answers
556 views
### What determines the direction of precession of a gyroscope?
I understand how torque mathematically causes a change to the direction of angular momentum, thus precessing the gyroscope. However, the direction, either clockwise or counterclockwise, of this ...
0answers
125 views
### How do you determine the torque caused by the mass of a lever?
Suppose we have two objects sitting on two side of a lever, and the lever also has a mass, and those objects have masses. Then how we can balance $\sum τ$? This is what I have done: ...
2answers
460 views
### Torque And Moment Of Inertia
I am reading the two concepts mentioned in the title. According to the definition of torque and moment of inertia, it would appear that if I pushed on a door, with the axis of rotation centered about ...
1answer
139 views
### Relationship between torques and centre of mass or centre of gravity [closed]
1) A wardrobe is $2$m high and $1.6$m wide. When empty, it has a mass of $110$kg and its centre of gravity is $0.8$m above the centre of its base. What is the minimum angle through which it must be ...
2answers
232 views
### Force applied off center on an object
Assume there is a rigid body in deep space with mass $m$ and moment of inertia $I$. A force that varies with time, $F(t)$, is applied to the body off-center at a distance $r$ from its center of mass. ...
1answer
273 views
### Torque and equilibrium
I am a little stuck on the concept involved in part b) of the following problem: A $5 m$ long road gate has a mass of $m = 15 kg$. The mass is evenly distributed in the gate. The gate is supported at ...
1answer
326 views
### Calculating torque in a structure
I posted this on math stack exchange but realize it is more a physics question. I have a structure which is set up as shown in the image. A weight hangs from point A with mass $m$. Joint B is free ...
0answers
258 views
### Torque required to rotate a cement mixer..? [closed]
I need to design a motor to rotate a cement mixer which should mix one cubic meter. So, I calculated the required volume to be 1600 liters as it is an horizontal cylinder. Consider that the mixer ...
6answers
2k views
### Why is torque not measured in Joules?
Recently, I was doing my homework and I found out that Torque can be calculated using $\tau = rF$. This means the units of torque are Newton meters. Energy is also measured in Newton meters which are ...
1answer
273 views
### Why does the beam in a weighing balance get tilted proportional to the weights added to each pan?
I'm talking about a beam balance(a simple weighing balance with a beam and two pans hung on either side) As answered in a previous question, the beam comes back to the original position when one ...
3answers
562 views
### Why do rolling disc (coin) move in circular path?
We have a coin that is rolled such that it's tilted at an small angle $\theta$. Question:: What turns around rolling disc so that it traces circular motion (spiral as it's speed decreses)? ...
3answers
185 views
### How to imagine the following power and torque?
I can roughly feel a force of 10 newton only by considering the weight of 1 kilogram object. How to imagine the following measurements? Maximum horsepower 16 hp @ 9,500 RPM Maximum torque ...
3answers
811 views
### Proving angular momentum is conserved for a particle moving in a central force field $\vec F =\phi(r) \vec r$
A problem I am trying to work out is as follows: A particle moves in a force field given by $\vec F =\phi(r) \vec r$. Prove that the angular momentum of the particle about the origin is constant. ...
0answers
55 views
### Two particles rest [closed]
Two particles rest a distance $d$ apart on the edge of a table. One of the particles, of mass $m$, falls off the edge and falls vertically. Using the other particle (still at rest) as the origin, ...
1answer
416 views
### Normal force in a compound pendulum (physical pundulum) system?
Consider a compound pendulum pivoted about a fixed horizontal axis, illustrated by the force diagram on the right: # Okay, I can't figure out where the normal force on the pendlum should point ...
3answers
2k views
### How do I calculate DC motor speed for a given load?
Suppose I have a robot of a given mass, and I'm choosing between 2 different wheels and 2 different motors to put on it. For each wheel I have the diameter, and for each motor I know the stall torque ...
4answers
684 views
### How do levers amplify forces?
This is really bothering me for a long time, because the math is easy to do, but it's still unintuitive for me. I understand the "law of the lever" and I can do the math and use the torques, or ...
1answer
2k views
### Torque Required For a Motor to Move an Object on Wheels?
I've been attempting to calculate how much torque a motor needs to produce in order to start a stationary object on wheels moving. (The torque is being applied to the rear 2 wheels, the front 2 are on ...
3answers
2k views
### Torque required to turn a drum/barrel
I need to spec a motor to turn a mixing barrel. The barrel contains loose earth and can be filled to a maximum of 50% of its interior volume. It is a horizontal cylinder, and will rotate through its ...
1answer
980 views
### Moment calculation [closed]
Consider image below. The weight of the fire-fighter is 840 N. What is the torque of the fire-fighter's weight about P and what is the value of the force C which cancels out the torque?
3answers
2k views
### Why are bicycle pedal threads' handedness left on the left and right on the right?
I understand the reason that bicycle pedals are oppositely threaded on either side. What I don't understand is why it works because I'm missing something. Take the right pedal for example. It's ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9419075846672058, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-statistics/113055-expressing-linear-function-terms-independent-random-variables.html
|
# Thread:
1. ## Expressing a linear function in terms of independent random variables
Let Y1<Y2<...<Yn be the order statistics of a random sample of size n from the pdf $f(x) = e^{-x}$ x ranging from 0 to infinity.
Demonstrate that all linear functions of Y1, Y2,...,Yn such as $\Sigma a_i Y_i$ can be expressed as a linear function of independent random variables.
so:
$\Sigma a_i Y_i = a_1Y_1 + a_2Y_2 + ... + a_nY_n = a_1e^{-x_1} + a_2e^{-x_2} +...+a_ne^{-x_n}$
That can't be right though....
2. Hello,
Hey... I think you have serious problems with the definitions... f is the pdf, the random variable does not equal $e^{-x}$
3. So what would Y equal to? Would I need to do a change of variable?
4. Originally Posted by statmajor
Let Y1<Y2<...<Yn be the order statistics of a random sample of size n from the pdf $f(x) = e^{-x}$ x ranging from 0 to infinity.
Demonstrate that all linear functions of Y1, Y2,...,Yn such as $\Sigma a_i Y_i$ can be expressed as a linear function of independent random variables.
so:
$\Sigma a_i Y_i = a_1Y_1 + a_2Y_2 + ... + a_nY_n = a_1e^{-x_1} + a_2e^{-x_2} +...+a_ne^{-x_n}$
That can't be right though....
$Y_1, Y_2, \dots , Y_n$ are the order statistics of an Exponential(1) random variable; so $W_1 = Y_1, \; W_2 = Y_2 - Y_1, \; W_3 = Y_3 - Y_2, \dots , W_n = Y_n - Y_{n-1}$ are n independent (exponentially distributed) random variables.
A proof of this fact can be found in Feller, "An Introduction to Probability Theory and Its Applications, Volume II"; or you may be familiar with the statement that if arrival times are exponentially distributed then the inter-arrival times are independent and exponentially distributed.
Then
$Y_1 = W_1$
$Y_2 = W_2 + W_1$
$Y_3 = W_3 + W_2 + W_1$
etc.,
so the Y's are linear functions of the W's. Hence any linear function of the Y's can be re-written as a linear function of the W's.
5. God, I'm such an idiot. There was another part to this question where it asks me to prove that Z1 = nY1 Z2 = (n-1)(Y2 - Y1) Z3 = (n-2)(Y3-Y2),...,Zn = Yn - Y(n-1)
so:
$\Sigma a_i Y_i = a_1\frac{Z_1}{n} + a_2(\frac{Z_2}{n-1} + \frac{Z_1}{n})+..+a_n(Z_n + ... + \frac{Z_1}{n})$
Can't believe I didn't realise this sooner. Thanks a lot.
Last question: is the nth term correct?
6. The last equation looks consistent with your definition of the Zs.
7. Thanks for all your help.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9023886322975159, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/72408/history-of-the-triangle-inequality/72453
|
## History of the triangle inequality
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
I am currently preparing a talk that revolves around the triangle inequality.
Because this inequality is so well-established, I do not want to (in my talk) belabor too much upon the importance it enjoys. For example, I learned some useful views here. But, these concerns are currently too advanced---for my purposes, I am seeking first some historical background; specifically,
Approximately when, where, and how did the concept of a triangle inequality get formalized, and its importance recognized?
EDIT It seems that the above question is not precise or clear enough. How about the slightly clarified question:
When was it realized (was it Fréchet's 1906 paper cited in the comments?) that the triangle inequality should be a fundamental axiom for defining distances?
-
Is this question identical to the same question with "triangle inequality" replaced by "metric space"? – Qiaochu Yuan Aug 8 2011 at 22:53
Hi Qiaochu, could you link to that question? (or do you mean that I should augment / alter the title of this question?) – S. Sra Aug 8 2011 at 23:04
I'm not really clear on what "gets formalized" means. I suppose the concept itself was known to the ancient Greeks, and the algebraic inequality for Euclidean distance has been written down for centuries. For "gets formalized" to count, do you mean that the concept of real number should have been made rigorous first? And do you mean introduced as a formal axiom for concepts of distance? – Todd Trimble Aug 8 2011 at 23:07
4
Here's the link to Fréchet's original paper: dx.doi.org/10.1007/BF03018603 see also this thread mathoverflow.net/questions/51494/… for some comments. – Theo Buehler Aug 8 2011 at 23:43
2
@Qiaochu: I mean $d(a,b) \le d(a,c)+d(b,c)$ for some distance function $d$ (not necessarily $R^n$) – S. Sra Aug 9 2011 at 0:43
show 10 more comments
## 3 Answers
There is a discussion of this issue in Dieudonné's History of Functional Analysis, p. 115:
It may seem obvious to us that the results of Hilbert are but one step removed from what we now call the theory of Hilbert space; but if, in fact, the birth of that theory almost immediately followed the publication of Hilbert's papers, it seems to me that it is due to the fact that this publication precisely occurred during the emergence of a new concept in mathematics, the concept of structure.
Until the middle of the XIXth century, mathematicians had been dealing with well determined mathematical "objects": numbers, points, curves, surfaces, volumes, functions, operators. But the fact that algebraic manipulations on different kinds of "objects" had a strikingly similar appearance soon attracted attention (cf. chap.IV, §3), and after 1840 it gradually became clear that the essence of these manipulations did not lie in the nature of the objects, but in the rules to be followed in handling them, which might be the same for very different types of objects. However, a precise formulation of this idea had to wait for the adoption of the set-theoretic concepts and language; and it is only in 1895 that our definition of a group, on an arbitrary underlying set, was formulated by Weber [225]. The trend towards the definition of algebraic structure then gained momentum, and around 1920 all fundamental notions of present-day Algebra had been defined.
In Analysis, no similar development had yet occurred in 1900. The extensions of the ideas of limit and continuity which had been formulated always were relative to special objects such as curves, surfaces or functions. The possibility of defining such notions in an arbitrary set is an idea which undoubtedly was first put forward by Fréchet in 1904 [69], and developed by him in his famous thesis of 1906 [71].
If I may summarize: the idea that one should talk about mathematical objects in terms of the axioms they should satisfy was itself quite new around 1900, and the specific application of this idea to the triangle inequality seems quite likely to have originated with Fréchet for that reason.
-
Thanks Qiaochu; The above quotation is spot-on, and the very last sentence of the quote is essentially the attribution that I was searching for. – S. Sra Aug 9 2011 at 16:32
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
It was not Frechet, since the statement that "a line is the shortest distance between two points" (which is obviously equivalent to the triangle inequality) is one of Euclid's axioms, and since Euclid is widely viewed as more of a scribe than the discoverer, presumably it goes back further than that.
-
Indeed, that is, as you say, obviously equivalent to the triangle inequality for the sense of d(x, y) as meaning "length of straight-line path from x to y". One might also consider the sense of d(x, y) as meaning "length of shortest path from x to y" (or, more pedantically, "infimum of path lengths"). But then the triangle inequality is tautologous! (Though there may still be a notable first explicit remark upon this tautology...) Perhaps we need to figure out which "triangle inequality" exactly is of interest to our question-asker Suvrit. – Sridhar Ramesh Aug 9 2011 at 2:13
2
"a line is the shortest distance between two points" ... is one of Euclid's axioms ... it is not! – Gerald Edgar Aug 9 2011 at 12:27
1
It is Book I, Proposition 20. – Gerald Edgar Aug 9 2011 at 12:31
1
@Gerald. I eat my words. – Igor Rivin Aug 9 2011 at 13:34
Here is a suggestion, to get the idea across in an informal way---it is what I always tell the students when I introduce the triangle inequality: I tell them that its essential content, and the way it gets used, is that "things close to the same thing are close to each other".
-
This ought to be a comment, because it is not an answer to the question. – Todd Trimble Aug 9 2011 at 11:37
Actually Dick, I am already quoting this statement of yours (I saw it in your response to a previous triangle-ineq. related question)! – S. Sra Aug 9 2011 at 16:28
@suvrit Right,sorry,I missed the indirect reference. – Dick Palais Aug 9 2011 at 20:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9637561440467834, "perplexity_flag": "middle"}
|
http://crypto.stackexchange.com/questions/579/how-can-we-reason-about-the-cryptographic-capabilities-of-code-breaking-agencies/589
|
# How can we reason about the cryptographic capabilities of code-breaking agencies like the NSA or GCHQ?
I have read in Applied Cryptography that the NSA is the largest hardware buyer and the largest mathematician employer in the world.
• How can we reason about the symmetric ciphers cryptanalysis capabilities of code-breaking agencies like the NSA or GCHQ given that they have performed first class unpublished cryptographic research for the last ~40 years?
• What sort of computational lower bounds can we establish on an attack against these ciphers given that these agencies may have unpublished and unknown cryptanalysis techniques of equivalent utility as differential cryptanalysis (we only know about differential cryptanalysis because someone outside the NSA/IBM rediscovered it)?
For example, could we have developed a good lower bound on the ease of finding collisions in md5 without knowledge of differential cryptanalysis?
This question is restricted to symmetric ciphers.
-
2
I think it's an interesting question and one that every user of cryptography outside of the US Government itself must be asking. While it's true that an organization such as the NSA gives up very little information about its capabilities (and the question could be more general and apply to all such organizations) the amount of information is non-zero. Even in the absence of information, the question degenerates to another interesting and important question: why should we have confidence that algorithm X is secure in principle? – Marsh Ray Sep 1 '11 at 18:35
Applied Crypto is getting pretty long in the tooth. I'd be interested in seeing the community's consensus on the current strength of the NSA in this area. It's not objective, but community consensus is valuable in itself; perception is reality and all. :-) – Steve Dispensa Sep 1 '11 at 19:03
1
Yes, this is a good question. Everyone should have this question. The fact that nobody has a good answer is an important fact and it is good to publish the question on this site. A lot of the most important questions around don't really have solid answers. Uncertainty and ignorance are what we have to face up to and decide how to manage. – Zooko Sep 1 '11 at 19:56
@gokoon - are my edits in line with what you wished to ask with this question? – Ethan Heilman Sep 1 '11 at 20:11
1
Don't focus too much on specific organizations, such as the American National Security Agency (NSA). Maybe you think NSA are The Good Guys and will not do anything bad (even if they have such an ability) or you at least think they are on your side and will not do anything bad to you. But do we have any reason to believe that the Chinese or Russian classified cryptography research are less advanced than the American or British? Also some users (e.g. Europeans) may not feel okay about the possibility of NSA having such power over their communications. – Zooko Sep 6 '11 at 15:42
show 7 more comments
## 2 Answers
Applied Cryptography is book which is becoming, say, not-so-recent. NSA has quite a lot of budget, but not an infinite amount, and there are other organization, in particular big private corporation, which also have impressive means. Google or Apple, for instance, are companies with R&D activity in the area of cryptography, and who are able to potentially throw a billion dollars at a given problem (and they could probably do so with more administrative ease and flexibility than a federal agency).
Also, there has been quite some change in the area of public research on cryptography. In the early 1980s, there was a couple of conferences dedicated to cryptography; in 2011, there are more than one hundred ! The field has simply expanded, inflated, so much that no single organization, even the NSA or Microsoft or Apple, can claim employing a non-negligible proportion of the available brain resources. It is a recent change: from my own personal experience, inflation really began in earnest around 1995.
That's one thing that can be said about NSA abilities. They do not tell who they employ and what they work on; but we can estimate the probabilities of NSA having discovered advanced cryptanalytic techniques which have evaded the grasp of public academics. As Leibniz put it, discoveries are a product of ideas which are "floating around", and who will actually make the discovery is a random choice. In other words, if NSA employs 1% of the top cryptographers, then they will get 1% of the advances. Even if there is such as a thing as scientific capital (scientists work much more efficiently when they are in labs with many other scientists and a strong local tradition of working on the same subjects), it is still quite improbable that NSA is far ahead of everybody else.
Another point is about incentives. NSA is a budget sink-hole, but it has goals: namely, to protect the USA against their enemies (the rest of the World). When the NSA says that an algorithm is good (say, the AES), other US organization (both public and private) begin to use it. It is sure that NSA would like to be able to break encryption systems which are in widespread use; but, and (in my view) this is for them a much more important goal for NSA, they want the encryption that US organizations use to be unbreakable by their enemies. As such, it would make sense for NSA to promote an algorithm that they can break only if they have good reason to believe that only them can break it. NSA, like all secret services, knows what secret is: they keep their secrets, but they also assume that they do not know all about the secrets of their competitors. Correspondingly, there again, I find it implausible that NSA would know how to break AES, since they keep on brandishing it as "the solution" and there is not the slightest hint of a plan to define another symmetric encryption standard, if only as a backup.
So this is how I reason about the unknown capacities of secretive organization: I look at their resources, and I match their observable actions against their goals. Which leads to the following conclusion: if NSA can break AES, then either they have access to some non-Earth-based technology and science (a popular theme in movies, e.g. Men In Black), or they are not really trying to protect the organizations they are supposed to protect. Or both.
On the purely scientific plane, we have no proof that symmetric primitives really exist (in particular hash functions; but we do not know either if it is possible, in a Turing-said-it way, to have a symmetric cipher with an in-memory representation shorter than $\log n!$ bits: the amount of bits needed to represent a randomly selected permutation over $n$ bits). Right now we have candidates: defined block ciphers which we do not know how to break. And not block ciphers which we know cannot be broken. Therefore, there are no real "lower bounds" which would work against unknown cryptanalytic advances.
-
2
Your comment seems to ignore ECHELON, where the US co-operated with UK, Australia, New Zealand, and Canada, to monitor telephone (and other) communications. ECHELON was implicated in a number of government level industrial espionage events. The US happily joined ECHELON, despite the risk to their own country's industrial privacy. – DanBeale Sep 2 '11 at 18:57
1
Strong encryption is in everyone's interest, including the NSA. Strong encryption == secure commerce. If the government needs access to something, it has many vectors to get the key, including jailing your for not turning over the key. – duffbeer703 Sep 5 '11 at 20:00
1
They can always resort to the "Jack Bauer Side-Channel Attack", which involves them drilling holes in your kneecaps until you reveal the key. Billion dollar supercomputer or one agent with a \$50 hammer drill... hey, I guess you can put a price on human rights! – Polynomial Nov 23 '11 at 14:12
What sort of computational lower bounds can we establish on an attack against these ciphers [...]
At the moment we cannot prove such a result, theoretically proving strong lowerbounds on the amount of resources required to break candidate cryptographic primitives would imply separating $\mathsf{P}$ from $\mathsf{NP}$ (a million dollar problem), and it is consistent with what we know today that these two are equal, and we maybe living in what is one of the worlds called Algorithmica, Heuristica, or Pessiland (among Impagliazzo's five worlds).
For more on Impagliazzo's worlds, see his paper or check this workshop at CCI.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9619356989860535, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/242057/why-the-principle-of-counting-does-not-match-with-our-common-sense/242083
|
Why the principle of counting does not match with our common sense
Principle of counting says that
"the number of odd integers, which is the same as the number of even integers, is also the same as the number of integers overall."
This does not match with my common sense (I am not a mathematician, but a CS student).
Can some people here could help me to reach a mathematicians level of thinking for this problem. I have searched net a lot (Wikipedia also)
-
4
You probably have never counted them all yet. :) – Hagen von Eitzen Nov 21 '12 at 15:54
1
In the inifinite case, it is no longer true that if $A$ is a proper subset of $B$, then the "size of $A$" is strictly less than the "size of $B$." That intuition is true in finite sets, but would prove useless when dealing with infinite sets. – Thomas Andrews Nov 21 '12 at 16:04
Note, most mathematicians don't say "the number of odd integers" because that is confusing the word "number." Rather, we talk about the "cardinality of the set of odd integers." – Thomas Andrews Nov 21 '12 at 16:06
1
The thing is, even if you think you do, you have very little common sense regarding «counting infinite sets». You, like most people, are just extrapolating your life-long experience with manipulating finite sets, and there is no reason —if you think about it— to expect things to work the same. – Mariano Suárez-Alvarez♦ Nov 21 '12 at 16:28
1
To add on Thomas' last comment, the idea behind cardinality is to discard internal structure, because there are plenty of very large sets which don't have any internal structure which makes "intuitive sense" for us. – Asaf Karagila Nov 21 '12 at 17:21
show 1 more comment
3 Answers
The basic idea is very simple: Two sets are said to have the same number of elements if they can be put in a one-to-one correspondence with each other.
So here is a one-to-one correspondence between the positive odd numbers and the positive even numbers: (1,2), (3,4), (5,6), … So any odd number $2i-1$ is matched to a corresponding even number $2i$.
To match all positive numbers with all positive even numbers, match instead $i$ with $2i$, resulting in (1,2), (2,4), (3,6), (4,8), …
See also: Hilbert's paradox of the Grand Hotel for a more entertaining way to see this.
-
While I find the Grand Hotel paradox to be very educational and relevant, it's not directly related to the question. The question asks, in a nutshell, "how come the paradox is not a mathematical contradiction?" – Asaf Karagila Nov 21 '12 at 16:04
@AsafKaragila: Ah, but the question, as you summarize it, cannot have a mathematical answer, in the sense that you cannot prove the consistency of mathematics. Instead, what is called for is to develop a better intuition, once it becomes clear that your old intuitions don't work. Which happens all the time in mathematics (not to mention physics). And one way to do it is to first have your faced rubbed in the failure of old intuitions. – Harald Hanche-Olsen Nov 21 '12 at 16:13
Yes. I agree, and I said that in my answer. Naive intuition does not go very far in mathematics, but it provides a good start for "how to model things" in the sense of what properties we expect a certain mathematical notion to have. – Asaf Karagila Nov 21 '12 at 16:14
Well, counting is really just judging how big a set is. It is a question about cardinality, then.
Your intuition is based on finite sets. Infinite sets are different. However mathematics is not built on naive intuition, when it does it often run into paradoxes and problems (e.g. Russell's paradox and naive set theory).
Therefore we need to find good properties which generalize what we want to hold. So we have to think "when do two sets have the same size?" well, for finite sets we know that if two sets have the same number of elements then they have the same size. But we also know the following:
1. If $A\subseteq B$ then $A$ cannot exceed the size of $B$.
2. Equinumerosity is an equivalence relation.
3. If there is a bijection between $A$ and $B$ then they must have the same cardinality. It is impossible that $\{0,1\}$ and $\{5,6\}$ would have different sizes.
It turns out that to say that $A$ and $B$ have the same cardinality (or same number of element, although the word number is definitely confusing at first) if and only if there exists a bijection between $A$ and $B$. Furthermore, we can order the cardinals by injections, namely $|A|\leq|B|$ if and only if there is an injection from $A$ into $B$.
So it turns out that for infinite sets you get a few "naively paradoxical" results. Like having a set which is in bijection with a proper subset. However infinite sets often have "room for change" and allow us to move things around like that.
One could model the size of sets differently, but the properties which are written above will not necessarily be preserved. For example if we require that a proper subset always have a smaller cardinality, then this is no longer invariant under bijections.
Some reading material related to this:
The third one is particularly relevant.
-
I too once thought like you, but in CS counting is your friend.
When counting sets, we can determine if two sets have the same size or cardinality. For example countable infinite sets can always be put into correspondence with the natural numbers $\mathbb{N}$. These countably infinite sets are recursively enumerable languages, they can be iterated indefinitely.
So lets call the set of even numbers $\mathbb{E}$ and the set of odd numbers $\mathbb{O}$
Now, starting from the beginning, in increasing order, pair the smallest even number with the smallest natural number, next, pair the next smallest number with the next smallest natural number, so on and so forth. It looks something like this:
$\mathbb{N} = \{0, 1, 2, 3, 4, ...\}$
$\mathbb{E} = \{2, 4, 6, 8, 10, ...\}$
We see for every number in $\mathbb{N}$ there is a number in $\mathbb{E}$, that is, every number in $\mathbb{E}$ can be mapped to a unique number in $\mathbb{N}$ and vice versa.
This kind of mapping is known as a computable map or a bijection Such functions are one-to-one and onto, every element from each set is paired with a unique element of the other set.
The same argument can be applied to show than $|\mathbb{O}| = |\mathbb{N}|$.
Now, you may ask why is this important to CS, well, every language (set) that can be mapped to $\mathbb{N}$ can be recognized by a computer machine, and languages that do not map to $\mathbb{N}$ cannot even be recognized by a computer, therefore, we study this.
-
I didnt get the lats part.Why mapped set can be recognized by a computer machine? – user5507 Nov 21 '12 at 16:26
Using with computable functions is not a great idea for developing an initial intuition about infinite sets. It means that every infinite set has an uncountable subset. How about that. – Asaf Karagila Nov 21 '12 at 16:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9419363141059875, "perplexity_flag": "head"}
|
http://mathhelpforum.com/calculus/50010-multiple-choice-general-limit-question.html
|
Thread:
1. Multiple Choice General Limit Question
If lim x->2^- f(x) = lim x->2^+ = -1, but f(2) = 1, then lim x->2 f(x).
What is the best answer for this limit?
a. is -1.
b. does not exist.
c. is infinite.
d. is 1.
2. $\lim_{x \to a} f(x)$ exists iff $\lim_{x \to a^+} f(x) = \lim_{x \to a^-} f(x)$ and is equal to their limit.
This has nothing to do with the value at $f(a)$.
3. Originally Posted by gearshifter
If lim x->2^- f(x) = lim x->2^+ = -1, but f(2) = 1, then lim x->2 f(x).
What is the best answer for this limit?
a. is -1.
b. does not exist.
c. is infinite.
d. is 1.
If the left hand and the right hand limits are the same, regardless of the value of the function at that point, the limit is the value of the left and right hand limits.
Lets say that $\lim_{x\to{x_0^-}}f(x)=\lim_{x\to{x_0^+}}f(x)=c$.
If $f(x_0)=f;~f\neq c$, it doesn't have an affect on the value of the limit. Thus $\lim_{x\to{x_0}}f(x)=c$
Does this make sense?
With this, determine the correct answer.
--Chris
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8691569566726685, "perplexity_flag": "middle"}
|
http://crypto.stackexchange.com/questions/1422/demonstrating-the-insecurity-of-an-rsa-signature-encoding-scheme
|
# Demonstrating the insecurity of an RSA signature encoding scheme
I'm working on problem 12.4 from Katz-Lindell. The problem is as follows:
Given a public encoding function $\newcommand{\enc}{\operatorname{enc}}\enc$ and a textbook RSA signature scheme where signing occurs by finding $\enc(m)$ and raising it to the private key $d \bmod N$, how can we demonstrate the scheme's insecurity for $\enc(m) = 0||m||0^{L/10}$, where $L = |N|$ and $|m| = 9L/10 - 1$ and m is not the message of all zeroes?
-
Okay, and where do you need help here? – Paŭlo Ebermann♦ Dec 9 '11 at 21:52
I need to know how to find a forgery on an m not in Q, where Q is the set of queries to the adversary's signing oracle. Is there something dead simple that I'm missing about this? – pg1989 Dec 9 '11 at 21:56
Have a look at the corresponding verifying scheme. Can you find a number, which, when taken to the power $e$ (the public exponent), gives something in this encoding? (This depends on the public key, but assume it it something like $3$.) – Paŭlo Ebermann♦ Dec 9 '11 at 22:08
Well, here's a hint: remember that for textbook RSA $enc(X) \cdot enc(Y) = enc( X \cdot Y) \mod N$ -- how can we find two messages $X$ and $Y$ such that $X \cdot Y \mod N$ is also a valid message? – poncho Dec 9 '11 at 22:10
@pg1989: Presumably that should go "and $m$ is not ...". $\;\;$ – Ricky Demer Dec 9 '11 at 22:20
show 2 more comments
## 1 Answer
An RSA signature scheme with public key $(n,e)$, private exponent $d$, and encoding function $enc$ (including but not limited to the question's $enc$), signs message $m$ as $$Sign(m) = enc(m)^d\bmod n$$
Such scheme is insecure if an adversary can figure out $k>0$ distinct messages $m_i$, and integers $u_i$, $r$, $s$ verifying $$s^e \cdot enc(m_0) \cdot \prod_{0\lt i\lt k} enc(m_i)^{u_i} \equiv r^e \pmod n$$ because this implies (by raising to the power $d$) $$Sign(m_0) \equiv r \cdot s^{-1} \cdot\prod_{0\lt i\lt k}Sign(m_i)^{-u_i} \pmod n$$ which allows computing the signature of $m_0$ (if $k\gt 1$, it is also necessary that the attacker obtain the signatures of the other messages $m_i$; that becomes an existential forgery, or chosen-message attack). Although dated, Jean-Francois Misarsky's How (Not) to Design RSA Signature Schemes is an interesting and relatively easy reading on that topic.
In fact, every known attack on an RSA signature scheme is either of the above kind (with more or less involved computations to exhibit $m_i$, $u_i$, $r$, $s$); or amounts to factorization of $n$ (which includes anything recovering $d$, perhaps by side-channel attack); or is some implementation error, perhaps widespread.
In order to mount an attack of the above kind, a relation of the form $enc(m_0)=r^e$ is ideal. It gives the signature of $m_0$ without any consideration on $n$ or known signature. When $e$ is 3, 5 or 7, this can be done with the encoding $enc$ in the question, by considering $r=2^t$ for some appropriate $t$, and extended to $r=v\cdot2^t$ for some small $v$.
Similarly, $enc(m_0) = r^e\cdot enc(m_1)$ gives the signature of one message from the signature of the other, without any consideration on $n$. This can be done with the encoding $enc$ in the question, for a wider choice of $e$.
Similarly, $enc(m_0) \cdot enc(m_1) = enc(m_2) \cdot enc(m_3)$ gives the signature of one message from the signature of the other three, for any public key $(n,e)$. With the encoding $enc$ in the question, there is ample choice (the equation simplifies to $m_0\cdot m_1=m_2\cdot m_3$, and all messages which left bit is 0 or which integer representation is composite are vulnerable). The ISO/IEC 9796:1991 signature encoding scheme (section 11.3.5 of the Handbook of Applied Cryptography), now withdrawn, turned out to be vulnerable to that, of course if the adversary can obtain the signature of three chosen messages, and is content with the signature of the fourth.
Even the hash-based ISO/IEC 9796-2:1997 (now known as ISO/IEC 9796-2:2010 scheme 1), still in wide use, is vulnerable if the adversary can obtain the signature of many weird chosen messages and is content with the signature of another, which fortunately is seldom the case in practice.
Some require $e>2^{16}$ (FIPS 186-3 appendix B3.1, RGS Annex B1 section 2.2.1.1, and I have seen suggestions for much wider random $e$), because some attacks on weak encoding schemes or implementations of RSA signature/encryption have been easiest for $e=3$ or other small $e$, as is the case for the scheme in the question. I will not condone a course of action that will lead us to loose the main appeal of RSA (or Rabin) signature schemes: fast and simple verification with modest hardware.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 55, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9199661016464233, "perplexity_flag": "middle"}
|
http://lucatrevisan.wordpress.com/2009/01/22/cs276-lecture-2-semantic-security/
|
in theory
"Marge, I agree with you - in theory. In theory, communism works. In theory." -- Homer Simpson
# CS276 Lecture 2: Semantic Security
January 22, 2009 in CS276 | Tags: semantic security
In which we encounter for the first time message indistinguishability and semantic security
In the last lecture we saw that
• all classical encryption schemes which allow the encryption of arbitrarily long messages have fatal flaws;
• it is possible to encrypt with perfect security using one-time pad, but the scheme can be used only once, and the key has to be as long as the message;
• if one wants perfect security, one needs a key as long as the total length of all messages that are going to be sent.
Our goal for the next few lectures will be to study schemes that allow the sending of messages that are essentially arbitrarily long, using a fixed key, and having a security that is essentially as good as the perfect security of one-time pad.
Today we introduce a notion of security (semantic security) that is extremely strong. When it is met there is no point for an adversary to eavesdrop the channel, regardless of what messages are being sent, of what she already knows about the message, and what goal she is trying to accomplish.
First, let us fix the model in which we are going to work. For the time being, we are going to be very modest, and we shall only try to construct an encryption scheme that, like one-time pad, is designed for only one use. We just want the key to be reasonably short and the message to be of reasonably large length.
We shall also restrict ourselves to passive adversaries, meaning that Eve is able to see the communication between Alice and Bob, but she cannot inject her own messages in the channel, and she cannot prevent messages from being delivered.
The definition of correctness for an encryption scheme is straightforward.
Definition 1 [Symmetric-Key Encryption Scheme -- Finite case] A symmetric-key encryption scheme with key-length ${k}$, plain-text length ${m}$ and ciphertext-length ${c}$ is is a pair of probabilistic algorithms ${(Enc,Dec)}$, such that ${Enc: \{ 0,1 \}^k \times \{ 0,1 \}^m \rightarrow \{ 0,1 \}^c}$, ${Dec: \{ 0,1 \}^k \times \{ 0,1 \}^c \rightarrow \{ 0,1 \}^m}$, and for every key ${K \in \{ 0,1 \}^k}$ and every message ${M}$,
$\displaystyle {\mathbb P} [ Dec(K,Enc(K,M)) = M ] = 1 \ \ \ \ \ (1)$
where the probability is taken over the randomness of the algorithms
Definition 2 [Symmetric-Key Encryption Scheme -- Variable Key Length Case] A symmetric-key encryption scheme with variable key-length is is a pair of polynomial-time probabilistic algorithms ${(Enc,Dec)}$ and a function ${m(k)> k}$, such that for every security parameter ${k}$, for every key ${K \in \{ 0,1 \}^k}$ and every message ${M \in \{ 0,1 \}^{m(k)}}$,
$\displaystyle {\mathbb P} [ Dec(k,K,Enc(k,K,M)) = M ] = 1 \ \ \ \ \ (2)$
(Although it is redundant to give the algorithm the parameter ${k}$, we do so because this will emphasize the similarity with the public-key setting that we shall study later.) It will be more tricky to satisfactorily formalize the notion of security.
One super-strong notion of security, which is true for the one-time pad is the following:
Definition 3 [Perfect Security] A symmetric-key encryption scheme ${(Enc,Dec)}$ is perfectly secure if, for every two messages ${M,M'}$, the distributions ${Enc(K,M)}$ and ${Enc(K,M')}$ are identical, where we consider the distribution over the randomness of the algorithm ${Enc()}$ and over the choice of ${K \sim \{ 0,1 \}^k}$.
The informal discussion from the previous lecture gives a hint to how to solve the following
Exercise 4 Prove that if ${(Enc,Dec)}$ is perfectly secure, then ${k\geq m}$.
Before we move on, let us observe two limitations that will be present in any possible definitions of security involving a key of length ${k}$ which is much smaller than the message length. Eve can always employ one of the following two trivial attacks:
1. In time ${2^k}$, Eve can enumerate all keys and produce a list of ${2^k}$ plaintexts, one of which is correct. Further considerations can help her prune the list, and in certain attack models (which we shall consider later), she can figure out with near certainty which one is right, recover the key and totally break the system.
2. Eve can make a random guess of what the key is, and be correct with probability ${2^{-k}}$.
Already if ${k=128}$, however, neither line of attack is worrisome for Alice and Bob. Even if Eve has access to the fastest of super-computers, Alice and Bob will be long dead of old age before Eve is done with the enumeration of all ${2^{128}}$ keys; and Alice and Bob are going to both going to be stricken by lighting, and then both hit by meteors, with much higher probability than ${2^{-128}}$. The point, however, is that any definition of security will have to involve a bound on Eve’s running time, and allow for a low probability of break. If the bound on Eve’s running time is enormous, and the bound on the probability of a break is minuscule, then the definition is as satisfactory as if the former was infinite and the latter was zero.
All the definitions that we shall consider involve a bound on the complexity of Eve, which means that we need to fix a model of computation to measure this complexity. We shall use (non-uniform) circuit complexity to measure the complexity of Eve, that is measure the number of gates in a boolean circuit implementing Eve’s functionality.
If you are not familiar with circuit complexity, the following other convention is essentially equivalent: we measure the running time of Eve (for example on a RAM, a model of computation that captures the way standard computers work) and we add the length of the program that Eve is running. The reason is that, without this convention, we would never be able to talk about the complexity of computing finite functions. Every function of an 128-bit input, for example, is very efficiently computable by a program which is nothing but a series of ${2^{128}}$ if-then-elses.
Finally we come to our first definition of security:
Definition 5 [Message Indistinguishability -- concrete version] We say that an encryption scheme ${(Enc,Dec)}$ is ${(t,\epsilon)}$ message indistinguishable if for every two messages ${M,M'}$, and for every boolean function ${T}$ of complexity ${\leq t}$, we have
$\displaystyle | {\mathbb P} [ T(Enc(K,M) =1 ] - {\mathbb P} [ T (Enc(K,M')) =1 ] | \leq \epsilon \ \ \ \ \ (3)$
where the probability is taken over the randomness of ${Enc()}$ and the choice of ${K\sim \{ 0,1 \}^k}$.
(Typical parameters that are considered in practice are ${t=2^{80}}$ and ${\epsilon = 2^{-60}}$.)
When we have a family of ciphers that allow varying key lengths, the following asymptotic definition is standard.
Definition 6 [Negligible functions] A function ${\nu : {\mathbb N} \rightarrow {\mathbb R}^+}$ is negligible if for every polynomial ${p}$ and for every sufficiently large ${n}$
$\displaystyle \nu(n) \leq \frac 1 {p(n)}$
Definition 7 [Message Indistinguishability -- asymptotic definition] We say that a variable key length encryption scheme ${(Enc,Dec)}$ is message indistinguishable if for every polynomial ${p}$ there is a negligible function ${\nu}$ such that for all sufficiently large ${k}$ the scheme is ${(p(k),\nu(k))}$-message indistinguishable when the security parameter is ${k}$.
The motivation for the asymptotic definition, is that we take polynomial time to be an upper bound to the amount of steps that any efficient computation can take, and to the “number of events” that can take place. This is why we bound Eve’s running time by a polynomial. The motivation for the definition of negligible functions is that if an event happens with negligible probability, then the expected number of experiments that it takes for the event to happen is superpolynomial, so it will “never” happen. Of course, in practice, we would want the security parameters of a variable-key scheme to be exponential, rather than merely super-polynomial.
Why do we use message indistinguishability as a formalization of security?
A first observation is that if we take ${\epsilon = 0}$ and put no limit on ${t}$, then message-indistinguishability becomes perfect security, so at least we are dealing with a notion whose “limit” is perfect security.
A more convincing explanation is that message indistinguishability is equivalent to semantic security, a notion that we describe below and that, intuitively, says that Eve might as well not look at the channel.
What does it mean that “Eve might as well not look at the channel”? Let us summarize Eve’s information and goals. Alice has a message ${M}$ that is sent to Bob over the channel. The message comes from some distribution ${X}$ (for example, it is written in English, in a certain style, it is about a certain subject, and so on), and let’s assume that Eve knows ${X}$. Eve might also know more about the specific message being sent, because of a variety of reasons; call ${I(M)}$ the information that Eve has about the message. Finally, Eve has a goal in eavesdropping the conversation, which is to learn some information ${f(M)}$ about the message. Perhaps she wants to reconstruct the message in its entirety, but it could also be that she is only interested in a single bit. (Does ${M}$ contain the string “I hate Eve”? Is ${M}$ a confidential report stating that company ${Y}$ is going to miss its earning estimate? And so on.)
Why is Eve even bothering tapping the channel? Because via some cryptoanalytic algorithm ${A}$, which runs in a reasonable amount of time, she thinks she has a good chance to accomplish. But the probability of accomplishing her goal would have been essentially the same without tapping the channel, then there is no point.
Definition 8 [Semantic Security -- Concrete definition] An encryption scheme ${(Enc,Dec)}$ is ${(t,o,\epsilon)}$ semantically secure if for every distribution ${X}$ over messages, every functions ${I: \{ 0,1 \}^m \rightarrow \{ 0,1 \}^{*}}$ and ${f: \{ 0,1 \}^m \rightarrow \{ 0,1 \}^*}$ (of arbitrary complexity) and every function ${A}$ of complexity ${t_A\leq t}$, there is a function ${A'}$ of complexity ${\leq t_A + o}$ such that
$\displaystyle | {\mathbb P} [ A(Enc(K,M),I(m)) = f(M)] - {\mathbb P} [ A'(I(m)) = f(M)] | \leq \epsilon$
Think, as before, of ${t=2^{80}}$ and ${\epsilon = 2^{-60}}$, and suppose ${o}$ is quite small (so that a computation of complexity ${o}$ can be performed in a few seconds or less), and notice how the above definition captures the previous informal discussion.
Now let’s see that semantic security is equivalent to message indistinguishability.
Lemma 9 [Semantic Security Implies Message Indistinguishability] If ${(Enc,Dec)}$ is ${(t,o,\epsilon)}$ semantically secure, then it is ${(t,2\epsilon)}$ message indistinguishable.
Note that semantic security implies message indistinguishability regardless of the overhead parameter ${o}$.
Proof: We prove that if ${(Enc,Dec)}$ is not ${(t,2\epsilon)}$ message indistinguishable then it is not ${(t,o,\epsilon)}$ semantically secure regardless of how large is ${o}$.
If ${(Enc,Dec)}$ is not ${(t,2\epsilon)}$ message indistinguishable, then there are two messages ${M_0,M_1}$ and an algorithm ${T}$ of complexity ${\leq t}$ such that
$\displaystyle {\mathbb P} [ T(Enc(K,M_1)) = 1 ] - {\mathbb P} [ T(Enc(K,M_0)) = 1 ] > 2\epsilon \ \ \ \ \ (4)$
Pick a bit ${b}$ uniformly at random in ${\{ 0,1 \}}$; then we have
$\displaystyle {\mathbb P} [ T(Enc(K,M_b)) = b ] > \frac 12 + \epsilon \ \ \ \ \ (5)$
And now take ${A}$ to be ${T}$, ${X}$ to be the distribution ${M_b}$ for a random ${b}$, and define ${f(M_b) = b}$, and ${I(M)}$ to be empty. Then
$\displaystyle {\mathbb P} [ A(I(M),Enc(K,M)) = f(M) ] > \frac 12 + \epsilon \ \ \ \ \ (6)$
On the other hand, for every ${A'}$, regardless of complexity
$\displaystyle {\mathbb P} [ A'(I(M),Enc(K,M)) = f(M) ] = \frac 12 \ \ \ \ \ (7)$
and so we contradict semantic security. ◻
Lemma 10 [Message Indistinguishability Implies Semantic Security] If ${(Enc,Dec)}$ is ${(t,\epsilon)}$ message indistinguishable and ${Enc}$ has complexity ${\leq p}$, then ${(Enc,Dec)}$ is ${(t- \ell_f,p,\epsilon)}$ semantically secure, where ${\ell_f}$ is the maximum length of ${f(M)}$ over ${M\in \{ 0,1 \}^m}$.
Proof: Fix a distribution ${X}$, an information function ${I}$, a goal function ${f}$, and a cryptoanalytic algorithm ${A}$ of complexity ${\leq t- \ell_f}$.
Take ${A'(I(M)) = A(I(M),Enc(K,{\bf 0} ))}$, so that the complexity of ${A'}$ is equal to the complexity of ${A}$ plus the complexity of ${Enc}$.
For every message ${M}$, we have
$\displaystyle {\mathbb P} [ A(I(M),Enc(K,M) ) = f(M) ] \leq {\mathbb P} [ A(I(M),Enc(K,{\bf 0}) ) = f(M) ] + \epsilon \ \ \ \ \ (8)$
Otherwise defining ${T(C)=1 \Leftrightarrow A(I(M),C) = f(M)}$ would contradict the indistinguishabiltiy.
Averaging over ${M}$ in ${X}$
$\displaystyle {\mathbb P}_{M\sim X, K\in \{ 0,1 \}^n} [ A(I(M),Enc(K,M) ) = f(M) ] \leq {\mathbb P}_{M\sim X, K\in \{ 0,1 \}^n} [ A(I(M),Enc(K,{\bf 0}) ) = f(M) ] + \epsilon \ \ \ \ \ (9)$
and so
$\displaystyle {\mathbb P}_{M\sim X, K\in \{ 0,1 \}^n} [ A(I(M),Enc(K,M) ) = f(M) ] \leq {\mathbb P}_{M\sim X, K\in \{ 0,1 \}^n} [ A'(I(M)) = f(M) ] + \epsilon \ \ \ \ \ (10)$
◻
It is also possible to define an asymptotic version of semantic security, and to show that it is equivalent to the asymptotic version of message indistinguishability.
Definition 11 [Semantic Security -- Asymptotic Definition] An encryption scheme ${(Enc,Dec)}$ is semantically secure if for every polynomial ${p}$ there exists a polynomial ${q}$ and a negligible function ${\nu}$ such that ${(Enc,Dec)}$ is ${(p(k),q(k),\nu(k))}$ semantically secure for all sufficiently large ${k}$.
Exercise 12 Prove that a variable key-length encryption scheme ${(Enc,Dec)}$ is asymptotically semantically secure if and only if it is asymptotically message indistinguishable,
## 7 comments
Wouldn’t it be easier to understand Definition 8, if we take the same adversary A in both instances ? In one case, A gets I(m) and Enc(k,m) and in the other case it gets I(m) and a random string in C (the set of cipher texts). Due to this change, the overhead parameter o will also not be required. Am I missing any subtle issue here ?
On the other hand, does this change make it more difficult to prove the equivalence between Indistinguishability and Semantic security ?
Som: if you use this definition, then it is very easy to prove equivalence with message indistinguishability, in fact your definition is almost the same as message indistinguishability. Indeed, implicitly, the proof of equivalence beween semantic security and message indistinguishability goes by the chain of implications
message indistinguishability -> your definition -> semantic security
by following the reverse directions in a proof by contradiction. (That is, a violation of semantic security can be made into a violation in which the adversaries are the same, which leads to a violation of message indistinguishability.)
Is it safe to use the hash of the plain text as a key for encryption?…
Semantic security is relevant to symmetric encryption. The idea is simply that your adversary should not be able to distinguish between the encryption of two chosen plaintexts. Check out Luca Trevesian’s lecture notes talking about semantic security f…
Excellent blog here! Also your site loads up fast! What web host are
I wish my site loaded up as fast as yours lol
I do not even understand how I ended up here, however I assumed this submit was once great.
I don’t recognize who you are but certainly you are going to a famous blogger in case you aren’t already.
Cheers!
[...] power, and the crypto underlying SSL/TLS wilts under that assumption. In contrast, the notion of semantic security actually does apply, yet it is not broad enough. Semantic security essentially gives an adversary [...]
And when such basic principles are disturbed,
neither does the IMD, the Eurogroup of the EU and IMF
did to paphos car hire today is poised to announce what president
Clifton E. There are a lot of money and are an informed
buyer. Budget Singapore renters will benefit from this tech whether or
not your personal or family car is that it makes you dizzy.
How about the services while hiring the car.
But it’s a little harder to find cinema or theatre that doesn’t come with a ticket price.
%d bloggers like this:
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 137, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.922245442867279, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/42424?sort=newest
|
## Question about examples of symplectic non-Kahler classes.
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Let $M$ be an even dimensional smooth manifold.
I want to find an example $M$ satisfying the following conditions,
1. $M$ admits a Kahler structure.
2. $\omega$ is a symplectic form on $M$.
3. There is no Kahler structure $(M,\omega',J)$ such that $[\omega']=[\omega] \in H^2(M;\mathbb{R})$
(I mean, want to find an example $M$ such that "Kahler cone $\neq$ symplectic cone" with non-empty Kahler cone.)
Thank you in advance.
-
## 2 Answers
One sort of example arises from the fact that if one starts with a Kahler form $\omega$ (which represents a class of type (1,1) in the Hodge decomposition by definition of a Kahler form), then if $\phi$ is the real part of any closed form of Hodge type (2,0), $\omega+\phi$ will still be a symplectic form (it tames the complex structure $J$), but won't any longer be Kahler, at least if one regards the complex structure as being fixed--in principle there could be another complex structure with respect to which the form is Kahler. Thus you get examples this way on any Kahler manifold with $H^{2,0}\neq 0$. In the case of Kahler surfaces (symplectic $4$-manifolds) this is equivalent to the geometric genus being nonzero (or, in language more familiar to topologists, $b^+>1$).
In fact, a paper of Draghici (see the last paper listed on this page) shows essentially that, on a minimal Kahler surface of general type, if one starts at $\omega$ and goes out sufficiently far on the ray in the direction of $\phi$, then one eventually gets to classes that aren't represented by Kahler forms with respect to any complex structure, not just the original one.
There's a different sort of example in a paper of T.-J. Li and myself: we observe that if the Kahler surface $(M,\omega,J)$ contains any smooth J-complex curve (real 2D surface) $C$ of negative self-intersection other than a sphere of square $-1$, then one can obtain symplectic forms in the class $[\omega_t]=[\omega]+tPD[C]$ (where PD means Poincare dual) for a range of values of $t$ including some large enough that $[\omega_t]$ evaluates negatively on $C$. So the resulting symplectic form $\omega_t$ can't even be tamed by $J$. Again, in general $\omega_t$ might in principle be Kahler after deforming $J$ to some different complex strucutre, but Section 4.1 of that paper gives an example where this is carried out on a rigid surface (i.e. one admitting no deformations of the complex structure).
-
Thank you for your kind answer. Now I understood what I want to know. – YCho Oct 18 2010 at 18:09
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
This is probably the simplest example. Take the Fubini-Study form $\omega$ on $CP^2$. Then $-\omega$ is symplectic, but never Kaehler, because by Yau's theorem $CP^2$ admits a unique (standard) complex structure.
-
If J is the standard complex structure on CP2, isn't -J a complex structure with respect to which $-\omega$ is Kahler? Yau's theorem in the form that I know it (see pnas.org/content/74/5/1798.full.pdf) says that any complex surface homotopy equivalent to CP2 is biholomorphic to it. In the case of the complex structure -J, the biholomorphism is just $[z_0:z_1:z_2]\mapsto [\bar{z_0}:\bar{z_1}:\bar{z_2}]$. – Mike Usher Oct 19 2010 at 22:57
You are right - thanks. Sorry for the confusion. – Misha Verbitsky Oct 20 2010 at 18:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9211561679840088, "perplexity_flag": "head"}
|
http://terrytao.wordpress.com/category/non-technical/
|
What’s new
Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao
# Category Archive
You are currently browsing the category archive for the ‘non-technical’ category.
## (Ingrid Daubechies) Planning for the World Digital Mathematical Library
8 May, 2013 in guest blog, media | Tags: Ingrid Daubechies, World Digital Mathematical Library | by Terence Tao | 39 comments
[This guest post is authored by Ingrid Daubechies, who is the current president of the International Mathematical Union, and (as she describes below) is heavily involved in planning for a next-generation digital mathematical library that can go beyond the current network of preprint servers (such as the arXiv), journal web pages, article databases (such as MathSciNet), individual author web pages, and general web search engines to create a more integrated and useful mathematical resource. I have lightly edited the post for this blog, mostly by adding additional hyperlinks. - T.]
This guest blog entry concerns the many roles a World Digital Mathematical Library (WDML) could play for the mathematical community worldwide. We seek input to help sketch how a WDML could be so much more than just a huge collection of digitally available mathematical documents. If this is of interest to you, please read on!
The “we” seeking input are the Committee on Electronic Information and Communication (CEIC) of the International Mathematical Union (IMU), and a special committee of the US National Research Council (NRC), charged by the Sloan Foundation to look into this matter. In the US, mathematicians may know the Sloan Foundation best for the prestigious early-career fellowships it awards annually, but the foundation plays a prominent role in other disciplines as well. For instance, the Sloan Digital Sky Survey (SDSS) has had a profound impact on astronomy, serving researchers in many more ways than even its ambitious original setup foresaw. The report being commissioned by the Sloan Foundation from the NRC study group could possibly be the basis for an equally ambitious program funded by the Sloan Foundation for a WDML with the potential to change the practice of mathematical research as profoundly as the SDSS did in astronomy. But to get there, we must formulate a vision that, like the original SDSS proposal, imagines at least some of those impacts. The members of the NRC committee are extremely knowledgeable, and have been picked judiciously so as to span collectively a wide range of expertise and connections. As president of the IMU, I was asked to co-chair this committee, together with Clifford Lynch, of the Coalition for Networked Information; Peter Olver, chair of the IMU’s CEIC, is also a member of the committee. But each of us is at least a quarter century older than the originators of MathOverflow or the ArXiv when they started. We need you, internet-savvy, imaginative, social-networking, young mathematicians to help us formulate the vision that may inspire the creation of a truly revolutionary WDML!
Some history first. Several years ago, an international initiative was started to create a World Digital Mathematical Library. The website for this library, hosted by the IMU, is now mostly a “ghost” website — nothing has been posted there for the last seven years. [It does provide useful links, however, to many sites that continue to be updated, such as the European Mathematical Information Service, which in turn links to many interesting journals, books and other websites featuring electronically available mathematical publications. So it is still worth exploring ...] Many of the efforts towards building (parts of) the WDML as originally envisaged have had to grapple with business interests, copyright agreements, search obstructions, metadata secrecy, … and many an enterprising, idealistic effort has been slowly ground down by this. We are still dealing with these frustrations — as witnessed by, e.g., the CostofKnowledge initiative. They are real, important issues, and will need to be addressed.
The charge of the NRC committee, however, is to NOT focus on issues of copyright or open-access or who bears the cost of publishing, but instead on what could/can be done with documents that are (or once they are) freely electronically accessible, apart from simply finding and downloading them. Earlier this year, I posted a question about one possible use on MathOverflow and then on MathForge, about the possibility to “enrich” a paper by annotations from readers, which other readers could wish to consult (or not). These posts elicited some very useful comments. But this was but one way in which a WDML could be more than just an opportunity to find and download papers. Surely there are many more, that you, bloggers and blog-readers, can imagine, suggest, sketch. This is an opportunity: can we — no, YOU! — formulate an ambitious setup that would capture the imagination of sufficiently many of us, that would be workable and that would really make a difference?
## An introduction to special relativity for a high school math circle
22 December, 2012 in expository, math.MP, non-technical, teaching | Tags: mass-energy equivalence, special relativity | by Terence Tao | 25 comments
Things are pretty quiet here during the holiday season, but one small thing I have been working on recently is a set of notes on special relativity that I will be working through in a few weeks with some bright high school students here at our local math circle. I have only two hours to spend with this group, and it is unlikely that we will reach the end of the notes (in which I derive the famous mass-energy equivalence relation E=mc^2, largely following Einstein’s original derivation as discussed in this previous blog post); instead we will probably spend a fair chunk of time on related topics which do not actually require special relativity per se, such as spacetime diagrams, the Doppler shift effect, and an analysis of my airport puzzle. This will be my first time doing something of this sort (in which I will be spending as much time interacting directly with the students as I would lecturing); I’m not sure exactly how it will play out, being a little outside of my usual comfort zone of undergraduate and graduate teaching, but am looking forward to finding out how it goes. (In particular, it may end up that the discussion deviates somewhat from my prepared notes.)
The material covered in my notes is certainly not new, but I ultimately decided that it was worth putting up here in case some readers here had any corrections or other feedback to contribute (which, as always, would be greatly appreciated).
[Dec 24 and then Jan 21: notes updated, in response to comments.]
## Lars Hormander
30 November, 2012 in math.AP, obituary | Tags: correspondence principle, fourier integral operators, lars hormander, pseudodifferential operators | by Terence Tao | 4 comments
Lars Hörmander, who made fundamental contributions to all areas of partial differential equations, but particularly in developing the analysis of variable-coefficient linear PDE, died last Sunday, aged 81.
I unfortunately never met Hörmander personally, but of course I encountered his work all the time while working in PDE. One of his major contributions to the subject was to systematically develop the calculus of Fourier integral operators (FIOs), which are a substantial generalisation of pseudodifferential operators and which can be used to (approximately) solve linear partial differential equations, or to transform such equations into a more convenient form. Roughly speaking, Fourier integral operators are to linear PDE as canonical transformations are to Hamiltonian mechanics (and one can in fact view FIOs as a quantisation of a canonical transformation). They are a large class of transformations, for instance the Fourier transform, pseudodifferential operators, and smooth changes of the spatial variable are all examples of FIOs, and (as long as certain singular situations are avoided) the composition of two FIOs is again an FIO.
The full theory of FIOs is quite extensive, occupying the entire final volume of Hormander’s famous four-volume series “The Analysis of Linear Partial Differential Operators”. I am certainly not going to try to attempt to summarise it here, but I thought I would try to motivate how these operators arise when trying to transform functions. For simplicity we will work with functions ${f \in L^2({\bf R}^n)}$ on a Euclidean domain ${{\bf R}^n}$ (although FIOs can certainly be defined on more general smooth manifolds, and there is an extension of the theory that also works on manifolds with boundary). As this will be a heuristic discussion, we will ignore all the (technical, but important) issues of smoothness or convergence with regards to the functions, integrals and limits that appear below, and be rather vague with terms such as “decaying” or “concentrated”.
A function ${f \in L^2({\bf R}^n)}$ can be viewed from many different perspectives (reflecting the variety of bases, or approximate bases, that the Hilbert space ${L^2({\bf R}^n)}$ offers). Most directly, we have the physical space perspective, viewing ${f}$ as a function ${x \mapsto f(x)}$ of the physical variable ${x \in {\bf R}^n}$. In many cases, this function will be concentrated in some subregion ${\Omega}$ of physical space. For instance, a gaussian wave packet
$\displaystyle f(x) = A e^{-(x-x_0)^2/\hbar} e^{i \xi_0 \cdot x/\hbar}, \ \ \ \ \ (1)$
where ${\hbar > 0}$, ${A \in {\bf C}}$ and ${x_0, \xi_0 \in {\bf R}^n}$ are parameters, would be physically concentrated in the ball ${B(x_0,\sqrt{\hbar})}$. Then we have the frequency space (or momentum space) perspective, viewing ${f}$ now as a function ${\xi \mapsto \hat f(\xi)}$ of the frequency variable ${\xi \in {\bf R}^n}$. For this discussion, it will be convenient to normalise the Fourier transform using a small constant ${\hbar > 0}$ (which has the physical interpretation of Planck’s constant if one is doing quantum mechanics), thus
$\displaystyle \hat f(\xi) := \frac{1}{(2\pi \hbar)^{n/2}} \int_{\bf R} e^{-i\xi \cdot x/\hbar} f(x)\ dx.$
For instance, for the gaussian wave packet (1), one has
$\displaystyle \hat f(\xi) = A e^{i\xi_0 \cdot x_0/\hbar} e^{-(\xi-\xi_0)^2/\hbar} e^{-i \xi \cdot x_0/\hbar},$
and so we see that ${f}$ is concentrated in frequency space in the ball ${B(\xi_0,\sqrt{\hbar})}$.
However, there is a third (but less rigorous) way to view a function ${f}$ in ${L^2({\bf R}^n)}$, which is the phase space perspective in which one tries to view ${f}$ as distributed simultaneously in physical space and in frequency space, thus being something like a measure on the phase space ${T^* {\bf R}^n := \{ (x,\xi): x, \xi \in {\bf R}^n\}}$. Thus, for instance, the function (1) should heuristically be concentrated on the region ${B(x_0,\sqrt{\hbar}) \times B(\xi_0,\sqrt{\hbar})}$ in phase space. Unfortunately, due to the uncertainty principle, there is no completely satisfactory way to canonically and rigorously define what the “phase space portrait” of a function ${f}$ should be. (For instance, the Wigner transform of ${f}$ can be viewed as an attempt to describe the distribution of the ${L^2}$ energy of ${f}$ in phase space, except that this transform can take negative or even complex values; see Folland’s book for further discussion.) Still, it is a very useful heuristic to think of functions has having a phase space portrait, which is something like a non-negative measure on phase space that captures the distribution of functions in both space and frequency, albeit with some “quantum fuzziness” that shows up whenever one tries to inspect this measure at scales of physical space and frequency space that together violate the uncertainty principle. (The score of a piece of music is a good everyday example of a phase space portrait of a function, in this case a sound wave; here, the physical space is the time axis (the horizontal dimension of the score) and the frequency space is the vertical dimension. Here, the time and frequency scales involved are well above the uncertainty principle limit (a typical note lasts many hundreds of cycles, whereas the uncertainty principle kicks in at ${O(1)}$ cycles) and so there is no obstruction here to musical notation being unambiguous.) Furthermore, if one takes certain asymptotic limits, one can recover a precise notion of a phase space portrait; for instance if one takes the semiclassical limit ${\hbar \rightarrow 0}$ then, under certain circumstances, the phase space portrait converges to a well-defined classical probability measure on phase space; closely related to this is the high frequency limit of a fixed function, which among other things defines the wave front set of that function, which can be viewed as another asymptotic realisation of the phase space portrait concept.
If functions in ${L^2({\bf R}^n)}$ can be viewed as a sort of distribution in phase space, then linear operators ${T: L^2({\bf R}^n) \rightarrow L^2({\bf R}^n)}$ should be viewed as various transformations on such distributions on phase space. For instance, a pseudodifferential operator ${a(X,D)}$ should correspond (as a zeroth approximation) to multiplying a phase space distribution by the symbol ${a(x,\xi)}$ of that operator, as discussed in this previous blog post. Note that such operators only change the amplitude of the phase space distribution, but not the support of that distribution.
Now we turn to operators that alter the support of a phase space distribution, rather than the amplitude; we will focus on unitary operators to emphasise the amplitude preservation aspect. These will eventually be key examples of Fourier integral operators. A physical translation ${Tf(x) := f(x-x_0)}$ should correspond to pushing forward the distribution by the transformation ${(x,\xi) \mapsto (x+x_0,\xi)}$, as can be seen by comparing the physical and frequency space supports of ${Tf}$ with that of ${f}$. Similarly, a frequency modulation ${Tf(x) := e^{i \xi_0 \cdot x/\hbar} f(x)}$ should correspond to the transformation ${(x,\xi) \mapsto (x,\xi+\xi_0)}$; a linear change of variables ${Tf(x) := |\hbox{det} L|^{-1/2} f(L^{-1} x)}$, where ${L: {\bf R}^n \rightarrow {\bf R}^n}$ is an invertible linear transformation, should correspond to ${(x,\xi) \mapsto (Lx, (L^*)^{-1} \xi)}$; and finally, the Fourier transform ${Tf(x) := \hat f(x)}$ should correspond to the transformation ${(x,\xi) \mapsto (\xi,-x)}$.
Based on these examples, one may hope that given any diffeomorphism ${\Phi: T^* {\bf R}^n \rightarrow T^* {\bf R}^n}$ of phase space, one could associate some sort of unitary (or approximately unitary) operator ${T_\Phi: L^2({\bf R}^n) \rightarrow L^2({\bf R}^n)}$, which (heuristically, at least) pushes the phase space portrait of a function forward by ${\Phi}$. However, there is an obstruction to doing so, which can be explained as follows. If ${T_\Phi}$ pushes phase space portraits by ${\Phi}$, and pseudodifferential operators ${a(X,D)}$ multiply phase space portraits by ${a}$, then this suggests the intertwining relationship
$\displaystyle a(X,D) T_\Phi \approx T_\Phi (a \circ \Phi)(X,D),$
and thus ${(a \circ \Phi)(X,D)}$ is approximately conjugate to ${a(X,D)}$:
$\displaystyle (a \circ \Phi)(X,D) \approx T_\Phi^{-1} a(X,D) T_\Phi. \ \ \ \ \ (2)$
The formalisation of this fact in the theory of Fourier integral operators is known as Egorov’s theorem, due to Yu Egorov (and not to be confused with the more widely known theorem of Dmitri Egorov in measure theory).
Applying commutators, we conclude the approximate conjugacy relationship
$\displaystyle \frac{1}{i\hbar} [(a \circ \Phi)(X,D), (b \circ \Phi)(X,D)] \approx T_\Phi^{-1} \frac{1}{i\hbar} [a(X,D), b(X,D)] T_\Phi.$
Now, the pseudodifferential calculus (as discussed in this previous post) tells us (heuristically, at least) that
$\displaystyle \frac{1}{i\hbar} [a(X,D), b(X,D)] \approx \{ a, b \}(X,D)$
and
$\displaystyle \frac{1}{i\hbar} [(a \circ \Phi)(X,D), (b \circ \Phi)(X,D)] \approx \{ a \circ \Phi, b \circ \Phi \}(X,D)$
where ${\{,\}}$ is the Poisson bracket. Comparing this with (2), we are then led to the compatibility condition
$\displaystyle \{ a \circ \Phi, b \circ \Phi \} \approx \{ a, b \} \circ \Phi,$
thus ${\Phi}$ needs to preserve (approximately, at least) the Poisson bracket, or equivalently ${\Phi}$ needs to be a symplectomorphism (again, approximately at least).
Now suppose that ${\Phi: T^* {\bf R}^n \rightarrow T^* {\bf R}^n}$ is a symplectomorphism. This is morally equivalent to the graph ${\Sigma := \{ (z, \Phi(z)): z \in T^* {\bf R}^n \}}$ being a Lagrangian submanifold of ${T^* {\bf R}^n \times T^* {\bf R}^n}$ (where we give the second copy of phase space the negative ${-\omega}$ of the usual symplectic form ${\omega}$, thus yielding ${\omega \oplus -\omega}$ as the full symplectic form on ${T^* {\bf R}^n \times T^* {\bf R}^n}$; this is another instantiation of the closed graph theorem, as mentioned in this previous post. This graph is known as the canonical relation for the (putative) FIO that is associated to ${\Phi}$. To understand what it means for this graph to be Lagrangian, we coordinatise ${T^* {\bf R}^n \times T^* {\bf R}^n}$ as ${(x,\xi,y,\eta)}$ suppose temporarily that this graph was (locally, at least) a smooth graph in the ${x}$ and ${y}$ variables, thus
$\displaystyle \Sigma = \{ (x, F(x,y), y, G(x,y)): x, y \in {\bf R}^n \}$
for some smooth functions ${F, G: {\bf R}^n \rightarrow {\bf R}^n}$. A brief computation shows that the Lagrangian property of ${\Sigma}$ is then equivalent to the compatibility conditions
$\displaystyle \frac{\partial F_i}{\partial x_j} = \frac{\partial F_j}{\partial x_i}$
$\displaystyle \frac{\partial G_i}{\partial y_j} = \frac{\partial G_j}{\partial y_i}$
$\displaystyle \frac{\partial F_i}{\partial y_j} = - \frac{\partial G_j}{\partial x_i}$
for ${i,j=1,\ldots,n}$, where ${F_1,\ldots,F_n, G_1,\ldots,G_n}$ denote the components of ${F,G}$. Some Fourier analysis (or Hodge theory) lets us solve these equations as
$\displaystyle F_i = -\frac{\partial \phi}{\partial x_i}; \quad G_j = \frac{\partial \phi}{\partial y_j}$
for some smooth potential function ${\phi: {\bf R}^n \times {\bf R}^n \rightarrow {\bf R}}$. Thus, we have parameterised our graph ${\Sigma}$ as
$\displaystyle \Sigma = \{ (x, -\nabla_x \phi(x,y), y, \nabla_y \phi(x,y)): x,y \in {\bf R}^n \} \ \ \ \ \ (3)$
so that ${\Phi}$ maps ${(x, -\nabla_x \phi(x,y))}$ to ${(y, \nabla_y \phi(x,y))}$.
A reasonable candidate for an operator associated to ${\Phi}$ and ${\Sigma}$ in this fashion is the oscillatory integral operator
$\displaystyle Tf(y) := \frac{1}{(2\pi \hbar)^{n/2}} \int_{{\bf R}^n} e^{i \phi(x,y)/\hbar} a(x,y) f(x)\ dx \ \ \ \ \ (4)$
for some smooth amplitude function ${a}$ (note that the Fourier transform is the special case when ${a=1}$ and ${\phi(x,y)=xy}$, which helps explain the genesis of the term “Fourier integral operator”). Indeed, if one computes an inner product ${\int_{{\bf R}^n} Tf(y) \overline{g(y)}\ dy}$ for gaussian wave packets ${f, g}$ of the form (1) and localised in phase space near ${(x_0,\xi_0), (y_0,\eta_0)}$ respectively, then a Taylor expansion of ${\phi}$ around ${(x_0,y_0)}$, followed by a stationary phase computation, shows (again heuristically, and assuming ${\phi}$ is suitably non-degenerate) that ${T}$ has (3) as its canonical relation. (Furthermore, a refinement of this stationary phase calculation suggests that if ${a}$ is normalised to be the half-density ${|\det \nabla_x \nabla_y \phi|^{1/2}}$, then ${T}$ should be approximately unitary.) As such, we view (4) as an example of a Fourier integral operator (assuming various smoothness and non-degeneracy hypotheses on the phase ${\phi}$ and amplitude ${a}$ which we do not detail here).
Of course, it may be the case that ${\Sigma}$ is not a graph in the ${x,y}$ coordinates (for instance, the key examples of translation, modulation, and dilation are not of this form), but then it is often a graph in some other pair of coordinates, such as ${\xi,y}$. In that case one can compose the oscillatory integral construction given above with a Fourier transform, giving another class of FIOs of the form
$\displaystyle Tf(y) := \frac{1}{(2\pi \hbar)^{n/2}} \int_{{\bf R}^n} e^{i \phi(\xi,y)/\hbar} a(\xi,y) \hat f(\xi)\ d\xi. \ \ \ \ \ (5)$
This class of FIOs covers many important cases; for instance, the translation, modulation, and dilation operators considered earlier can be written in this form after some Fourier analysis. Another typical example is the half-wave propagator ${T := e^{it \sqrt{-\Delta}}}$ for some time ${t \in {\bf R}}$, which can be written in the form
$\displaystyle Tf(y) = \frac{1}{(2\pi \hbar)^{n/2}} \int_{{\bf R}^n} e^{i (\xi \cdot y + t |\xi|)/\hbar} a(\xi,y) \hat f(\xi)\ d\xi.$
This corresponds to the phase space transformation ${(x,\xi) \mapsto (x+t|\xi|, \xi)}$, which can be viewed as the classical propagator associated to the “quantum” propagator ${e^{it\sqrt{-\Delta}}}$. More generally, propagators for linear Hamiltonian partial differential equations can often be expressed (at least approximately) by Fourier integral operators corresponding to the propagator of the associated classical Hamiltonian flow associated to the symbol of the Hamiltonian operator ${H}$; this leads to an important mathematical formalisation of the correspondence principle between quantum mechanics and classical mechanics, that is one of the foundations of microlocal analysis and which was extensively developed in Hörmander’s work. (More recently, numerically stable versions of this theory have been developed to allow for rapid and accurate numerical solutions to various linear PDE, for instance through Emmanuel Candés’ theory of curvelets, so the theory that Hörmander built now has some quite significant practical applications in areas such as geology.)
In some cases, the canonical relation ${\Sigma}$ may have some singularities (such as fold singularities) which prevent it from being written as graphs in the previous senses, but the theory for defining FIOs even in these cases, and in developing their calculus, is now well established, in large part due to the foundational work of Hörmander.
## Spending symmetry
18 November, 2012 in admin, book | by Terence Tao | 9 comments
I recently finished the first draft of the last of my books based on my 2011 blog posts (and also my Google buzzes and Google+ posts from that year), entitled “Spending symmetry“. The PDF of this draft is available here. This is again a rather assorted (and lightly edited) collection of posts (and buzzes, and Google+ posts), though concentrating in the areas of analysis (both standard and nonstandard), logic, and geometry. As always, comments and corrections are welcome.
## UCLA Math Undergraduate Merit Scholarship for 2013
15 November, 2012 in advertising | Tags: scholarship, UCLA, undergraduate study | by Terence Tao | 1 comment
[Once again, some advertising on behalf of my department, following on a similar announcement in the previous three years.]
Two years ago, the UCLA mathematics department launched a scholarship opportunity for entering freshman students with exceptional background and promise in mathematics. We have offered one scholarship every year, but this year due to an additional source of funding, we will also be able to offer an additional scholarship for California residents.The UCLA Math Undergraduate Merit Scholarship provides for full tuition, and a room and board allowance for 4 years. In addition, scholarship recipients follow an individualized accelerated program of study, as determined after consultation with UCLA faculty. The program of study leads to a Masters degree in Mathematics in four years.
More information and an application form for the scholarship can be found on the web at:
and
To be considered for Fall 2013, candidates must apply for the scholarship and also for admission to UCLA on or before November 30, 2012.
## Garth Gaudry
18 October, 2012 in obituary | by Terence Tao | 3 comments
Garth Gaudry, who made many contributions to harmonic analysis and to Australian mathematics, and was also both my undergradaute and masters advisor as well as the head of school during one of my first academic jobs, died yesterday after a long battle with cancer, aged 71.
Garth worked on the interface between real-variable harmonic analysis and abstract harmonic analysis (which, despite their names, are actually two distinct fields, though certainly related to each other). He was one of the first to realise the central importance of Littlewood-Paley theory as a general foundation for both abstract and real-variable harmonic analysis, writing an influential text with Robert Edwards on the topic. He also made contributions to Clifford analysis, which was also the topic of my masters thesis.
But, amongst Australian mathematicians at least, Garth will be remembered for his tireless service to the field, most notably for his pivotal role in founding the Australian Mathematical Sciences Institute (AMSI) and then serving as AMSI’s first director, and then in directing the International Centre of Excellence for Education in Mathematics (ICE-EM), the educational arm of AMSI which, among other things, developed a full suite of maths textbooks and related educational materials covering Years 5-10 (which I reviewed here back in 2008).
I knew Garth ever since I was an undergraduate at Flinders University. He was head of school then (a position roughly equivalent to department chair in the US), but still was able to spare an hour a week to meet with me to discuss real analysis, as I worked my way through Rudin’s “Real and complex analysis” and then Stein’s “Singular integrals”, and then eventually completed a masters thesis under his supervision on Clifford-valued singular integrals. When Princeton accepted my application for graduate study, he convinced me to take the opportunity without hesitation. Without Garth, I certainly wouldn’t be where I am at today, and I will always be very grateful for his advisorship. He was a good person, and he will be missed very much by me and by many others.
## Bill Thurston
22 August, 2012 in math.GT, math.HO, obituary | Tags: geometrization conjecture, hyperbolisation theorem, Outside in, William Thurston | by Terence Tao | 31 comments
Bill Thurston, who made fundamental contributions to our understanding of low-dimensional manifolds and related structures, died on Tuesday, aged 65.
Perhaps Thurston’s best known achievement is the proof of the hyperbolisation theorem for Haken manifolds, which showed that 3-manifolds which obeyed a certain number of topological conditions, could always be given a hyperbolic geometry (i.e. a Riemannian metric that made the manifold isometric to a quotient of the hyperbolic 3-space $H^3$). This difficult theorem connecting the topological and geometric structure of 3-manifolds led Thurston to give his influential geometrisation conjecture, which (in principle, at least) completely classifies the topology of an arbitrary compact 3-manifold as a combination of eight model geometries (now known as Thurston model geometries). This conjecture has many consequences, including Thurston’s hyperbolisation theorem and (most famously) the Poincaré conjecture. Indeed, by placing that conjecture in the context of a conceptually appealing general framework, of which many other cases could already be verified, Thurston provided one of the strongest pieces of evidence towards the truth of the Poincaré conjecture, until the work of Grisha Perelman in 2002-2003 proved both the Poincaré conjecture and the geometrisation conjecture by developing Hamilton’s Ricci flow methods. (There are now several variants of Perelman’s proof of both conjectures; in the proof of geometrisation by Bessieres, Besson, Boileau, Maillot, and Porti, Thurston’s hyperbolisation theorem is a crucial ingredient, allowing one to bypass the need for the theory of Alexandrov spaces in a key step in Perelman’s argument.)
One of my favourite results of Thurston’s is his elegant method for everting the sphere (smoothly turning a sphere $S^2$ in ${\bf R}^3$ inside out without any folds or singularities). The fact that sphere eversion can be achieved at all is highly unintuitive, and is often referred to as Smale’s paradox, as Stephen Smale was the first to give a proof that such an eversion exists. However, prior to Thurston’s method, the known constructions for sphere eversion were quite complicated. Thurston’s method, relying on corrugating and then twisting the sphere, is sufficiently conceptual and geometric that it can in fact be explained quite effectively in non-technical terms, as was done in the following excellent video entitled “Outside In“, and produced by the Geometry Center:
In addition to his direct mathematical research contributions, Thurston was also an amazing mathematical expositor, having the rare knack of being able to describe the process of mathematical thinking in addition to the results of that process and the intuition underlying it. His wonderful essay “On proof and progress in mathematics“, which I highly recommend, is the quintessential instance of this; more recent examples include his many insightful questions and answers on MathOverflow.
I unfortunately never had the opportunity to meet Thurston in person (although we did correspond a few times online), but I know many mathematicians who have been profoundly influenced by him and his work. His death is a great loss for mathematics.
## Call for nominations: National Academy of Sciences Award for Scientific Reviewing
13 July, 2012 in advertising | by Terence Tao | 15 comments
The National Academy of Sciences award for Scientific Reviewing is slated to be given in Mathematics (understood to include Applied Mathematics) in April 2013. The award cycles among many fields, and the last (and only) time it was given in Mathematics was 1995. This year, I am on the prize committee for this award and am therefore circulating a call for nominations.
This award is intended “to recognize authors whose reviews have synthesized extensive and difficult material, rendering a significant service to science and influencing the course of scientific thought”. As such, it is slightly different in focus from most awards in mathematics, which tend to focus more on original research contributions than on synthesis and exposition, which in my opinion is an equally important component of mathematical research.
In 1995, this prize was awarded to Rob Kirby “For his list of problems in low-dimensional topology and his tireless maintenance of it; several generations have been greatly influenced by Kirby’s list.”.
Instructions for how to submit nominations can be found at this page. Nominees and awardees do not need to be members of the Academy, and can be based outside of the United States. The award comes with a medal and a \$10,000 prize. The deadline for nominations is 1 October 2012.
## Mini-polymath4 discussion thread
12 July, 2012 in admin, polymath | Tags: mini-polymath4 | by Terence Tao | 7 comments
I’ve just opened the research thread for the mini-polymath4 project over at the polymath blog to collaboratively solve one of the six questions from this year’s IMO. This year I have selected Q3, which is a somewhat intricate game-theoretic question. (The full list of questions this year may be found here.)
This post will serve as the discussion thread of the project, intended to focus all the non-research aspects of the project such as organisational matters or commentary on the progress of the project. The third component of the project is the wiki page, which is intended to summarise the progress made so far on the problem.
As with the previous mini-polymath projects, I myself will be serving primarily as a moderator, and hope other participants will take the lead in the research and in keeping the wiki up-to-date.
## Mini-polymath4 begins in three hours
Just a reminder that the mini-polymath4 project will begin in three hours at Thu July 12 2012 UTC 22:00.
### Recent Comments
Sandeep Murthy on An elementary non-commutative…
Luqing Ye on 245A, Notes 2: The Lebesgue…
Frank on Soft analysis, hard analysis,…
andrescaicedo on Soft analysis, hard analysis,…
Richard Palais on Pythagoras’ theorem
The Coffee Stains in… on Does one have to be a genius t…
Benoît Régent-Kloeck… on (Ingrid Daubechies) Planning f…
Luqing Ye on 245B, Notes 7: Well-ordered se…
Luqing Ye on 245B, Notes 7: Well-ordered se…
Arjun Jain on 245B, Notes 7: Well-ordered se…
%anchor_text% on Books
Luqing Ye on 245B, Notes 7: Well-ordered se…
Arjun Jain on 245B, Notes 7: Well-ordered se…
Luqing Ye on 245A, Notes 2: The Lebesgue…
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 125, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9407397508621216, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/42327/universal-definition-of-fourier-transform
|
Universal definition of Fourier transform [closed]
Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Is there a category theoretic definition for the Fourier transform using only its universality properties? I am not looking for the most general definition -- one that works only in some special settings will do. I am looking for a simple definition that will make precise my (possibly incorrect) intuition that Fourier transforms are in some sense extremal among unitary transforms.
Here is another non category-theoretic way to ask this question, which may or may not be equivalent: Give a "natural" optimization problem on the space of unitary transforms whose solution turns out to be the Fourier transform.
-
2
I assume you're not interested in something like Pontryagin duality. If that's the case, you might want to edit your question/title accordingly. – arsmath Oct 15 2010 at 21:21
3
What universality properties? What categories do you have in mind? – Yemon Choi Oct 16 2010 at 4:31
3
-1: This question needs to be made more specific. Are you looking for a rule that assigns to each locally compact group $A$ a special function on $Isom(L^2(A), L^2(A^\vee))$ that is minimized by the Fourier transform, in a way that is compatible with homomorphisms? – S. Carnahan♦ Oct 16 2010 at 6:04
11
+1 Often the point of asking a question is not so much to get an answer, but rather to be shown how to improve the question. Fortunately, the above comment does suggests such an improvement, among other things. – Chris Brav Oct 16 2010 at 8:52
4
@Chris: I don't agree: at least, I do not think that it's a good kind of question to ask on MO. It's what research supervisors (and teachers) should be there for. Asking vague questions in the hope that someone will suggest the correct version, seems too much like asking other people to do the work, at least on MO. – Yemon Choi Oct 17 2010 at 0:24
show 9 more comments
1 Answer
For me, the "traditional Fourier Transform" is a change of basis of the algebra of functions from a group to some chosen field: from the canonical basis to something sometimes called the Fourier Basis. Because the Transform is constructed using the representation theory of the group, it has "natural" generalisations to objects with "similar" representation theory, e.g. it is defined for Hopf algebras.
I always think of the FT as this kind of duality. The Fourier Basis have lot's of interesting properties, but I have not seen a definition of it using extremals. It would be very interesting to see that.
A good start would be if someone gives an answer for finite abelian groups and finite non-abelian groups. Though "non-abelian" groups have a notion of FT, it is not uniquely defined and it is hard to work with it. An "extremal" condition would be enlightening.
Update 14th Dec 2011
Sorry that this comes several months later, but I found that there is "a way" to define the quantum Fourier transform for abelian finite groups using an extremal argument. This argument comes from reference [1] where the Fourier transform is studied as a tool to design measurements in Quantum Computation and proven to be optimal to solve the abelian hidden subgroup problem. Unfortunately, this property does not hold for non-abelian quantum Fourier transforms.
More concretely, what it is proven in [1] is the following (all definitions I use are defined in this paper):
Consider the hidden subgroup problem defined for an abelian group $G$, where the hidden subgroup $H$ is chosen uniformly at random from all subgroups of $G$. Given $n$ tensored random coset states (cosets of $H$), then the measurement that maximises the probability of correctly identifying the subgroup $H$ is the following:
1. Start on a random coset-state $|x+H\rangle$ for unknown $H$ which is just a uniform quantum superposition over the elements of the coset $x+H$. Cf. [1] for details on how to create these quantum states.
2. Apply the abelian quantum Fourier transform of $G$ on this state.
3. Perform a projective measurement.
Taking several outcomes of the above procedure one obtains a generating set of the orthogonal group $H^\perp$ from which the original subgroup $H$ can be recovered solving a system of linear modular equations [2]. As far as I know these "orthogonal subgroups" are sometimes called orthogonal complements in Mathematics.
To sum up, they key ingredient of the above quantum algorithm is the abelian Fourier transform which is used to implement a quantum measurement to solve the hidden subgroup problem since it maximises the probability of distinguishing the hidden subgroup. In [1] it is shown that the abelian quantum Fourier transform arises as an optimal POVM which is the solution of a semidefinite program. I guess that maybe you could adopt this kind of extremal property as a definition of the Fourier transform for finite abelian groups. Note: it is not clear to me that the optimal POVM found in [1] is unique (up to permutations).
-
I know I did not answer the question, but I think that even for simpler mathematical objects that the ones you ask it is not obvious that FT can be defined with an optimization problem. – Juan Bermejo Vega Jun 13 2011 at 11:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9289693832397461, "perplexity_flag": "head"}
|
http://mathhelpforum.com/calculus/100557-simple-trig-verification.html
|
# Thread:
1. ## Simple trig verification
Sorry for the double post, I put it in the wrong section. I know this problem is very easy, I am just having so much trouble grasping the concept. I have to verify this problem and don't know where to start.
Any help GREATLY appreciated. Homework due at 3PM today.
2. Originally Posted by kdickson91
Sorry for the double post, I put it in the wrong section. I know this problem is very easy, I am just having so much trouble grasping the concept. I have to verify this problem and don't know where to start.
Any help GREATLY appreciated. Homework due at 3PM today.
I don't feel comfortable...
However, if you were to show me what you have tried so far, I'd be able to guide you a bit.
3. I used trig identities to change everything I could.
$((1/cos(t))/cos(t))-((1+tan^2 t)/(1+cot^2 t))$
4. How do I do fractions with the 'math' tags? That would probably make this a bit easier to read.
5. Originally Posted by kdickson91
How do I do fractions with the 'math' tags? That would probably make this a bit easier to read.
Change eveything to sin and cosine, that should help.
IE $\tan^2{x}=\frac{\sin^2{x}}{\cos^2{x}}$...
To learn latex, go here... LaTex Tutorial
Fractions... \frac{numerator}{denominator}
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9553436636924744, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/number-theory/92084-number-q.html
|
# Thread:
1. ## Number in Q
prouve :
2. Originally Posted by dhiab
prouve :
Hi
If you consider the equation $x^3+5x-6=0$ you can find that x=1 is the only real solution since $x^3+5x-6=(x-1)(x^2+x+6)$
Now if you try to solve it using Cardan method Méthode de Cardan - Wikipédia you will find that one real solution is
$\sqrt[3]{3+\sqrt{9+\frac{125}{27}}}-\sqrt[3]{-3+\sqrt{9+\frac{125}{27}}}$
Since 1 is the only real solution you get $\sqrt[3]{3+\sqrt{9+\frac{125}{27}}}-\sqrt[3]{-3+\sqrt{9+\frac{125}{27}}} = 1$
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.838513970375061, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/24579/convergence-of-sumn3-sin2n-1/24712
|
## Convergence of $\sum(n^3\sin^2n)^{-1}$
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
I saw a while ago in a book by Clifford Pickover, that whether $\displaystyle \sum_{n=1}^\infty\frac1{n^3\sin^2 n}$ converges is open.
I would think that the question of its convergence is really about the density in $\mathbb N$ of the sequence of numerators of the standard convergent approximations to $\pi$ (which, in itself, seems like an interesting question). Naively, the point is that if $n$ is "close" to a whole multiple of $\pi$, then $1/(n^3\sin^2n)$ is "close" to $\frac1{\pi^2 n}$.
[Numerically there is some evidence that only some of these values of $n$ affect the overall behavior of the series. For example, letting $S(k)=\sum_{n=1}^{k}\frac1{n^3\sin^2n}$, one sees that $S(k)$ does not change much in the interval, say, $[50,354]$, with $S(354)<5$. However, $S(355)$ is close to $30$, and note that $355$ is very close to $113\pi$. On the other hand, $S(k)$ does not change much from that point until $k=100000$, where I stopped looking.]
I imagine there is a large body of work within which the question of the convergence of this series would fall naturally, and I would be interested in knowing something about it. Sadly, I'm terribly ignorant in these matters. Even knowing where to look for some information on approximations of $\pi$ by rationals, or an ad hoc approach just tailored to this specific series would be interesting as well.
-
1
The sum you have written obviously converges. – Xandi Tuni May 14 2010 at 6:32
15
No, it doesn't obvously converge :-) – Robin Chapman May 14 2010 at 6:33
4
Good lord! Sorry, I was too quick with that comment. – Xandi Tuni May 14 2010 at 6:41
6
If we replace $n^3$ with $n^2$, what happens? (This isn't a rhetorical question; I'm honestly wondering if anyone knows the answer.) More generally, consider sums $F(a,b) = \sum_{n=1}^\infty 1/(n^a |sin n|^b)$. Clearly $F(a,b)$ increases with $a$ and decreases with $b$, but when is $F(a,b)$ known to be either finite or infinite? – Michael Lugo May 14 2010 at 13:17
4
The convergence of this general form is related to the irrationality measure of $\pi$, that is the infimum of exponents $k$ such that $|\pi-a/b|<1/b^k$ has only finitely many integer solutions. (For $|\sin n|$ to be small, $n$ must be close to an integer multiple $m\pi$ of $\pi$ and then $|\sin n|\sim m|\pi-n/m|$.) Results are known (see for instance planetmath.org/encyclopedia/…) and these will yield explicit values of $a$, $b$ for which the series converges, but the proofs are delicate and don't yield the best expected result. – Robin Chapman May 14 2010 at 15:26
## 1 Answer
As Robin Chapman mentions in his comment, the difficulty of investigating the convergence of $$\sum_{n=1}^\infty\frac1{n^3\sin^2n}$$ is due to lack of knowledge about the behavior of $|n\sin n|$ as $n\to\infty$, while the latter is related to rational approximations to $\pi$ as follows.
Neglecting the terms of the sum for which $n|\sin n|\ge n^\varepsilon$ ($\varepsilon>0$ is arbitrary), as they all contribute only to the `convergent part' of the sum, the question is equivalent to the one for the series $$\sum_{n:n|\sin n|< n^\varepsilon}\frac1{n^3\sin^2n}. \qquad(1)$$ For any such $n$, let $q=q(n)$ minimizes the distance $|\pi q-n|\le\pi/2$. Then $$\sin|\pi q-n|=|\sin n|< \frac1{n^{1-\varepsilon}},$$ so that $|\pi q-n|\le C_1/n^{1-\varepsilon}$ for some absolute constant $C_1$ (here we use that $\sin x\sim x$ as $x\to0$). Therefore, $$\biggl|\pi-\frac nq\biggr|<\frac{C_1}{qn^{-\varepsilon}},$$ equivalently $$\biggl|\pi-\frac nq\biggr|<\frac{C_2}{n^{2-\varepsilon}} \quad\text{or}\quad \biggl|\pi-\frac nq\biggr|<\frac{C_2'}{q^{2-\varepsilon}}$$ (because $n/q\approx\pi$) for all $n$ participating in the sum (1). It is now clear that the convergence of the sum (1) depends on how often we have $$\biggl|\pi-\frac nq\biggr|<\frac{C_2'}{q^{2-\varepsilon}}$$ and how small is the quantity in these cases. (Note that it follows from Dirichlet's theorem that an even stronger inequality, $$\biggl|\pi-\frac nq\biggr|<\frac1{q^2},$$ happens for infinitely many pairs $n$ and $q$.) The series (1) converges if and only if $$\sum_{n:|\pi-n/q|< C_2n^{-2+\varepsilon}}\frac1{n^5|\pi-n/q|^2}$$ converges. We can replace the summation by summing over $q$ (again, for each term $\pi q\approx n$) and then sum the result over all $q$, because the terms corresponding to $|\pi-n/q|< C_2n^{-2+\varepsilon}$ do not influence on the convergence: $$\sum_{q=1}^\infty\frac1{q^5|\pi-n/q|^2} =\sum_{q=1}^\infty\frac1{q^3(\pi q-n)^2} \qquad(2)$$ where $n=n(q)$ is now chosen to minimize $|\pi-n/q|$.
Summarizing, the original series converges if and only if the series in (2) converges.
It is already an interesting question of what can be said about the convergence of (2) if we replace $\pi$ by other constant $\alpha$, for example by a "generic irrationality". The series $$\sum_{q=1}^\infty\frac1{q^3(\alpha q-n)^2}$$ for a real quadratic irrationality $\alpha$ converges because the best approximations are $C_3/q^2\le|\alpha-n/q|\le C_4/q^2$, and they are achieved on the convergents $n/q$ with $q$ increasing geometrically. A more delicate question seems to be for $\alpha=e$, because one third of its convergents satisfies $$C_3\frac{\log\log q}{q^2\log q}<\biggl|e-\frac pq\biggr|< C_4\frac{\log\log q}{q^2\log q}$$ (see, e.g., [C.S.Davis, Bull. Austral. Math. Soc. 20 (1979) 407--410]). The number $e$, quadratic irrationalities, and even algebraic numbers are `generic' in the sense that their irrationality exponent is known to be 2. What about $\pi$?
The irrationality exponent $\mu=\mu(\alpha)$ of a real irrational number $\alpha$ is defined as the infimum of exponents $\gamma$ such that the inequality $|\alpha-n/q|\le|q|^{-\gamma}$ has only finitely many solutions in $(n,q)\in\Bbb Z^2$ with $q\ne0$. (So, Dirichlet's theorem implies that $\mu(\alpha)\ge2$. At the same time from metric number theory we know that it is 2 for almost all real irrationals.) Assume that $\mu(\pi)>5/2$, then there are infinitely many solutions to the inequality $$\biggl|\pi-\frac nq\biggr|<\frac{C_5}{q^{5/2}},$$ hence infinitely many terms in (2) are bounded below by $1/C_5$, so that the series diverges (and (1) does as well). Although the general belief is that $\mu(\pi)=2$, the best known result of V.Salikhov (see this answer by Gerry and my comment) only asserts that $\mu(\pi)<7.6064\dots$,.
I hope that this explains the problem of determining the behavior of the series in question.
-
Thanks, Robin and Wadim! – Andres Caicedo May 15 2010 at 15:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 79, "mathjax_display_tex": 12, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9656463265419006, "perplexity_flag": "head"}
|
http://en.m.wikibooks.org/wiki/High_School_Mathematics_Extensions/Counting_and_Generating_Functions
|
# High School Mathematics Extensions/Counting and Generating Functions
Before we begin: This chapter assumes knowledge of
1. Ordered selection (permutation) and unordered selection (combination) covered in Basic counting,
2. Method of Partial Fractions and,
3. Competence in manipulating Summation Signs
## Some Counting Problems
..more to come
↑Jump back a section
## Generating functions
..some motivation to be written
To understand this section you need to see why this is true:
$\lim_{x \to \infty} \frac{x^2 + x}{x^2} = 1 + \lim_{x \to \infty}\frac{ 1}{x} = 1$
For a more detailed discussion of the above, head to Infinity and infinite processes.
Generating functions, otherwise known as Formal Power Series, are useful for solving problems like:
$x_1 + x_2 + 2x_3 = m$
where
$x_n \ge 0$; n = 1, 2, 3
how many unique solutions are there if $m = 55$?
Before we tackle that problem, let's consider the infinite polynomial:
$S = 1 + x + x^2 + x^3 + ... + x^n + x^{n+1}...$
We want to obtain a closed form of this infinite polynomial. The closed form is simply a way of expressing the polynomial so that it involves only a finite number of operations.
To find the closed form we starting with our function:
$S = 1 + x + x^2 + x^3 + ...$
We multiply both sides of the function by x to get: $xS = x + x^2 + x^3 + ...$
Next we subtract S-xS to get
S - xS = 1 + x + x^2 + x^3 ... - x - x^2 - x^3 ...
Grouping like terms we get
(1 - x)S = 1 + (x - x) + (x^2 - x^2) + (x^3 - x^3)
Which simplifies to
(1 - x)S = 1
Dividing both sides by $\frac{1}{1 - x}$ we get $S = \frac{1}{1 - x}$
So the closed form of
1 + x + x2 + x3 + ...
is
$\frac{1}{1 - x}$
For convenience we can write, although this isn't true for any particular value of x.
$1 + x + x^2 + x^3 + ... = \frac{1}{1 - x} \ ; \ -1 < x < 1$
#### info - Infinite sums
The two expressions are not equal. It's just that for certain values of x (-1 < x < 1), we can approximate the right hand side as closely as possible by adding up a large number of terms on the left hand side. For example, suppose x = 1/2, RHS = 2; we approximate the LHS using only 5 terms we get LHS equals 1 + 1/2 + 1/4 + 1/8 + 1/16 = 1.9375, which is close to 2, as you can imagine by adding more and more terms, we will get closer and closer to 2.
Anyway we really only care about its nice algebraic properties, not its numerical value. From now on we will omit the condition for equality to be true when writing out generating functions.
Consider a more general case:
$S = A + ABx + AB^2x^2 + AB^3x^3 + ...$
where A and B are constants.
We can derive the closed-form as follows:
$\begin{matrix} S &=& A + &ABx + AB^2x^2 + AB^3x^3 + ... \\ BxS &=& &ABx + AB^2x^2 + AB^3x^3 + ... \\ \\ (1 - Bx)S &=& A \\ S &=& \frac{A}{1 - Bx} \end{matrix}$
The following identity as derived above is worth investing time and effort memorising.
$A + ABx + AB^2x^2 + AB^3x^3 + ... = \frac{A}{1 - Bx}$
### Exercises
1. Find the closed-form:
(a)$1 - z + z^2 - z^3 + z^4 - z^5 + ...$
(b)$1 + 2z + 4z^2 + 8z^3 + 16z^4 + 32z^5 + ...$
(c)$z + z^2 + z^3 + z^4 + z^5 + ...$
(d)$3 - 4z + 4z^2 - 4z^3 + 4z^4 - 4z^5 + ...$
(e)$1 - z^2 + z^4 - z^6 + z^8 - z^{10} + ...$
2. Given the closed-form, find a function f(n) for the coefficients of xn:
(a)$\frac{1}{1 + z}$ (Hint: note the plus sign in the denominator)
(b)$\frac{1}{1 - 2z}$ (Hint: substitute A=1 and B=2 into $\frac{A}{1 - Bz}$)
(c)$\frac{z}{1 + z}$ (Hint: multiply all the terms in $\frac{1}{1 + z}$ by z)
### Method of Substitution
We are given that:
1 + z + z2 + ... = 1/(1 - z)
and we can obtain many other generating functions by substitution. For example: letting z = x2 we have:
1 + x2 + x4 + ... = 1/(1 - x2)
Similarly
A + ABx + A(Bx)2 + ... = A/(1 - Bx)
is obtained by letting z = Bx then multiplying the whole expression by A.
#### Exercises
1. What are the coefficients of the powers of x:
1/(1 - 2x3)
2. What are the coefficients of the powers of x (Hint: take out a factor of 1/2):
1/(2 - x)
### Linear Recurrence Relations
The Fibonacci series
1, 1, 2, 3, 5, 8, 13, 21, 34, 55...
where each and every number, except the first two, is the sum of the two preceding numbers. We say the numbers are related if the value a number takes depends on the values that come before it in the sequence. The Fibonacci sequence is an example of a recurrence relation, it is expressed as:
$\begin{matrix} x_n &=& x_{n-1} &+& \ x_{n - 2}; \ \mbox{for n} \ge 2\\ x_0 &=& 1\\ x_1 &=& 1\\ \end{matrix}$
where xn is the (n+ 1)th number in the sequence. Note that the first number in the sequence is denoted x0. Given this recurrence relation, the question we want to ask is "can we find a formula for the (n+1)th number in the sequence?". The answer is yes, but before we come to that, let's look at some examples.
#### Example 1
The expressions
$\begin{matrix} x_n &=& 2x_{n-1}& + &3x_{n-2}; \ \mbox{for n} \ge 2\\ x_0 &=& 1\\ x_1 &=& 1 \end{matrix}$
define a recurrence relation. The sequence is: 1, 1, 5, 13, 41, 121, 365... Find a formula for the (n+1)th number in the sequence.
Solution Let G(z) be generating function of the sequence, meaning the coefficient of each power (in ascending order) is the corresponding number in the sequence. So the generating functions looks like this
$G(z) = 1 + z + 5z^2 + 13z^3 + 41z^4 + 121z^5 + ...$
Now, by a series of algebraic manipulations, we can find the closed form of the generating function and from that the formula for each coefficient
$\begin{matrix} & &G(z) & = &x_0 + &x_1z + &x_2z^2 &+ &x_3z^3 + x_4z^4 + x_5z^5 + ...\\ 2z&\times &G(z) & = & &2x_0z + &2x_1z^2 &+ &...\\ 3z^2&\times &G(z)& = & & &3x_0z^2 &+ &...\\ \end{matrix}$
$\begin{matrix} G(z) - 2zG(z) - 3z^2G(z) &=& x_0 + (x_1 - 2x_0)z & + \\ & & (x_2 - 2x_1 - 3x_0)z^2 & + \\ & & (x_3 - 2x_2 - 3x_1)z^3 & + &... \end{matrix}$
by definition xn - 2xn - 1 - 3xn - 2 = 0
$\begin{matrix} (1 - 2z - 3z^2)& \times & G(z)& =& x_0 + (x_1 - 2x_0)z\\ \\ & & G(z)& =& \frac {1 - z} {1 - 2z - 3z^2}\\ \\ & & G(z)& =& \frac {1 - z} {(1 - 3z)(1 + z)} \end{matrix}$
by the method of partial fractions we get:
$G(z) = \frac {1} {2} \times \frac {1} {1 - 3z} + \frac {1} {2} \times \frac {1} {1 + z}$
each part of the sum is in a recognisable closed-form. We can conclude that:
$x_n = \frac {1} {2} \times 3^n + \frac {1} {2} \times (-1)^n$
the reader can easily check the accuracy of the formula.
#### Example 2
$\begin{matrix} x_n &=& x_{n-1} &+& x_{n-2}& -& x_{n-3}; \ \mbox{for n} \ge 3\\ x_0 &=& 1 \\ x_1 &=& 1 \\ x_2 &=& 1 \end{matrix}$
Find a non-recurrent formula for xn.
Solution Let G(z) be the generating function of the sequence described above.
$G(z) = x_0 + x_1z + x_2z^2 + ...$
$\begin{matrix} G(z)(1 - z - z^2 + z^3) &=& x_0 &+& (x_1 - x_0)z + (x_2 - x_1 - x_0)z^2\\ G(z)(1 - z - z^2 + z^3) &=& 1 - z^2\\ \\ \end{matrix}$
$\begin{matrix} G(z) &=& \frac{1 - z^2}{1 - z - z^2 + z^3}\\ \\ G(z) &=& \frac{1 - z}{1 - 2z + z^2}\\ \\ G(z) &=& \frac{1} {1 - z} \end{matrix}$
Therefore xn = 1, for all n.
#### Example 3
A linear recurrence relation is defined by:
$\begin{matrix} x_n &=& x_{n-1}& + & 6x_{n-2} + 1; \ \mbox{for n} \ge 2\\ x_0 &=& 1\\ x_1 &=& 1\\ \end{matrix}$
Find the general formula for xn.
Solution Let G(z) be the generating function of the recurrence relation.
$G(z)(1 - z - 6z^2) = x_0 + (x_1 - x_0)z + (x_2 - x_1 - 6x_0)z^2 +...$
$G(z)(1 - z - 6z^2) = 1 + z^2 + z^3 + z^4 + ...$
$G(z)(1 - z - 6z^2) = 1 + z^2(1 + z + z^2 + ...)$
$G(z)(1 - z - 6z^2) = 1 + \frac{z^2}{1 - z}$
$G(z)(1 - z - 6z^2) = \frac{1 - z + z^2}{1 - z}$
$\begin{matrix} G(z) &=& \frac{1 - z + z^2}{(1 - z)(1 + 2z)(1 - 3z)}\\ G(z) &=& -\frac{1}{6(1-z)} + \frac{7}{15(1 + 2z)} + \frac{7}{10(1-3z)} \end{matrix}$
Therefore
$x_n = -\frac{1}{6} + \frac{7}{15}(-2)^n + \frac{7}{10}3^n$
#### Exercises
1. Derive the formula for the (n+1)th numbers in the sequence defined by the linear recurrence relations:
$\begin{matrix} x_n &=& 2x_{n-1}& - &1; \ \mbox{for n} \ge 1\\ x_0 &=& 1 \end{matrix}$
2. Derive the formula for the (n+1)th numbers in the sequence defined by the linear recurrence relations:
$\begin{matrix} 3x_n &=& -4x_{n-1}& + & x_{n-2}; \ \mbox{for n} \ge 2 \\ x_0 &=& 1\\ x_1 &=& 1\\ \end{matrix}$
3. (Optional) Derive the formula for the (n+1)th Fibonacci numbers.
↑Jump back a section
## Further Counting
Consider the equation
a + b = n; a, b ≥ 0 are integers
For a fixed positive integer n, how many solutions are there? We can count the number of solutions:
0 + n = n
1 + (n - 1) = n
2 + (n - 2) = n
...
n + 0 = n
As you can see there are (n + 1) solutions. Another way to solve the problem is to consider the generating function
G(z) = 1 + z + z2 + ... + zn
Let H(z) = G(z)G(z), i.e.
H(z) = (1 + z + z2 + ... + zn)2
I claim that the coefficient of zn in H(z) is the number of solutions to a + b = n, a, b > 0. The reason why lies in the fact that when multiplying like terms, indices add.
Consider
A(z) = 1 + z + z2 + z3 + ...
Let
B(z) = A2(z)
it follows
B(z) = (1 + z + z2 + z3 + ...) + z(1 + z + z2 + z3 + ...) + z2(1 + z + z2 + z3 + ...) + z3(1 + z + z2 + z3) + ...
B(z) = 1 + 2z + 3z2 + ...
Now the coefficient of zn (for n ≥ 0) is clearly the number of solutions to a + b = n (a, b > 0).
We are ready now to derive a very important result: let tk be the number solutions to a + b = n (a, b > 0). Then the generating function for the sequence tk is
T(z) = (1 + z + z2 + ... + zn + ...)(1 + z + z2 + ... + zn + ...)
$T(z) = \frac{1}{(1 - z)^2}$
i.e.
$\frac{1}{(1 - z)^2} = 1 + 2z + 3z^2 + 4z^3 + ... + (n+1)z^n + ...$
### Counting Solutions to a1 + a2 + ... + am = n
Consider the number of solutions to the following equation:
a1 + a2 + ... + am = n
where ai ≥ 0; i = 1, 2, ... m. By applying the method discussed previously. If tk is the number of solutions to the above equation when n = k. The generating function for tk is
$T(z) = \frac{1}{(1-z)^m}$
but what is tk? Unless you have learnt calculus, it's hard to derive a formula just by looking the equation of T(z). Without assuming knowledge of calculus, we consider the following counting problem.
"You have three sisters, and you have n (n ≥ 3) lollies. You decide to give each of your sisters at least one lolly. In how many ways can this be done?"
One way to solve the problem is to put all the lollies on the table in a straight line. Since there are n lollies there are (n - 1) gaps between them (just as you have 5 fingers on each hand and 4 gaps between them). Now get 2 dividers, from the (n - 1) gaps available, choose 2 and put a divider in each of the gaps you have chosen! There you have it, you have divided the n lollies into three parts, one for each sister. There are $n - 1 \choose 2$ ways to do it! If you have 4 sisters, then there are $n - 1 \choose 3$ ways to do it. If you have m sisters there are $n - 1 \choose m - 1$ ways to do it.
Now consider: "You have three sisters, and you have n lollies. You decide to give each of your sisters some lollies (with no restriction as to how much you give to each sister). In how many ways can this be done?"
Please note that you are just solving:
a1 + a2 + a3 = n
where ai ≥ 0; i = 1, 2, 3.
You can solve the problem by putting n + 3 lollies on the table in a straight line. Get two dividers and choose 2 gaps from the n + 2 gaps available. Now that you have divided n + 3 lollies into 3 parts, with each part having 1 or more lollies. Now take back 1 lollies from each part, and you have solved the problem! So the number of solutions is $n + 2 \choose 2$. More generally, if you have m sisters and n lollies the number of ways to share the lollies is
${{n + m - 1} \choose {m - 1}} = {{n + m - 1} \choose {n}}$ .
Now to the important result, as discussed above the number of solutions to
a1 + a2 + ... + am = n
where ai ≥ 0; i = 1, 2, 3 ... m is
${{n + m - 1} \choose {n}}$
i.e.
$\frac{1}{(1 - z)^m} = \sum_{i=0}^\infty {m + i - 1 \choose i}z^i$
#### Example 1
The closed form of a generating function T(z) is
$T(z) = \frac{z}{(1-z)^2}$
and tk in the coefficient of zk is T(z). Find an explicit formula for tk.
Solution
$\begin{matrix} \frac{1}{(1-z)^2} &=& \sum_{i=0}^\infty (i+1)z^i\\ \\ \frac{z}{(1-z)^2} &=& z\sum_{i=0}^\infty (i+1)z^i\\ \\ &=& \sum_{i=0}^\infty (i+1)z^{i+1}\\ \end{matrix}$
Therefore tk = k
#### Example 2
Find the number of solutions to:
a + b + c + d = n
for all positive integers n (including zero) with the restriction a, b, c ,d ≥ 0.
Solution By the formula
$\begin{matrix} \frac{1}{(1-z)^4} &=& \sum_{i=0}^\infty {{n + 3}\choose {3}}z^i\\ \\ \end{matrix}$
so
the number of solutions is ${{n + 3}\choose {3}}$
### More Counting
We turn to a slightly harder problem of the same kind. Suppose we are to count the number of solutions to:
2a + 3b + c = n
for some integer $n \ge 0$, with a, b, also c greater than or equal zero. We can write down the closed form straight away, we note the coefficient of xn of:
$(1 + x^2 + x^4 + ...)(1 + x^3 + x^6 + ...)(1 + x + x^2 + ...) = \frac{1}{(1 - x^2)(1 - x^3)(1 - x)}$
is the required solution. This is due to, again, the fact that when multiplying powers, indices add.
To obtain the number of solutions, we break the expression into recognisable closed-forms by method of partial fraction.
#### Example 1
Let sk be the number of solutions to the following equation:
2a + 2b = n; a, b ≥ 0
Find the generating function for sk, then find an explicit formula for sn in terms of n.
Solution
Let T(z) be the generating functions of tk
T(z) = (1 + z2 + z4 + ... + z2n + ...)2
$T(z) = \frac{1}{(1 - z^2)^2}$
It's not hard to see that
$s_n = 0 \ \mbox{if n is odd}$
$s_n = {n/2 + 1\choose n/2} = {n/2 + 1\choose 1} = n/2 + 1 \ \mbox{if n is even}$
#### Example 2
Let tk be the number of solutions to the following equation:
a + 2b = n; a, b ≥ 0
Find the generating function for tk, then find an explicit formula for tn in terms of n.
Solution
Let T(z) be the generating functions of tk
T(z) = (1 + z + z2 + ... + zn + ...)(1 + z2 + z4 + ... + z2n + ...)
$T(z) = \frac{1}{(1 - z)} \times \frac{1}{1 - z^2}$
$T(z) = \frac{1}{(1 - z)^2} \times \frac{1}{1 + z}$
$T(z) = \frac{Az + B}{(1 - z)^2} + \frac{C}{1 + z}$
A = -1/4, B = 3/4, C = 1/4
$T(z) = -\frac{1}{4}\sum_{i=0}^\infty (i+1)z^{i+1} + \frac{3}{4}\sum_{i=0}^\infty (i+1)z^i + \frac{1}{4}\sum_{i=0}^\infty (-1)^iz^i$
$t_k = -\frac{1}{4}k + \frac{3}{4}(k + 1) + \frac{1}{4} (-1)^k$
### Exercises
1. Let
$T(z) = \frac{1}{(1 + z)^2}$
be the generating functions for tk (k = 0, 1, 2 ...). Find an explicit formula for tk in terms of k.
2. How many solutions are there the following equations if m is a given constant
$a + b + 2c = m$
where a, b and c ≥ 0
↑Jump back a section
## Problem Set
1. A new Company has borrowed \$250,000 initial capital. The monthly interest is 3%. The company plans to repay \$x before the end of each month. Interest is added to the debt on the last day of the month (compounded monthly).
Let Dn be the remaining debt after n months.
a) Define Dn recursively.
b) Find the minimum values of x.
c) Find out the general formula for Dn.
d) Hence, determine how many months are need to repay the debt if x = 12,000.
2. A partion of n is a sequence of positive integers (λ1,λ1,..,λr) such that λ1 ≥ λ2 ≥ .. ≥ λr and λ1 + λ2 + .. + λr = n. For example, let n = 5, then (5), (4,1), (3,2), (3,1,1), (2,2,1), (2,1,1,1), (1,1,1,1,1) are all the partions of 5. So we say the number of partions of 5 is 7. Derive a formula for the number of partions of a general n.
3. A binary tree is a tree where each node can have up to two child nodes. The figure below is an example of a binary tree.
a) Let cn be the number of unique arrangements of a binary tree with totally n nodes. Let C(z) be a generating function of cn.
(i) Define C(z) using recursion.
(ii) Hence find the closed form of C(z).
b) Let $P(x)=\sqrt{1+ax}=p_0 + p_1 x + p_2 x^2 + p_3 x^3 ...$ be a power series.
(i) By considering the n-th derivative of P(x), find a formula for pn.
(ii) Using results from a) and b)(i) , or otherwise, derive a formula for cn.
Hint: Instead of doing recursion of finding the change in cn when adding nodes at the buttom, try to think in the opposite way, and direction.(And no, not deleting nodes)
↑Jump back a section
## Project - Exponential generating function
This project assumes knowledge of differentiation.
(Optional)0.
(a)
(i) Differentiate log x by first principle.
(ii)*** Show that the remaining limit in last part that can't be evaluated indeed converges. Hence finish the differentiation by assigning this number as a constant.
(b) Hence differentiate $a^x$.
1. Consider $E(x) = e^x$
(a) Find out the n-th derivative of E(x).
(b) By considering the value of the n-th derivative of E(x) at x = 0, express E(x) in power series/infinite polynomial form.
(Optional)2.
(a) Find out the condition for the geometric progression(that is the ordinary generating function introduced at the begining of this chapter) to converges. (Hint: Find out the partial sum)
(b) Hence show that E(x) in the last question converges for all real values of x. (Hint: For any fixed x, the numerator of the general term is exponential, while the denominator of the general term is factorial. Then what?)
3. The function E(x) is the most fundamental and important exponential generating function, it is similar to the ordinary generating function, but with some difference, most obviously having a fractorial fraction attached to each term.
(a) Similar to ordinary generating function, each term of the polynomial expansion of E(x) can have number attached to it as coefficient. Now consider $A(x) = a_1 + a_2 \frac{x}{1!} + a_3 \frac{x^2}{2!} + a_4 \frac{x^3}{3!} + ...$
Find $A'(x)$ and compare it with A(x). What do you discover?
(b) Substitute nz, where n is a real number and z is a free variable, into E(x), i.e. E(nz). What have you found?
4. Apart from A(x) defined in question 2, let $B(x) = b_1 + b_2 \frac{x}{1!} + b_3 \frac{x^2}{2!} + b_4 \frac{x^3}{3!} + ...$
(a) What is A(x) multiplied by B(x)? Compare this with ordinary generating function, what is the difference?
(b) What if we blindly multiply A(x) with x(or xn in general)? Will it shift coefficient like what happened in ordinary generating function?
Notes: Question with *** are difficult questions, although you're not expected to be able to answer those, feel free to try your best.
# Feedback
What do you think? Too easy or too hard? Too much information or not enough? How can we improve? Please let us know by leaving a comment in the discussion section. Better still, edit it yourself and make it better.
↑Jump back a section
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 78, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8817872405052185, "perplexity_flag": "middle"}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.