http://physics.stackexchange.com/questions/14181/do-stationary-states-with-higher-energy-necessarily-have-higher-position-momentu?answertab=oldest
# Do stationary states with higher energy necessarily have higher position-momentum uncertainty?
For simple potentials like square wells and harmonic oscillators, one can explicitly calculate the product $\Delta x \Delta p$ for stationary states. When you do this, it turns out that higher energy levels have higher values of
$\Delta x \Delta p$.
Is this true for all time-independent potentials?
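For the harmonic oscillator the growth can be checked directly. The following sketch (my own illustration, not part of the original question; the grid size and units are arbitrary choices) diagonalizes $H = p^2/2 + x^2/2$ with $\hbar = m = \omega = 1$ on a finite-difference grid and prints $\Delta x \, \Delta p$ for the lowest stationary states, where the exact values are $n + 1/2$:

```python
import numpy as np

# Sketch (not from the question; grid parameters are arbitrary choices):
# diagonalize the 1D harmonic oscillator H = p^2/2 + x^2/2 (hbar = m = omega = 1)
# with a three-point finite-difference Laplacian and compute Delta_x * Delta_p
# for the lowest stationary states. The exact values are n + 1/2.
N, L = 1500, 20.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]
H = (np.diag(1.0/dx**2 + 0.5*x**2)
     + np.diag(-0.5/dx**2 * np.ones(N - 1), 1)
     + np.diag(-0.5/dx**2 * np.ones(N - 1), -1))
energies, vecs = np.linalg.eigh(H)

prods = []
for n in range(4):
    u = vecs[:, n] / np.sqrt(dx)                  # normalized: sum(u^2) * dx = 1
    mean_x = np.sum(x * u**2) * dx
    mean_x2 = np.sum(x**2 * u**2) * dx
    mean_p2 = np.sum(np.gradient(u, dx)**2) * dx  # <p^2> = int |u'|^2 dx for real u
    prods.append(np.sqrt(mean_x2 - mean_x**2) * np.sqrt(mean_p2))
print([round(p, 2) for p in prods])  # approximately [0.5, 1.5, 2.5, 3.5]
```

The product increases linearly with the quantum number, matching the explicit calculation mentioned above.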
Certainly, it is possible to find two states $\mid \Psi_1 \rangle$ and $\mid \Psi_2 \rangle$ with $\langle \Psi_1 \mid H \mid \Psi_1 \rangle > \langle \Psi_2 \mid H \mid \Psi_2 \rangle$ and also $\Delta x_1 \Delta p_1 < \Delta x_2 \Delta p_2$. For example, in a quadratic potential, let $\mid \Psi_2 \rangle$ be the first excited state and let $\mid \Psi_1 \rangle$ be a Gaussian coherent state (thus with minimum uncertainty) of fairly high energy. So I'm asking here just about the stationary states.
As Ron pointed out in the comments, this question is most interesting if we consider potentials with only a single local minimum, and increasing potential to the right of it and decreasing to the left.
There are trivial counterexamples, where you have a double-dip well, and the approximate ground state of the higher dip is lower uncertainty than the excited states of the lower dip. But your question is probably for monotonically increasing potentials. – Ron Maimon Aug 31 '11 at 15:13
@Ron Good point - I didn't think of that. I've updated the question. – Mark Eichenlaub Aug 31 '11 at 15:17
Could you clarify what you mean by "consider potentials with only a single local minimum, and increasing potential to the right of it and decreasing to the left."? I think that's a different situation than what Ron meant by "monotonically increasing". Because if the minimum considered is truly only a local minimum, and the potential decreases continuously to a lower V on the left, you don't have discrete energy levels. – Anonymous Coward Aug 31 '11 at 17:05
@Anonymous If the potential is decreasing to the left of the local minimum and increasing to the right of it, then the local minimum is necessarily a global minimum. Perhaps the confusion is in the word "decreasing". I didn't mean that as you go further to the left, the value of the potential goes down. I meant that if you look to the left of the minimum, you see a decreasing potential there. – Mark Eichenlaub Aug 31 '11 at 17:20
Could it actually be that for this class of potentials, the uncertainty of both coordinate and momentum grows separately as one goes to higher excited states? Intuitively, the uncertainty of coordinate will grow since the potential gets wider and the particle thus less localized, while the uncertainty of momentum will grow since the wave function becomes more and more oscillating thanks to the well-known property about the number of nodes of the stationary states. It is certainly an amusing problem! – Tomáš Brauner Aug 31 '11 at 17:35
## 1 Answer
The answer is no, and a counterexample is the following plateau potential:
$V(x) = x^2$ for $x \ge -A$
$V(x) = A^2$ for $-A-k \le x < -A$
$V(x) = \infty$ for $x < -A-k$
$A$ is imagined to be a huge constant, and $k$ is a large constant, but not anywhere near as huge as $A$. The potential has a plateau between $-A-k$ and $-A$, but is continuous and nondecreasing on either side of the origin. Its loss of uncertainty happens when the energy reaches the plateau value of $A^2$, and it happens semiclassically, so it happens for large quantum numbers.
Semiclassically, in the Bohr-Sommerfeld (WKB) approximation, the particle has the same eigenfunctions as the harmonic oscillator, until the energy equals $A^2$. At this point, the next eigenfunction oscillates around the minimum, then crawls at a very very slow speed along the plateau, reflects off the wall, and comes back very very slowly to the oscillator.
The time spent on the plateau is much longer than the time spent oscillating (for appropriate choice of A and k) because the classical velocity on the plateau is so close to zero. This means that the position and momentum uncertainty is dominated by the uncertainty on the plateau, and the value of the position uncertainty is much less than the uncertainty for the oscillation if k is much smaller than A, and the value of the momentum uncertainty is nearly zero, because the momentum on the plateau is next to zero.
### WKB expectation values are classical orbit averages
This argument uses the WKB expression for the expectation values of functions of x, which, from the WKB wavefunction,
$$\psi(x) = {1\over \sqrt{2T}} {1\over \sqrt{v}} e^{i\int^x p\, dx},$$
where $v(x)$ is the classical velocity a particle would have at position $x$, and $T$ is just a constant, a perverse way to parametrize the normalization constant of the WKB wavefunction. The expected value of any function of the $X$ operator is equal to
$$\langle f(x)\rangle = \int |\psi(x)|^2 f(x)\, dx = {1\over 2 T} \int {1\over v(x)} f(x)\, dx = {1\over T}\oint f(x(t))\, dt$$
where the last integral is taken around the full classical orbit. The same expression obviously works for functions of $P$ (it works for any operator, using the corresponding classical function on phase space). So the expectation value is just the average value of the quantity along the orbit; the factor of 2 disappears because you pass over every $x$ value twice along the orbit, and the strangely named normalization factor $T$ is revealed to be the period of the classical orbit, because the average value of the unit operator is 1.
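This orbit-average picture can be illustrated numerically (a sketch under my own choice of the parameters $A$ and $k$; none of the helper names below come from the answer): compute the classical time averages for the plateau potential directly and watch $\Delta x \, \Delta p$ collapse once the energy crosses the plateau value $A^2$.

```python
import numpy as np

# Illustrative check of the orbit-average argument (parameter values and helper
# names are my own, not from the answer): for V(x) = x^2 when x >= -A, V = A^2
# on [-A-k, -A) and a hard wall at x = -A-k, compute classical time averages
# and hence Delta_x * Delta_p (units m = 1, so p = v).
A, k = 10.0, 2.0

def uncertainty_product(E):
    # Oscillator segment from max(-sqrt(E), -A) to sqrt(E), v = sqrt(2(E - x^2)).
    xr = np.sqrt(E)
    xl = max(-xr, -A)
    xs = np.linspace(xl, xr, 400001)[1:-1]   # drop the singular turning points
    v = np.sqrt(2.0 * (E - xs**2))
    h = xs[1] - xs[0]
    T = 2 * np.sum(h / v)                    # each segment traversed twice, dt = dx/v
    Ix = 2 * np.sum(xs * h / v)              # integral of x dt
    Ix2 = 2 * np.sum(xs**2 * h / v)          # integral of x^2 dt
    Ip2 = 2 * np.sum(v * h)                  # integral of p^2 dt = v dx
    if E > A**2:                             # add the slow plateau traversal
        vp = np.sqrt(2.0 * (E - A**2))
        Tp = 2 * k / vp                      # out and back along the plateau
        xm = -A - k / 2
        T += Tp
        Ix += xm * Tp
        Ix2 += (xm**2 + k**2 / 12) * Tp      # uniform sweep over a width-k segment
        Ip2 += vp**2 * Tp
    dx_ = np.sqrt(Ix2 / T - (Ix / T)**2)
    dp_ = np.sqrt(Ip2 / T)                   # <p> = 0 over a closed orbit
    return dx_ * dp_

below = uncertainty_product(0.999 * A**2)    # just below the plateau: ~ E / sqrt(2)
above = uncertainty_product(A**2 + 1e-3)     # just above: dominated by the plateau
print(below, above)
```

With these (arbitrary) values the product drops by an order of magnitude as soon as the orbit reaches the plateau, which is exactly the loss of uncertainty described in the answer.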
Very interesting! I totally agree with your argument that the position uncertainty is dominated by the plateau (and thus of the order of k). However, in case of momentum, according to your last formula, the average of p^2 is proportional to the integral of v(x)dx (the average of p itself is always zero for a stationary state), so it is dominated by the oscillator region since it is much larger than the plateau and also has much larger velocity. I presume one can still tune the parameters so that ∆x∆p drops around the plateau though. That answers the original question of Mark. – Tomáš Brauner Sep 3 '11 at 9:18
In fact, as long as the WKB approximation holds, one can conclude using your argument that ∆p grows when going to higher excited states. One integrates v(x)dx over the classical trajectory, v(x) grows with energy and one moreover integrates over a larger range of x as energy increases. – Tomáš Brauner Sep 3 '11 at 9:22
While $p^2$ is dominated by the oscillator region, you must remember that it is a time average, so that the time spent on the plateau reduces the total integral through the normalization constant. The decrease in $p^2$ is almost exactly proportional to the increase in the period T at this level. – Ron Maimon Sep 3 '11 at 19:05
Ah, you are right, stupid me! – Tomáš Brauner Sep 3 '11 at 22:09
I made the same mistake at first. – Ron Maimon Sep 3 '11 at 22:23
http://www.physicsforums.com/showthread.php?t=522640
|
Physics Forums
## Biot-Savart law and Poisson's equation
Dear colleagues,
I have questions regarding the Biot-Savart law. In [1], it is shown that the equation (Biot-Savart) is derived from the solution to Poisson's equation (assuming here div A = 0)
\begin{equation}
\vec{\nabla}^2 \vec{A} = -\mu \vec{J}
\end{equation}
which is
\begin{equation}
\vec{A}(\vec{r}) =\frac{\mu}{4\pi}\int_V{\frac{\vec{J}(\vec{r'})\,{\rm d}^3r'}{\left|\vec{r}-\vec{r'}\right|}}
\end{equation}
where $\vec{r}$ is the position where $\vec{A}$ is evaluated and $\vec{r'}$ is the position where the integral is evaluated.
The first thing that troubles me is the singularity of $1/\left|\vec{r}-\vec{r'}\right|$ when we evaluate the field at the point of integration. For a wire of finite radius, this seems to mean that the $\vec{A}$ field inside the conductor is infinite (or am I missing something?). If so, why in books on electromagnetics do we usually replace the conductor by an equivalent filamentary current $I=\int_S\vec{J}\cdot{\rm d}\vec{s}$? The field calculated inside the conductor will be different. This can be seen from the solution for the $\vec{B}$ field obtained using Ampere's law (for the infinitely-long finite-radius wire):
\begin{equation}
B_{\theta}=\frac{\mu I}{2 \pi \rho} \qquad \text{outside the wire,}
\end{equation}
\begin{equation}
B_{\theta}=\frac{\mu I \rho}{2 \pi R_{wire}^2} \qquad \text{inside the wire,}
\end{equation}
where $R_{wire}$ is the cross-section radius and $(\rho,\theta,z)$ are the cylindrical coordinates. This means essentially that the field at $\rho=0$ is zero and that it is proportional to $\rho$ inside the conductor and inversely proportional to $\rho$ on the outside. How can we get this from the solution to Poisson's equation for a finite-radius wire?
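The two branches can be sanity-checked with a few lines of code (an illustrative snippet of mine; the values of $\mu$, $I$ and $R_{wire}$ are placeholders): the inside and outside formulas agree at the wire surface, and the field vanishes on the axis.

```python
import numpy as np

# Quick sanity check of the piecewise Ampere-law solution (values of mu, I_wire
# and R_wire are placeholders): the inside and outside branches of B_theta
# agree at the wire surface, and the field vanishes on the axis rho = 0.
mu, I_wire, R_wire = 4e-7 * np.pi, 1.0, 1e-3

def B_theta(rho):
    rho = np.asarray(rho, dtype=float)
    return np.where(rho <= R_wire,
                    mu * I_wire * rho / (2 * np.pi * R_wire**2),          # inside
                    mu * I_wire / (2 * np.pi * np.maximum(rho, 1e-300)))  # outside

B_surface_in = float(B_theta(R_wire))
B_surface_out = mu * I_wire / (2 * np.pi * R_wire)
print(float(B_theta(0.0)), B_surface_in, B_surface_out)
```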
The other thing that troubles me with the solution to Poisson's equation (second equation) is the value of the integrand when $\vec{r}=\vec{r'}$ but outside the wire (thus where $\vec{J}=0$). This means we get a $0/0$ integrand for each $\vec{r}$ outside the wire, which numerically gives NaN for the whole integral. Is this a problem analytically? Because this contribution might (should) be 0, probably by using L'Hopital's rule (I guess).
Best regards,
M.
[1] Smythe,W.R., "Static and dynamic electricity", McGraw-Hill, 1968.
M, I know the Poisson integral looks singular, but it is not. What you're forgetting about is the volume element. To see the behavior near $\vec{r} = \vec{r'}$, write the integrand near that point in terms of a spherical coordinate, $R = |\vec{r} - \vec{r'}|$. Then ${\rm d}^3r = 4\pi R^2\,{\rm d}R$, and the integral behaves like $\int 4\pi J(0)\, R\, {\rm d}R$, which is nonsingular. The factor in the numerator that comes from the volume element goes to zero faster than the denominator does.
How did the ${\rm d}\theta$ and ${\rm d}\phi$ disappear in spherical coordinates? Isn't that equivalent to considering an infinitesimal rectangular prism $xy\,{\rm d}z$ (which would also be an infinitesimal volume, but not infinitesimal in all three dimensions)? M.
In the immediate neighborhood of $R = 0$, the integrand is spherically symmetric and you can integrate at once over solid angle, producing the factor of $4\pi$. In other words, $\iiint \cdots\, {\rm d}^3r = \iiint \cdots\, R^2\, {\rm d}R\, {\rm d}^2\Omega = \int \cdots\, 4\pi R^2\, {\rm d}R$.
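The integrability of the $1/|\vec{r}-\vec{r'}|$ kernel is easy to confirm numerically (an illustrative Monte Carlo of my own, not from the thread): the integral of $1/|\vec{r'}|$ over a unit ball, i.e. the potential-type integral evaluated at the center, is finite and equals $4\pi\int_0^1 r\,{\rm d}r = 2\pi$, precisely because of the $R^2\,{\rm d}R$ volume factor.

```python
import numpy as np

# Monte Carlo illustration (mine, not from the thread) that the 1/|r - r'|
# kernel is integrable: estimate I = integral over the unit ball of 1/|r'| d^3r',
# evaluated at the center. Analytically I = 4*pi * int_0^1 r dr = 2*pi.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(200_000, 3))   # sample the bounding cube
r = np.linalg.norm(pts, axis=1)
vals = np.where(r < 1.0, 1.0 / np.maximum(r, 1e-12), 0.0)  # keep only the ball
I_est = 8.0 * vals.mean()        # cube volume = 8
print(I_est, 2 * np.pi)          # the estimate converges to 2*pi
```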
As posted previously in the forums here: http://www.physicsforums.com/showthread.php?t=119419 the field of an infinite wire at $r=0$ is infinite. In your case, are you saying that this field should be zero? From the Green's function, it is normal to get an infinite vector potential, since we assumed Dirac sources. So the filamentary current case is OK with me. However, what happens inside a finite conductor (e.g. a cylindrical conductor)? Wouldn't that mean we get infinity everywhere inside? Or this is perhaps where I mix things up. Thanks, M.
According to $\vec{j}=\sigma \vec{E}$ you get a finite electric field inside a finite conductor. Using the above (approximate, i.e., non-relativistic) form of Ohm's law, the solution of this standard magnetostatics problem for an infinite wire is a constant electric field along the wire.
I'm interested in the magnetic field B, not the electric field E. That's why I'm interested in the vector potential A, found from the solution of Poisson's equation. M.
http://mathhelpforum.com/geometry/16841-midpoint-theorum-help.html
|
1. Midpoint Theorem Help!!
I need help in this sum.
Q: In the adjoining figure, ABCD is a square and BCPQ is a parallelogram. If M is the midpoint of DC and PM produced meets AN at N, prove that AN=1/2 AD. If BD meets MN at R, prove that BQ=1/2 AC.
I have no problem with the first part of the question but the second part confuses me. Please help, thanks.
Attached Thumbnails
2. Please check the statement.
I think that it should be $B{\color{red}R}=\frac{1}{2}AC$.
3. Originally Posted by Insaeno
If BD meets MN at R, prove that BQ=1/2 AC.
Originally Posted by Plato
Please check the statement.
I think that it should be $B{\color{red}R}=\frac{1}{2}AC$.
I think it's more likely
"If BD meets MN at Q, prove that BQ=1/2 AC."
It amounts to the same thing, but at least Q is on the diagram.
-Dan
4. I'm pretty sure I copied the sum correctly. If it is so, then the question must be flawed. Thanks anyways!!!
5. This diagram also fits the given. Clearly in this diagram R could not be Q.
Let’s suppose that R is the point of intersection of BD with NM (it may be Q or not). We know that NM is parallel to BC. In the triangle BCD, because M is the midpoint of CD and RM is parallel to BC, R is the midpoint of BD. The diagonals of a rectangle are the same length and bisect one another, so BD = AC. Therefore, it follows that $BR=\frac{1}{2}BD=\frac{1}{2}AC$.
Attached Thumbnails
http://physics.stackexchange.com/questions/28968/examples-of-piecewise-smooth-dynamical-systems/29049
|
# Examples of piecewise smooth dynamical systems [closed]
I have recently been studying continuous dynamical systems whose phase space can be divided into a number of regions. Inside each of these the flow is smooth, but there is a discrete jump in the flow just at the boundaries. In the mathematical description, the right hand side of the differential equation is different for different regions of the phase space of the dynamical variables.
Note: I don't mean something trivial like systems which exhibit smoothness in different regions of physical space separated by boundaries, like differently heated gases in partitions, or water in contact with vapour etc. The different regions I mention are regions in the phase space of the dynamical systems. So imagine a set of continuous-time differential equations defining a flow which is segregated in its phase space into regions in which the evolution of the equations is piecewise smooth.
I also don't mean phase transition. There is no variation of order parameter or bifurcations here. The piecewise smoothness exists in the dynamical phase space for a fixed value of the system parameters.
I have been studying them in an engineering context of a mechanical device in which there is a sudden change in the velocity of a moving part when it hits something. But it struck me that such piecewise smooth systems should be found in many scenarios, from other areas of physics, maybe certain quantum phenomena, to biological systems that can be studied with the theory of dynamical systems.
Some examples of the kind of systems I am looking for are:
• Quantum mechanics: the Muffin-Tin potential is a quantum model where the potential (the right side of the differential equation) is approximated to be piecewise defined.
• Classical mechanics: the hard impacting oscillator (oscillator with a rigid wall at an end restricting the amplitude, like the devices I was studying).
• Theoretical computer science: Hybrid automata and reachability problems which are further piecewise linear.
I am curious to apply my understanding of the mechanical system to such systems.
So, what are other dynamical systems in nature which exhibit piecewise smooth behaviour?
Overly broad and clearly big list. Vote to close. – genneth May 25 '12 at 12:53
@FrédéricGrosshans, no it doesn't correspond to phase transitions. The jump occurs when you change the order parameter of the system in that case. Here there is a jump between two regions of the phase space for the same fixed system parameter values. – Abhranil Das May 25 '12 at 13:23
@FrédéricGrosshans yes, but neither of shock waves or water in contact with vapour is a dynamical system in which there are changing macroscopic variables. I'll edit my question to elaborate. – Abhranil Das May 25 '12 at 14:35
## closed as not constructive by Qmechanic♦, genneth, dmckee♦May 27 '12 at 15:46
## 1 Answer
You might be interested in pulse-coupled oscillators. See for example Mirollo & Strogatz, 1990:
They investigate a set of identical oscillators each described by a single phase variable $\phi_i \in [0,1]$ with $\dot{\phi_i}=1$. When $\phi_i = 1$ the oscillator resets to zero and sends out a spike which causes an instantaneous phase jump in all other oscillators which is given by a transfer function $h(\phi)$. In that paper they show that a special class of transfer functions will cause the oscillators to synchronize for all initial conditions.
This model, with finite delays between the sending of spikes and their reception, is used to model neural networks; see for example Jahnke et al., 2008 and the references therein.
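A minimal event-driven simulation of this synchronization can be sketched as follows (my own illustration; the concave transfer function, the coupling strength, and all names are assumptions, not taken from the papers). A firing oscillator adds $\varepsilon$ to every other oscillator's "voltage" $y = f(\phi)$, capped at threshold 1, with chain absorption when a pulse pushes a receiver over threshold:

```python
import numpy as np

# Sketch of a Mirollo-Strogatz-style pulse-coupled network (all parameters are
# illustrative assumptions). Oscillator i has phase phi_i in [0,1], d(phi)/dt = 1.
# Voltage y = f(phi) with f concave; a firing adds eps to every other voltage,
# capped at threshold 1 (absorbed oscillators fire and reset together).
b, eps = 2.0, 0.1

def f(phi):
    return np.log1p((np.exp(b) - 1.0) * phi) / b       # concave voltage map

def finv(y):
    return (np.exp(b * y) - 1.0) / (np.exp(b) - 1.0)   # inverse map

def fire_event(phi):
    phi = phi + (1.0 - phi.max())            # drift until the next oscillator fires
    fired = phi >= 1.0 - 1e-12
    delivered = 0
    while fired.sum() > delivered:           # pulses can trigger a chain of firings
        n = int(fired.sum())
        phi[~fired] = finv(np.minimum(1.0, f(phi[~fired]) + (n - delivered) * eps))
        delivered = n
        fired |= phi >= 1.0 - 1e-12
    phi[fired] = 0.0                         # simultaneous reset keeps groups merged
    return phi

rng = np.random.default_rng(1)
phi = np.sort(rng.random(5))
for _ in range(2000):
    phi = fire_event(phi)
clusters = len(np.unique(np.round(phi, 9)))
print("distinct clusters after 2000 firings:", clusters)
```

Once two oscillators are absorbed into the same cluster they receive identical inputs and stay together, and for a concave transfer function the Mirollo-Strogatz result says almost every initial condition ends fully synchronized.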
Thank you! Will look into this. – Abhranil Das May 27 '12 at 22:48
http://terrytao.wordpress.com/tag/open-mapping-theorem/
What’s new
Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao
The closed graph theorem in various categories
20 November, 2012 in expository, math.AG, math.CT, math.CV, math.GR, math.RA | Tags: closed graph theorem, open mapping theorem | by Terence Tao | 20 comments
Given a function ${f: X \rightarrow Y}$ between two sets ${X, Y}$, we can form the graph
$\displaystyle \Sigma := \{ (x,f(x)): x\in X \},$
which is a subset of the Cartesian product ${X \times Y}$.
There are a number of “closed graph theorems” in mathematics which relate the regularity properties of the function ${f}$ with the closure properties of the graph ${\Sigma}$, assuming some “completeness” properties of the domain ${X}$ and range ${Y}$. The most famous of these is the closed graph theorem from functional analysis, which I phrase as follows:
Theorem 1 (Closed graph theorem (functional analysis)) Let ${X, Y}$ be complete normed vector spaces over the reals (i.e. Banach spaces). Then a function ${f: X \rightarrow Y}$ is a continuous linear transformation if and only if the graph ${\Sigma := \{ (x,f(x)): x \in X \}}$ is both linearly closed (i.e. it is a linear subspace of ${X \times Y}$) and topologically closed (i.e. closed in the product topology of ${X \times Y}$).
I like to think of this theorem as linking together qualitative and quantitative notions of regularity preservation properties of an operator ${f}$; see this blog post for further discussion.
The theorem is equivalent to the assertion that any continuous linear bijection ${f: X \rightarrow Y}$ from one Banach space to another is necessarily an isomorphism in the sense that the inverse map is also continuous and linear. Indeed, to see that this claim implies the closed graph theorem, one applies it to the projection from ${\Sigma}$ to ${X}$, which is a continuous linear bijection; conversely, to deduce this claim from the closed graph theorem, observe that the graph of the inverse ${f^{-1}}$ is the reflection of the graph of ${f}$. As such, the closed graph theorem is a corollary of the open mapping theorem, which asserts that any continuous linear surjection from one Banach space to another is open. (Conversely, one can deduce the open mapping theorem from the closed graph theorem by quotienting out the kernel of the continuous surjection to get a bijection.)
It turns out that there is a closed graph theorem (or equivalent reformulations of that theorem, such as an assertion that bijective morphisms between sufficiently “complete” objects are necessarily isomorphisms, or as an open mapping theorem) in many other categories in mathematics as well. Here are some easy ones:
Theorem 2 (Closed graph theorem (linear algebra)) Let ${X, Y}$ be vector spaces over a field ${k}$. Then a function ${f: X \rightarrow Y}$ is a linear transformation if and only if the graph ${\Sigma := \{ (x,f(x)): x \in X \}}$ is linearly closed.
Theorem 3 (Closed graph theorem (group theory)) Let ${X, Y}$ be groups. Then a function ${f: X \rightarrow Y}$ is a group homomorphism if and only if the graph ${\Sigma := \{ (x,f(x)): x \in X \}}$ is closed under the group operations (i.e. it is a subgroup of ${X \times Y}$).
Theorem 4 (Closed graph theorem (order theory)) Let ${X, Y}$ be totally ordered sets. Then a function ${f: X \rightarrow Y}$ is monotone increasing if and only if the graph ${\Sigma := \{ (x,f(x)): x \in X \}}$ is totally ordered (using the product order on ${X \times Y}$).
Remark 1 Similar results to the above three theorems (with similarly easy proofs) hold for other algebraic structures, such as rings (using the usual product of rings), modules, algebras, or Lie algebras, groupoids, or even categories (a map between categories is a functor iff its graph is again a category). (ADDED IN VIEW OF COMMENTS: further examples include affine spaces and ${G}$-sets (sets with an action of a given group ${G}$).) There are also various approximate versions of this theorem that are useful in arithmetic combinatorics, that relate the property of a map ${f}$ being an “approximate homomorphism” in some sense with its graph being an “approximate group” in some sense. This is particularly useful for this subfield of mathematics because there are currently more theorems about approximate groups than about approximate homomorphisms, so that one can profitably use closed graph theorems to transfer results about the former to results about the latter.
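Theorem 4 can be checked mechanically for maps between finite chains (a toy illustration of my own; the helper names are hypothetical):

```python
from itertools import combinations

# Toy check of the order-theoretic closed graph theorem on finite chains:
# f: X -> Y is monotone increasing iff its graph is totally ordered in the
# product order on X x Y. (Helper names are mine, for illustration only.)
def graph_totally_ordered(f, xs):
    graph = [(x, f(x)) for x in xs]
    def comparable(p, q):
        return (p[0] <= q[0] and p[1] <= q[1]) or (q[0] <= p[0] and q[1] <= p[1])
    return all(comparable(p, q) for p, q in combinations(graph, 2))

def monotone_increasing(f, xs):
    return all(f(a) <= f(b) for a, b in combinations(sorted(xs), 2))

xs = list(range(6))
for g in (lambda x: 2 * x, lambda x: (x - 3)**2, lambda x: 5 - x):
    assert graph_totally_ordered(g, xs) == monotone_increasing(g, xs)
print("Theorem 4 verified on a 6-point chain for 3 sample maps")
```

Here "monotone increasing" is taken in the non-strict sense, matching comparability in the product order.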
A slightly more sophisticated result in the same vein:
Theorem 5 (Closed graph theorem (point set topology)) Let ${X, Y}$ be compact Hausdorff spaces. Then a function ${f: X \rightarrow Y}$ is continuous if and only if the graph ${\Sigma := \{ (x,f(x)): x \in X \}}$ is topologically closed.
Indeed, the “only if” direction is easy, while for the “if” direction, note that if ${\Sigma}$ is a closed subset of ${X \times Y}$, then it is compact Hausdorff, and the projection map from ${\Sigma}$ to ${X}$ is then a bijective continuous map between compact Hausdorff spaces, which is then closed, thus open, and hence a homeomorphism, giving the claim.
Note that the compactness hypothesis is necessary: for instance, the function ${f: {\bf R} \rightarrow {\bf R}}$ defined by ${f(x) := 1/x}$ for ${x \neq 0}$ and ${f(0)=0}$ for ${x=0}$ is a function which has a closed graph, but is discontinuous.
A similar result (but relying on a much deeper theorem) is available in algebraic geometry, as I learned after asking this MathOverflow question:
Theorem 6 (Closed graph theorem (algebraic geometry)) Let ${X, Y}$ be normal projective varieties over an algebraically closed field ${k}$ of characteristic zero. Then a function ${f: X \rightarrow Y}$ is a regular map if and only if the graph ${\Sigma := \{ (x,f(x)): x \in X \}}$ is Zariski-closed.
Proof: (Sketch) For the only if direction, note that the map ${x \mapsto (x,f(x))}$ is a regular map from the projective variety ${X}$ to the projective variety ${X \times Y}$ and is thus a projective morphism, hence is proper. In particular, the image ${\Sigma}$ of ${X}$ under this map is Zariski-closed.
Conversely, if ${\Sigma}$ is Zariski-closed, then it is also a projective variety, and the projection ${(x,y) \mapsto x}$ is a projective morphism from ${\Sigma}$ to ${X}$, which is clearly quasi-finite; by the characteristic zero hypothesis, it is also separated. Applying (Grothendieck’s form of) Zariski’s main theorem, this projection is the composition of an open immersion and a finite map. As projective varieties are complete, the open immersion is an isomorphism, and so the projection from ${\Sigma}$ to ${X}$ is finite. Being injective and separable, the degree of this finite map must be one, and hence ${k(\Sigma)}$ and ${k(X)}$ are isomorphic, hence (by normality of ${X}$) ${k[\Sigma]}$ is contained in (the image of) ${k[X]}$, which makes the map from ${X}$ to ${\Sigma}$ regular, which makes ${f}$ regular. $\Box$
The counterexample of the map ${f: k \rightarrow k}$ given by ${f(x) := 1/x}$ for ${x \neq 0}$ and ${f(0) := 0}$ demonstrates why the projective hypothesis is necessary. The necessity of the normality condition (or more precisely, a weak normality condition) is demonstrated by (the projective version of) the map ${(t^2,t^3) \mapsto t}$ from the cuspidal curve ${\{ (t^2,t^3): t \in k \}}$ to ${k}$. (If one restricts attention to smooth varieties, though, normality becomes automatic.) The necessity of characteristic zero is demonstrated by (the projective version of) the inverse of the Frobenius map ${x \mapsto x^p}$ on a field ${k}$ of characteristic ${p}$.
There are also a number of closed graph theorems for topological groups, of which the following is typical (see Exercise 3 of these previous blog notes):
Theorem 7 (Closed graph theorem (topological group theory)) Let ${X, Y}$ be ${\sigma}$-compact, locally compact Hausdorff groups. Then a function ${X \rightarrow Y}$ is a continuous homomorphism if and only if the graph ${\Sigma := \{ (x,f(x)): x \in X \}}$ is both group-theoretically closed and topologically closed.
The hypotheses of being ${\sigma}$-compact, locally compact, and Hausdorff can be relaxed somewhat, but I doubt that they can be eliminated entirely (though I do not have a ready counterexample for this).
In several complex variables, it is a classical theorem (see e.g. Lemma 4 of this blog post) that a holomorphic function from a domain in ${{\bf C}^n}$ to ${{\bf C}^n}$ is locally injective if and only if it is a local diffeomorphism (i.e. its derivative is everywhere non-singular). This leads to a closed graph theorem for complex manifolds:
Theorem 8 (Closed graph theorem (complex manifolds)) Let ${X, Y}$ be complex manifolds. Then a function ${f: X \rightarrow Y}$ is holomorphic if and only if the graph ${\Sigma := \{ (x,f(x)): x \in X \}}$ is a complex manifold (using the complex structure inherited from ${X \times Y}$) of the same dimension as ${X}$.
Indeed, one applies the previous observation to the projection from ${\Sigma}$ to ${X}$. The dimension requirement is needed, as can be seen from the example of the map ${f: {\bf C} \rightarrow {\bf C}}$ defined by ${f(z) =1/z}$ for ${z \neq 0}$ and ${f(0)=0}$.
(ADDED LATER:) There is a real analogue to the above theorem:
Theorem 9 (Closed graph theorem (real manifolds)) Let ${X, Y}$ be real manifolds. Then a function ${f: X \rightarrow Y}$ is continuous if and only if the graph ${\Sigma := \{ (x,f(x)): x \in X \}}$ is a real manifold of the same dimension as ${X}$.
This theorem can be proven by applying invariance of domain (discussed in this previous post) to the projection of ${\Sigma}$ to ${X}$, to show that it is open if ${\Sigma}$ has the same dimension as ${X}$.
Note though that the analogous claim for smooth real manifolds fails: the function ${f: {\bf R} \rightarrow {\bf R}}$ defined by ${f(x) := x^{1/3}}$ has a smooth graph, but is not itself smooth.
(ADDED YET LATER:) Here is an easy closed graph theorem in the symplectic category:
Theorem 10 (Closed graph theorem (symplectic geometry)) Let ${X = (X,\omega_X)}$ and ${Y = (Y,\omega_Y)}$ be smooth symplectic manifolds of the same dimension. Then a smooth map ${f: X \rightarrow Y}$ is a symplectic morphism (i.e. ${f^* \omega_Y = \omega_X}$) if and only if the graph ${\Sigma := \{(x,f(x)): x \in X \}}$ is a Lagrangian submanifold of ${X \times Y}$ with the symplectic form ${\omega_X \oplus -\omega_Y}$.
In view of the symplectic rigidity phenomenon, it is likely that the smoothness hypotheses on ${f,X,Y}$ can be relaxed substantially, but I will not try to formulate such a result here.
There are presumably many further examples of closed graph theorems (or closely related theorems, such as criteria for inverting a morphism, or open mapping type theorems) throughout mathematics; I would be interested to know of further examples.
245B, Notes 9: The Baire category theorem and its Banach space consequences
1 February, 2009 in 245B - Real analysis, math.FA, math.GN, math.MG | Tags: Baire category theorem, closed graph theorem, non-complemented subspace, open mapping theorem, uniform boundedness principle | by Terence Tao | 38 comments
The notion of what it means for a subset E of a space X to be “small” varies from context to context. For instance, in measure theory, when $X = (X, {\mathcal X}, \mu)$ is a measure space, one useful notion of a “small” set is that of a null set: a set E of measure zero (or at least contained in a set of measure zero). By countable additivity, countable unions of null sets are null. Taking contrapositives, we obtain
Lemma 1. (Pigeonhole principle for measure spaces) Let $E_1, E_2, \ldots$ be an at most countable sequence of measurable subsets of a measure space X. If $\bigcup_n E_n$ has positive measure, then at least one of the $E_n$ has positive measure.
Now suppose that X was a Euclidean space ${\Bbb R}^d$ with Lebesgue measure m. The Lebesgue differentiation theorem easily implies that having positive measure is equivalent to being “dense” in certain balls:
Proposition 1. Let $E$ be a measurable subset of ${\Bbb R}^d$. Then the following are equivalent:
1. E has positive measure.
2. For any $\varepsilon > 0$, there exists a ball B such that $m( E \cap B ) \geq (1-\varepsilon) m(B)$.
Thus one can think of a null set as a set which is “nowhere dense” in some measure-theoretic sense.
It turns out that there are analogues of these results when the measure space $X = (X, {\mathcal X}, \mu)$ is replaced instead by a complete metric space $X = (X,d)$. Here, the appropriate notion of a “small” set is not a null set, but rather that of a nowhere dense set: a set E which is not dense in any ball, or equivalently a set whose closure has empty interior. (A good example of a nowhere dense set would be a proper subspace, or smooth submanifold, of ${\Bbb R}^d$, or a Cantor set; on the other hand, the rationals are a dense subset of ${\Bbb R}$ and thus clearly not nowhere dense.) We then have the following important result:
Theorem 1. (Baire category theorem). Let $E_1, E_2, \ldots$ be an at most countable sequence of subsets of a complete metric space X. If $\bigcup_n E_n$ contains a ball B, then at least one of the $E_n$ is dense in a sub-ball B’ of B (and in particular is not nowhere dense). To put it in the contrapositive: the countable union of nowhere dense sets cannot contain a ball.
Exercise 1. Show that the Baire category theorem is equivalent to the claim that in a complete metric space, the countable intersection of open dense sets remain dense. $\diamond$
Exercise 2. Using the Baire category theorem, show that any non-empty complete metric space without isolated points is uncountable. (In particular, this shows that Baire category theorem can fail for incomplete metric spaces such as the rationals ${\Bbb Q}$.) $\diamond$
To quickly illustrate an application of the Baire category theorem, observe that it implies that one cannot cover a finite-dimensional real or complex vector space ${\Bbb R}^n, {\Bbb C}^n$ by a countable number of proper subspaces (each proper subspace is closed with empty interior, hence nowhere dense). One can of course also establish this fact by using Lebesgue measure on this space. However, the advantage of the Baire category approach is that it also works well in infinite dimensional complete normed vector spaces, i.e. Banach spaces, whereas the measure-theoretic approach runs into significant difficulties in infinite dimensions. This leads to three fundamental equivalences between the qualitative theory of continuous linear operators on Banach spaces (e.g. finiteness, surjectivity, etc.) and the quantitative theory (i.e. estimates):
1. The uniform boundedness principle, that equates the qualitative boundedness (or convergence) of a family of continuous operators with their quantitative boundedness.
2. The open mapping theorem, that equates the qualitative solvability of a linear problem Lu = f with the quantitative solvability.
3. The closed graph theorem, that equates the qualitative regularity of a (weakly continuous) operator T with the quantitative regularity of that operator.
Strictly speaking, these theorems are not used much directly in practice, because one usually works in the reverse direction (i.e. first proving quantitative bounds, and then deriving qualitative corollaries); but the above three theorems help explain why we usually approach qualitative problems in functional analysis via their quantitative counterparts.
http://mathoverflow.net/questions/24031/dimension-of-module
## Dimension of module
Does the dimension of a module (say, the dimension of its support) have anything to do with the supremum of lengths of chains of prime submodules, as it does for rings? Let's restrict to finitely generated modules over a Noetherian ring. Prime submodules are defined analogously to primary submodules: a submodule $P$ in $M$ is prime if $P \neq M$ and $M/P$ has no zero divisors, i.e. $am\in P$ implies $m\in P$ or $a \in \mbox{Ann}(M/P)$.
meta: try to indicate which field of mathematics you're talking about as you begin using terms. For me, a module is more likely to be over a von Neumann algebra or over a tensor category than a Noetherian ring. No one has a monopoly on modules anymore! You did explain which sense you meant, of course, but it took until the second sentence, and most of the terminology of the first sentence doesn't even make sense until you've done so. – Scott Morrison♦ May 9 2010 at 19:16
So you are asking for the relation between the Krull dimension and the prime dimension of a module. I think the two dimensions are equal for multiplication modules. Would that be of interest? – Gjergji Zaimi May 10 2010 at 0:25
## 1 Answer
Let $R$ be an integral domain. Then for the module $R^n$, the maximal length of a chain of prime submodules is much larger than its dimension (for $n \gg 0$).
http://math.stackexchange.com/questions/35409/determining-the-balance-equations-for-a-poisson-process/35474
# Determining the balance equations for a Poisson Process
I'm trying to do an exercise (not homework) and I fail to understand the solution the reader is giving me.
Consider a gas station with one gas pump. Cars arrive at the gas station according to a Poisson process with an arrival rate of 20 cars per hour. An arriving car finding $n$ cars at the station immediately leaves with probability $q_n = \frac{n}{4}$ and joins the queue with probability $1-q_n$, where $n=0,1,2,3,4$. Cars are served in order of arrival. The service time (ie. the time for pumping and paying) is exponential and the mean service time is 3 minutes.
Determine the stationary distribution of the number of cars at the gas station.
Converting everything to minutes we have arrival rate $\lambda = \frac{1}{3}$ and service rate $\mu = \frac{1}{3}$.
Now, the reader I use gives as solution:
Solve the global balance equation $\lambda q_n p_n = \mu p_{n+1}, n=0,1,2,3$.
Here, $p_n = P(L = n)$ is the probability that there are $n$ people in the system (either in the queue or in service).
I fail to see how these balance equations are obtained. If I were to make a guess then I'd say "there are $\lambda$ cars coming to the gas station per minute, of which $(1-q_n)\lambda$ go to the gas station queue, which happens with probability $p_n$. The amount of cars leaving is $\mu p_{n+1}$ because a car was added to the queue, so $\lambda(1-q_n)p_n = \mu p_{n+1}$." I'm sure this doesn't make complete sense, but I'm having a hard time getting a feel for this equation. Any help is appreciated.
I am studying a Masters in Applied Statistics and doing the course Stochastic Models and Forecasting, a similar topic to what you're doing. Some reference books: Introduction to Probability Models by Sheldon Ross, and Probability and Random Processes by Geoffrey Grimmett and David Stirzaker, third edition. Both books are good, but the one by Ross is better to read, has explanations, and is easier to understand – user64079 Feb 26 at 16:04
@user64079: references are always welcome. Thanks! – Stijn Feb 26 at 16:45
## 1 Answer
Their equations $\lambda q_n p_n = \mu p_{n+1}$ are clearly wrong, and your equations $\lambda(1-q_n)p_n = \mu p_{n+1}$ are correct. They accidentally switched $q_n$ and $1-q_n$.
That's nice to know. Thanks. Now I want to make sure I can make the equation rigorous. Could you perhaps comment on how I got the equation, especially on the right hand side? – Stijn Apr 27 '11 at 17:51
Your chain is a "birth and death" process with birth rates $\lambda_n=\lambda (1-q_n)$ and death rates $\mu_n=\mu$. Solving the detailed balance equations $\lambda(1-q_n)p_n = \mu p_{n+1}$ proves that the process is reversible, and that the $p_n$'s are the stationary probabilities. The intuition for the balance equations is that, in the long run, the rate of transitions from $n$ to $n+1$ must equal the rate of transitions from $n+1$ to $n$. – Byron Schmuland Apr 27 '11 at 18:20
I will definitely look into that. Thanks for the help. – Stijn Apr 27 '11 at 18:50
@Stijn This ought to be explained in your textbook, assuming you are using one. If anything is unclear, let me know. Feel free to email me directly if you like, go to my profile page for contact info. – Byron Schmuland Apr 27 '11 at 18:53
I'm taking a Queueing Theory course which uses a reader rather than a full fledged text book so it kind of squeezes in different concepts, which I may or may not have seen before, in not a whole lot of text. I'm planning to look up some birth & death-process basics but if I have questions in the future I'll happily take you up on that offer. Thanks a lot! – Stijn Apr 27 '11 at 21:37
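The detailed balance recursion above is easy to check numerically. Here is a minimal sketch (my own illustration, not from the thread) that solves $\lambda(1-q_n)p_n = \mu p_{n+1}$ with the rates from the problem statement and then normalizes:

```python
from fractions import Fraction

lam = Fraction(1, 3)  # arrival rate: 20 cars/hour = 1/3 car per minute
mu = Fraction(1, 3)   # service rate: mean service time is 3 minutes

# balking probabilities q_n = n/4 for n = 0, 1, 2, 3, 4
q = [Fraction(n, 4) for n in range(5)]

# detailed balance: lam * (1 - q_n) * p_n = mu * p_{n+1}
p = [Fraction(1)]  # unnormalized, start from p_0 = 1
for n in range(4):
    p.append(lam * (1 - q[n]) * p[n] / mu)

# normalize so the probabilities sum to 1
total = sum(p)
p = [x / total for x in p]

print(p)
# [Fraction(32, 103), Fraction(32, 103), Fraction(24, 103),
#  Fraction(12, 103), Fraction(3, 103)]
```

Note that $q_4 = 1$, so an arriving car never joins when 4 cars are already present and the chain stays in $\{0,1,2,3,4\}$.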
http://math.stackexchange.com/questions/tagged/game-theory?sort=faq&pagesize=15
# Tagged Questions
The study of competitive and non-competitive games, equilibrium concepts such as Nash equilibrium, and related subjects. Combinatorial games such as Nim are under (combinatorial-game-theory), and algorithmic aspects (e.g. auctions) are under (algorithmic-game-theory).
### Probability of dice sum just greater than 100
Can someone please guide me to a way by which I can solve the following problem. There is a die and 2 players. Rolling stops as soon as some exceeds 100(not including 100 itself). Hence you have the ...
### A lady and a monster
A famous problem: a lady is in the center of the circular lake and a monster is on the boundary of the lake. The speed of the monster is $v_m$, and the speed of the swimming lady is $v_l$. The goal of ...
### Winning strategy for a matchstick game
There are $N$ matchsticks at the table. Two players play the game. Rules: (i) A player in his or her turn can pick $a$ or $b$ match sticks. (ii) The player who picks the last matchstick loses the ...
### Shifted Young tableaux & Hook numbers & Bulgarian Solitaire
I would like to find articles or documentation regarding this process: Starting from what ever integer partition, e.g. 5,2 for the number 7. Construct his Young tableaux and then fill it with Hook ...
### Deal or no deal: does one switch (to avoid a goat)?/ Should deal or no deal be 10 minutes shorter?
Okay so this question reminded me of one my brother asked me a while back about the hit day-time novelty-worn-off-now snoozathon Deal or no deal. For the uninitiated: In playing deal or no deal, the ...
### Why the set of outcomes generated by a fixed strategy of one player in Gale-Stewart game is a perfect set?
In the proof that there is a payoff set $X$ such that the Gale-Stewart game is not determined(see here, Proposition 3.1.). I don't know why $X$, the set of all outcomes generated by a fixed strategy ...
### Game Theory Matching a Deck of Cards
Moderator Note: This question is from a contest which ended 1 Dec 2012. Suppose we have a deck of cards labeled from $1$ to $52$. Let them be shuffled in a random configuration, then made ...
### Set of rationalizable strategies
Consider a guessing game with ten players, numbered 1 through 10. Simultaneously and independently, the players select integers between 0 and 10. Thus player i's strategy space is $\mathbf{S}_i$ $=$ ...
### Saddle points in zero sum game
We only had one lecture about the subject and already have quite difficult questions, could someone please help me? The matrix looks something like this: \begin{matrix} 3 & 2 & 1 & 4 ...
### Name for a certain “product game”
Let $G,H$ be two (combinatorial impartial) games. Consider the following new game $P$: The positions are the pairs of positions of $G$ and $H$. A move in $P$ is a move in $G$, or a move in $H$, or a ...
### Number of moves to solve a flood-it/sock-dye game
[ Question based on the sock dye game ] [ Update: It appears that this game is better known as "Flood it" and is NP-hard. Also, "the number of moves required to flood the whole board is $\Omega(n)$ ...
### Game theory - self study
I want to self study game theory. Which math-related qualifications should I have? And can you recommend any books? Where do I have to begin?
### Game theory textbooks/lectures/etc
I looking for good books/lecture notes/etc to learn game theory. I do not fear the math, so I'm not looking for a "non-mathematical intro" or something like that. Any suggestions are welcome. Just put ...
### Good non-mathematician book on Game Theory
I'm looking for a good book on Game Theory. I run a software company and from the little I've heard about Game Theory, it seems interesting and potentially useful. I've looked on Amazon.com but ...
### Best Strategy for a die game
You are allowed to roll a die up to six times. Anytime you stop, you get the dollar amount of the face value of your last roll. Question: What is the best strategy? According to my calculation, for ...
### How can the observed strategies* in this actual auction be explained?
This is a "real world" question. Recently I witnessed the separate auctions of about 30 houses. The place where I went uses the following rules. The following describes the procedure for the ...
### Optimal strategy for slice weighing game
I watched an interesting contest on a Swedish game show the other night. I have tried to find an english name of the contest but haven't found any. Two contestants were each given one large sausage ...
### Invariance of strategy-proof social choice function when peaks are made close from solution
A question emerging from reading Schummer, J., & Vohra, R. V. (2002). Strategy-proof Location on a Network. Journal of Economic Theory, 104(2), 405–428. The setting is as follows: A finite set ...
### Number Game: 31 - Winning Strategy?
My Maths teacher taught us how to play a game called 31 on Friday. Not once did my Maths teacher lose. I want to know why. I'll explain the game... 31 is a game between two people. Let's say you've ...
### Which side has winning strategy in Go?
Go is actually a finite two-person game of perfect information and cannot end in a draw. Then by Zermelo's theorem, it is exactly one of the two has winning strategy, either Black or White. So my ...
### What is the (expected) outcome of this hybrid auction?
A certain hybrid auction can be accurately modelled as follows. There are $n$ risk-neutral, rational participants $i=1,2,\ldots,n$, and a guy called Zerro: $i=0$. Each, except Zerro, has a private ...
### Understanding common knowledge in logic and game theory
For $k = 2$, it is merely "first-order" knowledge. Each blue-eyed person knows that there is someone with blue eyes, but each blue eyed person does ''not'' know that the other blue-eyed person ...
### Game theory Computing pure Nash equilibrium probability
We have a $2$-player game and each player has $n$ strategies. The payoffs for each player are in range $\left[0,1\right]$ and are selected at random. Show that the probability that this random game ...
### Game about placing pennies on table
This problem is from The Art and Craft of problem solving book: Consider the following two player game. Each player takes turns placing a penny on the surface of a rectangular table. No penny can ...
### Finding best response function with probabilities (BR) given a normal-matrix representation of the game
We are given players 1, 2 and their respective strategies (U, M, D for player 1, L, C, R for player 2) and the corresponding pay-offs through the following table: \$\begin{matrix} 1|2 & L & C ...
### $n$-player version of Zermelo's Theorem
Zermelo's Theorem states that "Every finite zero-sum 2-player game is determined (one of the two players has a winning strategy)." I was wondering if anyone has investigated the generalization of this ...
### Nim Variant (reducing by divisors)
Alice and Bob play the following game. They choose a number $N$ to play with. The rules are as follows: Alice plays first, and the two players alternate. In his/her turn, a player can subtract from ...
### Determine market price and quantities produced; non-cooperative cournot game
$P(Q)$ represents a market where demand $Q$ is related to price $P$ by $$P(Q) = Q^{-\frac{1}{2}}$$ In this market there are $m$ identical producers, say firm 1, 2, up to $m$ which can produce any ...
### Nash Equilibrium for the prisoners dilemma when using mixed strategies
Consider the following game matrix \begin{array}{l|c|c} & \textbf{S} & \textbf{G} \\ \hline \textbf{S} & (-2,-2) & (-6, -1) \\ \hline \textbf{G} & (-1,-6) ...
### GameTheory, Solve for optimal strategies by solving a system of linear equations
In a book on game theory I saw the following example of a game, a modified version of Roshambo (or Rock-paper-scissors), which is described by the following payoff-matrix: \begin{array}{c|c|c} ...
### Finding all number combination which XOR results to 0
Let's say I have a fixed list of numbers: $2, 3, 1, 2$ and I can reduce every number from $n$ to $0$, for instance: $1,3,1,2$ or $0,3,0,1$ etc. I am looking for all combinations of this sort, where ...
### Nim Variant (Restricted removal)
Alice and Bob play the following game : There are $N$ piles of stones with $S_i$ stones in the $i$th pile. Piles are numbered from 1 to $N$. Alice and Bob play alternately, with Alice starting. In a ...
### Given a victory condition and a set strategy, what are the chances of winning on a given turn in a game of Magic: The Gathering?
Tl;DR: You have winning cards. To win, you must be able to play those cards, and have them in your hand. Your hand is randomly drawn. When might you win? How could find the answer to this (very ...
### Extension of lady and monster
A famous problem: a lady is in the center of the circle lake, the monster is on the boundary of the lake. The speed of the monster is $v_m$, of swimming lady - $v_l$. The goal of the lady is to come ...
### Nash equilibria and best response functions
a) Let $G=(A,u)$ be a strategic game such that, for each $i \in N$ $A_i$ is a nonempty, convex, compact subset of $R^{m_i}$ $u_i$ is continuous For each $a_{-i}$, $u_i(a_{-i}, . )$ is quasi-concave ...
http://cms.math.ca/Events/winter10/abs/cp.html
|
2010 CMS Winter Meeting
Coast Plaza Hotel and Suites, Vancouver, December 4 - 6, 2010 www.cms.math.ca//Events/winter10
Contributed Papers
Org: Martin Barlow (UBC)
[PDF]
BHAGWAN AGGARWALA, University of Calgary
Entry Inhibitors of HIV [PDF]
Entry inhibitors of HIV (Fuzeon is an example) have been approved by the FDA (Food and Drug Administration of the United States) only when taken in combination with other HIV medications, and not otherwise. Using a mathematical model for HIV propagation, we speculate on why this may be a correct policy.
REZA AKHTAR, Miami University
Small-sum pairs in abelian groups [PDF]
Let $A$ and $B$ be subsets of size $k$ in a finite abelian group $G$. Answering a question of Bihani and Jin, we prove that if $A+B$ and $A+A$ both have size $2k-1$, then, under suitable technical hypotheses, $A$ must be a translate of $B$. The main ingredient in the proof is Kemperman's structure theorem. This is joint work with Paul Larson.
MASHHOOR AL-REFAI, Princess Sumaya University for Technology
On Some Properties Defined Over Strongly Graded Rings and Graded Modules [PDF]
Let $G$ be a group with identity $e$. A ring $R$ is said to be $G$-graded if there exist additive subgroups $R_g$ of $R$ such that $R=\underset{g\in G}\bigoplus R_g$ and $R_gR_h\subset R_{gh}$ for all $g,h\in G$. The $G$-graded ring $R$ is denoted by $(R,G)$. We denote by $\text{supp}(R,G)$ the support of $R$, which is defined as $\{g\in G: R_g\not=0\}.$ The elements of $R_g$ are called homogeneous of degree $g$. Each $x\in R$ can be written uniquely as $\sum_{g\in G}x_g$, where $x_g$ is the component of $x$ in $R_g$. Also, we write $h(R)=\bigcup_{g\in G}R_g.$
Many studies in group graded rings assume $R$ to be a strongly graded ring, i.e., $R_gR_h=R_{gh}$ for all $g,h\in G$. But this strong condition is hard to satisfy.
In 1995, we defined three successively stronger properties that a grading may have, and we investigated the relationship between these strong gradings and the stronger non-degenerate and faithful properties which are motivated by the work of Cohen and Rowen.
We will define new types of strongly graded rings and strongly graded modules and introduce some properties defined over strongly graded rings. A survey of my contribution to the field will also be given.
A. BASS BAGAYOGO, University College of Saint-Boniface
Discrete Element Method for Granular Flow and Cracks Propagation [PDF]
Granular Materials (GM) are everywhere in nature and are the second-most manipulated material in industry after water, but as once written by Pierre-Gilles de Gennes, their statistical physics is still in its infancy. In this talk, after a short overview of the mathematical challenges and the state of the art related to the diverse set of behaviors of GM, I will present some numerical simulations results, by using the contemporary Discrete Element Method (DEM) in order to simulate a wide variety of cases. I will also characterize the industrial relevance of the simulations, and the link with the cracks propagation.
ROSS CHURCHLEY, University of Victoria
A graph is called {\em monopolar} if its vertices can be partitioned into an independent set and a disjoint union of cliques. Monopolar graphs, which include all bipartite and split graphs, form an important subclass of the so-called polar graphs. We present a structural characterization of monopolar claw-free graphs which suggests a simple $O(n^3)$ algorithm for their recognition. This contrasts with the NP-completeness of related recognition problems, including those for monopolar graphs in general and for polar claw-free graphs.
PINAR COLAK, Simon Fraser University
Two-sided chain conditions in Leavitt path algebras [PDF]
Leavitt path algebras are a natural generalization of the Leavitt algebras, which are a class of algebras introduced by Leavitt in 1962. For a directed graph $E$, the Leavitt path algebra $L_K(E)$ of $E$ with coefficients in $K$ has received much recent attention both from algebraists and analysts over the last decade. So far, some of the algebraic properties of Leavitt path algebras have been investigated, including primitivity, simplicity and being Noetherian.
First, we explicitly describe the generators of two-sided ideals in Leavitt path algebras associated to arbitrary graphs. We show that any two-sided ideal $I$ of a Leavitt path algebra associated to an arbitrary graph is generated by elements of the form $(v + \sum_{i=1}^n\lambda_i g^i)(v - \sum_{e\in S} ee^*)$, where $g$ is a cycle based at vertex $v$, and $S$ is a finite subset of $s^{-1}(v)$. Then, we use this result to describe the necessary and sufficient conditions on the arbitrary sized graph $E$, such that the Leavitt path algebra associated to $E$ satisfies two-sided chain conditions. This is joint work with Dr. Gene Abrams, Dr. Jason P. Bell and Dr. Kulumani M. Rangaswamy.
LORRAINE DAME, University of Victoria
Student Readiness and Success in Entry Level Undergraduate Mathematics [PDF]
Which elements of a student's preparation are predictors of success in entry level undergraduate math (ELUM) courses? This presentation describes recent research at the University of Victoria, which includes studies of the relationships between ELUM course outcomes, high school grades, and diagnostic test scores. It shows that higher grades in secondary school English and Math go together with a greater probability of success and higher grades in ELUM courses. The results of an in-house developed diagnostic test show that students identified as at-risk were significantly more likely to fail or drop an ELUM course.
DENNIS EPPLE, University of Victoria
Proper Circular Arc Graphs and Path Systems on Tori [PDF]
Proper Circular Arc graphs are a generalization of proper interval graphs. In this talk, it will be shown how colourings of proper circular arc graphs, permutation groups and path systems on tori are intertwined and how these concepts can be used to derive an algebraic classification of maximal $k$-colourable proper circular arc graphs.
ROSS J. KANG, Durham University
Maximum bounded-density subgraphs of random graphs [PDF]
For the Erd{\H o}s-R\'enyi random graph, we give a precise asymptotic formula for the order of a largest vertex subset whose induced subgraph has average degree at most $t$, given that $p = p(n) \ge n^{-2/9}n^{\varepsilon}$ for some fixed $\varepsilon > 0$, $p$ is bounded away from $1$, and $t = t(n) = o(\log (n p) / \log \log (n p))$. For $t^2 = o(\log (n p) / \log \log (n p))$, we obtain two-point concentration. This generalises a theorem on the independence number of random graphs. For both the lower and upper bounds, our proofs rely on large deviations inequalities for the binomial distribution. We provide a comparison with a formula for the order of a largest vertex subset whose induced subgraph has maximum degree at most $t$, which was obtained instead by methods from analytic combinatorics. This is joint work with Nikolaos Fountoulakis and Colin McDiarmid.
HUILAN LI, TRUEMAN MACHENRY, Drexel University
The Convolution Ring of Arithmetic Functions and Symmetric Polynomials [PDF]
Inspired by Rearick (1968), we introduce two new operators, LOG and EXP. The LOG operates on generalized Fibonacci polynomials giving generalized Lucas polynomials. The EXP is the inverse of LOG. In particular, LOG takes a convolution product of generalized Fibonacci polynomials to a sum of generalized Lucas polynomials and EXP takes the sum to the convolution product. We use this structure to produce a theory of logarithms and exponentials within arithmetic functions giving another proof of the fact that the group of multiplicative functions under convolution product is isomorphic to the group of additive functions under addition. The hyperbolic trigonometric functions are constructed from the EXP operator, again, in the usual way.
SHAHLA NASSERASR, University of Victoria
Complete Solution to the TP$_2$-Completion Problem [PDF]
A matrix is called TP$_2$ if all 1-by-1 and 2-by-2 minors are positive. The TP$_2$-completion problem asks which partial matrices have a TP$_2$-completion. For each given pattern of the specified entries, an explicit finite list of polynomial inequalities in the specified entries is given that characterizes the TP$_2$-completability of any partial matrix with that pattern. The method uses a generalized form of the Bruhat order on permutations, some new partial orders on matrices, and the logarithmic method to reduce the TP$_2$-completion problem to determining the generators of a certain finitely generated, pointed cone. An algorithm that finds these polynomial (in fact monomial) inequalities for a given pattern is given.
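As a concrete illustration of the TP$_2$ definition (my own sketch, not part of the abstract): for a matrix with all entries positive, positivity of the contiguous 2-by-2 minors already implies positivity of all 2-by-2 minors, so the condition for a fully specified matrix can be checked as follows.

```python
def is_tp2(a):
    """Check that all 1x1 and 2x2 minors of the matrix `a` are positive.

    When every entry is positive, positivity of the contiguous
    (adjacent-rows, adjacent-columns) 2x2 minors implies positivity
    of all 2x2 minors, so only those need to be tested.
    """
    m, n = len(a), len(a[0])
    # 1x1 minors are just the entries
    if any(a[i][j] <= 0 for i in range(m) for j in range(n)):
        return False
    # contiguous 2x2 minors
    return all(
        a[i][j] * a[i + 1][j + 1] - a[i][j + 1] * a[i + 1][j] > 0
        for i in range(m - 1)
        for j in range(n - 1)
    )
```

For example, `[[1, 1], [1, 2]]` is TP$_2$, while `[[1, 2], [2, 1]]` is not (its determinant is negative).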
JAMES NASTOS, UBCO
A novel branching strategy for parameterized graph modification problems [PDF]
Many \emph{fixed-parameter tractable} algorithms using a bounded search tree have been repeatedly improved by describing a larger number of branching rules involving an increasingly complex case analysis. We introduce a novel and general branching strategy that branches on the forbidden subgraphs of a relaxed class of graphs. By using the class of $P_4$-sparse graphs as the relaxed graph class, we obtain efficient bounded-search tree algorithms for several parameterized deletion problems. For the cograph edge-deletion problem and the trivially perfect edge-deletion problem, the branching strategy yields the first non-trivial bounded-search tree algorithms. For the cograph vertex deletion problem, the running time of our simple bounded search algorithm matches those previously designed with the help of complicated case distinctions and non-trivial running time analysis and computer-aided branching rules.
VARVARA SHEPELSKA, University of Manitoba
Slicely Countably Determined Banach Spaces [PDF]
We introduce the class of slicely countably determined Banach spaces which contains in particular all spaces with the Radon-Nikod\'ym property and all spaces without copies of $\ell_1$. We present many examples and several properties of this class. We give some applications to Banach spaces with the Daugavet and the alternative Daugavet properties, lush spaces and Banach spaces with numerical index 1. In particular, we show that the dual of a real infinite-dimensional Banach space with the alternative Daugavet property contains $\ell_1$ and that operators which do not fix copies of $\ell_1$ on a space with the alternative Daugavet property satisfy the alternative Daugavet equation.
CHESTER JAY WEATHERBY, University of Delaware
On the transcendence of Fourier and other infinite series [PDF]
We investigate the transcendental nature of the sums $$\sum_{n \in \mathbb{Z}} {f(n)A(n)\over B(n)} \hspace{0.25cm} \textrm{and} \hspace{0.25cm} \sum_{n \in \mathbb{Z}} {A(n)\over B(n)}$$ where $A(x),B(x)$ are polynomials with algebraic coefficients with $\deg A < \deg B$, $f$ is an algebraic valued periodic function, and the sum is over integers $n$ which are not zeros of $B(x)$. By relating these sums to the Fourier series of some special functions we are able to obtain transcendence results. In certain cases we relate these sums to a theorem of Nesterenko regarding the algebraic independence of $\pi$ and $e^{\pi \sqrt{D}}$ for positive integer $D$.
YONGJUN XING, Mathematics and Statistics of University of Regina
Spread of some classes of normal matrices [PDF]
The spread of a matrix has extensive practical applications in combinatorial optimization and cybernetics problems. The spread of a matrix is defined as the maximum absolute value of the difference between any two eigenvalues of that matrix. There are many existing papers dealing with bounding the spread of a matrix in general. Of interest to us is the spread of n-by-n normal matrices with entries in a closed set. In this paper, we are interested in the classes of real skew-symmetric matrices, complex Hermitian matrices and complex skew-Hermitian matrices, and we determine the structure of the matrices, in each class, whose spread attains the maximum value.
http://physics.stackexchange.com/questions/20855/show-that-the-energy-levels-of-a-particle-in-a-specific-potential-are-e-n-n-f?answertab=votes
# Show that the energy levels of a particle in a specific potential are $E_n=(n+\frac{1}{2})\hbar\omega-\frac{1}{2}\frac{F^2}{m\omega^2}$ [closed]
A particle of mass $m$ moves on the x-axis under the influence of the potential $$V(x)=\frac{1}{2}m\omega^2x^2+Fx$$ Can anyone help me show, using Schrödinger's equation in one dimension, that the energy levels are: $$E_n=\left(n+\frac{1}{2}\right)\hbar\omega-\frac{1}{2}\frac{F^2}{m\omega^2}$$ where $n$ is a non-negative integer?
What have you tried, and what concept exactly is giving you trouble? Remember that this is not a site to get people to do your homework for you. – David Zaslavsky♦ Feb 12 '12 at 2:42
## closed as too localized by David Zaslavsky♦ Feb 12 '12 at 2:41
## 2 Answers
Try a change of coordinates $x\rightarrow x-x_0$, where $x_0$ is an appropriate constant.
$V(x)=\frac{1}{2}m\omega^2x^2+Fx=\frac{1}{2}m\omega^2\left(x^2+\frac{2F}{m\omega^2}x\right)=\frac{1}{2}m\omega^2\left(\left(x+\frac{F}{m\omega^2}\right)^2-\frac{F^2}{m^2\omega^4}\right)=\frac{1}{2}m\omega^2x'^2-\frac{1}{2}\frac{F^2}{m\omega^2}$ with $x'=x+\frac{F}{m\omega^2}$, so the potential is that of an oscillator minus a constant. The energy levels are therefore offset by this constant.
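Not part of the original answer, but a quick numerical sanity check of the shifted spectrum (the parameter values below are arbitrary, with $\hbar=1$): diagonalizing a central-difference Hamiltonian for $V(x)$ reproduces $E_n=(n+\tfrac12)\omega-\tfrac{F^2}{2m\omega^2}$.

```python
import numpy as np

# Arbitrary illustrative parameters, with hbar = 1
m, w, F = 1.0, 1.0, 0.5

# Central-difference Hamiltonian H = -(1/2m) d^2/dx^2 + V on a grid
N, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
V = 0.5 * m * w**2 * x**2 + F * x

H = (np.diag(1.0 / (m * dx**2) + V)
     - np.diag(np.full(N - 1, 0.5 / (m * dx**2)), 1)
     - np.diag(np.full(N - 1, 0.5 / (m * dx**2)), -1))
E = np.linalg.eigvalsh(H)

shift = -0.5 * F**2 / (m * w**2)
for n in range(4):
    print(E[n], (n + 0.5) * w + shift)  # numerical vs analytic level
```

The lowest few numerical eigenvalues agree with the shifted oscillator formula to the accuracy of the finite-difference grid.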
http://nrich.maths.org/8408
# Number Spirals
##### Stage: 2 Challenge Level:
Let's explore making spirals from the middle outwards.
We'll use a short list of numbers repeated over and over again.
Here we see one I've explored a bit. I used the numbers $1$ to $6$ and went anticlockwise.
I noticed that often when I got to the number $6$ I had completed a rectangle.
I coloured these $6$'s yellow and drew rectangles out of dashes.
You could explore that idea further. There are many other things you can explore.
We could also look at starting it in a different way.
Perhaps you could try a new one using the numbers $1$ to $4$.
Try any other sets of numbers and explore what happens when you use them to form a spiral.
The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
http://math.stackexchange.com/questions/24743/does-exceptionalism-persist-as-sample-size-gets-large
# Does exceptionalism persist as sample size gets large?
Which of the following is more surprising?
1. In a group of 100 people, the tallest person is one inch taller than the second tallest person.
2. In a group of one billion people, the tallest person is one inch taller than the second tallest person.
Put more precisely, suppose we have a normal distribution with given mean $\mu$ and standard deviation $\sigma$. If we sample from this distribution $N$ times, what is the expected difference between the largest and second largest values in our sample? In particular, does this expected difference go to zero as $N$ grows?
In another question, it is explained how to compute the distribution $MAX_N$ of the maximum, but I don't see how to extract an estimate for the expected value of the maximum from that answer. Though $E(MAX_N)-E(MAX_{N-1})$ isn't the number I'm looking for, it might be a good enough estimate to determine if the value goes to zero as $N$ gets large.
For continuous distributions, 2 is much more surprising. For discrete distributions with spacing over one inch, less so, particularly if there is only one item at each height. – Ross Millikan Mar 3 '11 at 6:08
@Ross: why is continuity important? The answer I most believe right now (Michael's) suggests that 2 is indeed much more surprising, but that this is special to the normal distribution. That is, if you sample from a continuous distribution with a longer tail, 1 is much more surprising. – Anton Geraschenko Mar 3 '11 at 6:12
I was thinking continuity is only important because samples can be close. Think of taking one billion samples from a distribution of the even naturals from 0 to 10^10^10. You wouldn't be surprised that the largest was more than 1 because you would have to get two from the same bin. But over that range, even with a continuous distribution you would expect the gap from the top to the second to be >1. – Ross Millikan Mar 3 '11 at 16:24
## 6 Answers
A quick heuristic attempt at this: first, standard results on order statistics tell us that if we take $n$ samples from any distribution, with CDF $F$, the $k$th shortest person out of $n$ will typically have height around $F^{-1}(k/(n+1))$.
So fix $\mu = 0$ and $\sigma = 1$. Then we expect the height of the tallest out of $n-1$ people to be around $\Phi^{-1}(1-1/n)$, and the height of the second tallest to be around $\Phi^{-1}(1-2/n)$, where $\Phi$ is the standard normal CDF. The question, then, is what happens to $\Phi^{-1}(1-1/n)-\Phi^{-1}(1-2/n)$ as $n$ gets large.
Now, it's a standard estimate that for large $z$, $1-\Phi(z) \approx \phi(z)/z$, where $\phi(z) = e^{-z^2/2}/\sqrt{2\pi}$ is the standard normal PDF. So let $\epsilon = 1-\Phi(z)$; then we get $\epsilon \approx \phi(z)/z$. Inverting gives the approximation
$$\Phi^{-1}(1-\epsilon) \approx W\left( {1 \over 2\epsilon^2 \pi} \right)^{1/2}$$,
where $W$ is the Lambert $W$ function, the inverse of $x \rightarrow xe^x$. In particular, if $\epsilon = 1/n$, then we have
$$\Phi^{-1}(1-1/n) \approx W \left( {n^2 \over 2 \pi} \right)^{1/2}$$.
So finally the question becomes, what happens to
$$W\left( {n^2 \over 2 \pi} \right)^{1/2} - W\left( {n^2 \over 8\pi} \right)^{1/2}$$
as $n$ gets large? It appears that this goes to zero as $n$ gets large; that is, smaller gaps are expected between the smallest and second smallest entries in larger samples from the normal distribution. So (2) is more surprising.
That being said, I've thrown out a lot here, but I'm guessing that this captures the correct asymptotics.
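A quick Monte Carlo check of this conclusion (my own sketch, not part of the answer): the average gap between the top two of $n$ standard normal samples shrinks roughly like $1/\sqrt{2\ln n}$.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_top_gap(n, trials=1000):
    """Average of X_(n) - X_(n-1) over repeated samples of n N(0,1) draws."""
    s = np.sort(rng.standard_normal((trials, n)), axis=1)
    return float(np.mean(s[:, -1] - s[:, -2]))

for n in [100, 1_000, 10_000]:
    print(n, mean_top_gap(n), 1 / np.sqrt(2 * np.log(n)))
```

The simulated gap decreases with $n$ and tracks the $1/\sqrt{2\ln n}$ prediction reasonably well.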
I like it. This suggests that the difference between the top two will go to zero precisely when the original PDF decays faster than $1/x^2$ (i.e. the CDF approaches $1$ faster than $1-1/x$). Intuitively, it's believable that the $k$-th shortest person will "typically" have height $F^{-1}(k/(n+1))$, but I couldn't find this statement on the wikipedia page. Is this really the correct expected value? – Anton Geraschenko Mar 3 '11 at 5:54
@Anton Geraschenko: In general it is not true. What is true is that for a sample size $n$ from a uniform distribution on [0,1], the expected value of the $k$th ordered value is $k/(n+1)$: it has a Beta distribution. – Henry Mar 3 '11 at 9:58
Henry's comment is correct; that's why I used words like "typical" that don't have technical meanings. – Michael Lugo Mar 3 '11 at 14:56
If $\mu_{k:n}$ is the mean of the $k$th largest value, then, for an N$(0,1)$ distribution, $F^{-1}(\frac{k-1}{n}) \leq \mu_{k:n} \leq F^{-1}(\frac{k}{n})$ (David and Nagaraja, Order Statistics, p. 81). Perhaps these bounds can be used to make Michael's argument more rigorous? – Mike Spivey Mar 3 '11 at 17:45
@Anton Geraschenko: Neither it's true that a decay faster than 1/x^2 implies that the difference will go to zero. See the exponential case, in my answer or in Shai Covo's – leonbloy Mar 3 '11 at 18:24
The precise version of the question was answered in the affirmative in the paper "Extremes, Extreme Spacings, and Tail Lengths: An Investigation for Some Important Distributions," by Mudholkar, Chaubey, and Tian (Calcutta Statistical Association Bulletin 61, 2009, pp. 243-265). (Unfortunately, I haven't been able to find an online copy.)
Let $X_{i:n}$ denote the $i$th order statistic from a random sample of size $n$. Let $S_{n:n} = X_{n:n} - X_{n-1:n}$, the rightmost extreme spacing. The OP asks for $E[S_{n:n}]$ when sampling from a normal distribution.
The authors prove that, for an $N(0,1)$ distribution, $\sqrt{2 \log n}$ $S_{n:n}$ converges in distribution to $\log Z - \log Y$, where $f_{Z,Y}(z,y) = e^{-z}$ if $0 \leq y \leq z$ and $0$ otherwise.
Thus $S_{n:n} = O_p(1/\sqrt{\log n})$ and therefore converges in probability to $0$ as $n \to \infty$. So $\lim_{n \to \infty} E[S_{n:n}] = 0$ as well. Moreover, since $E[\log Z - \log Y] = 1$, $E[S_{n:n}] \sim \frac{1}{\sqrt{2 \log n}}$. (For another argument in favor of this last statement, see my previous answer to this question.)
In other words, (2) is more surprising.
Added: This, does, however, depend on the fact that the sampling is from the normal distribution. The authors classify the distribution of extreme spacings as ES short, if $S_{n:n}$ converges in probability to $0$ as $n \to \infty$; ES medium, if $S_{n:n}$ is bounded but non-zero in probability; and ES long, if $S_{n:n}$ diverges in probability. While the $N(0,1)$ distribution has ES short right tails, the authors show that the gamma family has ES medium right tails (see Shai Covo's answer for the special case of the exponential) and the Pareto family has ES long right tails.
Let $\mu_{i:n}$ denote the mean of the $i$th largest observation from a sample of size $n$ from a distribution. The question is about the behavior of $\mu_{n:n} - \mu_{n-1:n}$ for a normal distribution. David and Nagaraja's Order Statistics says that
1. For an $N(0,1)$ distribution, $\mu_{n:n}$ has the asymptotically dominant term $\sqrt{2\log n}$ (pp. 302-303, see also Shai Covo's answer).
2. For any distribution, $\mu_{n-1:n} + (n-1)\mu_{n:n} = n \mu_{n-1,n-1}$ (p. 44).
From (2), we have $\mu_{n:n} - \mu_{n-1:n} = n (\mu_{n:n} - \mu_{n-1,n-1})$.
Adding the asymptotic from (1), we see that $\mu_{n:n} - \mu_{n-1:n}$ has the asymptotically dominant term $n \left(\sqrt{2\log n} - \sqrt{2\log (n-1)} \right)$, which is asymptotic to $$\frac{n}{n \sqrt{2 \log n}} = \frac{1}{\sqrt{2 \log n}},$$ which tells us both that $\mu_{n:n} - \mu_{n-1:n} \to 0$ and the rate at which it does so.
Revised answer.
A very accurate approximation for the case of the normal distribution can be found in this paper. Let $X_{1:n} \leq X_{2:n} \leq \cdots \leq X_{n:n}$ be the ordered statistics obtained from a random sample $X_1,X_2,\ldots,X_n$, where $X_i \sim {\rm Normal}(\mu,\sigma^2)$. According to Eq. (2), for $i \geq n/2$ and as $n \to \infty$, $$X_{i:n} \approx \mu + \sigma \bigg[\sqrt {2\ln n} - \frac{{\ln (\ln n) + \ln (4\pi ) - 2W_{i:n} }}{{2\sqrt {2\ln n} }}\bigg],$$ where $W_{i:n}$ has the density $$g_{i:n} (w) = \frac{1}{{(n - i)!}}\exp ( - (n - i + 1)w - \exp ( - w)), \;\; - \infty < w < \infty .$$ Thus, for example, $$g_{n:n} (w) = \exp ( - w - \exp ( - w)), \;\; - \infty < w < \infty$$ and $$g_{n-1:n} (w) = \exp ( - 2w - \exp ( - w)), \;\; - \infty < w < \infty .$$ According to Eqs. (3) and (4) of that paper, $${\rm E}[X_{n:n} ] \approx \mu + \sigma \bigg[\sqrt {2\ln n} - \frac{{\ln (\ln n) + \ln (4\pi ) - 2 \cdot 0.5772}}{{2\sqrt {2\ln n} }}\bigg]$$ and $${\rm Var}[X_{n:n} ] \approx \frac{{\sigma ^2 \cdot 1.64493}}{{2\ln n}}.$$
Some general facts, which are somewhat useful in our context. If $X_{1:n} \leq X_{2:n} \leq \cdots \leq X_{n:n}$ are the ordered statistics obtained from a random sample $X_1,X_2,\ldots,X_n$, where the $X_i$ have cdf $F$ and pdf $f$, then $${\rm E}[X_{i:n}] = \frac{{n!}}{{(i - 1)!(n - i)!}}\int_{ - \infty }^\infty {x [ F(x)] ^{i - 1} [ 1 - F(x)] ^{n - i} f(x)\,dx}.$$ By an exercise in a book on order statistics, $${\rm E}[X_{r + 1:n} - X_{r:n} ] = {n \choose r}\int_{ - \infty }^\infty {[F(x)]^r [1 - F(x)]^{n - r}\, dx} ,\;\; r = 1, \ldots ,n - 1.$$ Letting $r=n-1$ thus gives $${\rm E}[X_{n:n} - X_{n-1:n} ] = n \int_{ - \infty }^\infty {[F(x)]^{n-1} [1 - F(x)]\, dx}.$$ Applying this formula to the case of exponential with mean $\theta$ gives a constant difference: $${\rm E}[X_{n:n} - X_{n-1:n} ] = n\int_0^\infty {(1 - e^{ - x/\theta } )^{n - 1} e^{ - x/\theta } \, dx} = \theta.$$ Nevertheless, the corresponding pdf, $\theta ^{ - 1} e^{ - x/\theta } \mathbf{1}(x \geq 0)$, goes to zero much faster than, say, $1/x^2$ as $x \to \infty$. In fact, $X_{n:n} - X_{n-1:n}$ is exponentially distributed with mean $\theta$ (see also leonbloy's answer). Indeed, substituting the exponential cdf $F(x)=(1-e^{-x/\theta})\mathbf{1}(x \geq 0)$ and pdf $f(x)=\theta^{-1} e^{-x/\theta}\mathbf{1}(x \geq 0)$ into the general formula $$f_{X_{n:n} - X_{n-1:n} } (w) = \frac{{n!}}{{(n - 2)!}}\int_{ - \infty }^\infty {[F(x)]^{n - 2} f(x)f(x + w)\,dx},\;\; 0 < w < \infty$$ for the density of $X_{n:n}-X_{n-1:n}$ (which is a special case of the formula for $X_{j:n}-X_{i:n}$, $1 \leq i < j \leq n$), gives $$f_{X_{n:n} - X_{n-1:n} } (w) = \theta^{-1}e^{-w/\theta}, \;\; 0 < w < \infty,$$ that is, $X_{n:n} - X_{n-1:n}$ is exponential with mean $\theta$.
What I found is that $${\rm E}[X_{r + 1:n} - X_{r:n} ] = {n \choose r}\int_{ - \infty }^\infty {[1-F(x)]^r [F(x)]^{n - r}\, dx} ,\;\; r = 1, \ldots ,n - 1.$$ If the pdf is symmetric, such as normal, it wouldn't matter, but if the pdf is not symmetric, it matters. I think the difference between the largest and the second largest goes to 0 when n increases for any continuous real distribution. Intuitively, it makes sense. – Theta30 Mar 3 '11 at 6:09
@Mielo: where did you find that formula? – Shai Covo Mar 3 '11 at 6:18
"Expected values of normal order statistics", Biometrika, 48 – Theta30 Mar 3 '11 at 6:25
@Mielo: I don't have access to that paper. The formula given in my answer should be correct. – Shai Covo Mar 3 '11 at 6:32
@Mielo: In the paper you found, $X_{n:n}$ denotes the largest order statistic? Is the cdf assumed symmetric? – Shai Covo Mar 3 '11 at 7:03
I'd take this approach:
• Call $p_d= P(x_1 > x_2 + d)$
• The probability that sample $x_1$ is the largest value AND exceeds the second largest by more than $d$ is just $p_d^{N-1}$
• The probability that the largest value exceeds the second largest by more than $d$ is then $N p_d^{N-1}$ WRONG-fixed below
To compute $p_d$: this is the probability that the difference of two iid normals exceeds $d$. But the difference of two normals is a normal with zero mean and variance $2 \sigma^2$, so that probability is given by the normal cumulative distribution function. What matters to us is that it's a constant that doesn't depend on $N$.
So the probability in question (fixed $d$, $N \to \infty$) tends to zero.
UPDATE: As correctly pointed out in the comments, the second step is wrong, the events are not independent. They are independent, though, if $x_1$ is fixed - that is, they are conditionally independent. So:
$P(x_1 > x_2 + d | x_1) = F_x(x_1 - d)$
$P(x_1 > x_i + d | x_1) = F_x^{N-1}(x_1 - d)$
And $P(x_1 > x_i + d) = \int_{-\infty}^{\infty}F_x^{N-1}(x - d) f_x(x) dx$
(this tends to $1/N$ as $d \to 0^+$, as was to be expected)
So, restricting to $d \ge 0$, the probability that the largest value exceeds the second largest by more than $d$ is (if I have not messed up anything else):
$P(x_A - x_B > d)= N \int_{-\infty}^{\infty} F_x^{N-1}(x - d) f_x(x) dx$
where $x_A$ is the largest value and $x_B$ the second one.
The question is about the above formula going to zero or not for fixed $d$, and growing N. That seems to depends on the density.
As a check: if $x$ is exponential with parameter $\lambda$, the integrals can be evaluated, and I get:
$P(x_A - x_B > d) = e^{- \lambda d}$
This (besides tending to 1 for $d \to 0^+$, as it should) does not depend on $N$. Hence, for an exponential distribution the two events of the original question are equally surprising.
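This constant-gap behaviour for the exponential is easy to confirm by simulation (my own sketch; the rate $\lambda = 2$ is an arbitrary choice): the top spacing is itself Exponential($\lambda$), whatever the sample size.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_max_gap(n, lam=2.0, trials=10_000):
    """Mean of X_(n) - X_(n-1) for n i.i.d. Exponential(rate=lam) draws."""
    x = np.sort(rng.exponential(1 / lam, size=(trials, n)), axis=1)
    return float(np.mean(x[:, -1] - x[:, -2]))

# The mean gap stays near 1/lam = 0.5 no matter how large n gets
for n in [10, 100, 1_000]:
    print(n, mean_max_gap(n))
```

Unlike the normal case, the estimated mean gap does not shrink as $n$ grows.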
You can't treat the events $(x_1>x_i+d)$ as independent. For example, if $d=0$, then $p_d=1/2$ (regardless what the original distribution was), but the probability that the largest value exceeds the second largest value by at least $0$ is clearly $1$, not $N/2^{N-1}$. – Anton Geraschenko Mar 3 '11 at 5:40
You are totally right. I tried to fix it. – leonbloy Mar 3 '11 at 15:10
I'm no statistician, but it would clearly depend on what type of distribution you're talking about. The highest-rated answer is talking about PDFs and CDFs, and then he goes on to pick a CDF and do calculations off of that. The problem with that is that not everything is a bell curve or other very common distribution.
Given a completely flat distribution, such as a computer's random number generator, outliers will evaporate very quickly.
Given a bell curve, such as your height example, outliers will be semi-persistent.
For a distribution with infinite variance, there is no reason for outliers to have close neighbors.
http://math.stackexchange.com/questions/58576/how-to-show-martingale-is-bounded-in-l1
# How to show martingale is bounded in $L^1$?
Fix a filtered probability space $(\Omega,\mathcal{F},(\mathcal{F}_n)_{n\geq 0},\mathbb{P})$ and an $L^1$-bounded submartingale $X_n$.
We can show that, for $n\geq 0$, the sequence $(\mathbb E[X^+_p|\mathcal{F}_n],p\geq n)$ is increasing and converges to an a.s. limit $M_n$. We want to show that $(M_n)_{n\geq 0}$ is an $L^1$-bounded martingale.
$$\mathbb E[M_{n+1}|\mathcal{F}_n]=\mathbb E\Big[\lim_{p\to\infty}\mathbb E[X_p^+|\mathcal{F}_{n+1}]\,\Big|\,\mathcal{F}_n\Big]$$
So it seems the right tool here is dominated convergence. By the conditional dominated convergence theorem, $M$ is a martingale if, for $n\geq0$, $\mathbb E|M_n|<\infty$.
How do we show that $M$ is $L^1$-bounded?
Thank you.
## 1 Answer
The $L^1$-bounded submartingale $(X^+_p)$ has a Riesz decomposition $$X^+_p=Y_p+Z_p\hskip1.5cm (1)$$ where $(Y_p)$ is a martingale and $Z_p\to 0$ in $L^1$. Conditioning on ${\cal F}_n$ where $p\geq n$, we get $$\mathbb{E}(X^+_p\, |\,{\cal F}_n) =\mathbb{E}(Y_p\, |\,{\cal F}_n)+\mathbb{E}(Z_p\, |\,{\cal F}_n)=Y_n +\mathbb{E}(Z_p\, |\,{\cal F}_n).$$ Letting $p\to\infty$ shows that $\mathbb{E}(X^+_p\, |\,{\cal F}_n)$ converges to $Y_n$ in $L^1$. The martingale $(Y_n)$ is $L^1$ bounded by (1).
The Riesz decomposition follows because $(X^+_p)$ is a quasimartingale, see this reference.
http://math.stackexchange.com/questions/283591/hyperbola-is-a-pair-of-straight-lines
# Hyperbola is a pair of straight lines?
I'm confused by this question:
If $f(x,y) = 2x^2 - 6y^2+xy+2x-17y-12=0$ is to represent a pair of straight lines, one of which has equation $x+2y+3=0$, what must be the equation of the other line? Verify that $f(x,y)=0$ does, indeed, represent a pair of straight lines.
Given the general form of a conic section $Ax^2+By^2+Cxy+Dx+Ey+F=0$, we know that if $C^2 > 4AB$, as is the case here, it's a hyperbola. Therefore I don't get how the equation can represent 2 straight lines. Any clues?
Conceptually, simply recall that a conic section is, of course, the intersection of a plane with a cone. (See en.wikipedia.org/wiki/Conic_section ) When the plane happens to pass through the apex of the cone, you get the degenerate cases: the "point" ellipse/circle, the "line" parabola, and the "crossed lines" hyperbola (which is effectively its own set of asymptotes). – Blue Jan 21 at 17:41
## 1 Answer
Dividing $f(x,y)$ through by the suggested $x+2y+3$ gives $$f(x,y) = (x+2y+3)(2x-3y-4)=0.$$ The product is zero when either $x+2y+3=0$ or $2x-3y-4=0$, both of which are equations for lines.
You're right that $f$ has positive discriminant, but it happens to be a reducible degenerate conic. Maybe the simplest example is $y^2-x^2=0$, which is clearly a pair of lines. Generally speaking, a conic section $f(x,y)=0$ will be degenerate any time you can factor $f(x,y) = a(x,y)b(x,y)$.
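The coefficient-matching needed to recover the second line can be done by hand, and the resulting identity checked numerically (a small stdlib-only sketch, not part of the original answer):

```python
import random

# Match coefficients in f(x, y) = (x + 2y + 3)(a x + b y + c):
# x^2 term: a = 2;  y^2 term: 2b = -6 so b = -3;  constant: 3c = -12 so c = -4
a, b, c = 2, -3, -4

f = lambda x, y: 2*x*x - 6*y*y + x*y + 2*x - 17*y - 12
g = lambda x, y: (x + 2*y + 3) * (a*x + b*y + c)

# The remaining coefficients (xy, x, y) also come out consistent, so the
# polynomials agree identically; spot-check at random points:
pts = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(100)]
assert all(abs(f(x, y) - g(x, y)) < 1e-9 for x, y in pts)
print("f(x, y) == (x + 2y + 3)(2x - 3y - 4)")
```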
Thanks very much! I think I need to brush up my long division :/ – Luigi Plinge Jan 21 at 17:40
One trick for performing the division is to write $f(x,y) = (x+2y+3)(ax+by+c)$ and then equate coefficients to find $a,b,c$. – user7530 Jan 21 at 17:43
http://quant.stackexchange.com/questions/2263/how-to-cluster-stocks-and-construct-an-affinity-matrix?answertab=active
# How to cluster stocks and construct an affinity matrix?
My goal is to find clusters of stocks. The "affinity" matrix will define the "closeness" of points. This article gives a bit more background. The ultimate purpose is to investigate the "cohesion" within ETFs and between similar ETFs for arbitrage possibilities. Eventually if everything goes well this could lead to the creation of a tool for risk modelling or valuation. Currently the project is in the proposal/POC phase so resources are limited.
I found this Python example for clustering with related docs. The code uses correlations of the difference in open and close prices as values for the affinity matrix. I prefer to use the average return and standard deviation of returns. This can be visualised as a two dimensional space with the average and standard deviation as dimensions. Instead of correlation, I would then calculate the "distance" between data points (stocks) and fill the affinity matrix with the distances. The choice of the distance function is still an open issue. Is calculating the distance between data points instead of correlations valid?
If it is can I extend this approach with more dimensions, such as dividend yield or ratios such as price/earnings?
I did a few experiments with different numbers of parameters and different distance functions resulting in different numbers of clusters ranging from 1 to more than 300 for a sample size of 900 stocks. The sample consists of large and mid cap stocks listed on the NYSE and NASDAQ. Is there a rule of thumb for the number of clusters one should expect?
@Tal: How can I down vote Tal's comment? This is not the first time I see Tal post an offensive comment. Can somebody stop him? I LOVE this site but feel torturing seeing Tal's very subjective and offensive comments. By the way, Navi, I think I know what you try to do and also think that is valid. – Alchemist Oct 27 '11 at 21:09
@Alchemist: you can't down vote comments, but you can flag them. See the FAQ. – Ryogi Oct 27 '11 at 23:32
## 3 Answers
You should consider an unsupervised learning algorithm such as K-nearest neighbor ('KNN').
KNN will measure the distance amongst the observations in your space. You can and probably should consider alternative distance functions (besides Euclidean), particularly if you are clustering on features such as returns which have outliers. There are quite a few unsupervised clustering algorithms out there - see here. You can certainly include features such as stock characteristics with these algorithms. You can also include the betas of the securities with respect to various risk factors. This would allow you to capture the distances in correlation space, since a security-based covariance matrix can be expressed as the cross-product (betas for factors) * (covariance matrix of factor returns) * transpose(betas for factors).
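The factor-model identity mentioned here is easy to verify on synthetic data (my own sketch; the dimensions and noise scale are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)

k, n_assets, T = 3, 8, 2000
B = rng.normal(size=(n_assets, k))                 # factor betas
Fr = rng.normal(size=(T, k))                       # factor returns
eps = rng.normal(scale=1e-6, size=(T, n_assets))   # tiny idiosyncratic noise
R = Fr @ B.T + eps                                 # asset returns from the factor model

# Security covariance ~= B * Cov(factor returns) * B^T
lhs = np.cov(R, rowvar=False)
rhs = B @ np.cov(Fr, rowvar=False) @ B.T
print(np.abs(lhs - rhs).max())
```

With negligible idiosyncratic noise the two matrices agree to high precision, which is the sense in which factor betas carry the covariance (and hence correlation) structure.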
I would spend time thinking about the appropriate choice of features (which features are stable? which features predict risk or return? which sets of features are contributing unique sources of information? what are the invariants?) and choice of distance function.
Also, if you are mixing features with different unit scales (i.e. returns, betas, variances) then you need to normalize/pre-process your inputs, otherwise the features with the highest variance will be the primary basis for clustering. Alternatively, you can stick to one class of features for your clustering so you have some more intuition on interpreting the results.
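To illustrate the normalization point (a sketch on synthetic data; the feature names and scales are invented): z-score each feature before computing distances, otherwise the large-scale feature dominates the metric.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic feature matrix for 100 "stocks": mean daily return (~1e-4),
# market beta (~1), and return variance (~1e-4) -- wildly different scales
X = np.column_stack([
    rng.normal(5e-4, 2e-4, 100),   # mean return
    rng.normal(1.0, 0.3, 100),     # beta
    rng.normal(2e-4, 5e-5, 100),   # variance
])

# Standardize columns so no single feature dominates euclidean distance
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# Pairwise distance matrix, convertible to an affinity via a Gaussian kernel
D = np.sqrt(((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1))
affinity = np.exp(-D**2 / (2 * np.median(D) ** 2))
print(D.shape, affinity.min(), affinity.max())
```

The resulting `affinity` matrix (values in $(0, 1]$, with 1 on the diagonal) is the kind of input a clustering routine such as affinity propagation expects.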
-
– Navi Oct 30 '11 at 12:51
Depends on your application. If you are going to find a covariance matrix (which can be expressed in terms of a linear factor model), then using an algorithm that uncovers non-linear structure doesn't help you. However, if you use such an algorithm to group similar individuals and then build, say, different polynomial regressions or regression models with interaction terms within each cluster, it might be worth using this clustering technique. On its face the algorithm seems interesting - consider your application scenario. – Quant Guy Oct 30 '11 at 16:29
Rather than suggesting alternative clustering techniques, as Quant Guy and Flake have (great advice, btw), I'll offer my thoughts on the method you've proposed.
On the characteristics used to cluster stocks: You propose using sample statistics (mean and standard deviation of returns). I would suggest you use the entire return (not price) series. For example, if stock A's returns on two successive days are (+1%,-1%) and B's returns are (-1%,+1%), your method would rank these two very closely based on mean and standard deviation, when in fact they should be quite far apart, particularly if most stock pairs in your sample are positively correlated (which I believe the vast majority are). Potential elaborations on this method include volatility-adjusted returns, market-beta adjusted returns, and excess returns relative to some risk model. I would shy away from using too many non-return characteristics, particularly fundamentals such as dividend yield or P/E, but you may want to introduce size (market cap) and industry. If your sample is exclusively ETFs, I strongly reiterate my advice to shy away from all fundamentals, including size and industry (which are irrelevant for broad ETFs and potentially misleading for narrow ones).
On the distance function: The function referenced in the paper you linked seems very complicated so I can't comment on it, but in general you should consider that if you use more than one type of input (e.g. returns and fundamentals), you should upweight differences which should be close together and downweight differences which are expected to be far apart, using something like Mahalanobis.
On the number of clusters: You will need to think more about your sample to figure out the optimal number of clusters. If you are looking at the entire market, I think statistical techniques are generally good at identifying no more than about 5 independent sources of variation. Assuming each one of these has 2 potential states, that implies $2^5=32$ clusters. If you repeat the analysis using only ETFs in your sample, given all the overlap, I'd go for even fewer, perhaps 10 clusters.
Best of luck. Your question does make sense to me now; the edits helped immensely.
-
Quant Guy's answer is quite informative for your question already.
Just to add few other things: instead of figuring out the choice of features by your own brain, you could also use machine learning techniques to help in extracting the 'features' for your specific purpose, e.g. risk modeling or returns forecasting or portfolio construction as mentioned by Tal.
Take a look at Principal component analysis and manifold learning (e.g. isomap). Even more interesting is Unsupervised Feature Learning and Deep Learning. The first two methods both have implementations in scikit-learn, the library you are currently looking into.
The first two methods mentioned above could help you not only to extract the more important components from your features, but also to visualize your clustering when your feature dimension is bigger than 2.
-
Thanks for the links. PCA forms the basis for spectral clustering, doesn't it? I am not familiar with the term "Deep Learning". What does it mean? I have a list of about 20 parameters that could be considered for the feature set. Performance-wise it might be good to do some cherry picking first - I am not sure whether the algorithms in scikit-learn are O(n) or higher order. – Navi Oct 30 '11 at 13:04
http://www.physicsforums.com/showthread.php?s=7400154d1617519c275ecb052f9c2382&p=3953927
Physics Forums
## Fourier transform capabilities in reconstructing missing data
Hi,
I know this topic is more suited for Computing & Technology, but it has even more to do with general questions about Fourier transform capabilities. I have a question about sample restoration in the Discrete Fourier Transform. Suppose we have a signal with the stack of frequencies from 1 Hz to 128 Hz, and the number of samples is twice the number of frequencies, i.e. 256. If for some reason half of the samples is lost, first half or last half (128 samples), can we restore completely those lost samples from the remaining half? Note that in this case the one who has to restore the signal doesn't know what frequencies are in the signal, but does know the number of the missing samples. Can we find all the frequencies from that sparse signal? In this example, the frequencies are harmonics, all beginning at the same phase and with the same amplitude.
If we apply the FFT algorithm on the time domain, I assume that the information about missing samples can be extracted from the frequency domain, because all sine waves have the same length in the time domain, and the only thing which happened is that the frequencies are missing parts of their phases; for example, 1 Hz is cut at half of its cycle, 2 Hz is missing a whole cycle, and so on. Is it possible to use the FFT to analyze the remaining samples, discover all the frequencies in the frequency domain, and use those frequencies to recreate the time domain and restore the missing samples?
I ask this because I found a paper about recovering missing samples from oversampled band-limited signals. There is a statement in it which says that a band-limited oversampled signal is completely determined even if an arbitrary finite number of samples is lost. I understood this statement as described above. I just want to clarify this in the general sense of the matter, the possibility or feasibility of that kind of sample restoration.
And, if this method of sample restoration is possible, I have another question. Given the previous example, can a signal be restored if we continue to stack frequencies from 128 Hz on, as fn + 1Hz = fn+1, n<256? Is there an algorithm that can find all the frequencies in a partial time domain, as I described here, even if there is a greater number of frequencies than the number of remaining samples?
If it's not absolutely necessary, please don't insert math equations because all I see is [Math Processing Error]; write them in text instead. Thank you.
Recognitions: Science Advisor The key words are "oversampled" and "bandwidth limited". Without those restrictions, you can't reconstruct anything, because every data point (in both the time domain and the frequency domain) is independent of all the other data points.
OK, my mistake. Let's say that, in the same example, the stack of frequencies is still from 1 to 128 Hz, the max bandwidth is 256 Hz, and the signal is oversampled 2x, giving a total of 512 samples. If half of the samples are lost (first or last 256 samples), is it possible to restore the whole signal without losses? And if that same signal had additional frequency stacking like fn + 1Hz = fn+1, n<256, can all those frequencies be restored in their domain, allowing an inverse FFT to recreate a complete time domain with all 512 original samples? And, if it's possible, how is it done? Sorry for all those question marks, I hope to find the right answers. Thanks again.
Recognitions:
Science Advisor
It should be fairly easy to see this won't work, without doing any math. If it did work, you could repeat it as many times as you want, and re-create an arbitrary amount of the original signal from just 256 samples.
If you are sampling at 512 samples/sec, your 256 samples cover 0.5 sec of data. When you do the FFT, you get frequencies of 0, 2, ... 512 Hz (or -256, -254, ... -2, 0, 2 ... 256 if you prefer). If the signal is bandwidth limited to 128 Hz, half of the amplitudes should be zero, but the calculated amplitudes will NOT be zero, because the FFT is taking a finite length of data and assuming that it repeats as a periodic signal. That periodic signal will NOT be bandwidth limited to 128 Hz, because the two ends of the sample won't "match up" properly. It will only be limited to the Nyquist frequency of 256 Hz.
So, if you try to extend the 256 samples to 512, you have two problems. One is that half the 256 Fourier coefficients are "wrong" (they are nonzero but they should be zero). The other is that you don't have amplitudes for the frequencies of 1, 3, 5 ... 255 Hz. You could make some approximations to generate the missing Fourier data, but that won't re-create the missing data exactly.
There is an underlying "catch 22" here. A bandwidth limited (analog) signal must have nonzero amplitude for an infinite amount of time, so you can't represent its frequency content "exactly" from a finite number of samples. On the other hand, a signal with infinite bandwidth CAN be zero everywhere except for a finite time interval, but you can't represent it "exactly" with a finite number of frequencies, so you still need an "infinite" number of samples and an "infinitely high" sampling rate.
Of course you can use techniques like this to change the sample rate and/or interpolate missing data approximately in a useful way, but that's not the same as regenerating the missing data exactly.
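This point can be checked numerically. Below is a sketch with a hand-rolled DFT, scaled down from the thread's numbers (64 samples at 64 Hz, band-limited to 16 Hz) so the naive transform stays fast: the full record has an empty upper band, but the spectrum of the truncated half-record does not.

```python
import cmath
import math

def dft(x):
    """Naive O(N^2) discrete Fourier transform."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

fs, N = 64, 64
# Sum of unit-amplitude sines at 1..16 Hz, sampled for one second.
x = [sum(math.sin(2 * math.pi * f * n / fs) for f in range(1, 17))
     for n in range(N)]

full = dft(x)
tail_full = max(abs(full[k]) for k in range(17, 32))   # bins above 16 Hz: ~0

half = dft(x[:32])                                     # keep only the first 0.5 s
# Bin k of the 32-point DFT sits at k*2 Hz, so bins 9..15 lie above 16 Hz.
tail_half = max(abs(half[k]) for k in range(9, 16))    # leakage: clearly nonzero
```

The truncated record no longer "wraps around" as a periodic signal, so its spectrum is limited only by the Nyquist frequency, which is exactly why the missing half cannot be regenerated exactly from it.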
http://mathhelpforum.com/advanced-algebra/2359-isomorphism.html
# Thread:
1. ## Isomorphism
Let n be a positive integer. Define f : Z → Z_n by f(a) = [a]. Is f a homomorphism? One-to-one? Onto? An isomorphism? Explain.
Can any one help? Thanks very much.
2. Originally Posted by suedenation
Let n be a positive integer. Define f : Z → Z_n by f(a) = [a]. Is f a homomorphism? One-to-one? Onto? An isomorphism? Explain.
Can any one help? Thanks very much.
Since [ ] is the greatest integer function we have,
$[\, a \, ]=a$ for all integers.
1) Onto: We need to show that for all $x\in\mathbb{Z}_n$ we can find a $y\in \mathbb{Z}$ such that $[\, y\, ]=x$, which is true if you take $y=x$.
2) One-to-one: We need to show that if $[\, x \, ]=[\, y \, ]$ then $x=y$. Since, by the first paragraph, $[\, x\, ]=x$ for integers, we have that $x=y$
3)Homomorphism: We need to show that,
$[\, x+y\, ]=[\, x\, ]+_n[\, y\, ]$
Because, $x,y$ are integers we have,
$x+y=x+_ny$. Which is definitely not true.
-----
Thus, this map is not an isomorphism. Think about it: how can you have an isomorphism between a finite and an infinite set? Impossible.
3. Actually, [a] is the equivalence class of a modulo n; i.e., [a]={k:k in Z and k=a (mod n)}.
4. ## :o
Well, the method is the same as ThePerfectHacker has demonstrated.
For the homomorphism part, check whether $f(x+y)=[\, x+y\, ]$ is equal to $f(x)+f(y)=[\, x\, ]+[\, y\, ].$ Not hard at all
If $f$ were to be one-to-one, you would have had to show that $x\neq y \Rightarrow f(x)\neq f(y).$ Try testing this with integers like $a$ and $2a$.
The onto part is also not hard. Just consider an equivalence class $[a]$, and find an integer (obvious!) to map to this.
And, since an isomorphism needs to be one-to-one, this (is?/is not?) an isomorphism.
Try it.
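A quick brute-force check of the three properties, with reduction mod n (here a hypothetical n = 5) standing in for the map a → [a]:

```python
n = 5

def f(a):
    """Send the integer a to its residue class [a], represented by a % n."""
    return a % n

# Homomorphism: f(x + y) agrees with f(x) +_n f(y) on a sample of integers.
homomorphism = all(f(x + y) == (f(x) + f(y)) % n
                   for x in range(-30, 30) for y in range(-30, 30))

# Not one-to-one: the distinct integers 3 and 3 + n land in the same class.
not_injective = (3 != 3 + n) and (f(3) == f(3 + n))

# Onto: every residue class 0, ..., n-1 is hit.
surjective = {f(a) for a in range(n)} == set(range(n))
```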
http://quant.stackexchange.com/questions/1664/how-many-explanatory-variables-is-too-many/1681
# How many explanatory variables is too many?
When researching any sort of predictive model, whether using ordinary linear regression or more sophisticated methods such as neural networks or classification and regression trees, there seems to always be a temptation to add in more explanatory variables/factors. The in-sample performance of the model always improves, and sometimes it improves a great deal, even after one has already added quite a few variables already. When is it too much? When is the supposed improvement in in-sample performance very unlikely to carry over into live trading? How can you measure this (beyond simple things like the Akaike and Bayesian Information Criteria, which don't work very well in my experience anyway)? Advice, references, and experiences would all be welcome.
-
## 5 Answers
“Make things as simple as possible, but not simpler.” The problem you want to avoid is (near) multicollinearity. The tip-off will be that adding/removing a regressor will significantly change the coefficients on the other regressors. In practice (well, in the research that I read) I rarely see this explicitly tested.
If you think that you have multicollinearity, then it's likely best to either estimate over a subset without multicollinearity or to drop the offending regressors. A model with less explanatory power as measured by $R^2$ is certainly better than a model with incorrect (unstable) explanatory power.
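The instability described above can be quantified. Here is a sketch (with synthetic data) of the variance inflation factor, one common collinearity diagnostic; a nearly collinear regressor drives it far above 1, while an independent one leaves it near 1:

```python
import math
import random

random.seed(0)
x1 = [random.gauss(0, 1) for _ in range(500)]
x2_indep = [random.gauss(0, 1) for _ in range(500)]          # unrelated regressor
x2_coll = [xi + random.gauss(0, 0.05) for xi in x1]          # nearly collinear with x1

def corr(a, b):
    """Sample Pearson correlation."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n
    sa = math.sqrt(sum((ai - ma) ** 2 for ai in a) / n)
    sb = math.sqrt(sum((bi - mb) ** 2 for bi in b) / n)
    return cov / (sa * sb)

def vif(a, b):
    """Variance inflation factor for the two-regressor case: 1 / (1 - R^2)."""
    r2 = corr(a, b) ** 2
    return 1.0 / (1.0 - r2)
```

When the VIF blows up, coefficient estimates become unstable in exactly the way the answer warns about.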
-
This is a good start, but ultimately it's still in-sample. If what you're interested in is prediction, it's hard to beat actually predicting out-of-sample and seeing how good it gets. Since you don't care about the coefficients just the results, avoiding multicollinearity isn't really the answer here. – Ari B. Friedman Aug 14 '11 at 9:08
@gsk3 -- Isn't everything in sample? I don't know tomorrow's data and I would use all (relevant) available data to calibrate my model. If my model is correct, then it should work in sub-samples, unless I think there are multiple regimes, but then it should work in sub-samples of the relevant regime. – richardh♦ Aug 15 '11 at 9:38
1
@gsk3 -- And this wouldn't set the upper limit on the number of factors/regressors. I could add humidity downtown and humidity midtown as regressors in my model and "improve" its explanatory power, although these almost certainly have no impact on my model. Because these are collinear, I could get economically and statistically significant coefficients on these factors, even though a change in humidity has no impact on the market. – richardh♦ Aug 15 '11 at 9:47
You want to know how it performs out-of-sample. Therefore you agree in advance that when developing your model you will only use, say 2/3 of your data. That tricks nature into giving you 1/3 of your data as out-of-sample data that you actually can observe. This is a common technique in predictive applications, and it works quite well when you've got a lot of data--as is likely the case here. – Ari B. Friedman Aug 15 '11 at 9:50
The point is that you arrived at humidity down/uptown by trying out a bunch of regressors and chancing on one that fit that dataset. Therefore, the holdout sample is unlikely to show the regressor as being good, and you'll see the problem. If it does work in the holdout sample, maybe you've just found a new regressor that actually does work. It doesn't matter whether it's causal or not--for prediction covariance is good enough as long as it consistently covaries. – Ari B. Friedman Aug 15 '11 at 10:57
Although not directly related to financial modeling, I've found the following quotation to be very instructive:
"I remember my friend Johnny von Neumann used to say, 'with four parameters I can fit an elephant and with five I can make him wiggle his trunk.'" -- E. Fermi
You may also read this: http://mahalanobis.twoday.net/stories/264091/
-
Funny, but not really an answer. Can you please make this a comment on my question? This is actually a serious question I am facing in my work right now and would appreciate serious answers on this site. – Tal Fishman Aug 19 '11 at 20:34
There's no rule to answer this question for you. You need some combination of:
• Judgment: Are the parameters you're including reasonable?
• Sniff test: Is there theory to justify your parameter choices, or are you just hunting for chance associations?
• Hold-outs: You correctly mention that the problem is "in sample performance." The solution is therefore to hold out some data when you start and look at out-of-sample performance. Of course, if you iterate enough times, you can over-fit your holdout sample, too! So save this until the last step, and be honest with yourself.
As always, the key is to be certain of what question you are trying to answer. Then you can muster as much unbiased evidence as possible.
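The hold-out idea can be sketched in a few lines (synthetic data and a deliberately simple one-parameter model): fit on two thirds of the sample, then score once on the untouched third.

```python
import random

random.seed(1)
# Synthetic sample: y = 2x plus noise.
data = [(x, 2.0 * x + random.gauss(0, 0.5)) for x in range(90)]
random.shuffle(data)

cut = 2 * len(data) // 3
train, holdout = data[:cut], data[cut:]   # fit on 2/3, judge on the held-out 1/3

def fit_slope(pairs):
    """Least-squares slope of y on x through the origin."""
    return sum(x * y for x, y in pairs) / sum(x * x for x, y in pairs)

slope = fit_slope(train)                  # fit uses only the training portion
holdout_mse = sum((y - slope * x) ** 2 for x, y in holdout) / len(holdout)
```

As the answer notes, the holdout score is only honest if you look at it once; iterating against it turns it back into an in-sample metric.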
-
I think you're looking for a metric that quantifies the effectiveness of the added variable(s). Objectively, you want each variable to have correlation to your model estimation output and non-correlation between other variables that may be utilized. If you adjust your $R^2$ metric accordingly (less degrees of freedom per variable) you'll get a reasonable feel for where the limit is for adding more variables (otherwise $R^2$ will just increase and you're back to asking the same question).
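The degrees-of-freedom adjustment mentioned above is just the adjusted $R^2$. A tiny sketch (hypothetical numbers) in which a marginal in-sample gain from a fourth regressor actually lowers the adjusted figure:

```python
def adjusted_r2(r2, n, p):
    """Adjusted R^2 for p regressors fit on n observations."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

before = adjusted_r2(0.500, n=100, p=3)   # baseline model
after = adjusted_r2(0.505, n=100, p=4)    # one more regressor, tiny R^2 gain
# after < before: the extra variable is not worth its degree of freedom
```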
-
Hi WaveRider, welcome to Quant.SE and thanks for your answer. – Tal Fishman Sep 2 '11 at 14:16
Just pick up a decent econometrics book (Gujarati is what I used in school).
If you have multicollinearity, find a dummy variable.
http://en.wikipedia.org/wiki/Coefficient_of_determination#Adjusted_R2 << this should be somewhat helpful.
I have no trading experience, so cum grano salis.
-
2
I am not the person who downvoted this question, but I am puzzled at your statement. Why should we find a dummy variable when we have multicollinearity in a model? Under what conditions? – Thomas Aug 29 '11 at 6:43
http://math.stackexchange.com/questions/91940/intuition-around-why-domain-of-x-of-arcsine-and-arccosine-is-11-for-real-re/91941
# Intuition around why domain of x of arcsine and arccosine is [-1;1] for “real result” & domain for arctangent is all real numbers
Context
I'm working my way through basic trig (this question has a focus on inverse trig functions, specifically arcsine, arccosine and arctangent ), using Khan Academy, wikipedia and some of "trig without tears" - http://oakroadsystems.com/twt/
What I think I understand
My understanding is that the range of the usual principal value of arcsine, arccosine and arctangent is defined by "convention". Which reads to me to mean a consensus of mathematicians agreed upon these principal values (I'm certain with some underlying reasoning, which I don't know of yet) to solve the issue of potential multiple results from the same input to one of these functions.
What I don't get
Domain of arcsine & arccosine, why?
When discussing the domain of "x" for these 3 functions, as shown in this table on wikipedia: http://en.wikipedia.org/wiki/Inverse_trigonometric_function#Principal_values I see the domain of arcsine and arccosine is −1 ≤ x ≤ 1.
I've watched a number of basic tutorials on the unit circle and how it can be used to help solve these functions. So of course, I can see and visualize the fact that the maximum and minimum values of X on the unit circle are 1 & -1.
But I'm struggling to understand the intuition behind the restriction of the domain. My understanding right now is that the x-values on the unit circle lie in [-1,1]; is that why the domain is restricted to that interval?
Domain of arctangent, why?
Also the domain for arctangent is "all real numbers" - in the video on Khan Academy http://www.khanacademy.org/video/inverse-trig-functions--arctan?playlist=Trigonometry Sal (main teacher on Khan Academy) talks about how tangent of something also represents that slope of a line (I guess the hypotenuse) and how that could have infinite results.
I don't really understand this. If slope is rise over run - isn't there a limit to that ratio?
-
If the the sine (and thus the cosine as well) is always between $-1$ and $1$ for all real angles, then it stands to reason that the domain of the inverse is $[-1,1]$... – J. M. Dec 16 '11 at 5:29
"If slope is rise over run - isn't there a limit to that ratio?" - a vertical line is infinitely steep, no? – J. M. Dec 16 '11 at 5:30
## 2 Answers
Remember that $\arcsin$ is supposed to be the inverse function of $\sin$ (or at least, of a "restricted sine function"; that's the choice of 'principal value').
The way that $\arcsin$ works is supposed to be: you plug in the value somebody else got out of the sine function, and $\arcsin$ will tell you what number was put into the sine function to get that value. It's like a "reverse telephone directory": you look up the phone number and find out the person it belongs to, instead of the usual way of looking up the person and finding their phone number.
But that means that the only things that you can put into the $\arcsin$ functions are real numbers that actually come out of the sine function (the only numbers you can look up in a reverse telephone directory are telephone numbers, so you can't look up "000-0000").
What are the numbers that can come out of the sine function? Every number between $-1$ and $1$ (inclusively), but only those numbers: the sine function will never give a result that is greater than $1$ or smaller than $-1$. That means that the numbers you can plug into $\arcsin$ are only the numbers that come out of the sine function: the numbers between $-1$ and $1$.
The same thing is true for $\arccos$: you can only put into $\arccos$ numbers that may come out of the cosine function, and the only numbers that may come out of the cosine function are the numbers on $[-1,1]$.
However when we come to the $\arctan$ function, things are different: what are the numbers that may come out of the tangent function? Every real number! Every real numbers is the tangent of some angle, so now we can put into $\arctan$ any real number, because any real number is, potentially (and in actuality) the result of applying the tangent function.
Note. Your final paragraph seems to be confusing the "trigonometric tangent function" with the tangent line to the graph of a function. The "trigonometric tangent function" is the function defined by $$\tan(x) = \frac{\sin(x)}{\cos(x)}.$$ The "tangent line to the graph of $f(x)$ at $x=a$" is a straight line that has certain properties (it goes through $(a,f(a))$, and is the straight line that "best approximates" the graph of $y=f(x)$ near the point $(a,f(a))$). The derivative, which is a key concept in calculus, is the slope of the tangent-line-to-the-graph-of-$f(x)$ (which is defined as a limit of a certain ratio), not to the trigonometric tangent function which is what $\arctan(x)$ is related to.
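The same boundary shows up in any standard math library; a quick Python sketch:

```python
import math

half_pi = math.asin(1.0)   # 1 is an attainable sine value, so asin accepts it
big = math.atan(1e12)      # every real is a tangent value; result just under pi/2

try:
    math.asin(1.5)         # 1.5 never comes out of the sine function...
    rejected = False
except ValueError:         # ...so asin refuses it
    rejected = True
```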
-
Thanks. Why will the sine function never give a result greater than 1 or smaller than -1? – drc Dec 16 '11 at 21:01
1
@drc: Because of how it is defined. If you define it in terms of right triangles, the sine of $x$ is the length of the opposite side divided by the length of the hypotenuse; since the hypotenuse is always at least as long as the opposite side, the quotient is never greater than $1$; taking into account signs gives you that it is never smaller than $-1$. If you define the sine function as the $y$-coordinate of a point on the unit circle, then the coordinates on the unit circle satisfy $x^2+y^2=1$, and so they must satisfy $|y|=\sqrt{y^2}=\sqrt{1-x^2}\leq \sqrt{1} = 1$. So $|\sin t|\leq 1$. – Arturo Magidin Dec 17 '11 at 3:40
Thanks, this is great; the key thing I missed here was the fact that the hypotenuse is at least as long as the opposite side. Awesome stuff! – drc Dec 20 '11 at 3:13
If $f(x)$ and $f^{-1}(x)$ are inverse functions (meaning that $(f \circ f^{-1})(x) = (f^{-1} \circ f)(x) = x$ on the respective domains), then the domain of $f$ is the range of $f^{-1}$ and the domain of $f^{-1}$ is the range of $f$.
In your case, since the range of $y = \sin x$ is $[-1,1]$, then the domain of $y = \arcsin x$ is $[-1,1]$. Since the range for $y = \tan x$ is $(-\infty, \infty)$, then the domain for $y = \arctan x$ is $(-\infty, \infty)$.
Finally, when we discuss the functions $y = \arcsin x$ and $y = \arccos x$, we restrict the range of these functions so that the functions are one-to-one. Being one-to-one is what allows us to rightfully describe these functions as inverses.
-
From my reading, you are stating the convention, but I don't really see where you have shown why. – drc Dec 16 '11 at 20:51
http://math.stackexchange.com/questions/205201/how-to-show-that-this-logical-argument-is-valid/259197
# How to show that this logical argument is valid?
I am asked to show the following argument is valid:
I know you need to use the rules of inference like modus ponens/converse fallacy but I'm confused because it doesn't look like any of the forms I've learned about?
$$N\to B\lor S\\ S\to W\lor A \\ M\to N\land W \\ \text{therefore, }M\to B\lor A$$
I don't want to use the truth table because it will be real long. If someone can get me started I would really appreciate the help. thx
-
After you ask a question here, if you get an acceptable answer, you should "accept" the answer by clicking the check mark ✓ next to it. This scores points for you and for the person who answered your question. If you don't do this, people are less likely to answer your later questions. – MJD Oct 1 '12 at 3:35
## 2 Answers
No valid argument can prove this. Suppose $M, N, S$ and $W$ are true, and $A$ and $B$ are false. Then the three premisses are all true, but the conclusion is false.
-
As a way to find this, the only way to make the conclusion false is to have $M$ true and $A$ and $B$ false. To make the premises true you then (from the third) need $N$ and $W$ true. Then from the first you need $S$ to be true. Now check that the second is satisfied and you are done. – Ross Millikan Oct 1 '12 at 4:28
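The exhaustive check over all $2^6$ assignments is tiny in Python, and it confirms both the invalidity and the uniqueness of the counter model (a sketch; the variable order is arbitrary):

```python
from itertools import product

def counterexamples():
    """All assignments making every premise true and the conclusion false."""
    found = []
    for M, N, S, W, A, B in product([False, True], repeat=6):
        p1 = (not N) or B or S           # N -> B v S
        p2 = (not S) or W or A           # S -> W v A
        p3 = (not M) or (N and W)        # M -> N & W
        concl = (not M) or B or A        # M -> B v A
        if p1 and p2 and p3 and not concl:
            found.append(dict(M=M, N=N, S=S, W=W, A=A, B=B))
    return found

cex = counterexamples()
# Exactly one counter model: M, N, S, W true and A, B false.
```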
You could turn your question into a binary decision diagram (BDD). If you assure that these BDDs are short and unique for logical equivalence classes, then you can read off from the BDDs the following:
1) If the BDD has the form "true", then it is a tautology.
2) If the BDD has the form "false", then its negation is a tautology.
3) In all other cases neither the BDD nor its negation is a tautology.
In cases 2) and 3) there is a model of the BDD that makes it false. Let's give it a try, and let's convert your problem into a BDD via the program in ( * ) and ( ** ). We get the following result:
````
?- convert((n=>b v s)&(s=>w v a)&(m=>n & w)=>(m=>b v a),X).
X = (a->true;b->true;m->(n->(s->(w->false;true);true);true);true)
````
So it indeed differs from "true" and should be thus falsifiable. By extracting a CNF from the BDD we can also read off a counter model. Let's also do it for your problem:
````
?- cnf((a->true;b->true;m->(n->(s->(w->false;true);true);true);true),[],L).
L = [not(w),not(s),not(n),not(m),b,a]
````
So the only counter model is w=1, s=1, n=1, m=1, b=0, a=0.
Proof: If the BDD is not "true", then the CNF will at least contain one row. From this row we can directly construct a model that falsifies the row, for unmentioned propositional variables add either false or true, for mentioned propositional variables add a value that falsifies the literal in the row. Since by construction no propositional variable occurs twice in a row, the row can be falsified. Since the row is falsified the whole CNF is falsified.
( * ) Invert, Inter and Union on BDDs with Lexical Variable Order:
http://www.xlog.ch/jekejeke/principia/shannon.p
( ** ) Convert, DNF and CNF for BDDs:
http://www.xlog.ch/jekejeke/principia/nfs.p
-
http://mathhelpforum.com/differential-geometry/130870-proof-limits-sequences.html
|
1. ## Proof: Limits of Sequences
Supply a proof for the following theorem: Suppose that f is continuous at a and that x(subscript)n tends to a as n goes to infinity. Then there is an integer N such that f(x(subscript)n) is defined for all integers n > N; furthermore, f(x(subscript)n) tends to f(a) as n goes to infinity.
Should I be typing these in LaTeX and posting them here somehow? I hope it is obvious that, typed in regular script, x(subscript)n represents the sequence x-sub-n. This is my first post, so I hope to clean up my future posts before submission to avoid confusion or ambiguity.
2. [tex]\lim_{n \to \infty}(x_n)\to a[/tex] gives $\lim_{n \to \infty}(x_n)\to a$
3. Originally Posted by paulread
Supply a proof for the following theorem: Suppose that f is continuous at a and that x(subscript)n tends to a as n goes to infinity. Then there is an integer N such that f(x(subscript)n) is defined for all integers n > N; furthermore, f(x(subscript)n) tends to f(a) as n goes to infinity.
Should I be typing these in LaTeX and posting them here somehow? I hope it is obvious that typing this out with regular script that x(subscript)n represents the sequence x-sub-n. This is my first post so I hope to clean up my future posts before submission to aviod confusion or ambiguity.
Let $N$ be any neighborhood of $f(x)$. By $f$'s continuity we know that $f^{-1}(N)$ is a neighborhood of $x$, and so by assumption all but finitely many of $\left\{x_n\right\}_{n\in\mathbb{N}}$ lie in $f^{-1}(N)$; hence all but finitely many of the $f(x_n)$ are in $ff^{-1}(N)\subseteq N$.
The conclusion follows.
P.S. You should probably specify what kind of space we are in here. The above works for any topological space with a convergent sequence. Some weird things happen though. For example, any sequence in an indiscrete space converges to every point.
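As a concrete numerical illustration of the theorem (my own sketch, not part of the thread): take $f(x)=\sqrt{x}$, $a=1$, and $x_n = 1 - 2/n$. Then $f(x_1)$ is undefined (since $x_1 = -1$), but $f(x_n)$ is defined for every $n \geq 2$ and $f(x_n)\to f(1)=1$:

```python
import math

def f(x):
    return math.sqrt(x)      # defined only for x >= 0

a = 1.0
x = lambda n: 1 - 2 / n      # x_n -> a, but x_1 = -1 lies outside f's domain

# f(x_n) is defined for all n >= 2, and f(x_n) -> f(a) = 1
gaps = [abs(f(x(n)) - f(a)) for n in (2, 10, 100, 10000)]
print(gaps)  # strictly decreasing toward 0
```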
4. By the way, some textbooks define "f is continuous at x=a" to mean that $\lim_{n\to\infty} f(x_n)= f(a)$ for every sequence $\{x_n\}$ that converges to a.
http://stats.stackexchange.com/questions/30762/model-selection-for-linear-mixed-models-over-alternative-sets-of-parameters-nlm
|
# Model-selection for linear mixed models over alternative sets of parameters (nlme function in R)
My models look like:
````lme1 = lme(y~X+Y+V, random=~1|Subject, data=mydata, method ="ML")
lme2 = lme(y~X+Y+V2+V3, random=~1|Subject, data=mydata, method ="ML")
lme3 = lme(y~X+Y+V4, random=~1|Subject, data=mydata, method ="ML")
````
where X and Y are factors, but V, V2, V3,and V4 are continuous variables (modeled as covariates). I am using `Method ="ML"` in the hope that I could compare the likelihood values across the models.
My research question has to do with whether V4 (in lme3) was a better predictor than V2 and V3 together, V2+V3 was better than V, etc. What goodness of fits measure is valid here? Can I use AIC values to compare models of different sets of parameters?
I've also found some references on computing $R^2$ for mixed models. In particular, I am interested in the likelihood ratio test $R^2$ (Magee, 1990) which computes a $R^2$ by comparing each of these models to the null model. Using this method, I'd be comparing all three of my models to the same null model with just `y~1`. Is it then a valid approach to compare the $R^2$s generated?
I am not a statistician but I would like to use a valid (at least justifiable) measure for my analysis. Any feedback would be greatly appreciated.
-
Since the models are fit by maximum likelihood, AIC would be defensible here. – Macro Jun 19 '12 at 21:28
@Macro: thanks for your comment. Could you elaborate on how AIC would be defensible here, when it comes down to comparing models Lme2 and LME3, even though they are non-nested and the number of variables are different? There seemed to be some disagreements over whether AICs can be used in non-nested models. Should I be concerned about that? Thanks again for your help! – Wynn Jun 20 '12 at 18:54
The AIC is an estimate of the (relative) Kullback-Leibler information. There is no reason to believe that it is applicable only for nested models (I actually don't know why people think that). – Néstor Jun 20 '12 at 19:09
@Néstor Very true. You can use likelihood-ratio tests for nested models. AIC and the like are especially useful for non-nested models (i.e., situations where you cannot use likelihood-ratio tests). – Henrik Jun 20 '12 at 20:43
I agree with everyone about AIC, but I just wanted to point out that in the code you have above, `y` is being modelled as continuous, even though you say it's a factor. You might want to consider using `lmer` in the `lme4` package or the `MCMCglmm` package depending on what kind of factor `y` is. – smillig Aug 28 '12 at 6:59
## 1 Answer
I would use Akaike’s Information Criterion ($AIC$) for model selection, where: $$AIC = -2\ln(L)+2k$$ Though a better alternative is often $AIC_c$, the second-order Akaike’s Information Criterion. $AIC_c$ is corrected for small sample size with an additional bias-correction term, because $AIC$ can perform poorly when the ratio of sample size to the number of parameters in the model is small (Burnham and Anderson 2002). $$AIC_c = -2\ln(L)+2k+\frac{2k(k+1)}{n-k-1}$$ In fact, I would always use $AIC_c$, since the bias-correction term goes to zero as sample size increases. However, there are some types of models where it is difficult to determine sample size (e.g., hierarchical models of abundance; see links to these model types here).
$AIC$ or $AIC_c$ can be rescaled to $\Delta_i=AIC_i-\min AIC$, where the best model has $\Delta_i=0$. Further, these values can be used to estimate the relative strength of evidence ($w_i$) for the alternative models, where: $$w_i = \frac{e^{-0.5\Delta_i}}{\sum_{r=1}^R e^{-0.5\Delta_r}}$$ This is often referred to as the "weight of evidence" for model $i$ from the model set. As $\Delta_i$ increases, $w_i$ decreases, suggesting model $i$ is less plausible. The weights of evidence for the models in a model set can also be used in model averaging and multi-model inference.
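As a concrete illustration (with made-up log-likelihood values, not the OP's models), $AIC_c$, $\Delta_i$, and the Akaike weights can be computed as follows; the function names and numbers are hypothetical:

```python
import math

def aicc(loglik, k, n):
    # second-order AIC: AICc = -2 ln(L) + 2k + 2k(k+1)/(n-k-1)
    return -2 * loglik + 2 * k + 2 * k * (k + 1) / (n - k - 1)

def akaike_weights(aic_values):
    # rescale to deltas, then normalize exp(-delta/2) into weights
    delta = [a - min(aic_values) for a in aic_values]
    raw = [math.exp(-0.5 * d) for d in delta]
    total = sum(raw)
    return delta, [r / total for r in raw]

# hypothetical fits: (log-likelihood, number of parameters), n = 50 observations
fits = [(-120.3, 5), (-119.8, 6), (-121.0, 5)]
aiccs = [aicc(ll, k, n=50) for ll, k in fits]
delta, weights = akaike_weights(aiccs)
print(delta, weights)
```

The weights sum to one, and the model with $\Delta_i = 0$ carries the largest weight of evidence.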
-
http://math.stackexchange.com/questions/157866/derivative-for-matrix-function?answertab=active
|
# Derivative for Matrix function
I have a matrix kernel function whose derivative I am trying to find. The function is K = c * exp[-1/2 * (P(X1 - X2))' * P(X1 - X2)], where uppercase letters are matrices and lowercase letters are scalars (and ' denotes transpose). I'm trying to find dK/dP. I'm pretty rusty on matrix calculus; can anyone give me a hand here?
Thanks
-
1
Your notation is not very clear. Is $P$ a matrix? then did you mean something like $K(P) = c \exp[-\frac12 \| P(X_1 - X_2) \|^2]$? In that case you should probably write (P(X1-X2))', the transpose should be around everything and maybe a trace before it? Is P symmetric or not? – passerby51 Jun 13 '12 at 17:43
I was assuming this is a matrix valued function. If that's true I'd undelete my (by now corrected) answer. If @passerby51 assumption is correct, the derivative is just $$D_V K = -1/2K \langle V(X_1-X_2), P(X_1-X_2)\rangle$$ with $\langle.,.\rangle$ the scalar product on the vector space of matrices. – user20266 Jun 13 '12 at 17:50
Sorry yeah the transpose is around the whole thing. P is not symmetric. – tomas Jun 13 '12 at 17:50
Yeah it is a matrix valued function. I didn't quite get a chance to see your deleted answer Thomas. What was it? Thanks for the response guys. – tomas Jun 13 '12 at 17:53
undeleted my reply. – user20266 Jun 13 '12 at 17:53
## 1 Answer
Let $$Z(P)=(P(X_1-X_2))^TP(X_1-X_2)$$ If $K$ is viewed as depending only on $P$ and if you differentiate in direction $V$ you get $$D_V K = K*\frac{-1}{2} \left\{\frac{I-e^{-ad_{Z(P)}}}{ad_{Z(P)}} \left[ (V(X_1-X_2))^TP(X_1-X_2) + (P(X_1-X_2))^TV(X_1-X_2) \right] \right\}$$
The term on the right hand side in curly braces needs explanation. It is the matrix-valued function $\frac{I-e^{-ad_{Z(P)}}}{ad_{Z(P)}}$ applied to the term in square brackets. This in turn means
$$\frac{I-e^{-ad_{Z}}}{ad_{Z}}[Y] = Y-\frac{[Z,Y]}{2!} + \frac{[Z,[Z,Y]]}{3!} - \ldots$$
See, e.g., Chapter 3.3 in Brian C. Hall's book 'Lie Groups, Lie Algebras, and Representations' for a derivation of the derivative of the exponential.
(Sorry for posting a too-simple, wrong answer first, which is true only if $Z$ and $D_VZ$ commute.) (I don't like the $\frac{d}{dP}$ notation.)
-
http://math.stackexchange.com/questions/101476/radius-of-convergence-of-sum-n-geq2-fraczn-lnn
|
# Radius of convergence of $\sum_{n\geq2}\frac{z^{n}}{\ln(n)}$?
What is the R.O.C. of the following power series:
$$\sum_{n\geq2}\frac{z^{n}}{\ln(n)}\qquad?$$ Here is my attempt:
$$\lim_{n\rightarrow\infty}\left|\frac {z^{n+1}\ln(n)}{\ln(n+1)z^{n}}\right|=\lim_{n\rightarrow\infty}\left|\frac{z\ln(n)}{\ln(n+1)}\right|=z$$ so the R.O.C. = $\frac {1}{z}$. Is this right?
-
Well, here's a first check: the radius of convergence of a power series is a number (or possibly $+\infty$), right? Is $\frac{1}{z}$ a number? – Pete L. Clark Jan 23 '12 at 0:34
@Emir: That rule you cite is not quite correct. How does it make any sense that the range of possible $z$ values actually depends on the variable $z$? (Answer: It doesn't make any sense.) – anon Jan 23 '12 at 0:57
The right formula is $1/L=\lim_{n\to\infty} |a_{n+1}/a_n|$. In your case $a_n=\log(n)$. – emiliocba Jan 23 '12 at 1:29
@Emir: Can you find the radius of converge for $\sum_n z^n$? – JavaMan Jan 23 '12 at 2:14
## 1 Answer
Using the Cauchy-Hadamard theorem:
$$\frac{1}R=\limsup_{n\to\infty} \left|\frac{1}{\ln(n)}\right|^\frac{1}{n} = 1$$
Then $R=1$.
EDIT: Alternatively, the same result can be obtained using the (less general) ratio test, since: $$\frac{1}R=\lim_{n\to\infty} \left|\frac{\ln(n+1)}{\ln(n)}\right| = 1$$
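One can also see numerically why the $n$-th roots tend to $1$ (a small illustration, not part of the original answer):

```python
import math

# |1/ln(n)|^(1/n) -> 1, so the Cauchy-Hadamard radius is R = 1
vals = [(1 / math.log(n)) ** (1 / n) for n in (10, 100, 10_000, 1_000_000)]
print(vals)  # increases toward 1 from below
```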
-
Well, this is certainly correct, but the OP asked about the correctness of his argument. So this is not a maximally helpful answer. (And do you really think that someone who thinks a radius of convergence might be $\frac{1}{z}$ knows about the limit superior? To me it seems unlikely.) – Pete L. Clark Jan 23 '12 at 3:04
http://mathematica.stackexchange.com/questions/5968/plotting-implicitly-defined-space-curves
|
# Plotting implicitly-defined space curves
It is known that space curves can either be defined parametrically,
$$\begin{align*}x&=f(t)\\y&=g(t)\\z&=h(t)\end{align*}$$
or as the intersection of two surfaces,
$$\begin{align*}F(x,y,z)&=0\\G(x,y,z)&=0\end{align*}$$
Curves represented parametrically can of course be plotted in Mathematica using ParametricPlot3D[]. Though implicitly-defined plane curves can be plotted with ContourPlot[], and implicitly-defined surfaces can be plotted with ContourPlot3D[], no facilities exist for plotting space curves like the intersection of the torus $(x^2+y^2+z^2+8)^2=36(x^2+y^2)$ and the cylinder $y^2+(z-2)^2=4$:
Sometimes, one might be lucky and manage to find a parametrization for the intersection of two algebraic surfaces, but these situations are few and far between, especially if the two surfaces are of sufficiently high degree. The situation is worse if at least one of the surfaces is transcendental.
How might one write a routine that plots space curves defined as the intersection of two implicitly-defined surfaces?
It would be preferable if the routine returns only Line[] objects representing the space curve. A routine that handles only algebraic surfaces would be an acceptable answer, but it would be nice if your routine can handle transcendental surfaces as well. A bonus feature for the routine might be the ability to determine if the two surfaces given do not have a space curve intersection, or intersect only at a single point, or other such degeneracies.
-
@R.M, Funny, I thought getting the intersection of two parametrically-defined surfaces was an even tougher problem... so I decided not to ask for it in the question. :) That would be nice, though. – J. M.♦ May 23 '12 at 20:25
@J. M.♦ Can we extend the question a bit by considering intersection of two BSpline surfaces that has no explicit analytical form? This will be very effective for 3D modeling. – PlatoManiac May 23 '12 at 21:28
@Plato: that might be best done as a separate question... (that would be covered by the "two parametrically-defined surfaces" case, as BSplineSurface[] objects can be transformed into BSplineFunction[]s). – J. M.♦ May 23 '12 at 21:36
2
@rcollyer Regarding the equation: if you accept a differential equation then it's easy: take the cross product of the gradients of the implicit functions, and perhaps normalize it (call it t[r], where r is a 3D point). Then the curve is described by D[r[u],u] == t[r[u]] for curve parameter u. This is because t must be tangent to both surfaces for all points r on the intersection curve. You "just" have to integrate that equation with a starting point r0 on the intersection. – Jens May 24 '12 at 3:26
## 2 Answers
I take zero credit for this. It is a method I learned from Maxim Rytin.
````
ContourPlot3D[{(x^2 + y^2 + z^2 + 8)^2 - 36 (x^2 + y^2),
  y^2 + (z - 2)^2 - 4}, {x, -4, 4}, {y, -4, 4}, {z, -2, 2},
  Contours -> {0}, ContourStyle -> Opacity[0], Mesh -> None,
  BoundaryStyle -> {1 -> None, 2 -> None, {1, 2} -> {{Green, Tube[.03]}}}, Boxed -> False]
````
-
+1 Magical ;-) {1,2}->Green means intersection (boundary) between surface 1 and 2 will be green. Here is a minimal set of options to make it work: ContourPlot3D[{(x^2 + y^2 + z^2 + 8)^2 - 36 (x^2 + y^2), y^2 + (z - 2)^2 - 4}, {x, -4, 4}, {y, -4, 4}, {z, -2, 2}, ContourStyle -> None, Mesh -> None, BoundaryStyle -> {1 -> None, 2 -> None, {1, 2} -> Green}] – Vitaliy Kaurov May 23 '12 at 22:28
Nice! I find it interesting, though, that the docs do not say that the Filling format specification can be applied to BoundaryStyle. Very useful. – rcollyer May 24 '12 at 3:01
I'll just note that if you take a look at the InputForm[] of the output of Daniel's version, the Polygon[] objects representing the isosurfaces are still there, just transparent. One could of course do something like ContourPlot3D[{(x^2 + y^2 + z^2 + 8)^2 - 36 (x^2 + y^2), y^2 + (z - 2)^2 - 4}, {x, -4, 4}, {y, -4, 4}, {z, -2, 2}, BoundaryStyle -> {1 -> None, 2 -> None, {1, 2} -> {}}, ContourStyle -> None, Mesh -> None] (as Vitaliy comments) if one just wants the curves themselves. So many unused points, though... – J. M.♦ May 24 '12 at 5:49
@R.M Thanks for the updates. I seem to be getting an awful lot of upvotes for code that was never of my own devising, and not notably better (that I can tell) than that in the other response. Embarrassed am I. – Daniel Lichtblau May 24 '12 at 14:22
As rightly you should be keeping all the geometric goodies to yourself. There should be a Spelunking tag... – Yves Klett Sep 17 '12 at 8:33
This is admittedly a hack, and does not work as well with PlotPoints / MaxRecursion as a dedicated function would (you will notice that I needed to increase PlotPoints for a good result), but it's a good way to make a quick and dirty plot:
````
ContourPlot3D[(x^2 + y^2 + z^2 + 8)^2 == 36 (x^2 + y^2),
  {x, -4, 4}, {y, -4, 4}, {z, -2, 2},
  MeshFunctions -> {Function[{x, y, z}, y^2 + (z - 2)^2 - 4]},
  Mesh -> {{0}},
  ContourStyle -> None,
  PlotPoints -> 30, BoxRatios -> Automatic]
````
Addendum
Per @whuber's comment below, the same thing can be achieved using RegionFunction:
````
ContourPlot3D[(x^2 + y^2 + z^2 + 8)^2 == 36 (x^2 + y^2),
  {x, -4, 4}, {y, -4, 4}, {z, -2, 2},
  RegionFunction -> Function[{x, y, z}, y^2 + (z - 2)^2 < 4],
  ContourStyle -> None, Mesh -> None, BoxRatios -> Automatic]
````
The style can be adjusted using BoundaryStyle.
I believe that this will give good quality results even without increasing PlotPoints, but I'll need to do some more testing before I can be certain about that...
-
What made this problem hard back in the days of old Mathematica was that the version of ContourPlot3D[] available in the standard packages didn't have the MeshFunctions option, and thus it took a bit of labor to look at which polygons were intersecting. This is nice, even if it is quick-and-dirty. :) – J. M.♦ May 23 '12 at 20:29
+1 With suitable graphics settings, the RegionFunction option will accomplish something similar. – whuber May 23 '12 at 20:32
@J.M. I think Mathematica's plotting framework is really powerful, and it uses adaptive sampling at every stage. I am not sure if mesh lines are computed with a higher resolution than the surface or not, but the system definitely has the capability to do this for the intersection of two surfaces: it does it when computing exclusions. Try e.g. Plot3D[Boole[x^2 + y^2 < 1], {x, -1.1, 1.1}, {y, -1.1, 1.1}] and not how smooth and sharp the circle boundary is. If you turn off exclusions (Exclusions -> None), it'll be a lot more jagged. – Szabolcs May 23 '12 at 20:33
@whuber And you just answered the question that didn't fit in the comment I wrote above! RegionFunction will sample the intersection curve very smoothly, similarly to how Exclusions are computed! (The Plot example above.) This should give a better result than MeshFunctions, I think, but I'd need to test it. – Szabolcs May 23 '12 at 20:34
http://math.stackexchange.com/questions/296117/special-kaehler-manifolds
|
# Special Kaehler manifolds
We have the complex vector space $V=T^{*}C^{m}$ with standard complex symplectic form $\Omega =\sum_{i=1}^{m}dz^{i}\wedge dw^{i}$, and $\tau : V\to V$ is the standard real structure of $V$, with set of fixed points $V^{\tau }=T^{*}R^{m}$. Then $\gamma =\sqrt{-1}\,\Omega (\cdot,\tau \cdot)$ defines a Hermitian form. A holomorphic immersion $\phi : M\to V$ of a complex manifold $M$ into $V$ is called nondegenerate if $\phi ^{*}\gamma$ is nondegenerate. If $\phi$ is nondegenerate, $\phi^{*}\gamma$ defines a Kaehler metric $g$ on $M$. If, additionally, $\phi$ is a Lagrangian immersion, then it induces a torsion-free flat connection $\nabla$ on $M$. These are facts from the paper of V. Cortes, Realization of special Kaehler manifolds as parabolic spheres. I tried to understand them using the simplest example, $m = 2$, but without success.

My question is: how do we get the metric $g$ and the connection $\nabla$ on $M$, and what does it mean for $\phi^{*}\gamma$ to be nondegenerate?
-
## 1 Answer
Your questions are answered in section 1.3 of this paper. The basic ideas are as follows.
The form $\phi^\ast \gamma$ is just the pullback of $\gamma$ by $\phi$: For any $m \in M$ and vectors $u, v \in T_m M$, $$(\phi^\ast \gamma)_m(u, v) = \gamma(d\phi_m(u), d\phi_m(v)).$$ So nondegeneracy of $\phi^\ast \gamma$ means nondegeneracy as a form, i.e. $$(\phi^\ast \gamma)(u, v) = 0 \text{ for all $v$ if and only if $u = 0$}.$$
The induced metric $g$ on $M$ is $g = \mathrm{Re}(\phi^\ast \gamma)$.
Section 1.3 of the linked paper explains how to get a flat, torsion-free connection $\nabla$ on $M$ in the case that $\phi$ is a totally complex holomorphic immersion. Proposition 6 tells us that a holomorphic immersion is Lagrangian and nondegenerate if and only if it is Lagrangian and totally complex. Hence we can apply the totally complex case here to get our flat, torsion-free connection.
-
Is there an example of such an immersion, that is Lagrangian and totally complex? Is Kaehler immersion same as totally complex? In the paper above everything is pure theory with no examples. – Novak Djokovic May 1 at 3:54
http://math.stackexchange.com/questions/266928/sum-of-random-number-of-random-variables
|
# Sum of random number of random variables
Consider a sequence of independent and identically distributed random variables $X_1, X_2, \ldots$ such that $$\mathbb P(X_i\geqslant k)=\prod_{\ell=1}^{k-2}\frac{n-\ell}n \ \textrm{ for every } 2\leqslant k\leqslant n+1.$$
Consider also $$Z= \sum_{i=1}^Y X_i ,$$
where $Y$ follows a geometric distribution with success probability $1/n$.
What is the mean and variance of $Z$ and is it possible to calculate its full distribution? I am particularly interested in what happens for large $n$.
-
If $Y,X_1,X_2,\ldots$ are independent, then $E(Z)$ is just equal to $E(Y)E(X)$. Otherwise, you need to specify how $Y,X_1,X_2,\ldots$ are related to each other. – user1551 Dec 29 '12 at 8:39
Thanks. I should have added variance explicitly and not have it hidden in the full distribution part. Fixed. – user54551 Dec 29 '12 at 9:20
But you haven't answer the key question: are $Y,X_1,X_2,\ldots$ independent? – user1551 Dec 29 '12 at 9:22
Sorry. Yes they are. – user54551 Dec 29 '12 at 9:25
I have edited that bit of information into your question. See if I understand you correctly. If not, feel free to roll back. – user1551 Dec 29 '12 at 9:35
## 1 Answer
To elaborate on @user1551's comment: under independence, you can use the law of iterated expectations to derive what you need. First, the conditional expectation of $Z$ given $Y = y$ is \begin{eqnarray*} E \left[ Z|Y = y \right] & = & E \left[ \sum_{i = 1}^y X_i \right]\\ & = & yE \left[ X_i \right] \end{eqnarray*} This allows us to compute $E[Z]$: \begin{eqnarray*} E \left[ Z \right] & = & E_Y \left[ E \left[ Z|Y \right] \right]\\ & = & E_Y \left[ YE \left[ X_i \right] \right]\\ & = & E \left[ Y \right] E \left[ X_i \right] \end{eqnarray*}
For the variance of $Z$, you could exploit formulas for conditional variances (such as here).
You can apply a similar argument to derive the cumulative distribution of $Z$: \begin{eqnarray*} \Pr \left[ Z \leqslant z \right] & = & E_Y \left[ \Pr \left[ Z \leqslant z|Y \right] \right] \end{eqnarray*}
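For this particular $X$ (whose survival function $\Pr(X\geqslant k)=\prod_{\ell=1}^{k-2}(n-\ell)/n$ is that of the index of the first repeated value among uniform draws from $n$ symbols) and $Y$ geometric on $\{1,2,\dots\}$ with success probability $1/n$, the identity $E[Z]=E[Y]E[X]$ can be checked numerically. This is my own sketch, assuming $Y, X_1, X_2, \ldots$ independent as clarified in the comments:

```python
import random
from math import prod

def sample_X(n, rng):
    # draw uniformly from n symbols until the first repeat; the index of that
    # draw has P(X >= k) = prod_{l=1}^{k-2} (n-l)/n for 2 <= k <= n+1
    seen, k = set(), 0
    while True:
        k += 1
        v = rng.randrange(n)
        if v in seen:
            return k
        seen.add(v)

def sample_Y(n, rng):
    # geometric on {1, 2, ...} with success probability 1/n, so E[Y] = n
    y = 1
    while rng.random() >= 1 / n:
        y += 1
    return y

def mean_Z_exact(n):
    # E[X] = sum_{k>=1} P(X >= k); the survival probabilities vanish past k = n+1
    surv = [1.0] + [prod((n - l) / n for l in range(1, k - 1)) for k in range(2, n + 2)]
    return n * sum(surv)          # E[Z] = E[Y] E[X] under independence

rng = random.Random(42)
n, trials = 5, 20000
emp = sum(sum(sample_X(n, rng) for _ in range(sample_Y(n, rng)))
          for _ in range(trials)) / trials
print(mean_Z_exact(n), emp)       # the two agree up to Monte Carlo error
```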
-
@lip1 Sorry, there were two terms missing from the formula. – Learner Jan 1 at 2:26
http://www.physicsforums.com/showthread.php?p=4176841
|
Physics Forums
## Which of these interpretations of the modulus squared of wavefunction is right?
Does $|\psi(\mathbf{x},t)|^2d^3\mathbf{x}$ or $|\psi(\mathbf{x},t)|^2d^3\mathbf{x}dt$ give the probability of a particle to collapse at the point $\mathbf{x}$ at time $t$?
Griffiths sides with the former, but I'm having doubts.
Quote by dEdt Does $|\psi(\mathbf{x},t)|^2d^3\mathbf{x}$ or $|\psi(\mathbf{x},t)|^2d^3\mathbf{x}dt$ give the probability of a particle to collapse at the point $\mathbf{x}$ at time $t$? Griffiths sides with the former, but I'm having doubts.
It's the former, it doesn't make sense to integrate over time, at any instant t, the integration over space gives you the overall probability at that time, which is 1.
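For instance, the normalized Gaussian $\psi(x)=\pi^{-1/4}e^{-x^{2}/2}$ integrates to total probability $1$ over space at any fixed instant; a quick numerical sketch (my own illustration, not from the thread):

```python
import math

def prob_density(x):
    # |psi(x)|^2 for psi(x) = pi^(-1/4) * exp(-x^2/2)
    return math.exp(-x * x) / math.sqrt(math.pi)

# trapezoidal integration over [-10, 10]; the tails beyond are negligible
N, a, b = 100_000, -10.0, 10.0
h = (b - a) / N
total = h * (sum(prob_density(a + i * h) for i in range(1, N)) +
             0.5 * (prob_density(a) + prob_density(b)))
print(total)
```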
Quote by cattlecattle It's the former, it doesn't make sense to integrate over time, at any instant t, the integration over space gives you the overall probability at that time, which is 1.
Here are my issues: 1) space and time seem to be treated on different footings, and 2) the probability that say a Geiger counter goes off at a given time t is zero, but the probability that it goes off over some time interval is non-zero.
## Which of these interpretations of the modulus squared of wavefunction is right?
Quote by dEdt Here are my issues: 1) space and time seem to be treated on different footings,
Yes. This is nonrelativistic quantum mechanics, which treats space and time differently. To fix this we have relativistic quantum field theory.
Quote by dEdt 2) the probability that say a Geiger counter goes off at a given time t is zero, but the probability that it goes off over some time interval is non-zero.
If you do an ideal position measurement at time t, the probability of finding the particle *somwhere* is 1. Geiger counters don't do ideal position measurements; the quantum mechanical analysis of radioactive decay is somewhat more complicated.
Quote by dEdt 2) the probability that say a Geiger counter goes off at a given time t is zero, but the probability that it goes off over some time interval is non-zero.
This is true, but I think the probability given by $|\psi|^2dV$ according to Born's interpretation should not be used directly to predict the frequency of counts of the Geiger detector (unless one smuggles the source intensity into $\psi$, which can allow us to do just that; but then the above form of Born's rule is not applicable).
Instead, if the wave function for a particle is normalized (the clearest approach), it gives us the probability that this particle is at some point of space (without the necessity of detecting it there).
You are right that the number of counts (clicks) of a detector set at some definite distance from the piece of matter scattering charged particles will be proportional to the time interval of the measurement, but this is because a larger interval allows more $\textit{distinct particles}$ to arrive at the detector; however, each one can be described by a normalized wave function that gives a density of probability in space by the rule $|\psi|^2dV$.
http://math.stackexchange.com/questions/182581/prime-factorization-composite-integers/182595
# Prime factorization, Composite integers.
Describe how to find a prime factor of 1742399 using at most 441 integer divisions and one square root.
So far I have only square rooted 1742399 to get 1319.9996. I have also tried to find a prime number that divides 1742399 exactly; I have tried up to 71 but had no luck. Surely there is an easier way that I am missing without trying numerous prime numbers. Any help would be great, thanks.
- you need only to test numbers up to $1319$. - test $2$ for even numbers - test $3$ for multiples of $3$ - the remaining numbers will be $1$ or $5 \bmod 6$ – Raymond Manzoni Aug 14 '12 at 20:36
@RaymondManzoni This is undoubtedly what the question had in mind. One can also eliminate multiples of 5 fairly efficiently and reduce the number of divisions by working mod 30. – Mark Bennet Aug 14 '12 at 20:43
Note also that in Raymond's scheme the first divisor you encounter will be prime (this is not entirely trivial). – Mark Bennet Aug 14 '12 at 20:44
@RaymondManzoni Thanks for your help guys. I understand using 2 to test for even numbers and 3 to test for multiples of 3 but I am confused over where the 5 mod 6 comes from. – Apeman Aug 14 '12 at 22:25
a prime number greater than $3$ must be equal to $1$ or $5 \bmod 6$ because a prime (larger than $3$) must be odd i.e. $1,3$ or $5 \bmod 6$. Since $3\bmod 6$ was handled by the case $3$ only remains $1$ and $5$. – Raymond Manzoni Aug 14 '12 at 22:30
## 4 Answers
Are you allowed to have a table of prime numbers? There are fewer than 441 such primes below 1320.
You probably aren't supposed to use this method, but 1742399 is nearly a square as you have identified and you can write it as 1742400-1, which is the difference of two squares. This gets you two smaller factors almost for free.
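As a quick sanity check of the difference-of-squares trick (a Python sketch; everything here is standard library):

```python
import math

n = 1742399
s = math.isqrt(n + 1)    # n + 1 = 1742400 is a perfect square, so s = 1320
assert s * s == n + 1
a, b = s - 1, s + 1      # n = s^2 - 1 = (s - 1)(s + 1)
assert a * b == n        # 1319 * 1321 == 1742399
```

Both 1319 and 1321 happen to be prime, so this single observation completes the factorization.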
• you need only to test numbers up to $1319$.
• test $2$ for even numbers
• test $3$ for multiples of $3$
• the remaining numbers will be $1$ or $5 \bmod 6$
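As a concrete sketch of this scheme (the function and its division counter are my own illustration, not part of the original problem):

```python
import math

def factor_with_budget(n):
    """Trial division: test 2 and 3, then only candidates of the form
    6k - 1 and 6k + 1 up to sqrt(n).  Returns (factor, divisions used)."""
    divisions = 0
    limit = math.isqrt(n)          # the single allowed square root
    for d in (2, 3):
        divisions += 1
        if n % d == 0:
            return d, divisions
    d = 5
    while d <= limit:
        for cand in (d, d + 2):    # 6k - 1, then 6k + 1
            divisions += 1
            if n % cand == 0:
                return cand, divisions
        d += 6
    return n, divisions            # no divisor found: n is prime

print(factor_with_budget(1742399))   # (1319, 441): exactly the allowed budget
```

Since $1742399 = 1319 \cdot 1321$ and $1319$ is the very last candidate below the square root, this particular input uses the full budget of 441 divisions.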
And testing the remaining numbers in the natural increasing order will mean that the first divisor encountered is a prime. – Mark Bennet Aug 14 '12 at 20:46
I liked your answer because I could see immediately where the 441 came from. I could factorise the number, but I couldn't identify where the question was coming from, and you did. – Mark Bennet Aug 14 '12 at 21:05
@Mark: in fact I began by dividing $1320$ by $441$ got nearly $3$ and deduced that the modulo $6$ was in play (I played with that long ago so...). Your prime suggestion was more powerful but would have needed the division by nearly $\ln(1320)\approx 7.2$... Cheers, – Raymond Manzoni Aug 14 '12 at 21:11
Note that the problem asks you to describe how you would go about factorizing 1742399 in at most 442 operations. You are not being asked to carry out all these operations yourself! I think your method of checking all primes up to the square root is exactly what the problem is looking for, but to be safe you should check that there are no more than 441 primes less than or equal to 1319.
-
Hint $\$ Prime $\rm\:p\neq 2,3\:\Rightarrow\: p\equiv \pm 1\pmod 6$
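A quick empirical check of the hint (a minimal pure-Python sketch; the helper name is mine):

```python
def is_prime(n):
    """Naive primality test by trial division."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# every prime p > 3 leaves remainder 1 or 5, i.e. is congruent to ±1 mod 6
assert all(p % 6 in (1, 5) for p in range(5, 10000) if is_prime(p))
```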
Remark $\$ For more on sieving see my post here, which includes links to ingenious mechanical sieving machines devised by Lehmer in the precomputer era, e.g. using bicycle chains, photoelectric devices, etc. Below are some general references.
Wooding, Kjell. The Sieve Problem in One and Two-Dimensions.
PhD Thesis. Calgary, Alberta. April, 2010
Lehmer, D.H. The sieve problem for all-purpose computers, MTAC, v. 7 1953, p. 6-14
http://crypto.stackexchange.com/questions/1145/how-much-would-it-cost-in-u-s-dollars-to-brute-force-a-256-bit-key-in-a-year/1149
|
# How much would it cost in U.S. dollars to brute force a 256 bit key in a year?
I am often told that any key can be broken and that it is only a matter of time and resources for any key to be broken. I know that this is technically true. However, I think that there is probably a point where it makes sense to say a key is uncrackable (for example, if it would cost 100 times the world GDP to crack it, it is essentially uncrackable without the help of an advanced alien civilization, etc.).
How much would it cost in U.S. dollars to brute force a 256 bit key using a strong algorithm such as AES or Twofish in a year?
I would also be curious to know what it would cost to crack a 128 bit key in a year.
I am asking this mostly out of curiosity. I do not know very much about cryptography, so please feel free to pick the algorithm of your choice if that matters. I am interested in how one would project the cost (assume you have to buy the hardware but you get to choose what hardware you buy).
## 5 Answers
256-bit key cracking through exhaustive search is totally out of reach of Mankind. And it takes quite a lot of wishful thinking to even envision a 128-bit key cracking:
• trying one key must be reduced to the flip of a single logic gate (compared to the hundreds of thousands which are actually required);
• that gate must be more energy-efficient than the most efficient logic gates currently in production;
• all of energy production on Earth must be diverted to that single key cracking goal.
Under these conditions (each of which being utterly unrealistic in its own way), a 128-bit key cracking effort could be imagined.
But this is far beyond the point where the notion of "dollar" makes any sense. The dollar is a currency: a conventional representation of "values", that people give to each other under the assumption that they could convert it back to tangible objects or services as they wish. So there is no possible notion of dollar when the sum far exceeds the total worth of what can ever be bought on Earth. The Gross World Product is, as of 2011, somewhere between 60 and 80 trillions of dollars: it depends a lot on what dollar you use as basis and how you try to map that on "purchase power". The point is that there is no meaningful notion of dollar beyond about $8*10^{13}$.
If you follow @mikeazo's argument (\$450 of energy consumption per machine and per year, where one machine can try about $3.2*10^{21}$ keys per year), then the GWP, converted entirely into energy, would allow for $2.5*10^{35}$ keys to be tried, i.e. a space of 118 bits or so. A 128-bit key space is 1024 times harder than that. Also, this assumes that everything produced on Earth can be reduced to energy with the same efficiency as the most competitive coal plants, which is a bit optimistic because GWP includes a lot of things which are not energy-convertible, such as artistic creations: how exactly would you make electricity out of, say, a song? Moreover, all the energy invested in the computation becomes, ultimately, heat, so there could be some climatic consequences, as in "the Earth is cooked".
To sum up: even if you use all the dollars in the World (including the dollars which do not exist, such as accumulated debts) and fry the whole planet in the process, you can barely do 1/1000th of an exhaustive key search on 128-bit keys. So this will not happen. And a 256-bit key search is about 340 billions of billions of billions of billions times harder than a 128-bit key search, so don't even think about it.
If 128-bit key cracking is already impossible, then why do we have 256-bit keys? – Joren Nov 9 '11 at 4:42
@Joren Good question! :) Some attacks compromise a certain number of rounds of AES with some complexity. For instance, a 2009 attack by Biryukov et al. compromises 9 rounds of AES with a complexity of 2^39 (as opposed to 2^256 for brute force). It stands to reason that using a 256 bit key rather than a 128 bit key is the easiest way to increase the number of rounds from 10 to 14, i.e. without changing the AES spec. On a side note, Bruce Schneier has previously commented that if ever AES is broken too badly, we merely need to increase the number of rounds to fix it. – Stefano Palazzo Nov 9 '11 at 5:25
@Joren: historically, AES has 128-, 192- and 256-bit keys because some inflexible US military regulations mandate the use of three distinct "security levels" (under the assumption that "really secure" cryptosystems are necessarily slow -- which was true in the 1930s, but not anymore). The three key sizes are enough to satisfy these regulations; but nobody said that the lower level had to be weak ! Nowadays, the 256-bit key size is rationalized by talking about quantum computers, but that's an afterthought. – Thomas Pornin Nov 9 '11 at 12:03
It may be worth noting that the implementation can still be messed up (as it was on Windows, making the implementation vulnerable to a rainbow table attack). – Sean Vikoren Dec 21 '11 at 16:00
There are some thermodynamic limitations. A good explanation of them is given by Bruce Schneier in Applied Cryptography:
One of the consequences of the second law of thermodynamics is that a certain amount of energy is necessary to represent information. To record a single bit by changing the state of a system requires an amount of energy no less than $kT$, where $T$ is the absolute temperature of the system and $k$ is the Boltzmann constant. (Stick with me; the physics lesson is almost over.)
Given that $k =1.38 \cdot 10^{-16} \mathrm{erg}/{^\circ}\mathrm{Kelvin}$, and that the ambient temperature of the universe is $3.2{^\circ}\mathrm K$, an ideal computer running at $3.2{^\circ}\mathrm K$ would consume $4.4 \cdot 10^{-16}$ ergs every time it set or cleared a bit. To run a computer any colder than the cosmic background radiation would require extra energy to run a heat pump.
Now, the annual energy output of our sun is about $1.21 \cdot 10^{41}$ ergs. This is enough to power about $2.7 \cdot 10^{56}$ single bit changes on our ideal computer; enough state changes to put a 187-bit counter through all its values. If we built a Dyson sphere around the sun and captured all of its energy for 32 years, without any loss, we could power a computer to count up to $2^{192}$. Of course, it wouldn’t have the energy left over to perform any useful calculations with this counter.
But that’s just one star, and a measly one at that. A typical supernova releases something like $10^{51}$ ergs. (About a hundred times as much energy would be released in the form of neutrinos, but let them go for now.) If all of this energy could be channeled into a single orgy of computation, a 219-bit counter could be cycled through all of its states.
These numbers have nothing to do with the technology of the devices; they are the maximums that thermodynamics will allow. And they strongly imply that brute-force attacks against 256-bit keys will be infeasible until computers are built from something other than matter and occupy something other than space.
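The arithmetic in the quoted passage is easy to reproduce (a back-of-the-envelope sketch; variable names are mine, constants as quoted):

```python
import math

k = 1.38e-16            # Boltzmann constant, erg per kelvin
T = 3.2                 # ambient temperature of the universe, kelvin
erg_per_flip = k * T    # ≈ 4.4e-16 erg to set or clear one bit

sun_year = 1.21e41      # annual energy output of the sun, erg
flips = sun_year / erg_per_flip

print(math.log2(flips))        # ≈ 187.5: a 187-bit counter in one year
print(math.log2(32 * flips))   # ≈ 192.5: count to 2^192 in 32 years
```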
The average cost for electricity in the US is \$0.12 per kWh. For a single server I'll use 3741 kWh annually as an estimate. That would be about \$450 per year for one machine.
Let's say you can do $10^{14}$ decryptions per second. That is $3.15*10^{21}$ decrypts per year for one machine. You need to do (on average) $2^{255}$ decryptions in a year, so you would need $\frac{2^{255}}{3.15*10^{21}} \approx 1.84*10^{55}$ machines. To figure your cost you would multiply that by \$450 and get about $8*10^{57}$, or 8 octodecillion dollars. World GDP is about $63*10^{12}$, so brute-forcing a 256-bit key would cost about $10^{44}$ times the world GDP.
You can follow similar math to get the cost of brute forcing a 128-bit key.
NOTE: I am completely ignoring hardware costs, maintenance, etc. The estimate above is for electricity only. We can take a hint from the NSA on this. You'd be a lot better off hiring a few thousand mathematicians and have them work on breaking the cipher as opposed to trying to brute-force it.
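The estimate above is easy to reproduce (a sketch using exactly the figures assumed in this answer):

```python
keys_per_second = 1e14
keys_per_machine_year = keys_per_second * 3600 * 24 * 365   # ≈ 3.15e21
machines = 2**255 / keys_per_machine_year                   # ≈ 1.84e55
cost = 450 * machines                                       # ≈ 8.3e57 dollars
world_gdp = 63e12
print(cost / world_gdp)                                     # ≈ 1.3e44
```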
There is something wrong with the figures above. If a "machine" can do $10^{14}$ decryptions per second (a very optimistic figure, by the way), this translates to about $3.2*10^{21}$ decryptions per year, not $3.6*10^{55}$. There is a lost factor of $10^{34}$ here -- also known as "sixteen billions of billions of billions of billions of billions of billions of billions of billions of billions of billions". – Thomas Pornin Nov 8 '11 at 13:35
@ThomasPornin, you are correct. Not sure how I got off on the conversion there. I've updated the answer. Thanks! – mikeazo♦ Nov 8 '11 at 13:45
Non-technical brute force method:
The most cost-effective "brute-force" method I can think of is to hire a gang of mobsters to force the guy who knows the password into giving it up. For a guy with no security, a good mobster would probably cost about \$5,000, and you'd need at least 3 of them. If you are going for a high-profile guy, a good mobster would probably cost about \$50,000 and you would need about 25 of them. Thus, you are looking at anywhere from \$15,000 to \$1.25 million using this method.
Technical brute force method using quantum computers:
If you want to go the technical route, you need to first be sure that you can check the key solely on your resources. Any dependence on someone else's system and they will be the limiting factor, because it will be impossible to try that many combinations without overloading their system.
Once you figure out how to check the key on your system, I'd suggest using a quantum computer in parallel with your other computers. Currently, the largest quantum computer is 14 qubits. This kind of computer could theoretically try all combinations of 14 bits in one operation. Thus, the first 14 bits can be handled in a single step if you put it in parallel with your normal computer. This means you can crack the password as if it were 114 or 242 bits instead of 128 or 256, which is a huge gain (16,384 times less expensive).
The cost of your 14-qubit computer will be insignificant compared to your total cost, even if it were \$1 billion. Thus, using mikeazo's formula, this means that you could crack the 256-bit code with $\frac{2^{242}}{7*10^{18}} \approx 10^{54}$ dollars and the 128-bit code with $\frac{2^{114}}{7*10^{18}} \approx 3*10^{15}$ dollars, i.e. about \$3 quadrillion.
In summary, with each additional qubit in our parallel quantum computer, the above prices will be cut approximately in half, until they approach the point where the price of the quantum computer becomes the limiting factor. So dig down deep into that research, quantum computer guys, we've got a code to crack!
@mikeazo: The mobsters are brutes, so "brute"-force...oh so funny ;) For my real answer I made the assumption that there was a way to test if any combination of the set of qubits plus a definite combination of the remaining bits is a solution. Worst case if that assumption is false, since a 256-bit quantum computer is able to reduce the key complexity by 128 bits, it is safe to assume that a 14-bit quantum computer would be able to reduce the key complexity by at least 7 bits, which is still a gain of $2^7 = 128$ times less resources. – Briguy37 Nov 8 '11 at 19:27
The bit about a quantum computer with 14 qubits being able to "try all combinations of 14 bits in one operation" is incorrect. It is a very tempting assumption (a view of quantum computers as zillions of computers all running in parallel through the magic of quantum), but it is wrong -- otherwise, a QC with 256 qubits could break a 256-bit key in time 1. QC does offer (theoretically) a performance boost on exhaustive search, but not to that point: it can reduce a space of size $N$ to $\sqrt{N}$ (hence 256-bit key search with a QC should be as hard as 128-bit "normal" key search). – Thomas Pornin Nov 8 '11 at 19:36
@Thomas Pornin: I don't know how you can say concretely that my assumption is incorrect. For example, can you prove that it is impossible to make a boolean quantum function that checks if any answer in the combination of qubits is a valid key? This function would allow us to fix one bit as 0 and provide the rest as qubits in both states. If the result of our quantum function was true, then the bit we fixed is 0, otherwise it is 1. Thus, the number of operations to determine the key would be the number of bits in the key. – Briguy37 Nov 8 '11 at 21:03
@Briguy37: I cannot prove it, but some smart people can. Roughly speaking, if we can break a cipher with an $n$-bit key in less than $2^{n/2}$ operations on a quantum computer, then we can break it in less than $2^n$ operations on a classical computer. It is quite technical, but a part of the problem is that even if you have a superposition of many states, the "filtering out" part to get a classical result (a definite 0 or 1) is constrained and cannot be done "at will". – Thomas Pornin Nov 8 '11 at 21:43
The reason that encryption works is that you have to try, on average, about half of all the possible keys. So with 128 bits you have to explore the set of 128-bit numbers: if you are lucky you will explore less than half of the possible answers, and if you are unlucky you will explore more than half. Doubling the number of bits squares the number of possible answers. Which is, of course, a very big number.
It is not that a quantum computer can do it in one operation, any more than an ordinary computer can find the solution of a math problem in one calculation. There is an algorithm, and you work the algorithm to find the answer. The difference between ordinary computers and quantum computers is that the quantum algorithm will examine each possible answer of your set at the same time, while an ordinary computer will examine only one of the possible answers at a time (assuming a simple computer rather than one with multiple CPUs).
As for why you can have a 128 bit key, this becomes clear if you assume that the encryption method is based on factoring large numbers. The way you encrypt is to find the largest secret prime number you practically can, and then multiply it by another prime number of similar order of magnitude. From the previous discussion it is quite clear that the result will be a number with about twice the number of digits that you can practically factor, and to factor it would take a huge amount of time... probably more time than for the Sun to burn out.
So you encode stuff using the large number as a public cipher to encipher the message, and the only way you can decode it is to factor the public key. Only the person who knows the factors can decode the message, because those calculations are much faster. Such systems use what are called trapdoor functions: that is to say, calculations which are very easy to do, and extremely hard to do in reverse unless you have additional information that is not public and cannot be easily discovered.
Now the last part, which is really the kicker with quantum computers. It turns out there is some doubt whether we will be able to use such devices in the publicly claimed way. You see, in order to use a quantum calculation you have to somehow read the answer. The only way you have to read the answer is with statistics. There is no other way. Well, it turns out that doing very accurate statistics is a difficult task. If, for example, you wished to find a 256-bit prime number, you would have to do good enough statistics to distinguish the correct 256-bit prime number from all other 256-bit prime numbers, which is to say, you must have an answer that is accurate to about 1 part in 2 raised to the 256th power. It may turn out that this task is as hard as finding the same prime number using an ordinary computer.
The fact is, one of the few definitive results in quantum research relating to interactive quantum computers is IQC = PSPACE. The result means that no interactive quantum computer can give results any faster than calculation in polynomial time. Those that are funded for quantum research claim that this doesn't mean you can't do quantum computing, and I guess they are right. But I haven't heard any of them make a public statement about how one can do an end run around the IQC limitation.
Factoring 256-bit numbers is not really a big problem nowadays, and is much faster than brute-forcing a similar-size key. The RSA key sizes where factoring needs a quantum computer are much larger. (Welcome to Cryptography Stack Exchange, by the way.) – Paŭlo Ebermann♦ Dec 2 '11 at 22:34
You've confused P with PSPACE there at the end. PSPACE = calculations possible in polynomial space (that is, memory). PSPACE is (probably) even bigger than NP. We don't know that IQC or BQP or any of the other quantum complexity classes are bigger than P, but it's a safe bet on the same order as betting P != NP. – Zack Dec 20 '11 at 20:23
This is post number 1337 :) – Polynomial Jul 23 '12 at 18:49
http://math.stackexchange.com/questions/118113/what-makes-elementary-functions-elementary/137236
|
# What makes elementary functions elementary?
Is there a mathematical reason (or possibly a historical one) that the "elementary" functions are what they are? As I'm learning calculus, I seem to focus most of my attention on trigonometric, logarithmic, exponential, and $n$th roots, and solving problems that have solutions which are elementary functions. I've been curious why these functions are called elementary, as opposed to some other functions that turn up rather naturally in mathematics. What is the reason that these functions take up most of our attention, and is there a reason that some additional functions are not included amongst the elementary functions? In other words, what property or properties do these functions possess that separates them from non-elementary functions (if there is one)?
thanks for this question. As to what I have posted earlier, there is no elementary function that exists for $x$. Now, I am also wondering about the "distinction" of elementary from those of not – Keneth Adrian Mar 9 '12 at 5:07
Those elementary functions are the familiar functions from lower-level maths. They are the most useful functions in mathematics and they play a very vital role in applications. That's what makes them elementary. – Hassan Muhammad Mar 9 '12 at 5:22
The polynomial, exponential, sine, and cosine functions are "elementary" because they are very useful and will more frequently arise naturally in an investigation (whether within math or in an application) than most other functions. So everyone needs to know them. But why are they so useful? I think fundamentally it's because they are solutions to some of the simplest differential equations you could write down. The polynomials are the functions whose nth derivative is constantly 0. The sine and cosine functions satisfy $y'' + y = 0$ and the exponential function satisfies $y' = y$. – Mike Benfield Apr 26 '12 at 13:06
@Mike: or if one wishes to be a bit more inclusive, the exponential function, the sine, the cosine, and their hyperbolic counterparts all satisfy the differential equation $y^{(iv)}=y$. – J. M. Apr 26 '12 at 13:31
## 8 Answers
As Sivaram Ambikasaran mentioned, the description on Wikipedia is fine.
I believe the class of elementary functions, $E$, is commonly thought of as a construction of the form
1. All polynomials are in $E$
2. The exponential and the logarithm functions are in $E$
3. The sine and cosine functions are in $E$.
4. $E$ is closed under addition, subtraction, multiplication, division and composition (finitely many operations of these).
5. $E$ is the smallest set with the properties 1-4.
This applies to both real or complex valued functions.
Edit 1:
Some examples of functions that are elementary
• $f(x)=1-x^2$
• $s(x) = \sqrt{x}$ (see addendum below)
• $g(x)=\arctan x$ (see addendum below)
• $U(x)=\sin\frac{1}{\log(1+x^2)}$
• $A(x)=|x| = \sqrt{x^2}$
• $_2F_1(1,1,2,x) = -\frac{\log(1-x)}{x}$ (a Gauss hypergeometric expression that reduces to an elementary function)
Some examples of functions that are not elementary
• The Sine integral $si(x)=\int_0^x\frac{\sin t}{t}dt$
• The Error function $erf(x)=\frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}dt$
• The Cantor function
• The characteristic function of an interval.
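To get a feel for how such non-elementary functions are handled in practice, here is a minimal Python sketch (the function name is mine) that evaluates the sine integral from its power series, since no finite elementary formula for it exists:

```python
import math

def Si(x, terms=20):
    """Sine integral Si(x) = integral of sin(t)/t from 0 to x, via its
    power series: sum over k of (-1)^k x^(2k+1) / ((2k+1) * (2k+1)!)."""
    return sum((-1) ** k * x ** (2 * k + 1) / ((2 * k + 1) * math.factorial(2 * k + 1))
               for k in range(terms))

print(Si(1.0))   # ≈ 0.9460831
```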
Edit 2:
The nomenclature elementary function is of course used since the functions that are elementary can be deduced using finitely many applications of elementary operations on basic "high school functions".
Edit 3:
Convinced by comments - removed closed under inversion in step 5. Added log in step 2.
Addendum (2013-02-28):
To see that the square root function $s(x)=\sqrt{x}$ is elementary using the above definition just note that $$x\mapsto \frac{1}{2}\log x = \log x^{1/2}=\log\sqrt{x}$$ and hence $$s(x)=\sqrt{x}=\exp(\log\sqrt{x})$$ is elementary too. Perhaps $\arctan$ is a bit harder to deduce from the other steps in the construction of $E$, first note that \begin{eqnarray} \tan x=\frac{\sin x}{\cos x}= \frac{e^{ix}-e^{-ix}}{i(e^{ix}+e^{-ix})}= -i\frac{e^{2ix}-1}{e^{2ix}+1}=\\ -i\frac{e^{2ix}+1 -2}{e^{2ix}+1} = -i\frac{e^{2ix}+1 }{e^{2ix}+1}-i\frac{-2}{e^{2ix}+1}=\\-i +i\frac{2}{e^{2ix}+1} \end{eqnarray}
Hence if we solve for $e^{2ix}$ in the above identity we get \begin{eqnarray} e^{2ix}=\frac{2}{1-i\tan x}-1= \frac{2-(1-i\tan x)}{1-i\tan x}= \frac{1+i\tan x}{1-i\tan x} \end{eqnarray} and taking the complex logarithm (more precisely the principal branch of the logarithm) we get $$x=\frac{1}{2i}\log\frac{1+i\tan x}{1-i\tan x}$$ That is $$t\mapsto \arctan t =\frac{1}{2i}\log\frac{1+it}{1-it}$$ belongs to $E$.
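The identity just derived is easy to verify numerically (a quick sketch; Python's `cmath.log` returns exactly the principal branch used above):

```python
import cmath
import math

def arctan_via_log(t):
    """arctan t = (1/(2i)) * Log((1 + it)/(1 - it)), principal branch."""
    return (cmath.log((1 + 1j * t) / (1 - 1j * t)) / 2j).real

for t in (-3.0, -0.5, 0.0, 0.7, 2.0):
    assert abs(arctan_via_log(t) - math.atan(t)) < 1e-12
```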
That definition sounds wrong to me. By that definition, $\sin x = ax + b$ would have an elementary solution $x$ as a function of $a, b$. – Zsbán Ambrus Mar 9 '12 at 9:20
@ZsbánAmbrus Could you please explain what you mean? What you look at is an equation - not a function. – AD. Mar 9 '12 at 9:28
This nice, concise definition of $E$ falls short of explaining why we consider its elements "elementary". Note that polynomials are merely finite applications of addition and multiplication; $n$-th roots undo "powers" (repeated multiplication). Exponentials are merely powers with a different focus (base held constant rather than exponent); logs undo those. Trigs get into the club via ties to the complex exponential. Perhaps what makes elementary functions "elementary", then, is that they're fundamentally "arithmetical". – Blue Mar 9 '12 at 11:19
Correct. Do not include function inversion. Instead of writing down what you remember, why not cite a definitive reference? – GEdgar Mar 9 '12 at 15:58
The original truly elementary functions are rational functions built up with the basic arithmetic operations.
Arctan and log are necessary and sufficient to integrate rational functions while sin/cos and exp are the solutions of the simplest differential equations and serve as a basis for the simplest interesting class of differential equations.
So, in a way, trigonometric functions, log and exp are the next thing found if one adds calculus to basic arithmetic.
But of course, it remains always slightly arbitrary since it is hard to justify the exclusion of elliptic functions, for example.
Yet, few people would argue that the elementary functions aren't simpler/satisfy simpler relations than elliptic functions in most respects.
"At one time... every young mathematician was familiar with $\mathrm{sn}\,u$, $\mathrm{cn}\,u$, and $\mathrm{dn}\,u$, and algebraic identities between these functions figured in every examination" - E.H. Neville – J. M. Apr 26 '12 at 13:26
The good old times. – Phira Apr 26 '12 at 13:30
I wonder if part of the rationale behind the exclusion of the elliptic functions is that they're inherently two-argument functions, whereas all of the elementary functions are functions of a single variable. They're also arguably more self-contained than any of the elementary functions - one is less likely to accidentally 'trip over' an elliptic function while doing other mathematical work than with any of the core elementary functions. – Steven Stadnicki Feb 28 at 21:21
There is an article in the January edition of the Notices entitled "Closed Forms: What They Are and Why We Care". http://www.ams.org/notices/201301/index.html
This is a sort of conjectural answer.
I suspect the elementary functions are called that simply because they are so simple. This sounds a bit tautological, so let me go on. What classifies as an elementary function? I think everyone would agree that polynomials, the nth-root functions, logarithms, exponentials, sines, and cosines are all elementary functions. A couple cool things about these functions are that we completely understand their graphs, continuity and rates of growth, and all derivatives and antiderivatives of all orders more or less. And we've seen polynomials and nth-root functions since elementary school; exponentials, logs, sines and cosines since mid-secondary school. One might argue that we don't really look at exponentials or logs until we prep for calculus - that's a fair statement, perhaps.
What I'm trying to say is that we've seen these all before and get them. When I think of a non-standard function, the first that come to mind are the logarithmic integral $li(x) = \int \frac{dx}{\ln x}$ and the sine integral $si(x) = \int \frac{\sin x}{x}\,dx$. I don't know why, really. But I have to really think before I know what the sine integral looks like (the logarithmic integral happens to not be so bad mentally, so it goes).
But what about other functions? Do we call them elementary? Are rational functions elementary? I might argue that a rational function is more elementary than $\sec x$, because secant is weird (in my opinion). And we don't understand arbitrary antiderivatives of secants so well. To admit a weakness, I can't currently think of the antiderivative at all. (Is it $\ln|\sec x + \tan x|$? Something like that - differentiate it and find out, I suppose.)
What about compositions of elementary functions? Is $\cos(\sin x)$ elementary? I hope not. But products are. Why? Again, I would say that products are easy, compositions are weird. Compositions can change domains and ranges in not-immediately-intuitive ways. I suppose products could too, but those fit my intuition better.
To give credence, natural log is a stretch for an elementary function in my opinion. It's well studied, but not well-behaved. We do understand arbitrary antiderivatives of it, but it takes a bit of work. This is a theme with natural log, just like asking how multiplying by $\ln (x)$ or $1/\ln(x)$ affects convergence (it 'almost' never does).
This makes me think of general reciprocals. $\frac{1}{x}$ is elementary. Is $\frac{1}{\ln x}$? I sort of hope not there too - it's close to the logarithmic integral.
So in short, I would say that elementary functions are those that we all like and understand without thinking too hard. But I also don't think that the exact list of elementary functions is concrete. It would be interesting to know what mathematica or W|A considers elementary, because they both certainly have an inbuilt list.
-
This is one of the few cases where I do not agree with wikipedia's claim then, I suppose. – mixedmath♦ Mar 9 '12 at 7:28
@mixedmath: Wikipedia's authors did not invent the convention. This is the same meaning used in the statements of theorems such as that $e^{x^2}$ and $\frac{1}{\ln(x)}$ have no elementary antiderivative. – Jonas Meyer Mar 9 '12 at 15:07
@mixedmath The sine integral is important because, like the error integral, it actually turned up in applications, so that people tried very hard to "solve" it before they were able to even formulate what "unsolvable" means. – Phira Apr 26 '12 at 13:18
Another point, sometimes left out of the discussion, is the setting. Something like: A meromorphic function defined on a connected domain in the complex plane is called an elementary function if ...
Then you can do variants. Functions defined on an interval in the real line, perhaps omitting trig functions in your definition ... An abstract extension of a differential field $F(x)$ ...
-
I dare to give a simple ("common denominator") answer: it is historical and social convention that defines elementary functions. If we accept this, there is still a large room for discussion about what exactly one means by "elementary" and for what subjective reasons.
-
I believe that a system of axioms for elementary functions is not only hard to build, but also almost useless. Actually, there are functions that some teachers/professors would call elementary and other teachers/professors would definitely not. For instance, I strongly suggest that characteristic functions of finite unions of intervals should be elementary: their graphs are simply horizontal segments, which is rather elementary. Moreover, students grow up thinking that every function is actually a $C^\infty$ function, since they believe that only "elementary" functions exist! It is time to allow elementary functions to be (at least) discontinuous at a finite number of points.
More conservative colleagues found a useful recipe: elementary functions are
1) polynomials; 2) exponentials and logarithms; 3) goniometric (trigonometric) functions (sin, cos, arcsin, arccos, and their sons/daughters like tan, arctan, cotan, etc.).
-
Elementary functions can all be generated through an infinite sum $\sum a_n x^n$, as well as a finite number of compositions $f(g(x))$; they are the additive completion of polynomials. They are elementary because polynomials are easy to handle.
Non-elementary functions such as the zeta function and the gamma function cannot be written as infinite polynomials, but are most naturally defined by infinite products. They are non-elementary because multiplication is more complex than addition; they are a step up in the hierarchy.
-
There is an immense number of functions which do have a development as an infinite sum of the form you mention yet which are generally not said to be elementary... – Mariano Suárez-Alvarez♦ Mar 9 '12 at 6:05
Every differentiable function of a complex variable! If you want one with a name.... say, the Airy functions. – Mariano Suárez-Alvarez♦ Mar 9 '12 at 6:08
I don't know where you looked. But the Airy functions $Ai$, for example, have Taylor series developments around zero with real coefficients and valid on the whole of $\mathbb R$. – Mariano Suárez-Alvarez♦ Mar 9 '12 at 6:20
I think you have described all analytic functions, which is a strictly larger set. The error function, for example, is not elementary but its Taylor series converges everywhere. – Rahul Narain Mar 9 '12 at 6:39
By golly, how did I know you were going to ask that. – anon Mar 9 '12 at 6:59
http://stats.stackexchange.com/questions/5982/burn-in-period-for-random-walk?answertab=oldest
# Burn-in period for random walk
We are trying to make a simulation experiment involving a common stochastic trend that is described by a random walk (or $I(1)$ process) $Y_t = Y_{t-1} + \varepsilon_t$, where the innovations $\varepsilon_t \sim N(0,1)$. However, when can we be sure that the past innovations are more or less reasonably included in the stochastic trend? Is there any good proxy for a random walk's burn-in period?
I've looked at R's default burn-in suggestion for the $AR(p)$ part: `ceiling(6/log(minroots))`, meaning that in the case of a unit root we get infinity here, and roughly tried to take a close-to-unity root of 1.0001 (actually it is the same as taking 60000 at once). So, any reasonable suggestions or rules of thumb you use in practice?
-
Could you add a bit more context please? e.g. what R function and package are you using? – onestop Jan 4 '11 at 19:50
A link explaining why random-walk needs a burn-in also would help. – mpiktas Jan 4 '11 at 20:12
Well we will use diffinv() directly, but the burn-in default could be found in arima.sim() function. – Dmitrij Celov Jan 4 '11 at 20:39
## 1 Answer
Burn-in doesn’t make sense here. The random walk you describe does not have a stationary distribution.
-
Well, I do know that the process is not stationary. The burn-in period's purpose is to be more or less confident that the starting history values are already present in the stochastic trend, in the sense that we may assume we have started from the long, long past. It is not clear why I have to go from zero if I don't want to. It would also be interesting to hear some suggestions about the same question when the process is a random walk with a drift component. – Dmitrij Celov Jan 5 '11 at 7:12
If I understand correctly, you want to simulate a stochastic trend. Why not initialize $Y_0$ to a random value, say a sample from $N(0,\sigma^2)$ with $\sigma^2$ large? – vqv Jan 5 '11 at 18:22
One more comment. The problem with burn-in is that, in addition to being non-stationary, the process is not ergodic. So what you are asking for is not really possible. You will always be able to infer (roughly) what the initial value of the process is. – vqv Jan 5 '11 at 18:39
Suppose $Y_t = \alpha Y_{t-1} + \epsilon_t$ where $\epsilon_t$ are i.i.d. $N(0,1)$ and $|\alpha| \leq 1$. The conditional distribution of $Y_t$ given $Y_0$ is $N(\alpha^t Y_0, \sigma_{\alpha,t}^2)$, where $\sigma_{\alpha,t}^2 \leq t$ depends only on $\alpha$ and $t$. If $\alpha = 1$, note that $Y_t$ given $Y_0$ remains centered at $Y_0$ even as $t \to \infty$. In that sense, you can always infer $Y_0$. On the other hand, if $|\alpha|< 1$ then the conditional mean of $Y_t$ given $Y_0$ tends to 0 as $t\to\infty$ so that asymptotically $Y_t$ is conditionally independent of $Y_0$. – vqv Jan 6 '11 at 15:42
If $|\alpha| = 1$, then $\sigma_{\alpha,t}^2 = t$. If $|\alpha| < 1$, then $\sigma_{\alpha,t}^2 = (1-\alpha^{2t}) / (1-\alpha^2)$. So although you can infer $Y_0$ from $Y_t$ when $\alpha = 1$, the standard error will be of the order $\sqrt{t}$. I think to better answer your question you need to be less specific and explain what you will use $Y_t$ for. – vqv Jan 6 '11 at 15:49
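vqv's suggestion of a diffuse initialization can be sketched in a few lines. Python/NumPy is used here rather than R, and the width $\sigma_0 = 100$ is an arbitrary choice: draw $Y_0$ once from a wide normal instead of burning in.

```python
import numpy as np

rng = np.random.default_rng(0)
T, sigma0 = 1000, 100.0                 # series length; diffuse std for Y_0 (arbitrary)

steps = np.empty(T)
steps[0] = rng.normal(0.0, sigma0)      # diffuse initial level, replaces a burn-in
steps[1:] = rng.normal(0.0, 1.0, T - 1) # unit-variance innovations
y = np.cumsum(steps)                    # random walk: Y_t = Y_{t-1} + eps_t

# the increments recover the N(0,1) innovations, as a sanity check
assert abs(np.diff(y).std() - 1.0) < 0.2
```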
http://physics.stackexchange.com/questions/1775/why-is-there-no-absolute-maximum-temperature/1778
Why is there no absolute maximum temperature?
If temperature makes particles vibrate faster, and movement is limited by the speed of light, then I would assume temperature must be limited as well. Why is there no limit?
-
Forget SR considerations and lets focus on low velocity/KE particles. If I am standing, with my thermometer, in a stream of unidirectional particles going on average 1 mile per second I measure a certain temperature. If I am accelerated by an outside force to 1 mps then the particles appear stationary except for some wiggle. Has the temperature measured by my thermometer dropped?? Related to the above does the measured temperature depend on the random spread of energies about the mean or is it solely related to the mean regardless of spread – user2607 Mar 17 '11 at 18:18
6 Answers
I think the problem here is that you're being vague about the limits Special Relativity impose. Let's get this clarified by being a bit more precise.
The velocity of any particle is of course limited by the speed of light c. However, the theory of Special Relativity does not imply any limit on energy. In fact, as energy of a massive particle tends towards infinity, its velocity tends toward the speed of light. Specifically,
$$E = \text{rest mass energy} + \text{kinetic energy} = \gamma mc^2$$
where $\gamma = 1/\sqrt{1-(u/c)^2}$. Clearly, for any energy and thus any gamma, $u$ is still bounded from above by $c$.
We know that microscopic (internal) energy relates to macroscopic temperature by a constant factor (on the order of the Boltzmann constant), hence temperature of particles, like energy, has no real limit.
-
Yeah. So it should be explicitly noted (and I am perhaps blind but I don't see this anywhere in your answer) that the appearance that temperature relates to the velocity (as opposed to the energy) is just a low-energy approximation. In SR concepts of energy and velocity depart greatly whereas in classical mechanics they are connected by simple kinetic energy law. – Marek Dec 9 '10 at 23:54
@Marek: Well I think it's noted quite clearly in the SR equation for `E`. Saying that, it might not be immediately apparent that $\gamma$ (that appears in the equation for E) depends on velocity u. – Noldorin Dec 10 '10 at 0:17
@Noldorin: I was thinking more along the lines that $E$ doesn't depend on velocity at all for photons so the two concepts totally depart in SR (and your $\gamma$ formula falls apart). And the reason why I am talking about stating this explicitly is that apparently OP asked his question precisely because he thought temperature has to do with velocity. – Marek Dec 10 '10 at 0:33
@Marek: Massless particles don't come into the question here. I don't want to get broader than need be... – Noldorin Dec 10 '10 at 0:34
@Noldorin: well sure, they don't come in if you don't mention them. But I have a feeling that something is missing. On the other hand, this answer isn't intended for me so be it. Just one last remark: if I were to answer OP's question (which I probably won't anymore), I'd point out the black body which makes it obvious that velocity has nothing to do with temperature. – Marek Dec 10 '10 at 0:41
The speed of light is an upper limit for the speed of a massive object, but there is no upper bound on the kinetic energy of an object. In fact, that's why the speed of light is an upper limit (one of many reasons, anyway)-- an object moving at the speed of light would have infinite kinetic energy.
The temperature is a measure of the average kinetic energy of particles in a sample. Since kinetic energy does not have an upper limit, temperature does not have an absolute maximum.
(In equations, the kinetic energy is: $K=(\gamma - 1)mc^2 = (\frac{1}{\sqrt{1-v^2/c^2}}-1)mc^2$ which becomes infinitely large as v gets very close to the speed of light c.)
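To make the divergence concrete, here is a small numerical sketch (my own illustration; the electron mass is just a convenient choice): $v$ stays below $c$, but the kinetic energy $(\gamma - 1)mc^2$ grows without bound as $v \to c$.

```python
import math

c = 299_792_458.0   # speed of light, m/s
m = 9.109e-31       # electron mass, kg (any massive particle works)

def kinetic_energy(v):
    """Relativistic kinetic energy K = (gamma - 1) m c^2."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return (gamma - 1.0) * m * c ** 2

# each step closer to c multiplies the kinetic energy, with no ceiling
assert kinetic_energy(0.9 * c) < kinetic_energy(0.99 * c) < kinetic_energy(0.999 * c)
assert kinetic_energy(0.9999999 * c) > 1000 * kinetic_energy(0.9 * c)
```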
-
There is an absolute maximum temperature, and it is $0^{-}$. :)
Okay, that sounds silly, but look it up in L&L: Statistical Physics I.
Think about an Ising paramagnet in an external field: At "zero" temperature (or actually $0^{+}$) the free energy of a system will be minimized by a unique minimum energy configuration. As we raise the temperature, the number of microstates with slightly higher energy grows rapidly, so we have a lower free energy in these entropically favorable configurations. Now we continue all the way to infinite temperature, at which point the system becomes completely disordered.
But wait, what if we drive the system to even higher energy? In that case there are fewer microstates and so the derivative that defines temperature goes negative, and the temperature that corresponds to these configurations is $-\infty$. This actually corresponds to the principle of "population inversion" in lasers. Anyway, higher and higher energy configurations (with their continually decreasing entropy) correspond to decreasing negative temperatures, until all of the spins point against the external field at $T=0^-$.
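The entropy argument can be made concrete with a toy model (my own sketch, not from the thread): for $N$ two-level spins with $m$ excited, $S = \ln \binom{N}{m}$, and the inverse temperature $1/T = \partial S/\partial E$ is positive below half filling, zero at maximum entropy, and negative under population inversion.

```python
from math import comb, log

N, eps = 100, 1.0                          # number of spins; energy per excited spin

def entropy(m):
    return log(comb(N, m))                 # S = ln(multiplicity), with k_B = 1

def inv_temperature(m):
    # 1/T = dS/dE, central difference with dE = 2*eps
    return (entropy(m + 1) - entropy(m - 1)) / (2 * eps)

assert inv_temperature(10) > 0             # ordinary positive temperature
assert abs(inv_temperature(50)) < 1e-12    # maximum entropy: T -> infinity
assert inv_temperature(90) < 0             # population inversion: T < 0
```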
-
That's a fantastic answer. There might be some skepticism about the "negative temperature" part however you do mention its relation with "population inversion" a routine occurrence in laser physics. Question: Has anyone setup an experiment which can 'measure' these negative temperatures? – user346 Jan 23 '11 at 0:05
None that I know of, and if such a set-up does exist it must be exceptionally clever. I guess the point is that, to do thermometry in the usual sense, the system that you're measuring has to act as a reservoir with respect to your probe - population inversion is easy enough, but to maintain such an unstable state? and with enough degrees of freedom to behave as a thermal reservoir? It seems unreasonably difficult. – wsc Jan 23 '11 at 0:15
you need to be careful that a system can have different temperatures; for instance, one could say that galactic halos have pretty uniform movement relative to the disk, so one would say that the $\Delta E$ is small and hence small temperature, but the halo might be composed of stars with high temperatures themselves! so a temperature may be adequate only to a specific scale of the system – lurscher Mar 17 '11 at 19:52
@Deepak : An experimental negative temperature of -350 K was demonstrated in 1951 paper "A Nuclear Spin System at Negative Temperature" link.aps.org/abstract/PR/v81/p279 , found via en.wikipedia.org/wiki/Negative_temperature . – Frédéric Grosshans Mar 18 '11 at 18:01
Note that negative temperatures are possible in systems like this because there is a limit on the energy available per particle in the context of the system. In the context of kinetic temperatures the necessary condition does not obtain. – dmckee♦ Jan 26 '12 at 2:52
If there is a maximum possible physical temperature it is well above anything we can reach experimentally and would require a complete theory of quantum gravity to understand it fully.
Neutron stars are some of the hottest objects in the universe today, with temperatures up to around ten trillion kelvin ($10^{12}\ K$). Similar temperatures have been reached in heavy ion collisions recently at the Large Hadron Collider, for very small volumes and times. At these temperatures even the protons and neutrons in nuclear matter are torn apart, leaving just a plasma of quarks and gluons.
But these temperatures are cool compared to the earliest moments of the big bang. According to our incomplete theories something really odd happens when you get to the Planck temperature which is around $10^{32} K$, so a good 20 orders of magnitude higher than anything we can produce.
At such temperature spacetime itself must be highly energised by gravitational interactions with hot matter. Some people think that spacetime passes through some kind of phase transition at this point, but if it does we have very little understanding of what kind of phase state lies beyond or whether temperatures can be raised further. Such understanding is in the realm of quantum gravity which is not yet fully developed. Such physics may describe the very earliest moments of the big bang and perhaps nowhere else in the universe.
-
While special relativity does not, a priori, place any constraints on the maximum temperature a system can attain, the situation changes when we consider the quark-gluon plasma - a stage you will eventually reach if you heat up any hadronic matter sufficiently. Rolf Hagedorn realized that for hadronic matter there exists a maximum temperature above which the partition function of the system is not well-defined. In other words you can only heat up hadronic matter to a maximum given by the Hagedorn temperature $T_H$.
Since hadronic matter constitutes the vast majority of the matter we interact with (excluding dark matter and dark energy), in some sense $T_H$ is the maximum temperature that ordinary matter can attain, though this is by no means the end of the story ...
Of course, even with special relativity alone, one can see that when the temperature of a gas of particles becomes comparable to the rest energy of the particles in question, any attempt to increase the temperature beyond that point will only lead to pair creation. This was, vaguely speaking, the reasoning behind Hagedorn's work.
You might also find this Nova column on the Hagedorn phase enlightening.
-
But this is like saying that the maximum temperature of liquid water is 100 degrees C; it's, strictly speaking, correct, but sort of misses the point, which is that a phase transition happens and you can have the same matter at higher temperatures in a different phase. For hadronic matter, heating it to higher temperatures produces a deconfined plasma. – Matt Reece Dec 10 '10 at 16:28
This is incorrect, Hagedorn later realized that the "maximum temperature" is a sign of a phase transition. There is no maximum hadronic temperature, because the exponentially growing number of states are spatially more and more extended. – Ron Maimon Aug 22 '11 at 2:50
We have two reasons for there not being a limit. As every other commenter here has said, SR does not limit the energy per particle; actually, energy per degree of freedom would be a more precise statement. In any case, temperature does not equate directly to the energy per particle degree of freedom, but rather to the statistical probabilities, namely that the relative probability of a particular state being occupied is proportional to $e^{-\Delta E/kT}$. (Even that only applies in the low-density limit: fermions are limited to one particle per allowable state, so in some high-density, low-temperature limits (solid state, and degenerate states such as some stellar interiors, white dwarfs, etc.) the lowest energy states are almost fully occupied.) But, in any case, temperature applies to the probability distribution of the occupation of states with different energies; average energy per particle is just the normalized integral of this density times energy.
-
http://nrich.maths.org/7169/solution
### Consecutive Numbers
An investigation involving adding and subtracting sets of consecutive numbers. Lots to find out, lots to explore.
### 14 Divisors
What is the smallest number with exactly 14 divisors?
### Summing Consecutive Numbers
Many numbers can be expressed as the sum of two or more consecutive integers. For example, 15=7+8 and 10=1+2+3+4. Can you say which numbers can be expressed in this way?
# Weekly Problem 21 - 2011
##### Stage: 2 and 3 Short Challenge Level:
In total there are $24$ different ways to paint the tile, which means there are $23$ other ways to paint it.
This problem is taken from the UKMT Mathematical Challenges.
The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
http://physics.stackexchange.com/questions/47851/action-of-the-lorentz-group-on-scalar-fields
# Action of the Lorentz group on scalar fields
The Lorentz groups act on the scalar fields as: $\phi'(x)=\phi(\Lambda^{-1} x)$
The conditions for an action of a group on a set are that the identity does nothing and that $(g_1g_2)s=g_1(g_2s)$. This second condition is not fulfilled because of the inverse on $\Lambda$. What is then the action of the Lorentz group on the scalar fields?
-
Would you mind writing out more carefully why the second condition isn't fulfilled? – user1504 Dec 29 '12 at 13:11
$(\Lambda_1 \Lambda_2)^{-1}=\Lambda_2^{-1} \Lambda_1^{-1}$ which is not $\Lambda_1^{-1} \Lambda_2^{-1}$ – inquisitor Dec 29 '12 at 13:20
It's OK, nothing else is needed. – Vladimir Kalitvianski Dec 29 '12 at 14:01
## 1 Answer
Denote by $g_1\phi$ the field transformed by the action of $\Lambda_1$ : $$(g_1\phi)(x) = \phi(\Lambda_1^{-1}(x))$$ Similarly $g_2$ has action $$(g_2\psi)(x) = \psi(\Lambda_2^{-1}(x))$$ Substitute $g_1\phi$ for $\psi$ $$(g_2g_1\phi)(x) = (g_1\phi)(\Lambda_2^{-1}(x)) = \phi(\Lambda_1^{-1}\Lambda_2^{-1}(x)) = \phi((\Lambda_2\Lambda_1)^{-1}(x))$$ So the group action looks correct.
-
+1, you forgot the subscripts on the first two lines, though it's not really unclear :) – kηives Dec 29 '12 at 18:37
@kηives : thanks - duly edited now! – twistor59 Dec 29 '12 at 19:16
Thank you, just to write it on the original notation: $(g_2g_1)\phi(x)=\phi((\Lambda_2 \Lambda_1)^{-1} x)=\phi(\Lambda_1^{-1}\Lambda_2^{-1} x)$, and $g_2(g_1\phi)(x)=g_1\phi(\Lambda_2^{-1} x)= \phi(\Lambda_1^{-1}\Lambda_2^{-1} x)$ – inquisitor Dec 29 '12 at 21:20
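The composition law in the answer above is easy to verify numerically. The sketch below (an added illustration, not from the thread) uses 2D rotation matrices as stand-ins for the $\Lambda$'s, since only the group multiplication matters here:

```python
import numpy as np

def rot(theta):
    # 2D rotation matrix, standing in for a Lorentz transformation
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def act(L, f):
    """Return the transformed field (g f)(x) = f(L^{-1} x)."""
    L_inv = np.linalg.inv(L)
    return lambda x: f(L_inv @ x)

phi = lambda x: np.sin(x[0]) + x[1] ** 2   # an arbitrary scalar field

L1, L2 = rot(0.3), rot(1.1)
x = np.array([0.7, -0.2])

lhs = act(L2, act(L1, phi))(x)   # g2 (g1 phi): phi(L1^{-1} L2^{-1} x)
rhs = act(L2 @ L1, phi)(x)       # (g2 g1) phi: the inverse reverses the order back
assert abs(lhs - rhs) < 1e-12
```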
http://en.wikipedia.org/wiki/Exsecant
# Exsecant
exsecant (blue) and excosecant (green)
The trigonometric functions, including the exsecant, can be constructed geometrically in terms of a unit circle centered at O. The exsecant is the portion DE of the secant exterior to (ex) the circle.
The exsecant, also abbreviated exsec, is a trigonometric function defined in terms of the secant function sec(θ):
$\operatorname{exsec}(\theta) = \sec(\theta) - 1. \,$
Once important in fields such as surveying, astronomy, and spherical trigonometry, the exsecant function is now little-used. Mainly, this is because the availability of calculators and computers has removed the need for trigonometric tables of specialized functions such as this one.
A related function is the excosecant (excsc), the exsecant of the complementary angle:
$\operatorname{excsc}(\theta) = \operatorname{exsec}(\pi/2 - \theta) = \csc(\theta) - 1. \!$
The reason to define a special function for the exsecant is similar to the rationale for the versine: for small angles θ, the sec(θ) function approaches one, and so using the above formula for the exsecant will involve the subtraction of two nearly equal quantities and exacerbate roundoff errors. Thus, a table of the secant function would need a very high accuracy to be used for the exsecant, making a specialized exsecant table useful. Even with a computer, floating point errors can be problematic for exsecants of small angles. A more accurate formula in this limit would be to use the identity:
$\operatorname{exsec}(\theta) = \frac{1-\cos(\theta)}{\cos(\theta)} = \frac{\operatorname{versin}(\theta)}{\cos(\theta)} = 2 \sin^2(\theta/2) \sec(\theta).\$
Prior to the availability of computers, this would require time-consuming multiplications.
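A short sketch (illustrative Python, not part of the article) shows why the rearranged identity matters numerically: for small angles the naive $\sec\theta - 1$ subtracts nearly equal quantities, while $2\sin^2(\theta/2)\sec\theta$ tracks the true value $\approx \theta^2/2$.

```python
import math

def exsec_naive(t):
    # sec(t) - 1: subtracts two nearly equal quantities for small t
    return 1.0 / math.cos(t) - 1.0

def exsec_stable(t):
    # equivalent identity 2 sin^2(t/2) sec(t): no cancellation
    return 2.0 * math.sin(t / 2.0) ** 2 / math.cos(t)

# the two formulas agree at moderate angles
assert abs(exsec_naive(0.5) - exsec_stable(0.5)) < 1e-12

# for tiny angles exsec(t) ~ t^2/2, which the stable form preserves
t = 1e-8
assert abs(exsec_stable(t) - t * t / 2.0) < 1e-25
```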
The name exsecant can be understood from a graphical construction, at right, of the various trigonometric functions from a unit circle, such as was used historically. sec(θ) is the secant $\overline{OE}$, and the exsecant is the portion $\overline{DE}$ of this secant that lies exterior to the circle (ex is Latin for out of).
## References
• M. Abramowitz and I. A. Stegun, eds., Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables (Dover: New York, 1972), p. 78. (See Abramowitz and Stegun.)
• James B. Calvert, Trigonometry (2004). Retrieved 25 December 2004.
http://mathoverflow.net/questions/70068?sort=oldest
## Conic hulls and cones
Suppose I have a number of vectors in $\mathbb{R}^n.$ The first question is: what is the most efficient algorithm to compute their "conic hull" (the minimal convex cone which contains them)? The next question is: suppose I have a number of vectors $v_1, \dotsc, v_n,$ as before, and a convex cone $C.$ I want to find the conic hull of `$\{v_1, \dotsc, v_n\} \cup C.$` In case it matters, in my application $C$ is the semidefinite cone. By "compute the conic hull", I mean: I want to find the subset of the $v_i$ on the boundary of the hull.
EDIT Thanks for all the comments. It is certainly true that the conic hull is equivalent to the intersection with a plane, and as @Will pointed out, the only problem is finding the plane. In the PSD case, we know that identity is PSD, so this gives us a choice of planes.
As for the algorithm, I had come up with @Matus' algorithm, but was not sure (and still am not) that this is the most efficient, since it looks like there is a lot of recomputation. The fact that the PSD cone is not a polyhedral cone is very true. Notice that you can still ask for the extremal points from the original set, and in fact, the same algorithm works, except that instead of solving a linear program at each step, we need to solve a semidefinite program, which hurts a bit, but is certainly tractable for small dimension.
If you ask for the full convex hull, I am not at all sure what the answer should even look like, since one will need to describe the "exposed" pieces of the cone. Surely mankind has wondered about this in the context of, e.g., the convex hull of a collection of disks in the plane, or some such.
-
Hi Igor, You can google for quickhull. Maybe that algorithm can be modified to compute the "conic hull"?. – Alex Eskin Jul 12 2011 at 6:16
Hi Igor. It looks like your question is really about taking the convex hull of n points in $R^{n-1}$. That is, if you can find a hyperplane $P$ for which all of the $v_i$ are on one side (presumably not difficult?), you can then consider the intersections $x_i$ of the lines through $O$ parallel to the $v_i$ with a translate of $P$. The convex hull of the $x_i$ is the projectivization of the conic hull, right? So the "quickhull" algorithm mentioned by Alex Eskin should do what you want? – Jean-Marc Schlenker Jul 12 2011 at 6:42
## 2 Answers
Question 1.
Usually, finding the convex hull means finding the vertices on each face of the convex hull; in this case, there are algorithms with running time $\mathcal O(n^{d/2})$ (where $n$ is the number of points, each in $\mathbb{R}^d$), and polytopes providing a lower bound (some of this is in the qhull material mentioned in the comments; I also found it in Matoušek's book "Lectures on Discrete Geometry", specifically in the bibliographic remarks of section 5.5).
For your scenario of just finding the list of vertices, it seems like you can take $d$ out of the exponent. I'm not finding references on this, which makes me a little nervous, but I'll give the algorithm in a second and you can decide how you feel about it.
Question 2.
I'm not sure what you mean, because your example of the PSD cone is not polyhedral; that is, it is not an intersection of finitely many halfspaces, or equivalently (by what is sometimes called the Minkowski–Weyl theorem) there does not exist a finite set of points generating it. If your $C$ were finitely generated, then I'd say run the conic hull algorithm on the union of both point sets, but I'm hoping you'll comment to clarify.
Algorithm.
The algorithm is greedy. It starts with all points in the conic hull, and greedily removes points that can be represented as conic combinations of other points. Thus it attempts to remove points $n$ times, and each iteration must try to rewrite each of the $< n$ points using the others; it terminates when passing over the remaining vertices finds no rewrites, meaning there are $\mathcal O(n^2)$ total iterations.
Each iteration solves a linear program of the following form. The goal is to rewrite some vertex $b$ using the other remaining vertices, collected as columns into the matrix $A$ (let $|A|$ denote the number of columns). This is a linear feasibility problem $$\textrm{find } x \in \mathbb{R}^{|A|} \textrm{ such that } Ax = b, x \geq 0.$$ A standard linear programming solver can do this in time $\mathcal O(m^{3.5} \ln(1/\epsilon))$ (where $m:=\max\{n,d\}$), where $\epsilon > 0$ will depend on how close some points are to being vertices of the cone. These algorithms will either find a satisfactory $x$, or will terminate with a dual certificate. Actually the duality is strong here; by Farkas's lemma you either can write a point in the desired way (it is in the cone), or you can separate it from the cone by a hyperplane.
To see that this algorithm is correct, first note that it never terminates while there are points that are conic combinations of others. Next, note that it never removes a vertex of the conic hull, because such points cannot be written as conic combinations of the other points.
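As an illustration, here is a dependency-free sketch of this greedy procedure for the planar case (Python; all names are mine, and a general-dimension version would substitute an LP solver for the membership test). In $\mathbb{R}^2$, Carathéodory's theorem for cones says a point lies in the conic hull iff it is a nonnegative combination of at most two of the generators, so the feasibility problem reduces to $2\times 2$ solves by Cramer's rule:

```python
from itertools import combinations

def _solve2(a1, a2, b):
    # Solve [a1 a2] x = b for 2D vectors via Cramer's rule; None if singular.
    det = a1[0]*a2[1] - a1[1]*a2[0]
    if abs(det) < 1e-12:
        return None
    return ((b[0]*a2[1] - b[1]*a2[0]) / det,
            (a1[0]*b[1] - a1[1]*b[0]) / det)

def in_cone_2d(gens, b):
    # Is b a nonnegative combination of the generators? (By Caratheodory
    # for cones, in R^2 it suffices to check pairs of generators, plus
    # nonnegative multiples of a single generator.)
    for a1, a2 in combinations(gens, 2):
        x = _solve2(a1, a2, b)
        if x is not None and x[0] >= -1e-9 and x[1] >= -1e-9:
            return True
    for a in gens:
        collinear = abs(a[0]*b[1] - a[1]*b[0]) < 1e-9
        if collinear and a[0]*b[0] + a[1]*b[1] >= 0:
            return True
    return False

def extreme_rays_2d(points):
    # Greedy removal: drop any point that is a conic combination of the rest.
    pts = list(points)
    changed = True
    while changed:
        changed = False
        for i in range(len(pts)):
            rest = pts[:i] + pts[i+1:]
            if in_cone_2d(rest, pts[i]):
                pts.pop(i)
                changed = True
                break
    return pts
```

For example, `extreme_rays_2d([(1, 0), (0, 1), (1, 1), (2, 1)])` discards `(1, 1)` and `(2, 1)` and keeps the two extreme rays.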
-
[Edited: previous version was flaky, sorry; also edited per Matus]
There is a variant of Matus's approach that takes $O(nT_A)$ work, where $A\le n$ is the size of the answer, that is, the number of extreme points, and $T_A$ is the work to solve an LP (or here an SDP) as Matus describes, but for $A+1$ points instead of $n$.
The algorithm is: (after converting from conic to convex hull) maintain an output set $S$, that starts empty, and test each point $v_i$ against $S$ one by one. Solve the LP (or SDP) as Matus describes. If $v_i$ is proven to be in the convex hull of $S$, discard it. Otherwise, the dual certificate gives a separating hyperplane and hence a direction (at least in the LP case, and something similar should apply in the SDP case) perpendicular to that hyperplane, such that the input point that is extreme in that direction is not already in $S$. (While $v_i$ is not in the convex hull of $S$, it may not be extreme itself.) Find that input point and add it to $S$.
Testing each of the $n$ points costs $T_A$, and the $O(n)$ work for finding an extreme point in a given direction yields a new member of $S$, so such tasks need $O(nA)\le O(nT_A)$ work.
This trick and related ones appeared here ("More output-sensitive..."); the notes for the paper give pointers to some related work.
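As a point of comparison (not the algorithm above, and purely for the planar case): gift wrapping, a.k.a. the Jarvis march, is the classic convex-hull method with the same output-sensitive $O(nA)$ flavor, finding each successive extreme point by a direct scan rather than via LP dual certificates. A Python sketch, with all names mine:

```python
def jarvis_march(points):
    # Gift wrapping: O(n * A), where A is the number of hull vertices.
    pts = sorted(set(points))
    if len(pts) < 3:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o)
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    def dist2(a, b):
        return (a[0]-b[0])**2 + (a[1]-b[1])**2

    hull = []
    start = pts[0]            # lexicographically smallest point is extreme
    p = start
    while True:
        hull.append(p)
        q = pts[0] if pts[0] != p else pts[1]
        for r in pts:
            if r == p:
                continue
            c = cross(p, q, r)
            # take r if it lies clockwise of q, or is collinear but farther,
            # so points strictly inside an edge are never output
            if c < 0 or (c == 0 and dist2(p, r) > dist2(p, q)):
                q = r
        p = q
        if p == start:
            break
    return hull
```

Each pass over the input yields one new hull vertex, which is exactly the $O(n)$-work-per-output-point pattern described above.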
-
Welcome to MO! That is a very nice algorithm! Just a note--I can't figure out how to directly convert this to conic hulls, so it seems necessary to use a conic-to-convex reduction as remarked by Jean-Marc Schlenker in the comments above. (To be specific about the problem: it is possible that a non-extremal point has the largest projection onto the dual certificate, simply because it is "farthest down" that polyhedral face. I tried a couple quick fixes, but all failed. Anyway, the projection solution by Jean-Marc is fast and sufficient.) – Matus Telgarsky Jul 13 2011 at 9:39
BTW I believe there is a typo in your description: primal certificates mean no extreme point, and dual certificates mean some extreme point amongst the remaining input points. – Matus Telgarsky Jul 13 2011 at 9:42
http://nrich.maths.org/7670/clue
# Interpolating Polynomials
##### Stage: 5 Challenge Level:
To find four points that a quadratic couldn't possibly fit, remember that quadratics only have one turning point.
To find the quadratic that fits three points, make sure you understand how you can add and subtract graphs, and what happens to the result. Don't try to fit all three points at once – fit two and then “fix” your line to fit the third.
Uniqueness: The Factor Theorem states that if $p$ is a polynomial and $p(a) = 0$, then there is a polynomial $q$ such that $p(x) = (x-a)q(x)$. What does this mean about the degrees of $p$ and $q$?
Finally, to prove two polynomials are equal, try proving their difference is zero.
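The "fit two, then fix" idea from the hint can be made concrete in code. Below is a small Python sketch (not part of the original problem): fit the line through the first two points, then add a multiple of $(x-x_0)(x-x_1)$ — which vanishes at $x_0$ and $x_1$ — chosen so that the third point is matched:

```python
def fit_quadratic(p0, p1, p2):
    # Three points with distinct x-coordinates determine an (at most)
    # quadratic polynomial.
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    slope = (y1 - y0) / (x1 - x0)

    def line(x):
        # the unique line through p0 and p1
        return y0 + slope * (x - x0)

    # "fix" the line: (x - x0)(x - x1) is zero at x0 and x1, so adding
    # c * (x - x0)(x - x1) leaves the fit at the first two points intact
    c = (y2 - line(x2)) / ((x2 - x0) * (x2 - x1))
    return lambda x: line(x) + c * (x - x0) * (x - x1)
```

For the points $(0,1)$, $(1,3)$, $(2,7)$ this recovers $x^2+x+1$.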
http://www.contrib.andrew.cmu.edu/~ryanod/?p=877
## Analysis of Boolean Functions
by Ryan O'Donnell
Fall 2012 course at Carnegie Mellon
# §5.3: The Fourier coefficients of Majority
In this section we will analyze the Fourier coefficients of $\mathrm{Maj}_n$. In fact, we give an explicit formula for them in Theorem 16 below. But most of the time this formula is not too useful; instead, it’s better to understand the Fourier coefficients of $\mathrm{Maj}_n$ asymptotically as $n \to \infty$.
Let’s begin with a few basic observations. First, $\mathrm{Maj}_n$ is a symmetric function and hence $\widehat{\mathrm{Maj}_n}(S)$ only depends on $|S|$ (Exercise 1.29). Second, $\mathrm{Maj}_n$ is an odd function and hence $\widehat{\mathrm{Maj}_n}(S) = 0$ whenever $|S|$ is even (Exercise 1.9). It remains to determine the Fourier coefficients $\widehat{\mathrm{Maj}_n}(S)$ for $|S|$ odd. By symmetry, $\widehat{\mathrm{Maj}_n}(S)^2 = \mathbf{W}^{k}[\mathrm{Maj}_n]/\binom{n}{k}$ for all $|S| = k$, so if we are content to know the magnitudes of $\mathrm{Maj}_n$’s Fourier coefficients, it suffices to determine the quantities $\mathbf{W}^{k}(\mathrm{Maj}_n)$.
In fact, for each $k \in {\mathbb N}$ the quantity $\mathbf{W}^{k}(\mathrm{Maj}_n)$ converges to a fixed constant as $n \to \infty$. We can deduce this using our analysis of the noise stability of majority. From the previous section we know that for all $|\rho| \leq 1$, \begin{equation} \label{eqn:maj-stab-series} \lim_{n \to \infty} \mathbf{Stab}_\rho[\mathrm{Maj}_n] = \tfrac{2}{\pi} \arcsin \rho = \tfrac{2}{\pi}\Bigl(\rho + \tfrac16 \rho^3 + \tfrac{3}{40} \rho^5 + \tfrac{5}{112} \rho^7 + \cdots \Bigr), \end{equation} where we have used the power series for $\arcsin$, \begin{equation} \label{eqn:arcsin} \arcsin z = \sum_{k \text{ odd}} \ \frac{2}{k2^k} \binom{k-1}{\frac{k-1}{2}} \cdot z^k, \end{equation} valid for $|\rho| \leq 1$ (see the exercises). Comparing \eqref{eqn:maj-stab-series} with the formula $$\mathbf{Stab}_\rho[\mathrm{Maj}_n] = \sum_{k \geq 0} \mathbf{W}^{k}[\mathrm{Maj}_n] \cdot \rho^k$$ suggests the following: For each fixed $k \in {\mathbb N}$, \begin{equation} \label{eqn:maj-one-function} \lim_{n \to \infty} \mathbf{W}^{k}[\mathrm{Maj}_n] = [\rho^k] (\tfrac{2}{\pi} \arcsin \rho) = \begin{cases} \frac{4}{\pi k2^k} \binom{k-1}{\frac{k-1}{2}} & \text{if $k$ odd,} \\ 0 & \text{if $k$ even.} \end{cases} \end{equation} (Here $[z^k]F(z)$ denotes the coefficient on $z^k$ in power series $F(z)$.) Indeed, we prove this identity below in Theorem 19. The noise stability method which suggests it can also be made formal (see the exercises).
Identity \eqref{eqn:maj-one-function} is one way to formulate precisely the statement that the “Fourier spectrum of $\mathrm{Maj}_n$ converges”. Introducing notation such as “$\mathbf{W}^{k}(\mathrm{Maj})$” for the quantity in \eqref{eqn:maj-one-function}, we have the further asymptotics \begin{equation} \label{eqn:maj-asympt-asympt} \begin{aligned} \text{for $k$ odd,} \qquad \mathbf{W}^{k}(\mathrm{Maj}) &\sim \left(\tfrac{2}{\pi}\right)^{3/2} k^{-3/2},\\ \mathbf{W}^{>k}(\mathrm{Maj}) &\sim \left(\tfrac{2}{\pi}\right)^{3/2} k^{-1/2} \qquad \text{as $k \to \infty$}. \end{aligned} \end{equation} (You are asked to show the above in the exercises.) The estimates \eqref{eqn:maj-asympt-asympt}, together with the precise value $\mathbf{W}^{1}(\mathrm{Maj}) = \tfrac{2}{\pi}$, are usually all you need to know about the Fourier coefficients of majority.
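These limiting weights are easy to check numerically. The short Python sketch below (function names are mine) evaluates $[\rho^k](\tfrac{2}{\pi}\arcsin\rho)$ from the closed form and compares it with the $(2/\pi)^{3/2}k^{-3/2}$ asymptotic:

```python
from math import comb, pi

def W_maj(k):
    # [rho^k] (2/pi) arcsin(rho): the limiting Fourier weight of Maj
    # at odd degree k
    assert k % 2 == 1
    return 4 / (pi * k * 2**k) * comb(k - 1, (k - 1) // 2)

def W_asym(k):
    # the k -> infinity approximation (2/pi)^{3/2} k^{-3/2}
    return (2 / pi) ** 1.5 * k ** -1.5
```

For instance, `W_maj(1)` is exactly $2/\pi \approx 0.6366$, and the ratio `W_maj(k) / W_asym(k)` approaches $1$ as $k$ grows.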
Nevertheless, let’s now compute the Fourier coefficients of $\mathrm{Maj}_n$ exactly.
Theorem 16 If $|S|$ is even then $\widehat{\mathrm{Maj}_n}(S) = 0$. If $|S| = k$ is odd, $$\widehat{\mathrm{Maj}_n}(S) = (-1)^{\frac{k-1}{2}} \frac{\binom{\frac{n-1}{2}}{\frac{k-1}{2}}}{\binom{n-1}{k-1}} \cdot \tfrac{2}{2^n} {\textstyle \binom{n-1}{\frac{n-1}{2}}}.$$
Proof: The first statement holds because $\mathrm{Maj}_n$ is an odd function; henceforth we assume $|S| = k$ is odd. The trick will be to compute the Fourier expansion of majority’s derivative $\mathrm{D}_n \mathrm{Maj}_n = \mathrm{Half}_{n-1} : \{-1,1\}^{n-1} \to \{0,1\}$, the $0$-$1$ indicator of the set of $(n-1)$-bit strings with exactly half of their coordinates equal to $-1$. By the derivative formula and the fact that $\mathrm{Maj}_n$ is symmetric, $\widehat{\mathrm{Maj}_n}(S) = \widehat{\mathrm{Half}_{n-1}}(T)$ for any $T \subseteq [n-1]$ with $|T| = k-1$. So writing $n-1 = 2m$ and $k-1 = 2j$, it suffices to show \begin{equation} \label{eqn:half-formula} \widehat{\mathrm{Half}_{2m}}([2j]) = (-1)^{j} \frac{\binom{m}{j}}{\binom{2m}{2j}}\cdot \tfrac{1}{2^{2m}}{\textstyle \binom{2m}{m}}. \end{equation}
By the probabilistic definition of $\mathrm{T}_\rho$, for any $\rho \in [-1,1]$ we have $$\mathrm{T}_\rho \mathrm{Half}_{2m}(1, 1, \dots, 1) = \mathop{\bf E}_{{\boldsymbol{x}} \sim N_\rho((1, 1, \dots, 1))}[\mathrm{Half}_{2m}({\boldsymbol{x}})] = \mathop{\bf Pr}[{\boldsymbol{x}} \text{ has $m$ $1$'s and $m$ $-1$'s}],$$ where each coordinate of ${\boldsymbol{x}}$ is $1$ with probability $\tfrac{1}{2} + \tfrac{1}{2} \rho$. Thus \begin{equation} \label{eqn:half1} \mathrm{T}_\rho \mathrm{Half}_{2m}(1, 1, \dots, 1) = {\textstyle \binom{2m}{m}}(\tfrac{1}{2} + \tfrac{1}{2} \rho)^{m} (\tfrac{1}{2} - \tfrac{1}{2} \rho)^{m} = \tfrac{1}{2^{2m}} {\textstyle \binom{2m}{m}} (1-\rho^2)^m. \end{equation} On the other hand, by the Fourier formula for $\mathrm{T}_\rho$ and the fact that $\mathrm{Half}_{2m}$ is symmetric we have \begin{equation} \label{eqn:half2} \mathrm{T}_\rho \mathrm{Half}_{2m}(1, 1, \dots, 1) = \sum_{U \subseteq [2m]} \widehat{\mathrm{Half}_{2m}}(U) \rho^{|U|} = \sum_{i=0}^{2m} {\textstyle \binom{2m}{i}} \widehat{\mathrm{Half}_{2m}}([i]) \rho^{i}. \end{equation} Since we have equality $\eqref{eqn:half1} = \eqref{eqn:half2}$ between two degree-$2m$ polynomials of $\rho$ on all of $[-1,1]$, we can equate coefficients. In particular, for $i = 2j$ we have $${\textstyle \binom{2m}{2j}} \widehat{\mathrm{Half}_{2m}}([2j]) = \tfrac{1}{2^{2m}} {\textstyle \binom{2m}{m}} \cdot [\rho^{2j}](1-\rho^2)^m = \tfrac{1}{2^{2m}} {\textstyle \binom{2m}{m}} \cdot (-1)^j {\textstyle \binom{m}{j}},$$ confirming \eqref{eqn:half-formula}. $\Box$
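The closed form in Theorem 16 can be sanity-checked by brute-force enumeration for small odd $n$ (a Python sketch; function names are mine):

```python
from itertools import product
from math import comb

def maj(x):
    # Maj_n for odd n: the sign of the sum of the +/-1 coordinates
    return 1 if sum(x) > 0 else -1

def fourier_coeff(f, n, S):
    # hat{f}(S) = E_x[f(x) * prod_{i in S} x_i], x uniform on {-1,1}^n
    total = 0
    for x in product((-1, 1), repeat=n):
        chi = 1
        for i in S:
            chi *= x[i]
        total += f(x) * chi
    return total / 2**n

def maj_coeff_theorem16(n, k):
    # Theorem 16: hat{Maj_n}(S) for |S| = k, with n odd
    if k % 2 == 0:
        return 0.0
    sign = (-1) ** ((k - 1) // 2)
    return (sign * comb((n - 1) // 2, (k - 1) // 2) / comb(n - 1, k - 1)
            * 2 / 2**n * comb(n - 1, (n - 1) // 2))
```

For example, for $n=3$ this gives $\widehat{\mathrm{Maj}_3}(\{1\}) = 1/2$ and $\widehat{\mathrm{Maj}_3}(\{1,2,3\}) = -1/2$, matching the enumeration.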
You are asked to prove the following corollaries in the exercises:
Corollary 17 $\widehat{\mathrm{Maj}_n}(S) = \widehat{\mathrm{Maj}_n}(T)$ whenever $|S| + |T| = n+1$. Hence also $\mathbf{W}^{n-k+1}[\mathrm{Maj}_n] = \frac{k}{n-k+1} \mathbf{W}^{k}[\mathrm{Maj}_n]$.
Corollary 18 For any odd $k$, $\mathbf{W}^{k}[\mathrm{Maj}_n]$ is a strictly decreasing function of $n$ (for $n \geq k$ odd).
We can now prove the identity \eqref{eqn:maj-one-function}:
Theorem 19 For each fixed odd $k$, $$\mathbf{W}^{k}[\mathrm{Maj}_n] \searrow [\rho^k] (\tfrac{2}{\pi} \arcsin \rho) = \tfrac{4}{\pi k2^k} {\textstyle \binom{k-1}{\frac{k-1}{2}}}$$ as $n \geq k$ tends to $\infty$ (through the odd numbers). Further, we have the error bound \begin{equation} \label{eqn:maj-weight-err} [\rho^k] (\tfrac{2}{\pi} \arcsin \rho) \leq \mathbf{W}^{k}[\mathrm{Maj}_n] \leq (1+2k/n) \cdot [\rho^k] (\tfrac{2}{\pi} \arcsin \rho) \end{equation} for all $k < n/2$. (For $k > n/2$ one can use Corollary 17.)
Proof: Corollary 18 tells us that $\mathbf{W}^{k}[\mathrm{Maj}_n]$ is decreasing in $n$; hence we only need to justify \eqref{eqn:maj-weight-err}. Using the formula from Theorem 16 we have $$\frac{\mathbf{W}^{k}[\mathrm{Maj}_n]}{[\rho^k] (\tfrac{2}{\pi} \arcsin \rho)} = \frac{\binom{n}{k}\tfrac{4}{2^{2n}}\binom{n-1}{\frac{n-1}{2}}^2\left.\binom{\frac{n-1}{2}}{\frac{k-1}{2}}^2/\binom{n-1}{k-1}^2\right.}{\frac{4}{\pi k2^k} \binom{k-1}{\frac{k-1}{2}}} = \tfrac{\pi}{2} n \cdot 2^{k-n}{\textstyle \binom{n-k}{\frac{n-k}{2}}} \cdot 2^{1-n}{\textstyle \binom{n-1}{\frac{n-1}{2}}},$$ where the second identity is verified by expanding all binomial coefficients to factorials. By Stirling’s approximation we have $2^{-m}\binom{m}{m/2} \nearrow \sqrt{\frac{2}{\pi m}}$, meaning that the ratio of the left side to the right side increases to $1$ as $m \to \infty$. Thus $$\frac{\mathbf{W}^{k}[\mathrm{Maj}_n]}{[\rho^k] (\tfrac{2}{\pi} \arcsin \rho)} \nearrow \frac{n}{\sqrt{n-k}\sqrt{n-1}} = (1-\tfrac{k+1}{n} +\tfrac{k}{n^2})^{-1/2},$$ and the right-hand side is at most $1+2k/n$ for $1 \leq k \leq n/2$ (an exercise). $\Box$
Finally, we can deduce the asymptotics \eqref{eqn:maj-asympt-asympt} from this theorem (as you are asked in the exercises):
Corollary 20 Let $k \in {\mathbb N}$ be odd and assume $n = n(k) \geq 2k^2$. Then \begin{align*} \mathbf{W}^{k}(\mathrm{Maj}_n) &= \left(\tfrac{2}{\pi}\right)^{3/2} k^{-3/2} \cdot (1\pm O(1/k)), \\ \mathbf{W}^{>k}(\mathrm{Maj}_n) &= \left(\tfrac{2}{\pi}\right)^{3/2} k^{-1/2} \cdot (1\pm O(1/k)), \end{align*} and hence the Fourier spectrum of $\mathrm{Maj}_n$ is $\epsilon$-concentrated on degree up to $\frac{8}{\pi^3} \epsilon^{-2} + O_{\epsilon}(1)$.
April 6th, 2012 | Tags: majority | Category: All chapter sections, Chapter 5: Majority and threshold functions
### 2 comments to §5.3: The Fourier coefficients of Majority
• Alex Nikolov
I think showing that the Fourier coefficient of even size sets is 0 when the function is odd is exercise 1.9, not 1.8.
• Thanks, fixed!
http://mathoverflow.net/questions/62282/mathbbpn-is-simply-connected
## $\mathbb{P}^n$ is simply connected
In his chapter about Hurwitz' theorem for curves, Hartshorne shows that $\mathbb{P}^1$ is simply connected, i.e. every finite étale morphism $X \to \mathbb{P}^1$ is a finite disjoint union of $\mathbb{P}^1$s. In an exercise the reader is invited to show that $\mathbb{P}^n$ is simply connected, using the result for $\mathbb{P}^1$.
I have no idea how to do this. Perhaps someone can give a hint? There are closed immersions $\mathbb{P}^1 \to \mathbb{P}^n$, along which we may pull back a finite étale morphism, but the trivializations don't have to coincide ... perhaps we can resolve this using cohomology theory? I'm a bit confused since $\mathbb{P}^n$ is $n$-dimensional, but this is in Hartshorne's chapter about curves. I don't want to use the more advanced material of SGA.
-
This comment is a little late, since the question has been answered, but let me just say that the statement "There are n obvious closed immersions P^1→P^n, along which we may pull back a finite étale morphism, but they do not cover P^n" seems a bit strange to me. (There are a lot more than n ways to embed P^1 in P^n, and they all seem to me to be equally obvious.) Am I missing something? – Artie Prendergast-Smith Apr 20 2011 at 7:16
## 7 Answers
Here is a sketch of an argument which directly uses simple connectedness of $\mathbb P^1$, and is related to the simple connectedness of rationally connected smooth varieties mentioned by Sandor in one of his answers.
The idea is to treat the $\mathbb P^1$s in $\mathbb P^n$ as analogous to arcs in a topological space, and to make a lifting argument (just as one does in the basic topological theory of covering spaces).
Let $Y \to \mathbb P^n$ be a finite etale map. Fix a base point $x \in \mathbb P^n$ and a point $y \in Y$ lying over $x$. If $x' \in \mathbb P^n \setminus \{x\}$, there is a unique line $L$ joining $x$ and $x'$. The preimage of $L$ is a disjoint union of curves $L'$, each mapping isomorphically to $L$ (by simple connectedness of $\mathbb P^1$), and we can choose the unique $L'$ containing $y$. Now let $y'$ be the point of $L'$ lying over $x'$.
The map $x' \mapsto y'$ (and of course mapping our original point $x$ to $y$) gives a section to the given map $Y\to \mathbb P^n$, which is what we wanted.
Added: Here is one explanation of why the map $x' \mapsto y'$ is algebraic. Let $\pi:Y \to \mathbb P^n$ be our given etale map. First note that $x' \mapsto \pi^{-1}(L)$ (where $L$ is the line joining $x$ and $x'$, as above) is a morphism from $\mathbb P^n \setminus \{x\}$ to the Hilbert scheme of $Y$. Now picking out the connected component $L'$ of $\pi^{-1}(L)$ containing $y$ is a morphism from our given locally closed subset of the Hilbert scheme to the Hilbert scheme, and so altogether we see that $x' \mapsto L'$ is a morphism. Finally, mapping $L'$ to $y'$ (which can be described as forming the intersection $L' \cap \pi^{-1}(x')$) is again a morphism. So altogether we have a section $\mathbb P^n \setminus \{x\} \to Y$. One way to show that this extends to a section over all of $\mathbb P^n$ (by sending $x$ to $y$) is just to repeat the whole process for a different choice of $x$, and glue the two resulting sections.
-
This seems to be the most elementary approach, but still there are some details I don't understand. Namely, $x' \mapsto y'$ is first defined only as a set-theoretical map $\mathbb{P}^n \backslash x \to X \backslash x$. Why is it a morphism? And why can we extend it on $\mathbb{P}^n$? Why does it suffice to find a section? – Martin Brandenburg Apr 21 2011 at 8:22
@Martin: as in Sandor's answer, $Y$ must silently be assumed connected, so that a section is enough. Also, I have edited my answer, which seems to be a more cumbersome version of the same idea, but seemingly not needing the extension step. Does it seem more airtight to you? I surmised that such simple geometric reasoning cannot take you out of algebraic geometry over anything, although I never felt so assured with coverings in positive characteristic. – BS Apr 21 2011 at 15:47
There is no need to assume that $Y$ is connected (although of course it is harmless to do so): any section of a finite etale morphism over a connected base induces an isomorphism between the base and a connected component of the cover. Thus the statement that a connected scheme is simply connected is equivalent to the statement that any finite etale morphism admits a section. – Emerton Apr 21 2011 at 16:17
Also, the map extends to $\mathbb P^n$ just by sending $x$ to $y$. – Emerton Apr 21 2011 at 16:20
"any section of a finite etale morphism over a connected base induces an isomorphism between the base and a connected component of the cover." Why? Also, why is your set-map (yes I knew that we map $x$ to $y$) a morphism? – Martin Brandenburg Apr 21 2011 at 16:34
There is somewhere a theorem in Hartshorne's book saying that an ample divisor on a normal projective connected scheme of dimension at least 2 is connected. Now proceed by induction on $n$. If there is a non-trivial étale cover $X \to \mathbb P^n$, consider the inverse image of a hyperplane $\mathbb P^{n-1}$; this is connected, since the pullback of an ample divisor by a finite map is ample, and this gives the required inductive step.
-
The theorem is III.7.9 in Hartshorne. – Dave Anderson Apr 19 2011 at 18:53
(From SGA I, Exposé XI).
You can prove it using the following two facts:
1) A product of simply connected proper varieties is simply connected (SGA I, X, 1.7). (*)
2) The fundamental group -- so in particular, being simply connected -- is a birational invariant of proper regular varieties (SGA I, X, 3.4).
(*) I do not know whether the properness is necessary here; it is required for the more general computation of the fundamental group of a product: in positive characteristic, one of the factors needs to be proper.
-
@ACL: I did some minor copyediting on your answer; I hope you don't mind. (I was having trouble with the sentence in 2), which I read as saying that the fundamental group itself was simply connected...) – Pete L. Clark Apr 20 2011 at 14:11
@Pete. Thanks a lot! – ACL Apr 21 2011 at 6:48
Nice! So 1) gives us that the $n$-fold product of $\mathbb P^1$'s is simply connected, and since the product variety is birational to $\mathbb P^n$, we get the required result by 2). Is this correct? – SGP Apr 24 2011 at 17:12
We may assume that $n\geq 2$. Let $f:X\to \mathbb P^n$ be a finite étale morphism where $X$ is connected and $H\subset \mathbb P^n$ a hyperplane. Then $f^*H$ is an ample divisor on $X$ and hence connected. By induction, the restriction $f^*H\to H$ is then an isomorphism, so $\deg f=1$ and $f$ is an isomorphism.
EDIT: added the previously silently assumed assumption that $X$ is connected.
-
@Sandor: You cannot conclude that $f$ is an isomorphism, just a finite disjoint union of isomorphisms.. – Martin Brandenburg Apr 19 2011 at 20:00
@Martin: sorry, I meant to say that you can assume at the start that $X$ is connected. – Sándor Kovács Apr 19 2011 at 21:21
How can we show that $f^* H$ is ample? And how the degree of $f$ is related with the degree of the restriction to $H$? – Martin Brandenburg Apr 22 2011 at 20:11
@Martin: The pull back of an ample divisor via a finite map is ample. This is an exercise in Hartshorne and can be proved using pretty much any characterization of ampleness. As for the degree: $f$ is unramified, so the degree is equal to the number of preimages of any (closed) point. This remains the same for any subvariety. – Sándor Kovács Apr 22 2011 at 22:56
I meant to add that there are other interesting ways to think about this issue. These do not conform to the request of a simple proof, but seem relevant to mention.
1. Every rationally connected smooth variety is simply connected (at least over $\mathbb C$); this is a result of Kollár–Miyaoka–Mori and, independently, Campana.

2. Hartshorne's conjecture, proved by Mori, says that $\mathbb P^n$ is the only smooth projective variety whose tangent bundle is ample. This allows for a simple proof that $\mathbb P^n$ is simply connected: let $f:X\to \mathbb P^n$ be a finite étale morphism and assume that $X$ is connected. Then clearly $X$ is smooth and projective, and furthermore it follows that $\Omega_X\simeq f^*\Omega_{\mathbb P^n}$ and hence the tangent bundle of $X$ is also ample. By Mori's theorem it is then isomorphic to $\mathbb P^n$. However, $\mathbb P^n$ does not admit unramified self-maps of degree $d>1$ (because the induced map on the Picard group would be multiplication by $d$ and then it would imply that $\deg K_{\mathbb P^n}=0$), so $f$ has to be an isomorphism.
-
What does the "at least over $\mathbb C$ mean in the first point? It is true over $\mathbb C$, but what does happen over other fields? I hate "at least"! :P – Mariano Suárez-Alvarez Apr 24 2011 at 17:23
Let me give another answer, even though it does not fit into Hartshorne's context:
Show that $\pi_1(\mathbb{P}^n)$ has to be abelian.
Use Kummer theory to relate coverings to torsion in $\mathrm{Pic}(\mathbb{P}^n)=\mathbb{Z}$; see e.g. Milne's Étale Cohomology, Prop. 4.11. This implies that there are no nontrivial étale coverings of degree prime to the base characteristic.

Then use Artin–Schreier theory to relate the rest of the coverings to $\Gamma(\mathbb{P}^n,\mathcal{O}_{\mathbb{P}^n})/(F-1)\Gamma(\mathbb{P}^n,\mathcal{O}_{\mathbb{P}^n})=0$ and $H^1(\mathbb{P}^n,\mathcal{O}_{\mathbb{P}^n})^F=0$, where $F$ is the Frobenius; see e.g. Milne, Prop. 4.12.
-
There is one thing which has always confused me with this argument. Since $H^1(X,\mathbb{Z})$ is essentially the abelianisation of the fundamental group, how are you ruling out the possibility that the fundamental group has trivial abelianisation, i.e. is a perfect group? – Daniel Loughran Apr 20 2011 at 15:39
Oops, you are of course perfectly right. In this particular case we can be saved, though, if I am not mistaken: $\mathbb{G}_m^n$ is an open subscheme of $\mathbb{P}^n$, so $\pi_1(\mathbb{P}^n)$ is a quotient of the abelian group $\pi_1(\mathbb{G}_m^n)$. – Lars Apr 20 2011 at 17:36
@Lars: so this works only in char. zero, otherwise $\pi_1(\mathbb{G}_m)$ is not abelian. – Laurent Moret-Bailly Apr 21 2011 at 6:40
Again oops, what was I thinking. I guess I don't know of an "easy" proof that $\pi_1(\mathbb{P}^n)$ is abelian then (as $\pi_1(\mathbb{A}_k^1)^{(p)}$ is free pro-$p$ on $\#k$ generators). – Lars Apr 21 2011 at 10:04
You can induct on $n$. Let $f:X\to\mathbb{P}^n$ be finite and étale.
If $H$ is a hyperplane in $\mathbb{P}^n$, there is a trivialization $\phi:f^{-1}(H)\simeq H\times F$, for a finite $F$, by the induction hypothesis.
If $L$ is any line in $\mathbb{P}^n$, $f^{-1}(L)$ is a finite disjoint union of $\mathbb{P}^1$'s, and you can label the components by elements of $F$ using the trivialization at any point of $L\cap H$ (in case $L\subset H$, otherwise there is only one).
Now any fiber $f^{-1}(x)$, $x\in \mathbb{P}^n$, is identified with $F$ through the labeling of the components of $f^{-1}(L)$, for any line $L$ through $x$ (this doesn't depend on the line through $x$, their space being connected).
-
http://www.r-bloggers.com/simulated-powerprecision-analysis/
## R-bloggers
R news and tutorials contributed by (452) R bloggers
# Simulated Power/Precision Analysis
February 21, 2013
By BioStatMatt
(This article was first published on BioStatMatt » R, and kindly contributed to R-bloggers)
I cringe when I see research proposals that describe a sophisticated statistical approach, yet do not evaluate this approach in their power/precision/sample size planning. It's often the case that a simplified version of the proposed statistical approach is used instead. Presumably, this is due to the limited availability of power/precision/sample size planning software for sophisticated statistical analyses.
In my own planning, I have defaulted to implementing power/precision analyses with Monte Carlo methods (i.e., simulation). I refer to the approach as "simulated power/precision analysis", but I concede that this may not be the best name. Indeed, there may be a more established name that is unknown to me. This approach initially requires more effort than using one of the many power/precision software packages. However, it's almost always more relevant to the proposed research. With practice, the simulation approach has become second nature, and I use it for complex and simple statistical strategies alike.
Below is an example of the simulation approach to compute the power of a test in a simple crossover design. Whenever a simulated power analysis is implemented, it's necessary to specify (1) how the data will arise, and (2) what statistical procedure will be applied. Note that there is no requirement that the statistical procedure should "match" the data generating mechanism. Rather, it's important that (1) is an accurate reflection of prior belief, and (2) is an accurate representation of the proposed statistical procedure. When (1) and (2) do match, as they do in this example, I am sometimes concerned that the resulting computations are optimistic.
In this example, $n$ patients will be recruited and given each of two treatments, where the order of treatments is randomized in a block fashion, so that the design is balanced in this regard. We assume that the data arise from a linear mixed effects model, where there is a random intercept for each patient, a treatment effect, an order effect, and a treatment-order interaction effect. The magnitude of each effect is specified, but may be zero. The statistical procedure is to fit a linear mixed effects model, compute a $100(1-\alpha)\%$ confidence interval for the magnitude of the treatment effect, and finally to make an inference about its significance. We conclude that the treatment effect is (level $(1-\alpha)$) significant when the associated confidence interval fails to include the value zero:
```
# Simulate a crossover design with the formula:
# Response ~ 1 + Treatment + Order + Treatment:Order + (1 | Patient)
# Fit simulated data with a linear mixed effects model. Make the
# significance decision about the treatment effect on the basis
# of a 95% confidence interval (i.e., significant if the 95% CI
# fails to include zero).
# n     - number of patients in each order group
# sdW   - within patient standard deviation
# sdB   - between patient standard deviation
# beta  - coefficient vector c(Intercept, Treatment, Order, Treatment:Order)
# alpha - significance level
simulate <- function(n, sdW=4, sdB=1, beta=c(8, 4, 0, 0), alpha=0.05) {
    require("lme4")
    Patient   <- as.factor(rep(1:(2*n), rep(2, 2*n)))
    Treatment <- c(rep(c("Treatment1", "Treatment2"), n),
                   rep(c("Treatment2", "Treatment1"), n))
    Order     <- rep(c("First", "Second"), 2*n)
    Data      <- data.frame(Patient, Treatment, Order)
    CMat      <- model.matrix(~ Treatment * Order + Patient, data=Data)
    # model.matrix orders the columns as (Intercept), Treatment, Order,
    # Patient dummies, then Treatment:Order, so the patient random
    # effects must be spliced in before the interaction coefficient
    Response  <- CMat %*% c(beta[1:3], rnorm(2*n-1, 0, sdB), beta[4]) +
        rnorm(4*n, 0, sdW)
    Data$Response <- Response
    Fit <- lmer(Response ~ (1 | Patient) + Treatment * Order, data=Data)
    Est <- fixef(Fit)[2]
    Ste <- sqrt(vcov(Fit)[2,2])
    prod(Est + c(-1,1) * qnorm(1-alpha/2) * Ste) > 0
}
# type I error for n=20 (result: 0.059)
#mean(replicate(1000, simulate(n=20, beta=c(8, 0, 0, 0))))
# type I error for n=50 (result: 0.057)
#mean(replicate(1000, simulate(n=50, beta=c(8, 0, 0, 0))))
# type I error for n=20 and order effect 2 (result: 0.062)
#mean(replicate(1000, simulate(n=20, beta=c(8, 0, 2, 0))))
# type I error for n=50 and order effect 2 (result: 0.05)
#mean(replicate(1000, simulate(n=50, beta=c(8, 0, 2, 0))))
# power for n=20 and treatment effect 4 (result: 0.869)
#mean(replicate(1000, simulate(n=20, beta=c(8, 4, 0, 0))))
# power for n=50 and treatment effect 4 (result: 0.997)
#mean(replicate(1000, simulate(n=50, beta=c(8, 4, 0, 0))))
```
Several scenarios are considered, including some checks on the type I error associated with the proposed procedure, and its power under three hypothetical data generating mechanisms. ***update 2013/02/23: commenter Paul rightly points out below that 1000 replications is insufficient for the implied precision of three decimal places!*** It's quite late as I'm writing this, and so I will end the discussion here. Indeed, I am trying to shorten my posts in an effort to make them more frequent! Please do comment if I've left out an important detail!
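On that precision caveat: the Monte Carlo standard error of an estimated rejection probability is easy to compute directly. Here is a quick sketch (my own, in Python rather than R for brevity; the function name is invented):

```python
import math

# For a rejection probability estimated from B independent replications,
# the Monte Carlo standard error is sqrt(p * (1 - p) / B).
def mc_standard_error(p, B):
    return math.sqrt(p * (1 - p) / B)

# With 1000 replications, an estimated power of 0.869 is only known to
# within roughly +/- 2 * 0.0107, i.e. about one decimal place:
print(round(mc_standard_error(0.869, 1000), 4))    # 0.0107
# Around 10^5 replications would be needed for ~3-decimal precision:
print(round(mc_standard_error(0.869, 100000), 4))  # 0.0011
```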
|
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Rule_of_inference
|
# All Science Fair Projects
# Rule of inference
In logic, especially in mathematical logic, a rule of inference is a scheme for constructing valid inferences. These schemes establish syntactic relations between a set of formulas called premises and an assertion called a conclusion. These syntactic relations are used in the process of inference, whereby new true assertions are arrived at from other already known ones. Rules also apply to informal logic and arguments, but the formulation is much more difficult and controversial.
As stated, the application of a rule of inference is a purely syntactic procedure. Nevertheless it must also be valid, or more precisely validity preserving. In order for the requirement of validity preservation to make sense, some form of semantics is necessary for the assertions the rule of inference relates and the rule of inference itself. For a discussion of the interrelation between rules of inference and semantics, see the article on propositional logic.
Prominent examples of rules of inference in propositional logic are the rules of modus ponens and modus tollens. For first-order predicate logic, rules of inference are needed to deal with logical quantifiers. See also validity for more information on the informal description of such arguments, and see first-order resolution for a uniform treatment of all rules of inference as a single rule in the case of first-order predicate logic.
Note that there are many different systems of formal logic, each with its own set of well-formed formulas, rules of inference, and semantics. See for instance temporal logic, modal logic, or intuitionistic logic. Quantum logic is also a form of logic quite different from the ones mentioned earlier. See also proof theory. In predicate calculus, an additional inference rule is needed; it is called Generalization.
In the setting of formal logic (and many related areas), rules of inference are usually given in the following standard form, with the premises listed above an inference line and the conclusion below it:

Premise#1
Premise#2
...
Premise#n
----------
Conclusion

This expression states that whenever in the course of some logical derivation the given premises have been obtained, the specified conclusion can be taken for granted as well. The exact formal language that is used to describe both premises and conclusions depends on the actual context of the derivations. In a simple case, one may use logical formulae, such as in

A→B
A
----
B
which is just the rule modus ponens of propositional logic. Rules of inference are usually formulated as rule schemata by the use of universal variables. In the rule (schema) above, A and B can be instantiated to any element of the universe (or sometimes, by convention, some restricted subset such as propositions) to form an infinite set of inference rules.
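To make the "purely syntactic" point concrete, here is a minimal sketch (my own illustration, with an assumed tuple encoding of formulas) of modus ponens as a rule that inspects only the shape of its premises:

```python
def modus_ponens(premise1, premise2):
    """From A -> B and A, conclude B.

    Formulas are encoded as nested tuples, with an implication written
    as ("->", antecedent, consequent); atoms are plain strings.  The
    rule is purely syntactic: only the shape of the premises matters.
    """
    if isinstance(premise1, tuple) and len(premise1) == 3 and premise1[0] == "->":
        _, antecedent, consequent = premise1
        if antecedent == premise2:
            return consequent
    raise ValueError("premises do not match the modus ponens schema")

# The schema variables A and B may be instantiated with any formulas:
print(modus_ponens(("->", "rain", "wet"), "rain"))                    # wet
print(modus_ponens(("->", ("->", "p", "q"), "r"), ("->", "p", "q")))  # r
```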
A proof system is formed from a set of rules, which can be chained together to form proofs, or derivations. Any derivation has only one final conclusion, which is the statement proved or derived. If premises are left unsatisfied in the derivation, then the derivation is a proof of a hypothetical statement: "if the premises hold, then the conclusion holds."
## Admissibility and Derivability
In a set of rules, an inference rule could be redundant in the sense that it is admissible or derivable. A derivable rule is one whose conclusion can be derived from its premises using the other rules. An admissible rule is one whose conclusion holds whenever the premises hold. All derivable rules are admissible. To appreciate the difference, consider the following set of rules for defining the natural numbers (the judgment $n\,\,\mathsf{nat}$ asserts the fact that n is a natural number):
$\begin{matrix} \frac{}{\mathbf{0} \,\,\mathsf{nat}} & \frac{n \,\,\mathsf{nat}}{\mathbf{s(}n\mathbf{)} \,\,\mathsf{nat}} \\ \end{matrix}$
The first rule states that 0 is a natural number, and the second states that s(n) is a natural number if n is. In this proof system, the following rule demonstrating that the second successor of a natural number is also a natural number, is derivable:
$\frac{n \,\,\mathsf{nat}}{\mathbf{s(s(}n\mathbf{))} \,\,\mathsf{nat}}$
Its derivation is just the composition of two uses of the successor rule above. The following rule for asserting the existence of a predecessor for any nonzero number is merely admissible:
$\frac{\mathbf{s(}n\mathbf{)} \,\,\mathsf{nat}}{n \,\,\mathsf{nat}}$
This is a true fact of natural numbers, as can be proven by induction. (To prove that this rule is admissible, one would assume a derivation of the premise, and induct on it to produce a derivation of $n \,\,\mathsf{nat}$.) However, it is not derivable, because it depends on the structure of the derivation of the premise. Because of this, derivability is stable under additions to the proof system, whereas admissibility is not. To see the difference, suppose the following nonsense rule were added to the proof system:
$\frac{}{\mathbf{s(-3)} \,\,\mathsf{nat}}$
In this new system, the double-successor rule is still derivable. However, the rule for finding the predecessor is no longer admissible, because there is no way to derive $\mathbf{-3} \,\,\mathsf{nat}$. The brittleness of admissibility comes from the way it is proved: since the proof can induct on the structure of the derivations of the premises, extensions to the system add new cases to this proof, which may no longer hold.
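The contrast can be made concrete in a proof assistant. In this Lean 4 sketch (my own illustration; the predicate name is invented), the derivable rule is literally a composition of the given rules, while the admissible rule requires case analysis on the derivation of its premise:

```lean
-- The judgment "n nat" modeled as an inductive predicate.
inductive IsNat : Nat → Prop
  | zero : IsNat 0
  | succ : (n : Nat) → IsNat n → IsNat (n + 1)

-- Derivable: the double-successor rule is just two uses of `succ`,
-- so it stays valid no matter what rules are later added.
theorem double_succ (n : Nat) (h : IsNat n) : IsNat (n + 2) :=
  IsNat.succ (n + 1) (IsNat.succ n h)

-- Admissible: the predecessor rule is proved by case analysis on the
-- derivation of the premise, so adding new rules can invalidate it.
theorem pred_nat (n : Nat) (h : IsNat (n + 1)) : IsNat n := by
  cases h with
  | succ m h' => exact h'
```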
Admissible rules can be thought of as theorems of a proof system. For instance, in a sequent calculus where cut elimination holds, the cut rule is admissible.
## Other Considerations
Inference rules may also be stated in this form: (1) some (perhaps zero) premises, (2) a turnstile symbol $\vdash$ which means "infers", "proves" or "concludes", (3) a conclusion. The turnstile symbolizes the executive power. The implication symbol $\rightarrow$ has no such power: it only indicates potential inference. $\rightarrow$ is another logical operator, it operates on truth values. $\vdash$ is not a logical operator. It is rather a catalyst which metabolizes true statements to create new statements.
Rules of inference must be distinguished from axioms of a theory, which are assertions that are assumed to be true without proof. In terms of semantics, axioms are valid assertions. Axioms are usually regarded as starting points for applying rules of inference and generating a set of conclusions. Note that there is no sharp distinction between a rule of inference and an axiom, in the sense that a rule can be artificially encoded as an axiom and vice-versa. For instance, the set of premises of a rule could be empty, so that the conclusion is always true. Conversely, an axiom is commonly supposed to be a single clause, but in fact one could specify a schema that generates an infinite set of axioms, which would superficially have the same form as a rule of inference.
Rules of inference play a vital role in the specification of logical calculi as they are considered in proof theory, such as the sequent calculus and natural deduction.
|
http://mathoverflow.net/revisions/93780/list
|
A subgroup of maximal rank of maximal dimension is certainly a maximal subgroup of maximal rank. Maximal connected subgroups of maximal rank in $Spin(n)$ correspond to maximal reductive Lie subalgebras of maximal rank in $so(n)_{\mathbf{C}}$. Such subalgebras in semisimple Lie algebras were classified by Dynkin in 1952; see Onishchik and Vinberg (Eds.), Lie Groups and Lie Algebras III, Encyclopaedia of Mathematical Sciences, vol. 41, Tables 5 and 6. For $so(n)$ all such subalgebras are $so(2k)\oplus so(n-2k)$, and also $gl(n/2)$ for $n$ even. The subalgebras of largest dimension are probably $so(n-1)$ for $n$ odd and $gl(n/2)$ for $n$ even.
EDIT: For $n=2l\ge 10$, the subalgebra of largest dimension and of maximal rank in $so(n)$ is $so(n-2)\oplus so(2)$ of dimension $2l^2-5l+4=l^2+l(l-5)+4$, and NOT $gl(n/2)$ of dimension $l^2$. For example, for $n=10$ we have ${\rm dim}\ (so(8)\oplus so(2))=29$, while ${\rm dim}\ gl(5)=25$.
|
http://headinside.blogspot.com/2012/07/estimating-square-roots.html?m=0
|
## Estimating Square Roots
Published on Sunday, July 08, 2012 in fun, math, self improvement
I've shown how to find square roots of perfect squares in past tutorials, but how do you handle numbers that aren't perfect squares?
In this tutorial, you'll learn how to quickly determine an approximate square root for any number from 1 to 1,000. The method is a little challenging, but the results are impressive and worth the work.
In the feat itself, you're going to have someone enter the square root into their calculator in a way that's easy for the calculator to understand, and then have them square it. If they give you the number 269, you instantly tell them to divide 13 by 33, and then add 16, explaining that you've determined 16 and 13/33rds to be the approximate square root of 269. When they square that number, they'll see that the answer is quite close (roughly 268.761).
Before you learn this feat, there are a couple of other feats you should learn. You should be comfortable with squaring 2-digit numbers, and being able to find the square roots of perfect squares. You'll also need to know the squares of the numbers from 1 to 31 off the top of your head, in order to handle the numbers from 1 to 1,000.
During this feat, you'll be subtracting 3 digits numbers. You can brush up on your mental 3-digit subtraction with help from this video.
That's enough for the preparation, how do you actually do the feat?
Start by asking someone to take out their calculator, make sure it's cleared, and then ask them for any number from 1 to 1,000. As an example, we'll use 149, which we'll refer to as the given number.
Step 1: Find the closest perfect square that is less than or equal to the given number. We'll refer to it as the reference square or ref. square, and the root of the reference square will be called the reference root or ref. root. If they happen to give you a perfect square, you can state the square root instantly (and impressively).
With 149, you should instantly recognize that the closest perfect square below it is 144 (12²). So our reference square is 144 and the reference root is 12.
$\\ given \ number=149\\ 1. \ ref. \ square=144\\ 1. \ ref. \ root=\sqrt{144}=12\\$
Step 2: Subtract the reference square from the given number. We'll refer to this difference as the numerator.
Starting with 149, we subtract 144 (the reference square) to get 5, so 5 is our numerator.
$\\ 2. \ 149 \ (given \ number)-144 \ (ref. \ square)=5 \ (numerator)\\$
Step 3: Ask the person who suggested the given number to enter the numerator into their calculator, and then press the division key (÷).
Continuing with our example, they would enter 5, and then press the ÷ key.
$\\ 3. \ CALCULATOR: \ 5 \ (numerator) \div \\$
Step 4: While they're entering the information from step 3 into the calculator, double your reference root and then add 1. This total will be referred to as the denominator.
The reference root is 12, so we double that to get 24, then add 1, giving a total of 25. 25 will be our denominator.
$\\ 4. \ (12 \ (ref. \ root) \times2)+1=24+1=25 \ (denominator)\\$
Note: You might be curious as to why you're doubling the reference root and adding 1. This is a short cut for finding the differences between your reference square, and the next perfect square.
If you think of the reference root as x, then the next number must be x + 1. For any perfect square x², the next perfect square is (x + 1)². The long way to determine the difference between them would be to work through the equation (x + 1)² - x².
However, it turns out that (x + 1)² - x² simplifies to 2x + 1! Doubling our reference root and adding 1 is much quicker than working through exponents!
Step 5: Have them enter the denominator into the calculator, and press the equals (=) button. The answer displayed will now be a decimal equal to the numerator divided by the denominator.
In our example, they've divided 5 by 25, which is .2.
$\\ 5. \ CALCULATOR: \ 25 \ (denominator) =\\ 5. \ (calculator \ display = .2)\\$
Note: What the calculator is displaying at this point has a very useful double meaning. Our reference square in this example is 144, which is 12² (as we've already determined). The next perfect square is 13², or 169.
Picture the range of 144 to 169 as a line, and 149 as a single point along that line, as in this Wolfram|Alpha diagram. The first meaning of the 5/25 is that our given number 149 is 5/25 of the way between two perfect squares.
Since 149 is 5/25 of the way between 144 and 169, then it's reasonable to assume that 149's square root would be about 5/25 of the way between 12 and 13. This is the second meaning: It's the fraction we need to add to the reference root.
What we've been doing up to this point, then, is finding out how far between two perfect squares we have to travel, and expressing that as a fraction. Because of the way squaring works, this won't be an exact square root, but will come very close.
Step 6: Have them enter the addition (+) key, then enter the reference root, and then the equals (=) key.
Continuing with the example, we'd have them enter + 12 (the ref. root) =, so the calculator should now display 12.2.
$\\ 6. \ CALCULATOR:+ \ 12 \ (ref. \ root) =\\ 6. \ (calculator \ display = 12.2)\\$
Step 7: To prove how good your mental estimate is, have them press the x² button on their calculator. If they don't have one, the same result can be achieved by pressing the × button, followed immediately by the = button.
With 12.2 displayed, they now press x², and see a number approximately equal to 148.8399, which is very close to the given number 149!
$\\ 7. \ CALCULATOR: x^{2} \ button \ (or \times button, then = )\\ 7. \ (calculator \ display \approx 148.8399)\\$
Just to lock it in, let's try with another example. Let's say you're given a much higher number, such as 806.
Working through the process as above, we find the reference square, the reference root, and work through the process from there:
$\\ given \ number=806\\ 1. \ ref. \ square=784\\ 1. \ ref. \ root=\sqrt{784}=28\\ 2. \ 806 \ (given \ number)-784 \ (ref. \ square)=22 \ (numerator)\\ 3. \ CALCULATOR: \ 22 \ (numerator) \div \\ 4. \ (28 \ (ref. \ root) \times2)+1=56+1=57 \ (denominator)\\ 5. \ CALCULATOR: \ 57 \ (denominator) =\\ 5. \ (calculator \ display \approx .38596)\\ 6. \ CALCULATOR:+ \ 28 \ (ref. \ root) =\\ 6. \ (calculator \ display \approx 28.38596)\\ 7. \ CALCULATOR: x^{2} \ button \ (or \times button, then = )\\ 7. \ (calculator \ display \approx 805.763)\\$
Once again, the squared result of 805.763 is very close to the given number 806!
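The whole procedure condenses into a few lines of code. This sketch (my own, not from the post) reproduces the worked examples above:

```python
import math

def estimate_sqrt(given):
    """Estimate sqrt(given) as ref_root + numerator/denominator."""
    ref_root = math.isqrt(given)           # root of the reference square
    numerator = given - ref_root * ref_root
    denominator = 2 * ref_root + 1         # gap to the next perfect square
    return ref_root + numerator / denominator

print(round(estimate_sqrt(149), 5))        # 12.2
print(round(estimate_sqrt(806), 5))        # 28.38596
print(round(estimate_sqrt(806) ** 2, 3))   # 805.763
```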
As mentioned above, this works because we're working out the distance between two squares, and then seeing how far along that distance is the given number. Working that out as a fraction allows us to scale this answer down to be used as part of the given number's root. You can use this online web app I've developed to understand this concept more completely.
You might be wondering how close your estimates, when squared, will be to the original given number. The range of numbers with the biggest divisors, of course, will be the numbers from 961 up to 1,000. Using the process I teach above, here's a list of the results you'll get for each of those numbers.
Notice that the results are all just under the given number. For example, when you're given 962, your estimated square root, when squared, will return an approximate result of 961.984. If we look at the margins of error for each number from 961 to 1,000, you'll note that the squared estimate never falls more than .25 (or ¼) below the given number.
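That at-most-¼ claim is easy to verify by brute force over the whole range. A quick check of my own, reusing the procedure described above:

```python
import math

def estimate_sqrt(given):
    r = math.isqrt(given)                  # reference root
    return r + (given - r * r) / (2 * r + 1)

# error = given number minus the square of the estimate
errors = [n - estimate_sqrt(n) ** 2 for n in range(1, 1001)]
print(min(errors) >= 0)     # True: the squared estimate never exceeds n
print(max(errors) <= 0.25)  # True: and never falls short by more than 1/4
```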
If you're presenting this as a bet, you can include the proposition that you have to be within plus or minus ½ in your estimate. This is only a smoke screen, as you know the resulting square will always be less than the given number, and it will never be off by more than ¼.
Instead of verbally instructing someone to enter the numbers in the calculator, you could write the answer down first. In this case, you would work through the process almost exactly backwards. Let's use 638 as an example.
The reference square, in this case, would be 625, and the reference root would be 25. Write down the reference root on the paper first.
$\\ given \ number=638\\ ref. \ square=625\\ ref. \ root=\sqrt{625}=25\\ \\ PAPER: 25\\$
Next, work out the denominator by doubling the reference root, then adding 1 to it. Write this as the denominator of the fraction on the paper.
The ref. root in this example is 25. We double that to get 50, and add 1 for a denominator of 51.
$\\ (25 \ (ref. \ root) \times2)+1=50+1=51 \ (denominator)\\ \\ PAPER: 25\frac{ }{51}\\$
Finally, subtract the reference square from the given number to get the numerator.
638 - 625 is 13, so 13 is the numerator.
$\\ 638 \ (given \ number)-625 \ (ref. \ square) = 13 \ (numerator)\\ \\ PAPER: 25\frac{13}{51}\\$
Sure enough, 25 and 13/51sts, when squared, gives approximately 637.81! Note that this is within our established -¼ margin of error.
You can practice using the random number generators at Wolfram|Alpha or Random.org and any handy calculator.
Naturally, the more squares you memorize, the higher you can go. If you memorize the squares of numbers up to 100, then you'll be able to estimate square roots of any number from 1 to 10,000! And yes, even at that scale, the resulting square will still never vary from the given number by more than ¼.
You can find out more about presenting this feat by reading the next post, Estimating Square Roots: Tips & Tricks.
### Post Details
Posted by Pi Guy on Jul 8, 2012
Labels: fun, math, self improvement
### 3 Response to Estimating Square Roots
Anonymous
7:12 PM
Use the original square root method from fourth grade math.
Or memorize the one-page three-place log table.
Jay
9:18 PM
This is a good method for a quick estimate. If you are proficient with mental calculation then this article has great algorithms and solutions for square roots to almost arbitrary precision and many other types of calculations, including logarithms
http://www.myreckonings.com/Dead_Reckoning/Online/Online_Material.htm
Dead Reckoning the book is, in my opinion, the next step after Secrets of Mental math. I've had it for years and am still finding great stuff out of it. It is very hard and will take a lifetime to master everything, but it is sooo much fun to practice.
11:51 PM
this is a great method that reduces the computational intensity to almost nothing. Memorizing the squares up to 31 is pretty easy as most young people should have the squares up to 12 memorized (in America - I believe it's 15 or 30 in India?), the squares in the teens are relatively easy, the square of any even number is 4 times the square of half that number, the square of numbers ending in 5 is simple, and the rest can be found and committed to memory easily enough.
Simply finding the squares of numbers gives one the most important tool for learning this method, and once learned, the answers almost fall out from one's mind.
|
http://unapologetic.wordpress.com/2009/03/13/eigenpairs/?like=1&source=post_flair&_wpnonce=420e8d64d7
|
# The Unapologetic Mathematician
## Eigenpairs
Well, Wednesday I was up at the University of Pennsylvania again, and yesterday I was making arrangements for a visit to San Diego in a couple weeks. And next week is an exam week, so I’ll have to inch forward today.
We’ve seen a lot about Jordan normal forms, which can pretty much capture the behavior of any single linear transformation over an algebraically closed field. But not all fields are algebraically closed, and one of them is very important to us. We want to investigate the situation over the field $\mathbb{R}$ of real numbers a little more deeply.
The key point about algebraically closed fields is that we can find some upper-triangular matrix. And the crux of that is the fact that any linear transformation has at least one eigenvalue. And that happens because the characteristic polynomial always has a root over an algebraically closed field. So if your field isn’t algebraically closed a characteristic polynomial might not have roots, and your transformation might have no eigenvalues.
And indeed, some real polynomials have no roots. But all is not lost! We do know something about factoring real polynomials. We can break any one down into the product of linear terms like $(X-\lambda)$ and quadratic terms like $(X^2-\tau X+\delta)$. If we’re factoring the characteristic polynomial of a linear endomorphism $T$, then a linear term $(X-\lambda)$ gives us an eigenvalue $\lambda$, so the new and interesting stuff is in the quadratic terms. I’m going to use the nonstandard term “eigenpair” to describe a pair of real numbers $(\tau,\delta)$ that shows up in this way.
If we were working over the complex numbers, we could factor a quadratic term into a pair of linear terms:
$\displaystyle X^2-\tau X+\delta=\left(X-\frac{\tau+\sqrt{\tau^2-4\delta}}{2}\right)\left(X-\frac{\tau-\sqrt{\tau^2-4\delta}}{2}\right)$
which gives us two complex eigenvalues
$\displaystyle\frac{\tau\pm\sqrt{\tau^2-4\delta}}{2}$
This gives us no problem over the real numbers if $\tau^2\geq4\delta$, so an eigenpair must have $\tau^2<4\delta$. In this case the two complex roots are a conjugate pair. Their sum is $\tau$, and their product is $\delta$.
So how can this arise in practice? Well, since it’s a quadratic term it’s the characteristic polynomial of an endomorphism on $\mathbb{R}^2$. So let’s write down a $2\times2$ matrix and take a look:
$\displaystyle\begin{pmatrix}a&b\\c&d\end{pmatrix}$
The characteristic polynomial is the determinant of ${X}$ times the identity matrix minus this matrix. We calculate
$\displaystyle(X-a)(X-d)-(-b)(-c)=X^2-(a+d)X+(ad-bc)$
So we can define $\tau$ to be the trace of this matrix, and $\delta$ to be its determinant. If $\tau^2<4\delta$, we’ve got an eigenpair.
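As a quick numerical illustration (my own sketch; the function name is invented), we can read off $\tau$ and $\delta$ from the matrix entries and classify it:

```python
import cmath

def eigen_info(a, b, c, d):
    """Classify the 2x2 real matrix [[a, b], [c, d]] by trace and determinant."""
    tau = a + d                      # trace
    delta = a * d - b * c            # determinant
    kind = "eigenpair" if tau * tau < 4 * delta else "real eigenvalues"
    # the two (possibly complex) roots of X^2 - tau X + delta
    root = cmath.sqrt(tau * tau - 4 * delta)
    return kind, (tau + root) / 2, (tau - root) / 2

# The rotation-by-90-degrees matrix [[0, -1], [1, 0]] has tau = 0, delta = 1,
# so it yields the eigenpair (0, 1): a conjugate pair i, -i summing to tau.
kind, l1, l2 = eigen_info(0, -1, 1, 0)
print(kind, l1, l2)
```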
Posted by John Armstrong | Algebra, Linear Algebra
## 1 Comment »
1. [...] a factor of the characteristic polynomial of this formula is exactly what we defined to be an eigenpair. That is, just as eigenvectors — roots of the characteristic polynomial — correspond to [...]
Pingback by | April 2, 2009 | Reply
|
http://unapologetic.wordpress.com/2009/09/28/euclidean-spaces/?like=1&source=post_flair&_wpnonce=a34a077b61
|
# The Unapologetic Mathematician
## Euclidean Spaces
In light of our discussion of differentials, I want to make a point here that is usually glossed over in most treatments of multivariable calculus. In a very real sense, the sources and targets of our functions are not the vector spaces $\mathbb{R}^n$.
Let’s think about what we need to have a vector space. We need a way to add vectors and to multiply them by scalars. Geometrically, addition proceeds by placing vectors as arrows “tip-to-tail” and filling in the third side of the triangle. Scalar multiplication takes a vector as an arrow and stretches, shrinks, or reverses it depending on the value of the scalar. But both of these require us to think of a vector as an arrow which points from the origin to the point with coordinates given by the components of our vector.
But this makes the origin a very special point indeed. And why should we have any such special point, from a geometric perspective? We already insisted that we didn’t want to choose a basis for our space that would make some directions more special than others, so why should we have to choose a special point?
What really matters in our spaces is their topology. But we don’t want to forget all of the algebraic structure either. Some vestiges of the structure of a vector space still make sense in the absence of an origin. Indeed, we can still talk about it as an affine space, where the idea of displacement vectors between points still makes sense. And these displacement vectors will be actual vectors in $\mathbb{R}^n$. Like any torsor, this means that our space “looks like” the group (here, vector space) we use to describe displacements, but we’ve “forgotten” which point was the origin. We call the result a “Euclidean” space, since such spaces provide nice models of the axioms of Euclidean geometry.
So let’s try to be a little explicit here: we actually have two different kinds of geometric objects floating around right now. First are the points in an $n$-dimensional Euclidean space. We can’t add these points, or multiply them by scalars, but we can find a displacement vector between two of them. Such a displacement vector will be in the $n$-dimensional real vector space $\mathbb{R}^n$. When it’s convenient to speak in terms of coordinates, we first pick an (arbitrary) origin point. Now if we’re sloppy we can identify a point in the Euclidean space with its displacement vector from the origin, and thus confound the Euclidean space of points and the vector space of displacements. We can proceed to choose a basis of our vector space of displacements, which gives coordinates to the Euclidean space of points; the point $(x^1,\dots,x^n)$ is the one whose displacement vector from the origin is $x^ie_i$.
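The point/vector distinction is easy to make concrete in code. Here is a minimal sketch (the class and method names are my own invention, not standard library API) of an affine-space interface in which points cannot be added or scaled, but the difference of two points is a vector:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vector:
    """A displacement: these form a vector space."""
    components: tuple

    def __add__(self, other):
        return Vector(tuple(a + b for a, b in zip(self.components, other.components)))

    def scale(self, c):
        return Vector(tuple(c * a for a in self.components))

@dataclass(frozen=True)
class Point:
    """A point of Euclidean space: no addition, no scalar multiplication."""
    coords: tuple

    def __sub__(self, other):
        # the only algebra points support: the displacement between two points
        return Vector(tuple(a - b for a, b in zip(self.coords, other.coords)))

    def translate(self, v):
        # a point plus a displacement vector is again a point
        return Point(tuple(a + b for a, b in zip(self.coords, v.components)))
```

Note that `Point` deliberately defines no `__add__`, so "adding two points" is a type error, mirroring the geometric fact that the sum of two points is meaningless without a choice of origin.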
Now, the rant. Some multivariable calculus books are careful not to do nonsense things like “adding” or “scalar multiplying” points, but many do exactly these sorts of things, giving the impression to students that points are vectors. Even among the texts that are careful, I don’t recall seeing any that actually go so far as to mention that a point is not a vector. When I teach the course I’m careful to point out that they’re not quite the same thing (though not in quite as much detail as this) and I go so far as to write them differently, with vector coordinates written out between angle brackets instead of parens. Without some sort of distinction being explicitly drawn between points and vectors, many students fall into the belief that the two are the same thing, or (worse) that each is “the same thing as” a list of numbers in a coordinate representation. Within the context of a course on multivariable calculus, it’s possible to get by with these ideas, but in the long run they will have to be corrected before proceeding into more general contexts.
So, why bring this up now in particular? Because it explains the notation we use in the differential. When we write $df(x;t)$, the semicolon distinguishes between the point variable and the vector variable. It becomes even more apparent when we choose coordinates and write $df(x^1,\dots,x^n;t^1,\dots,t^n)$. Notice that we only ask that $df$ act linearly on the vector variable, since “linear transformations” are defined on vector spaces, not Euclidean spaces.
Posted by John Armstrong | Analysis, Calculus, rants, Topology
## 3 Comments »
1. [...] Okay, for the moment let’s pick an orthonormal basis for our vector space . This gives us coordinates on the Euclidean space of points. It also gives us the dual basis of the dual space . This lets us [...]
Pingback by | September 29, 2009 | Reply
2. [...] That is, given the point , we set up a vector pointing from to (which we can do in a Euclidean space). Then this vector has components in terms of the [...]
Pingback by | November 5, 2009 | Reply
3. [...] rule, the differential at the point defines a linear transformation from the -dimensional space of displacement vectors at to the -dimensional space of displacement vectors at , and the matrix entries with respect to [...]
Pingback by | November 11, 2009 | Reply
http://math.stackexchange.com/questions/303764/countably-generated-versus-being-generated-by-a-countable-partition
# Countably generated versus being generated by a countable partition
(1) Apparently a general term of a sigma-field generated by a countable partition can be written down. For example, if $\mathcal{B} = \sigma(B_n,n\ge 1)$ and $\{B_n\}_{n\ge1}$ is a partition of the ground set $\Omega$, then a general element of $\mathcal{B}$ is of the form $\cup_{n \in I} B_n$ for some $I \subset \mathbb{N}$.
(2) Apparently, the Borel $\sigma$-field (on $\mathbb{R}$) is countably generated (say by $\{(-\infty,q]:\; q\in \mathbb{Q}\}$) and I am told that there is no way to write down such a generic formula for its elements.
• (1) seems to be a special case of a countably generated $\sigma$-field. Does this have a name? Can some more light be shed on the differences between this case and a more general countably generated $\sigma$-field? Or am I making some very obvious mistakes in the above statements?
-
## 1 Answer
If $(\Omega,\Sigma)$ is a countably generated measurable space, there is a natural partition of $\Omega$ into atoms, non-empty measurable sets that have no proper non-empty measurable subsets. Let $\mathcal{C}$ be a countable family such that $\sigma(\mathcal{C})=\Sigma$. Without loss of generality, we can assume that $\mathcal{C}$ is closed under complements. The atom containing $x$ is then exactly $$A(x)=\bigcap_{C\in\mathcal{C},x\in C}C.$$ Every measurable set is a union of atoms.
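On a finite ground set the formula for $A(x)$ can be computed directly. A small illustration of my own (function names are mine), under the simplifying assumption that $\Omega$ is finite:

```python
def atoms(omega, generators):
    """Atoms of the sigma-algebra on a finite set `omega` generated by
    `generators`: A(x) is the intersection of all sets in the family
    (closed under complements) that contain x."""
    omega = frozenset(omega)
    # close the generating family under complements
    family = [frozenset(g) for g in generators]
    family += [omega - g for g in family]
    result = set()
    for x in omega:
        a = omega
        for c in family:
            if x in c:
                a &= c
        result.add(a)
    # the distinct atoms form a partition of omega
    return result
```

For example, `atoms({1, 2, 3, 4}, [{1, 2}])` returns the two-block partition `{frozenset({1, 2}), frozenset({3, 4})}`, while adding the generator `{2, 3}` splits the space all the way down to singletons.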
If the $\sigma$-algebra is generated by a countable partition, the atoms will be exactly the blocks of the partition. But a countably generated measurable space may have uncountably many atoms. For example, the real line with the Borel $\sigma$-field has the family of all singletons $\{r\}$ of real numbers $r$ as its atoms.
Now, every measurable set $B$ in a countably generated $\sigma$-algebra is a union of atoms. If there are only countably many atoms, every union of atoms will be a countable union and therefore measurable. But uncountable unions may not be measurable. If $N\subseteq\mathbb{R}$ is not a Borel set, then it is still a union of singletons and therefore a union of atoms.

So the general case of a countably generated measurable space is more complicated because one cannot identify measurable sets with arbitrary unions of atoms.
-
Thanks for the thorough response. I have heard atoms in the context of measures, and hadn't heard of them being used for $\sigma$-fields. Is this standard terminology? Do you have a reference which talks more about these? – passerby51 Feb 14 at 12:31
A standard reference for this material is Borel Spaces by Rao and Rao. The term comes originally from Boolean algebra, and the meaning there applies directly. The term is not frequent, but standard. – Michael Greinecker Feb 14 at 12:58
Thanks. That one seems to be hard to find. There is no online version I suppose? and it seems to be out of print. – passerby51 Feb 14 at 14:03
http://mathoverflow.net/questions/87429/generators-of-a-certain-ideal
## Generators of a certain ideal
In view of Mariano Suárez-Alvarez's answer I see how badly phrased my question was, and decided to rewrite it. The drawback is that some comments of Martin Brandenburg are now incomprehensible, but I thought it would suffice to say here that Martin made some legitimate constructive criticisms to the original wording of the question. By the way, thank you also to Vladimir Dotsenko for his comments.
Let $K$ be a commutative ring, and let $X_1,\dots,X_n$ be indeterminates. Here $n$ is an integer $\ge3$. For $1\le i < j\le n$ put $$x_{ij}:=\frac{1}{X_i-X_j}$$ and let $Y_{ij}$ be an indeterminate. Let $I$ be the kernel of the $K$-algebra morphism $$\varepsilon:K[(Y_{ij})]\to K[(x_{ij})],\quad Y_{ij}\mapsto x_{ij}.$$
Is $I$ finitely generated? If it is, can one give an explicit finite set of generators?
Note that the identity $$\frac{1}{a-b}\ \frac{1}{a-c}+\frac{1}{b-a}\ \frac{1}{b-c}+\frac{1}{c-a}\ \frac{1}{c-b}=0.$$ shows that $I$ is nonzero.
(I put the homological algebra tag because the ultimate goal is to know whether there is a functorial free resolution of $K[(x_{ij})]$, viewed as a $K[(Y_{ij})]$-module, and, if it exists, what can be said about it.)
The question had been posted before on Mathematics Stack Exchange (link).
-
1) Can you give some specific examples for $y_m$? (because my first guess was that $I$ is generated by the $y_{ij}$ and I'm still not convinced of the contrary - I don't want to get through all these indices). 2) Have you tried the cases $n=2$ and $n=3$? – Martin Brandenburg Feb 3 2012 at 14:05
PS: The homological-algebra tag is not appropriate because you don't resolve a module by free modules, but rather you want to resolve an algebra by free algebras aka find a presentation of it. – Martin Brandenburg Feb 3 2012 at 14:07
Dear @Martin: Thanks for your comments. I edited the question. – Pierre-Yves Gaillard Feb 3 2012 at 15:12
Thank you for your obligingness. Is the relation you have written down for $n=3$ the only one coming from the $y_m$ or is this just an example? After all I could take any $3$-tuple of positive integers? – Martin Brandenburg Feb 3 2012 at 16:24
@Martin: while I too think that the "homological algebra" tag might be slightly misleading, Pierre-Yves is quite right saying that finding a presentation for an augmented algebra is intimately related to finding the first two levels of a resolution of the trivial module by free modules. I personally would think that "syzygies" or something alike would be the most instructive tag. – Vladimir Dotsenko Feb 3 2012 at 17:15
## 1 Answer
A polynomial $f\in K[\underline Y]$ is in the kernel of your map iff $f$ is zero in the quotient $$\frac{k[X,Y]}{\bigl((X_i-X_j)Y_{i,j}-1:1\leq i<j\leq n\bigr)}.$$In other words, your kernel is the intersection of the ideal in the denominator with the ring $k[Y]$, $$\ker\varepsilon=k[Y]\cap\bigl((X_i-X_j)Y_{i,j}-1:1\leq i<j\leq n\bigr).$$This intersection is generated by the elements of a Groebner basis which only contain $Y$s, assuming you are using a monomial order which eliminates the $X$s; this is explained in the book by Cox, Little and O'Shea, for example.
Doing small examples shows that
$(\star)$ the intersection is generated by all polynomials of the form $$Y_{i,j} Y_{i,k}+ Y_{j,k}Y_{j,i}+Y_{k,i}Y_{k,j}$$ with $i$, $j$ and $k$ distinct. (I am identifying $Y_{i,j}$ with $-Y_{j,i}$ here when $i\neq j$)
Ordering the variables as in $$X_1,X_2,X_3,X_4,Y_{1,2},Y_{1,3},Y_{1,4},Y_{2,3},Y_{2,4},Y_{3,4}$$ for $n=4$ we find the Groebner basis $$\begin{array}{l} Y_{2,3} Y_{2,4}+Y_{3,4} Y_{2,4}-Y_{2,3} Y_{3,4} \\ Y_{1,3} Y_{1,4}+Y_{3,4} Y_{1,4}-Y_{1,3} Y_{3,4} \\ Y_{1,2} Y_{1,4}+Y_{2,4} Y_{1,4}-Y_{1,2} Y_{2,4} \\ Y_{1,2} Y_{1,3}+Y_{2,3} Y_{1,3}-Y_{1,2} Y_{2,3} \\ X_3 Y_{3,4}-X_4 Y_{3,4}-1 \\ X_2 Y_{2,4}-X_4 Y_{2,4}-1 \\ X_2 Y_{2,3}-X_3 Y_{2,3}-1 \\ X_1 Y_{1,4}-X_4 Y_{1,4}-1 \\ X_1 Y_{1,3}-X_3 Y_{1,3}-1 \\ X_1 Y_{1,2}-X_2 Y_{1,2}-1 \end{array}$$ The same pattern is seen for all $n$. It is very easy to see that all these polynomials are in $((X_i-X_j)Y_{i,j}-1:1\leq i<j\leq n)$, and it should not be difficult to show that they are a Groebner basis in general. I expect that the above claim $(\star)$ can then be proved without much pain.
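For the record, the $n=3$ elimination can be reproduced in a computer algebra system. The following is a sketch of my own using SymPy (variable names and the specific call are mine, not from the answer): with a lex order placing the $X$'s first, the basis elements free of the $X$'s generate the intersection with $k[Y]$.

```python
from sympy import symbols, groebner, expand

X1, X2, X3 = symbols('X1 X2 X3')
Y12, Y13, Y23 = symbols('Y12 Y13 Y23')

# the defining relations (X_i - X_j) * Y_ij - 1 for n = 3
relations = [
    (X1 - X2) * Y12 - 1,
    (X1 - X3) * Y13 - 1,
    (X2 - X3) * Y23 - 1,
]

# lex order with the X's first eliminates them; Groebner basis
# elements containing only Y's generate the elimination ideal
gb = groebner(relations, X1, X2, X3, Y12, Y13, Y23, order='lex')
eliminated = [g for g in gb.exprs
              if not ({X1, X2, X3} & g.free_symbols)]
```

Running this, the $Y$-only part of the basis should contain $Y_{1,2}Y_{1,3}+Y_{2,3}Y_{1,3}-Y_{1,2}Y_{2,3}$, matching the pattern $(\star)$.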
-
These are the "same" relations that occur in Arnold's presentation of the de Rham cohomology of the complement of the braid arrangement or, equivalently, the cohomology of the pure braid group with real coefficients. The only difference is that Arnold's generators anticommute and square to zero, while yours commute. – Mariano Suárez-Alvarez Feb 3 2012 at 18:38
Dear Mariano: Thanks a lot! Your answer is very concise, but I'm sure it contains a lot of maths. I'll try to digest and assimilate it. – Pierre-Yves Gaillard Feb 3 2012 at 18:44
@Mariano: in fact, Arnold's generators satisfy X_{ij}=X_{ji}, so the difference is more subtle. However, there is an even analogue of Arnold's relation (still with square zero though) for antisymmetric generators, it is described in a paper by Olivier Mathieu called "The symplectic operad" from a collection of articles for Gelfand's 80th birthday. The relations are precisely these ones. Now one just has to figure out what parts of this observation are coincidental and what are not. – Vladimir Dotsenko Feb 3 2012 at 20:54
I wrote throughout the $Y_{i,j}$ with the convention that $Y_{i,j}=-Y_{j,i}$, too. If Pierre-Ives wants to work without making that identification, then one should add the relations $Y_{i,j}+Y_{j,i}$ along with the quadratic relations above to generate the ideal. – Mariano Suárez-Alvarez Feb 3 2012 at 21:21
Yes, I understand that - and that's partly because I am making my comment. In Mathieu's setting, the generators are in a sense dual to Poisson brackets \{x_i,x_j\} and hence are naturally antisymmetric. So it matches the story very well (unlike the Arnold's story, where there is symmetry, not skew-symmetry in i and j), except for the square zero condition which is absent here. – Vladimir Dotsenko Feb 3 2012 at 21:43
http://math.stackexchange.com/questions/144314/orthogonal-vectors-question
# Orthogonal vectors question
Can orthogonal vectors have some components that are the same? For example, are $(1,2,5)$ and $(1,2,-5)$ orthogonal, with the dot product being zero?
Thanks in advance!
-
These two are not orthogonal :) – Artem May 12 '12 at 19:17
$(1,2,5) \cdot (1,2,-5) = 1 + 4 -25 = -20 \ne 0$, so they aren't orthogonal, but $(1,2, \sqrt 5)$ and $(1,2,-\sqrt 5)$ are, so the answer to your first question is yes. – martini May 12 '12 at 19:17
## 1 Answer
Just think of the orthogonal standard basis $(1,0,0), (0,1,0)$ and $(0,0,1)$. If you take them two by two, they always have a zero in common. When asking yourself “does an example satisfying such-and-such a property exist?”, try the easiest examples you know first; if they don't work, you can start thinking about weirder examples.
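A quick numeric check of the dot products mentioned above (plain Python, just for illustration):

```python
import math

def dot(u, v):
    """Dot product; u and v are orthogonal exactly when this is 0."""
    return sum(a * b for a, b in zip(u, v))

dot((1, 2, 5), (1, 2, -5))                        # -20: not orthogonal
dot((1, 2, math.sqrt(5)), (1, 2, -math.sqrt(5)))  # 0 up to rounding: orthogonal
```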
Hope that helps,
-
http://mathoverflow.net/questions/114691/natural-relations-between-substitutions
## Natural relations between substitutions
Consider two contexts $\Gamma,\Delta$ (from some background type theory), and substitutions $s_1,s_2:\Gamma\rightarrow \Delta$. In the case of $1$-element contexts, we get that a substitution is simply a term in the $\lambda$-calculus; in that case, we know that the 'natural' relations between terms arise from $\beta$-reduction (and sometimes $\eta$-contraction, $\delta$-convertibility and $\alpha$-renaming). For example, Robert Seely conjectured [that article claims a proof, but a full proof only came many years later] that if the contexts come from some dependently typed theory, there is a correspondence with locally cartesian closed categories, where the $2$-cells are given exactly by rewrites engendered by $\beta$ and $\eta$.
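For readers with less rewriting background, the $\beta$-rewrites between terms mentioned above can be sketched in a few lines. This is a toy encoding of my own, under the simplifying assumption that all bound variable names are distinct, so no capture-avoidance machinery is needed:

```python
# terms: ('var', name) | ('lam', name, body) | ('app', fun, arg)

def subst(term, name, value):
    """Replace free occurrences of `name` in `term` by `value`.
    Assumes bound variable names are distinct from free ones (no capture)."""
    tag = term[0]
    if tag == 'var':
        return value if term[1] == name else term
    if tag == 'lam':
        if term[1] == name:   # `name` is rebound here: stop
            return term
        return ('lam', term[1], subst(term[2], name, value))
    return ('app', subst(term[1], name, value), subst(term[2], name, value))

def beta_step(term):
    """One beta-reduction at the root: (lam x. b) a  ->  b[x := a]."""
    if term[0] == 'app' and term[1][0] == 'lam':
        _, x, body = term[1]
        return subst(body, x, term[2])
    return term
```

For instance, $(\lambda x.\, x\,x)\,y$ rewrites in one step to $y\,y$.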
My question is, in general, what are the general relations which are considered natural between substitutions $s_1,s_2:\Gamma\rightarrow\Delta$ ?
Since the category of contexts is the opposite of the category of theory presentations, it is fairly natural to conjecture that this will be related to morphisms of theory interpretations. So, looking at $2$-categories as CAT-enriched categories, one finds some work on categories of interpretations, but it is not clear to me that this is indeed the right direction to look into.
In particular (from my admittedly CS-biased point-of-view), I would have expected something which more closely resembles term-rewriting in some guise. But maybe I am simply not recognizing that what is given above is a semantic counterpart to something more operational.
-
http://quant.stackexchange.com/questions/352/volatility-pumping-in-practice
# Volatility pumping in practice
The fascinating thing about volatility pumping (or the optimal growth portfolio, see e.g. here) is that here volatility is not the same as risk; rather, it represents opportunity. Additionally, it is a generic mechanical strategy that is independent of asset classes.
My question:
Do you know examples where volatility pumping is actually implemented? What are the results? What are the pitfalls?
-
Is it connected to the concept of "Growth optimal Portfolio" (GOP in short) as stated by Platen and Heath ? – TheBridge Feb 9 '11 at 12:29
@TheBridge: I have not heard of this reference - could you please provide a link? Thank you – vonjd Feb 9 '11 at 12:50
– TheBridge Feb 9 '11 at 13:56
Thank you - unfortunately the part where they write about the GOP is not online, but I guess that should be the same idea. – vonjd Feb 9 '11 at 14:52
@TheBridge: Now I bought this book and, Yes, it is the same concept. – vonjd Jul 4 '12 at 13:39
## 4 Answers
The optimal growth portfolio is obtained by applying the Kelly criterion, which is one of the pillars of sound risk management.
Ed Thorp's weekend forays to Las Vegas to play blackjack were one of the first historically documented cases of successful practical implementation of the Kelly strategy. Since then this method and its modifications have been systematically used by Thorp himself and other hedge fund managers as an important risk control tool.
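For concreteness, the Kelly criterion for a simple binary bet can be written down directly. This is a textbook sketch, not Thorp's actual blackjack system:

```python
import math

def kelly_fraction(p, b):
    """Fraction of capital to stake on a bet that pays b-to-1 and wins
    with probability p; this choice maximizes the expected log-growth
    p * log(1 + f*b) + (1 - p) * log(1 - f)."""
    return p - (1 - p) / b

# a 60% coin at even odds: stake f* = 0.6 - 0.4 = 0.2 of capital
f_star = kelly_fraction(0.6, 1.0)
```

One can check numerically that nearby fractions (say 0.1 or 0.3) give a lower expected log-growth than `f_star`.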
-
Yes and No. The ideas are similar because they are both based on concepts from information theory and entropy. The difference is that the Kelly criterion is mainly for risk management (so additionally to a trading strategy) and vola pumping is a trading strategy on its own. – vonjd Feb 9 '11 at 12:49
@vonjd: Well, in my opinion, these are just synonyms for the same thing. What you basically do is maximizing the expected value of $\log X$ where $X$ is your current capital. – olaker♦ Feb 9 '11 at 12:57
Yes, of course you are right. – vonjd Jul 4 '12 at 13:41
It was proposed long ago by Claude Shannon and is discussed a bit in Fortune's Formula.
In the 1960s, Shannon gave a lecture at MIT, in a hall packed with students and teachers alike, on the topic of maximizing the growth rate of wealth. He detailed a method by which you can grow your portfolio by rebalancing your fund between a stock and cash, even while the stock stays in a random, ranging market (he used a geometric Wiener process as an example). Essentially, you buy more when the stock price is low and sell more when it is high, rebalancing to a 50-50 allocation of value at each interval.
In addition, the ideas were further explored by Thomas Cover, with his Universal Portfolios concept. There was some word that he had left academia at one point to work on a hedge fund.
Having done some research in this area myself, I can say that one of the issues is that it takes a very long time for the results to converge (i.e. it would take a great amount of patience for your portfolio to converge towards the best asset's performance).
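To see the mechanism in the Shannon example above, here is a toy simulation of my own (not Shannon's actual lecture example): a stock that doubles or halves with equal probability has zero expected log-growth, yet the 50-50 rebalanced portfolio grows.

```python
import math
import random

def shannon_demon(n_steps=2000, seed=0):
    """Log-wealth of buy-and-hold vs. a portfolio rebalanced to
    50% stock / 50% cash each period, for a stock that doubles or
    halves with probability 1/2 at each step."""
    rng = random.Random(seed)
    log_hold = 0.0   # buy-and-hold just tracks the stock
    log_rebal = 0.0  # rebalanced wealth multiplies by (r + 1) / 2
    for _ in range(n_steps):
        r = 2.0 if rng.random() < 0.5 else 0.5
        log_hold += math.log(r)
        log_rebal += math.log((r + 1.0) / 2.0)  # half rides r, half is cash
    return log_hold, log_rebal
```

The per-step expected log-growth of the rebalanced portfolio is $(\log 1.5 + \log 0.75)/2 = \tfrac{1}{2}\log 1.125 \approx 0.059 > 0$, while buy-and-hold's is $(\log 2 + \log 0.5)/2 = 0$; note also how slowly long-run averages like these converge, which is exactly the practical issue mentioned above.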
-
Here is an interesting example which makes use of these concepts in emerging markets. Emerging markets are well suited because volatility tends to be higher, so more of it can be harvested:
Diversifying and rebalancing emerging market countries by David Stein et al.
Abstract:
We discuss the diversification and rebalancing of Emerging Market countries. Emerging country risks are high and relatively uncorrelated, and the cap-weighted index is concentrated. In the absence of prior information on returns, these characteristics lead us to expect that a structured rebalanced portfolio will out-perform a cap-weighted one over the long term. We study this phenomenon with a theoretical model of portfolio returns – this allows us to quantify performance advantages and understand what drives them. It turns out that, even though Emerging Markets suffer high transaction costs and unreliable information, pragmatic portfolio implementations with relatively little trading are still possible. For real implementation, we want to gain some confidence that performance benefits will continue into the future, so we review how the key drivers of excess performance have been evolving during the recent past of increasing globalization.
-
Here is an example of applying volatility pumping to the real stock market:
http://parrondoparadox.blogspot.com.es/2011/02/parrondos-paradox-stock-market.html
-
http://mathoverflow.net/questions/8809/is-every-subgroup-of-an-algebraic-group-a-stabilizer-for-some-action/8831
## Is every subgroup of an algebraic group a stabilizer for some action?
Suppose G is an algebraic group (over a field, say; maybe even over ℂ) and H⊆G is a closed subgroup. Does there necessarily exist an action of G on a scheme X and a point x∈X such that H=Stab(x)?
Before you jump out of your seat and say, "take X=G/H," let me point out that the question is basically equivalent† to "Is G/H a scheme?" If G/H is a scheme, you can take X=G/H. On the other hand, if you have X and x∈X, then the orbit of x (which is G/H) is open in its closure, so it inherits a scheme structure (it's an open subscheme of a closed subscheme of X).
†I say "basically equivalent" because in my argument, I assumed that the action of G on X is quasi-compact and quasi-separated so that the closure of the orbit (i.e. the scheme-theoretic closed image of G×{x}→X) makes sense. I'm also using Chevalley's theorem to say the the image is open in its closure, which requires that the action is locally finitely presented. I suppose it's possible that there's a bizarre example where this fails.
-
## 2 Answers
In his book "Linear algebraic groups", 6.8, p98, Borel shows that the quotient of an affine algebraic group over a field by an algebraic subgroup exists as an algebraic variety, and he notes p.105 that Weil proved a similar result for arbitrary algebraic groups.
-
Great. I think the precise reference is Proposition 2 of this paper: jstor.org/stable/2372637 – Anton Geraschenko♦ Dec 14 2009 at 3:25
The representability theorem [Demazure-Gabriel, III.2.7.1, p. 318] implies the following.
## Theorem
Let $A$ be a local artinian ring, let $G$ be a group over $A$ locally of finite type, and let $H\hookrightarrow G$ be a closed subgroup which is flat over $A$. Then the quotient $G/H$ in the category of fppf sheaves is a scheme; and the canonical morphism $G\rightarrow G/H$ is faithfully flat and of finite presentation.
Note that the group $G$ in the above theorem need not be either affine or flat over $A$; also, Demazure-Gabriel write in comprehensible language, unlike Weil.
-
http://unapologetic.wordpress.com/2008/05/06/associativity-in-series-i/
# The Unapologetic Mathematician
## Associativity in Series I
As we’ve said before, the real numbers are a topological field. The fact that it’s a field means, among other things, that it comes equipped with an associative notion of addition. That is, for any finite sum we can change the order in which we perform the additions (though not the order of the terms themselves — that’s commutativity).
The topology of the real numbers means we can set up sums of longer and longer sequences of terms and talk sensibly about whether these sums — these series — converge or not. Unfortunately, this topological concept ends up breaking the algebraic structure in some cases. We no longer have the same freedom to change the order of summations.
When we write down a series, we’re implicitly including parentheses all the way to the left. Consider the partial sums:
$\displaystyle s_n=\sum\limits_{k=0}^na_k=((...(((a_0+a_1)+a_2)+a_3)...+a_{n-1})+a_n)$
But what if we wanted to add up the terms in a different order? Say we want to write
$\displaystyle s_6=(((a_0+a_1)+(a_2+a_3))+((a_4+a_5)+a_6))$
Well this is still a left-parenthesized expression, it’s just that the terms are not the ones we looked at before. If we write $b_0=a_0+a_1$, $b_1=a_2+a_3$, and $b_2=a_4+a_5+a_6$ then we have
$\displaystyle s_6=((b_0+b_1)+b_2)=\sum\limits_{j=0}^2b_j=t_2$
So this is actually a partial sum of a different (though related) series whose terms are finite sums of terms from the first series.
More specifically, let’s choose a sequence of stopping points: an increasing sequence of natural numbers $d(j)$. In the example above we have $d(0)=1$, $d(1)=3$, and $d(2)=6$. Now we can define a new sequence
$\displaystyle b_0=\sum\limits_{k=0}^{d(0)}a_k$
$\displaystyle b_j=\sum\limits_{k=d(j-1)+1}^{d(j)}a_k$
Then the sequence of partial sums $t_m$ of this series is a subsequence of the $s_n$. Specifically
$\displaystyle t_m=\sum\limits_{j=0}^mb_j=\sum\limits_{k=0}^{d(0)}a_k+\sum\limits_{j=1}^m\left(\sum\limits_{k=d(j-1)+1}^{d(j)}a_k\right)=\sum\limits_{k=0}^{d(m)}a_k=s_{d(m)}$
We say that the sequence $t_m$ is obtained from the sequence $s_n$ by “adding parentheses” (most clearly notable in the above expression for $t_m$). Alternately, we say that $s_n$ is obtained from $t_m$ by “removing parentheses”.
If the sequence $s_n$ converges, so must the subsequence $t_m=s_{d(m)}$, and moreover to the same limit. That is, if the series $\sum_{k=0}^\infty a_k$ converges to $s$, then any series $\sum_{j=0}^\infty b_j$ obtained by adding parentheses also converges to $s$.
However, convergence of a subsequence doesn’t imply convergence of the sequence. For example, consider $a_k=(-1)^k$ and use $d(j)=2j+1$. Then $s_n$ jumps back and forth between zero and one, but $t_m$ is identically zero. So just because a series converges, another one obtained by removing parentheses may not converge.
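This behavior is easy to see concretely. Here is a quick Python sketch (my own addition, not part of the original exposition) computing the partial sums $s_n$ and the parenthesized sums $t_m=s_{d(m)}$ for $a_k=(-1)^k$ with $d(j)=2j+1$:

```python
# Partial sums of the series with a_k = (-1)^k, and the series obtained
# by "adding parentheses" with stopping points d(j) = 2j + 1.
def partial_sums(terms, n):
    sums, total = [], 0
    for k in range(n):
        total += terms(k)
        sums.append(total)
    return sums

a = lambda k: (-1) ** k
s = partial_sums(a, 20)              # s_n oscillates: 1, 0, 1, 0, ...
d = lambda j: 2 * j + 1
# t_m = s_{d(m)}: the subsequence picked out by the parenthesization
t = [s[d(j)] for j in range(9)]

print(s[:6])  # [1, 0, 1, 0, 1, 0]
print(t)      # [0, 0, 0, 0, 0, 0, 0, 0, 0]
```

The parenthesized series converges (to zero), while the original series of partial sums never settles down.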
Posted by John Armstrong | Analysis, Calculus
## About this weblog
This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”).
I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
http://math.stackexchange.com/questions/184679/when-does-n-divide-ad1/184702
# When does $n$ divide $a^d+1$?
$\newcommand{\ord}{\operatorname{ord}}$
For what values of $n$ will $n$ divide $a^d+1$ where $n$ and $d$ are positive integers?
Apparently $n$ cannot divide $a^d+1$ if $\ord_n a$ is odd.
If $n\mid (a^d+1)$, then $a^d\equiv -1\pmod n$, so $a^{2d}\equiv1\pmod n$, hence $\ord_na\mid 2d$ but $\ord_na\nmid d$.
For example, let $a=10$, the factor(f)s of $(10^3-1)=999$ such that $\ord_f10=3$ are $27,37,111,333$ and $999$ itself. None of these should divide $10^d+1$ for some integer $d$.
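This claim is easy to machine-check. The following Python sketch (my own, not from the original question) computes multiplicative orders by brute force and confirms that none of these divisors ever divides $10^d+1$, i.e. that $10^d\equiv -1$ never occurs:

```python
# Find the divisors f of 10^3 - 1 = 999 with ord_f(10) = 3 (odd order),
# and check that 10^d ≡ -1 (mod f) never happens for such f.
def mult_order(a, n):
    # order of a modulo n, assuming gcd(a, n) = 1
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

odd_order_divisors = [f for f in range(2, 1000)
                      if 999 % f == 0 and mult_order(10, f) == 3]
print(odd_order_divisors)  # [27, 37, 111, 333, 999]

for f in odd_order_divisors:
    # 10^d mod f only cycles through {10, 100, 1} mod f; f-1 never appears
    assert all(pow(10, d, f) != f - 1 for d in range(1, 200))
```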
Please rectify me if there is any mistake.
Is anybody aware of a better formula?
-
## 1 Answer
There are various useful bits of information that one may deduce about factors of integers of the form $\rm\:b^n\pm 1.\:$ A good place to learn about such is Wagstaff's splendid introduction to the Cunningham Project, whose goal is to factor numbers of the form $\rm\:b^n\pm 1.\:$ There you will find mentioned not only old results such as Legendre's (primitive divisors of $\rm\:b^n\pm 1\:$ are $\rm\,\equiv 1\pmod{2n}$), but also newer results, e.g. those exploiting cyclotomic factorizations, such as the ones below.
Often number identities are more perceptively viewed as special cases of function or polynomial identities. For example, Aurifeuille, Le Lasseur and Lucas discovered so-called Aurifeuillian factorizations of cyclotomic polynomials $\rm\;\Phi_n(x) = C_n(x)^2 - n\ x\ D_n(x)^2\;$. These play a role in factoring numbers of the form $\rm\; b^n \pm 1\:$, cf. the Cunningham Project. Below are some simple examples of such factorizations:
$$\begin{array}{rl} x^4 + 2^2 \quad=& (x^2 + 2x + 2)\;(x^2 - 2x + 2) \\\\ \frac{x^6 + 3^3}{x^2 + 3} \quad=& (x^2 + 3x + 3)\;(x^2 - 3x + 3) \\\\ \frac{x^{10} - 5^5}{x^2 - 5} \quad=& (x^4 + 5x^3 + 15x^2 + 25x + 25)\;(x^4 - 5x^3 + 15x^2 - 25x + 25) \\\\ \frac{x^{12} + 6^6}{x^4 + 36} \quad=& (x^4 + 6x^3 + 18x^2 + 36x + 36)\;(x^4 - 6x^3 + 18x^2 - 36x + 36) \\\\ \end{array}$$
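These polynomial identities are easy to spot-check numerically. A small Python sketch (my own addition), evaluating each side at integer points:

```python
# Spot-check of the four Aurifeuillian factorizations above at integer x.
# Note: the numerator of the second identity is 3^3 = 27.
for x in range(2, 50):
    assert x**4 + 4 == (x*x + 2*x + 2) * (x*x - 2*x + 2)
    assert x**6 + 27 == (x*x + 3) * (x*x + 3*x + 3) * (x*x - 3*x + 3)
    assert (x**10 - 5**5 ==
            (x*x - 5) * (x**4 + 5*x**3 + 15*x*x + 25*x + 25)
                      * (x**4 - 5*x**3 + 15*x*x - 25*x + 25))
    assert (x**12 + 6**6 ==
            (x**4 + 36) * (x**4 + 6*x**3 + 18*x*x + 36*x + 36)
                        * (x**4 - 6*x**3 + 18*x*x - 36*x + 36))
print("all four identities hold")
```

Since both sides are polynomials of bounded degree, agreement at enough integer points in fact proves the identities.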
-
Bill, thanks for the update. – lab bhattacharjee Aug 22 '12 at 18:48
http://mathoverflow.net/questions/27578/why-do-non-equioriented-asubn-sub-quivers-have-singularities-identical-to-the
## Why do non-equioriented $A_n$ quivers have singularities identical to the singularities of Schubert varieties?
In the general case, quiver cycles are of the form of orbit closures of $GL\cdot V_{\vec{r}}$, where $GL= \prod_{i=0}^n GL_{r_i}$ is the possible changes of basis on all of the vector spaces on each of the vertices and $V_{\vec{r}}$ is any representation of the quiver with fixed dimension vector $\vec{r}$. In the equioriented An case, these are well understood by Zelevinsky and Lakshmibai-Magyar by showing them isomorphic to open sets in Schubert varieties. Bobinski and Zwara claim to reduce the non-equioriented case to the equioriented case, but I don't see how they are doing that.
In the introduction to "Normality of Orbit Closures for Dynkin Quivers" (manuscripta math. 2001), Bobinski and Zwara say that they will generalize the result that equioriented $A_n$ quivers have the same singularities as Schubert varieties to non-equioriented $A_n$ quivers. They claim that they will do this by reducing the non-equioriented case to the equioriented case. So far, so good. But then, they say that this result follows from the proposition that they will prove, which I don't see has to do with the theorem at all.
The proposition is about a Dynkin quiver, $Q$, of type $A_{p+q+1}$ with $p$ arrows in one direction and $q$ arrows in the other, and $Q'$ an equioriented Dynkin quiver of type $A_{p+2q+1}$, their respective path algebras $B=kQ$ and $A=kQ'$, and respective Auslander-Reiten quivers $\Gamma_B$ and $\Gamma_A$ over the categories of finite dimensional left modules over $B$ and $A$. The proposition says: "Let $A=kQ'$ and $B=kQ$ be the path algebras of quivers $Q'$ and $Q$, respectively, where $Q$ and $Q'$ are Dynkin quivers of type A. Assume there exists a full embedding of translation quivers $F: \Gamma_B \to \Gamma_A$. Then there exists a hom-controlled exact functor $\mathcal{F}: \text{mod }B \to \text{mod }A$."
Can anyone tell me how (or if) their results translate into a result that tells me a recipe for constructing a Kazhdan-Lusztig variety from my non-equioriented quiver? (By K-L variety, I mean a Schubert variety intersected with an opposite Bruhat cell.) Alternately, is there a way to see which particular sub-variety of the representation variety of equioriented $A_{p+2q+1}$ I get out of this theorem and how that is (maybe a GIT quotient away from) a Kazhdan-Lusztig variety?
Thanks,
Anna
-
His name is Bobinski, without the extra n :P – Mariano Suárez-Alvarez Jun 9 2010 at 13:29
## 1 Answer
The relevance of hom-controlled functors comes from Zwara's paper "Smooth morphisms of module schemes" (Theorem 1.2). The definition there is that two schemes with basepoints $(X,x)$ and $(Y,y)$ have identical singularities if there is a smooth morphism $f \colon X \to Y$ such that $f(x) = y$.
Let $F$ be a hom-controlled functor. He shows that when we're dealing with module varieties, and $X$ is an orbit closure $\overline{O}_M$ with basepoint $x$ some closed point of $O_N$ (so $x$ represents the isomorphism class of a module $N$), then $(\overline{O}_M, x)$ has identical singularities as $(\overline{O}_{FM}, y)$ where $y$ is a closed point of $O_{FN}$.
So if one starts with non-equioriented ${\rm A}_n$ and picks an orbit closure $\overline{O}_M$ together with a closed point in it, then one knows that there is a smooth morphism to some orbit closure in a bigger ${\rm A}_m$. The orbit closure is just the image of $M$ under the hom-controlled functor constructed in Bobinski and Zwara's paper (though this construction is long and I don't remember the details). Then one can use Lakshmibai–Magyar to get a smooth morphism from this orbit closure to some Schubert variety.
So it's enough to understand how to construct $F$, which I remember being explicit but requiring quite a few steps, if we just want the varieties together with the singularities, but constructing the smooth morphism itself would take a lot more digging to construct explicitly.
-
"...then one knows that there is a smooth morphism to some orbit closure in a bigger $A_m$." Is it really to, and not from? – Allen Knutson Jun 9 2010 at 17:54
Looking back at the proof, it might actually be neither. In the proof of Theorem 1.2, Zwara factors $F$ into several hom-controlled functors, and it seems that each of them produces maps going in opposite directions. – Steven Sam Jun 9 2010 at 18:54
http://mathematica.stackexchange.com/questions/tagged/homework+vector-calculus
# Tagged Questions
### Finding unit tangent, normal, and binormal vectors for a given r(t)
For my Calc III class, I need to find $T(t), N(t)$, and $B(t)$ for $t=1, 2$, and $-1$, given $r(t)=\{t,t^2,t^3\}$. I've got Mathematica, but I've never used it before and I'm not sure how to coerce ...
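The question asks for Mathematica, but the underlying computation is short in any language. Here is a Python sketch (my own, not from the thread) using the standard formulas $T=r'/|r'|$, $B=(r'\times r'')/|r'\times r''|$, $N=B\times T$ for $r(t)=(t,t^2,t^3)$:

```python
# Frenet frame for r(t) = (t, t^2, t^3), evaluated at a given t.
from math import sqrt

def frenet(t):
    rp  = (1.0, 2.0 * t, 3.0 * t * t)   # r'(t)
    rpp = (0.0, 2.0, 6.0 * t)           # r''(t)
    cross = lambda a, b: (a[1]*b[2] - a[2]*b[1],
                          a[2]*b[0] - a[0]*b[2],
                          a[0]*b[1] - a[1]*b[0])
    unit = lambda v: tuple(c / sqrt(sum(x * x for x in v)) for c in v)
    T = unit(rp)                        # unit tangent
    B = unit(cross(rp, rpp))            # unit binormal
    N = cross(B, T)                     # unit normal = B x T
    return T, N, B

T, N, B = frenet(1.0)
print(T)  # (1, 2, 3) / sqrt(14)
```

The same frame at $t=2$ or $t=-1$ is just `frenet(2.0)` or `frenet(-1.0)`.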
http://mathhelpforum.com/advanced-algebra/192308-what-dimension-solution-set-nonhomogeneous-equation-system.html
# Thread:
1. ## What is the dimension of the solution set of a nonhomogeneous equation system?
I thought its dimension is equal to the dimension of the solution set of the same homogenous system. Am I right?
2. ## Re: What is the dimension of the solution set of a nonhomogeneous equation system?
Not necessarily. We can think of a homogeneous system of n equations as a matrix equation Ax = 0 where A is the matrix of coefficients of the system of equations. The solution set of that homogeneous system of equations is the kernel of the matrix A, and its dimension is the "nullity", k, of A. By the rank-nullity theorem, if A maps $R^n$ to $R^m$, then the sum of the rank and nullity of A is n. That is, A maps all of $R^n$ onto an $(n-k)$-dimensional subspace of $R^m$.
We can write a non-homogeneous system as Ax = b, where b is the vector containing the right sides of the equations. If b happens to lie in the $(n-k)$-dimensional subspace that A maps all of $R^n$ into (the "image of $R^n$ under A"), then the solution set has dimension k, the same as the kernel; it is a translate of the kernel, so an affine subspace rather than a linear one. But if b is not in that subspace, there is no solution. That is what is sometimes called the "Fredholm alternative".
3. ## Re: What is the dimension of the solution set of a nonhomogeneous equation system?
often the non-homogeneous case is represented by the dictum: general solution = homogeneous solution + particular solution.
note that if there is NO particular solution, then the number of homogeneous solutions (that is, the nullity of the matrix) is irrelevant.
this often happens in problems where one is to determine if b is in col(A). the smaller the rank of A, the less likely it is that this will be true,
although if it IS true, there are often several ways to combine the columns of A to get b.
(if rank(A) << n, where A is nxn, then we have multiple ways to make a basis from the columns of A).
said yet another way: just because rref(A) has rank k, does not mean the system is consistent. the augmented
matrix might have rank k+r, which would have no solutions.
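the consistency test described above is easy to sketch in code. a short Python illustration (my own, using exact rational row reduction): compare rank(A) with the rank of the augmented matrix [A | b].

```python
# Consistency of Ax = b via ranks: consistent iff rank(A) == rank([A|b]).
from fractions import Fraction

def rank(rows):
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

A = [[1, 2], [2, 4]]               # rank 1, so nullity k = 1
b_good, b_bad = [1, 2], [1, 3]
aug = lambda A, b: [row + [bi] for row, bi in zip(A, b)]

print(rank(A), rank(aug(A, b_good)), rank(aug(A, b_bad)))  # 1 1 2
# equal ranks: consistent, solution set is a k-dimensional affine flat;
# rank jumps: inconsistent, no solutions at all.
```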
http://math.stackexchange.com/questions/70748/model-of-a-group-in-category-of-rings/173613
# Model of a group in category of rings
I'm curious, is a model for a theory of groups (in the sense of Lawvere's algebraic theories) in the category of rings a group ring? Similarly, is a model for a theory of rings in the category of groups a group ring?
-
## 3 Answers
To address the question in a comment, there isn't a variety of group rings. In any variety of universal algebra, the underlying set of a product of two algebras is the product of the underlying sets. However, letting $\mu_n$ be the cyclic group of $n$ elements and $\omega$ be a primitive cube root of unity, we have (as additive groups):
• $$Z[\mu_2] \times Z[\mu_1] \cong (Z \times Z) \times Z$$
• $$Z[\mu_3] \cong Z \times Z\left[\omega\right]$$
$Z[\mu_3]$ is the only group ring whose additive group is $Z^3$, so these two rings would have to be isomorphic, but they are not. Thus, the product of group rings is not given by the product of the underlying sets.
From some perspective, there are two obstacles here. The first is that the theory of group rings is naturally two sorted: one sort for the group and one for the ring. The other is that the group ring is supposed to be free in the appropriate sense.
-
I don't think the theory of group rings is essentially algebraic. The product (in the sense of universal algebra) of two group rings is not a group ring (because it's not even an integral domain!) but the product of any two models of an essentially algebraic theory is always another model of the same theory. – Zhen Lin Jul 21 '12 at 14:48
Hrm. What am I missing then? You have two sorts, you add in the group axioms, you add in the ring axioms, and then you add in the axiom that says the group is a subobject of the ring. Oh! I guess what's missing is that the group ring is free, which is (?) a geometric axiom, not an essentially algebraic one. I've fixed my answer. I think I had in my mind a more general theory that merely needed the groups to be a subgroup of the multiplicative group of the ring. – Hurkyl Jul 21 '12 at 15:13
There are no nonzero group objects in the category of rings. The problem is that the identity is supposed to be a morphism $e : 1 \to R$ where $1$ is the terminal object, but in $\text{Rng}$ the terminal object is the zero ring, and there are no morphisms from the zero ring to any nonzero ring.
(A group object in the opposite of the category of commutative rings, on the other hand, is a group scheme. And if you want to think about group rings categorically, the way to do it is to consider the left adjoint to the forgetful functor $\text{Rng} \to \text{Grp}$ sending a ring to its group of units.)
-
What about the second question? What is a model for the theory of rings in the category of groups? Also can I use that adjunction to form a monad that has group rings as its algebras? – Joe Oct 8 '11 at 1:12
@Matt: The terminal group is the zero group, so both $0$ and $1$ have to be the identity element of the group, so $0=1$. – Hurkyl Jul 21 '12 at 14:16
While the notion of group object internal to the category of rings (without $1$) yields a ring with zero multiplication, and a group object internal to the category of groups yields an abelian group, it is an important starting point that groupoids internal to the same categories are very much non trivial objects. The subject is studied in general in for example
Internal Categories and Groupoids in Congruence Modular Varieties Authors: Janelidze G.; Pedicchio M.C., Journal of Algebra, 193 (1997) 552-570.
Groupoids internal to groups, or groups internal to groupoids, were shown to be equivalent to crossed modules in
R. Brown and C.B. Spencer, "$\cal G$-groupoids, crossed modules and the fundamental groupoid of a topological group", Proc. Kon. Ned. Akad. v. Wet. 7 (1976) 296-302.
Since crossed modules occur in homotopy theory, for second relative homotopy groups, all this was motivation for seeking and applying higher homotopy groupoids, and so going from abelian homotopy groups to more complicated nonabelian algebraic structures to model homotopy theory.
Groupoids internal to groupoids are called double groupoids, and are even somewhat mysterious, though many special cases have been studied.
-
http://physics.stackexchange.com/questions/44430/momentum-and-energy-problem?answertab=active
# Momentum And Energy Problem
The information to the question is, "A $59.0~kg$ boy and his $38.0~kg$ sister, both wearing roller blades, face each other at rest. The girl pushes the boy hard, sending him backward with a velocity $3.00~m/s$ toward the west. Ignore friction."
The specific question I am working on is, "How much potential energy in the girl's body is converted into mechanical energy of the boy–girl system?"
Well, I said that $E_{mech,~i}=0$, because no one has kinetic energy (no one is moving), and neither of them has the potential to move, that is, until the girl commences with the push; but $E_{mech,~f}=PE=KE$, because when she uses her potential energy to apply a force over a distance on her brother, she changes the energy of the system by changing the speed of both of them, meaning the potential energy will convert to kinetic energy.
I've tried to plug every piece of data given to me, but I still couldn't find how much $PE$ was converted to mechanical energy. How do I find it? Is my analysis correct; is there anyway to improve it?
Edit:
I have another question: why can't internal forces of a system cause momentum to not be conserved?
-
## 1 Answer
You have this equation:
$m_bv_b+m_gv_g=0$ (conservation of momentum)
and $m_b,v_b,m_g$ are given. You can easily solve for $v_g$. Once you have this, $|\Delta PE|=\Delta KE_f=\frac12m_bv_b^2+\frac12m_gv_g^2$
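Plugging in the numbers from the problem (a quick sketch of my own; I take west as the negative direction):

```python
# Conservation of momentum: m_b*v_b + m_g*v_g = 0, then total KE.
m_b, m_g, v_b = 59.0, 38.0, -3.0       # boy moves west at 3.00 m/s

v_g = -m_b * v_b / m_g                 # girl's recoil velocity (east)
ke  = 0.5 * m_b * v_b**2 + 0.5 * m_g * v_g**2

print(round(v_g, 2), round(ke, 1))     # 4.66 677.7
```

So about $4.66~m/s$ east for the girl, and roughly $678~J$ of the girl's internal (chemical) potential energy is converted into mechanical energy of the system.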
I have another question: why can't internal forces of a system cause momentum to not be conserved?
Who said that they can't? Momentum is always conserved when there are no external forces on the system.
-
@Manshearth No, I was asking why internal forces don't affect the total momentum of a system. – Mack Nov 17 '12 at 17:57
http://mathoverflow.net/questions/36034?sort=newest
## Mod l local Galois representations (l different from p)
My question is referred to the statement and proof of Prop. 2.4 of Diamond's article "An extension of Wiles' Results", in Modular Forms and Fermat Last Theorem, page 479.
More precisely: fix $l$ and $p$ two distinct primes, with $l$ odd. Let $\sigma$ be an irreducible, continuous, degree 2 representation of the absolute Galois group $G_{p}$ of $Q_{p}$, with coefficients in $k$, an algebraic closure of the finite field with $l$ elements. Proposition 2.4 states that if the restriction of $\sigma$ to the inertia subgroup of $G_{p}$ is irreducible and $p$ is odd, then $\sigma$ is isomorphic to the representation induced from a character of the Galois group of a quadratic ramified extension $M$ of $Q_{p}$. The proof given works if the restriction of $\sigma$ to the wild inertia of $G_{p}$ is reducible (I think there's a typo in the first line of the proof). What if $\sigma$ is irreducible on wild inertia (and $p$ is always odd)? It seems to me that this case is not covered in the proof of the Proposition, but maybe I'm not seeing something obvious.. If such a representation exists, it cannot be induced from a quadratic extension $M$ as above, so how does it fit in the description given by the Proposition? Can one say something about such a representation (for example something about its projective image?).
Thanks
-
## 1 Answer
The image of wild inertia is a finite $p$-group, and if $d$ is the degree of an irreducible representation of a $p$-group over an algebraically closed field of characteristic $\ne p$, then $d$ is a power of $p$. So for $p$ odd the image of wild inertia is always reducible.
-
http://physics.stackexchange.com/questions/51886/hamiltons-equations-for-a-simple-pendulum
# Hamilton's equations for a simple pendulum
I don't get how to use Hamilton's equations in mechanics, for example let's take the simple pendulum with $$H=\frac{p^2}{2mR^2}+mgR(1-\cos\theta)$$ Now Hamilton's equations will be: $$\dot p=-mgR\sin\theta$$ $$\dot\theta=\frac{p}{mR^2}$$ I know one of the points of the Hamiltonian formalism is to get first-order differential equations instead of the second-order ones that the Lagrangian formalism gives you, but how can I proceed from here without just differentiating $\dot\theta$ again with respect to time and substituting $\dot p$, to get the same equation that I get with the Lagrangian formulation? Or is that the way to do it? And how could I get the path of the system on the phase space with those equations?
-
## 1 Answer
Generally both formulations (Lagrangian and Hamiltonian) are equivalent, but in your case, if $\theta$ is small, you have a simplified equation for $p$ and you can use a solution ansatz like $e^{i\omega t}$ for both $p$ and $\theta$.
To draw a path in the phase space, you have to solve the equations and/or manage to express $p(\theta)$ or $\theta(p)$.
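One can also keep the two first-order equations as they are and integrate them numerically; the phase-space path then comes out directly as the parametric curve $(\theta(t), p(t))$. A Python sketch (my own, with arbitrary illustrative values for $m$, $R$, $g$ and the initial condition):

```python
# Integrate Hamilton's equations for the pendulum with classical RK4:
#   theta_dot = p / (m R^2),   p_dot = -m g R sin(theta)
from math import sin, cos

m, R, g = 1.0, 1.0, 9.81

def H(theta, p):
    return p * p / (2 * m * R * R) + m * g * R * (1 - cos(theta))

def rhs(theta, p):
    return p / (m * R * R), -m * g * R * sin(theta)

def rk4_step(theta, p, dt):
    k1 = rhs(theta, p)
    k2 = rhs(theta + 0.5 * dt * k1[0], p + 0.5 * dt * k1[1])
    k3 = rhs(theta + 0.5 * dt * k2[0], p + 0.5 * dt * k2[1])
    k4 = rhs(theta + dt * k3[0], p + dt * k3[1])
    theta += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
    p     += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return theta, p

theta, p, dt = 0.5, 0.0, 0.001       # released from rest at theta = 0.5 rad
path = [(theta, p)]
for _ in range(5000):
    theta, p = rk4_step(theta, p, dt)
    path.append((theta, p))
# plotting theta vs p traces a closed level curve of constant H
```

Because $H$ is conserved, the path in the $(\theta, p)$ plane is a level curve of $H$, which is why it closes up for the oscillating pendulum.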
-
Thanks for the info, so I will get to the same second order differential equation? And about the phase space? – MyUserIsThis Jan 22 at 16:41
Not obligatory, you can solve $\dot{p}\propto \theta$ and $\dot{\theta}\propto p$ by the ansatz. – Vladimir Kalitvianski Jan 22 at 16:48
To add to what Vladimir said, if you consider a system with $n$ generalized coordinates (in this case $n=1$ since your system is described by the coordinate $\theta$), then you will obtain $2n$ first order differential equations, $n$ for the coordinates and $n$ for their corresponding canonical momenta. To get the path in phase space, you can do what Vladimir suggests; this will allow you to obtain $\theta(t)$ and $p(t)$, and then you simply plot these functions as a parametric curve on the $\theta$-$p$ plane. Cheers! – joshphysics Jan 22 at 16:52
http://unapologetic.wordpress.com/2009/09/23/
# The Unapologetic Mathematician
## Directional Derivatives
Okay, now let’s generalize away from partial derivatives. The conceptual problem there was picking a bunch of specific directions as our basis, and restricting ourselves to that basis. So instead, let’s pick any direction at all, or even more generally than that.
Given a vector $u\in\mathbb{R}^n$, we define the directional derivative of the function $f:\mathbb{R}^n\rightarrow\mathbb{R}$ in the direction of $u$ by
$\displaystyle\left[D_uf\right](x)=\lim\limits_{t\to0}\frac{f(x+ut)-f(x)}{t}$
It’s common to omit the brackets I’ve written in here, but that doesn’t make it as clear that we have a new function $D_uf$, and we’re asking for its value at $x$. Instead, $D_uf(x)$ can suggest that we’re applying $D_u$ to the value $f(x)$. It’s also common to restrict $u$ to be a unit vector, which is then used as a representative for all of those vectors pointing in the same direction. I find that to be a needless hindrance, but others may disagree.
Anyhow, this looks a lot like our familiar derivative. Indeed, if we’re working in $\mathbb{R}^1$ and we set $u=1$ we recover our regular derivative. And we have the same sort of interpretation: if we move a little bit $\Delta t$ in the direction of $u$ then we can approximate the change in $f$
$\displaystyle f(x+u\Delta t)\approx f(x)+\left[D_uf\right](x)\Delta t$
$\displaystyle\Delta f=f(x+u\Delta t)-f(x)\approx\left[D_uf\right](x)\Delta t$
$\displaystyle\frac{\Delta f}{\Delta t}\approx\left[D_uf\right](x)$
Now, does the existence of these limits guarantee the continuity of $f$ at $x$? No, not even the existence of all directional derivatives at a point assures us that the function will be continuous at that point. Indeed, we can consider another of our pathological cases
$\displaystyle f(x,y)=\frac{x^2y}{x^4+y^2}$
and patch it by defining $f(0,0)=0$. We take the directional derivative at $(x,y)=(0,0)$ using the direction vector $(u,v)$
$\displaystyle\begin{aligned}\left[D_{(u,v)}f\right](0,0)&=\lim\limits_{t\to0}\frac{f(ut,vt)-f(0,0)}{t}\\&=\lim\limits_{t\to0}\frac{\frac{t^3u^2v}{t^4u^4+t^2v^2}}{t}\\&=\lim\limits_{t\to0}\frac{u^2v}{t^2u^4+v^2}\end{aligned}$
If $v\neq0$ then we find $\left[D_{(u,v)}f\right](0,0)=\frac{u^2}{v}$, while if $v=0$ we find $\left[D_{(u,v)}f\right](0,0)=0$. But we know that this function can’t be continuous, since if we approach the origin along the parabola $y=x^2$ we get a limit of $\frac{1}{2}$ instead of $f(0,0)=0$.
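Both computations are easy to confirm numerically; here is a small Python sketch (my own addition):

```python
# The pathological function: every directional derivative at (0,0) exists,
# yet f is not continuous there (limit 1/2 along the parabola y = x^2).
def f(x, y):
    return 0.0 if (x, y) == (0.0, 0.0) else x * x * y / (x ** 4 + y * y)

def directional_derivative(u, v, t=1e-6):
    # difference quotient (f(ut, vt) - f(0,0)) / t for small t
    return (f(u * t, v * t) - f(0.0, 0.0)) / t

assert abs(directional_derivative(1.0, 2.0) - 1.0 / 2.0) < 1e-6   # u^2/v
assert abs(directional_derivative(1.0, 0.0)) < 1e-6               # v = 0 case
# yet along y = x^2 the values approach 1/2, not f(0,0) = 0:
assert abs(f(1e-8, 1e-16) - 0.5) < 1e-12
```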
Again, the problem is that directional derivatives imply continuity along straight lines in various directions, but even continuity along every straight line through the point isn’t enough to assure continuity as a function of two variables, let alone more. We need something even stronger than directional derivatives.
On the other hand, directional derivatives are definitely stronger than partial derivatives. First of all, we haven’t had to make any choice of an orthonormal basis. But if we do have an orthonormal basis $\left\{e_i\right\}_{i=1}^n$ at hand, we find that partial derivatives are just particular directional derivatives
$\displaystyle\begin{aligned}\left[D_{e_k}f\right](x)&=\lim\limits_{t\to0}\frac{f(x+e_kt)-f(x)}{t}\\&=\lim\limits_{t\to0}\frac{f(x^ie_i+e_kt)-f(x^ie_i)}{t}\\&=\lim\limits_{t\to0}\frac{f(x^1,\dots,x^k+t,\dots,x^n)-f(x^1,\dots,x^k,\dots,x^n)}{t}\\&=f_k(x^1,\dots,x^n)\end{aligned}$
Incidentally, I’ve done two things here worth noting. First of all, I’ve gone back to using superscript indices for vector components. This allows the second thing, which is the transition from writing a function as taking one vector variable $f(x)$ to rewriting the vector in terms of the basis at hand $f(x^ie_i)$ to writing the function as taking $n$ real variables $f(x^1,\dots,x^n)$. I know that some people don’t like superscript indices and the summation convention, but they’ll be standard when we get to more general spaces later, so we may as well get used to them now. Luckily, when we really understand something we shouldn’t have to pick coordinates, and indices only come into play when we do pick coordinates. Thus all the really meaningful statements shouldn’t have many indices to confuse us.
Posted by John Armstrong | Analysis, Calculus | 13 Comments
## About this weblog
This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”).
I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
http://unapologetic.wordpress.com/2010/08/17/associated-metric-spaces-and-absolutely-continuous-measures-ii/?like=1&source=post_flair&_wpnonce=3fdfac1636
# The Unapologetic Mathematician
## Associated Metric Spaces and Absolutely Continuous Measures II
Yesterday, we saw that an absolutely continuous finite signed measure $\nu$ on a measure space $(X,\mathcal{S},\mu)$ defines a continuous function on the associated metric space $\mathfrak{S}$, and that a sequence of such finite signed measures that converges pointwise is actually uniformly absolutely continuous with respect to $\mu$.
We’re going to need to assume that $\nu$ is nonnegative. We’d usually do this by breaking $\nu$ into its positive and negative parts, but it’s not so easy to get ahold of the positive and negative parts of $\nu$ in this case. However, we can break each $\nu_n$ into $\nu_n^+$ and $\nu_n^-$. Then we can take the limits $\nu^{\geq0}(E)=\lim_n\nu_n^+(E)$ and $\nu^{\leq0}(E)=\lim_n\nu_n^-(E)$, which will still satisfy $\nu(E)=\nu^{\geq0}(E)-\nu^{\leq0}(E)$. The only difference between this decomposition and the positive and negative parts is that this pair of set functions might have some redundancy that gets cancelled off in the subtraction. And so, without loss of generality, we will assume that all the $\nu_n$ are nonnegative, and that their limit $\nu$ is as well.
Now, given such a sequence, define the limit function $\nu(E)=\lim_n\nu_n(E)$. I say that $\nu$ is itself a finite signed measure, and that $\nu\ll\mu$. Indeed, $\nu(E)$ is finite by assumption, and additivity is easy to check. As for absolute continuity, if $\mu(E)=0$, then each $\nu_n(E)=0$ since $\nu_n\ll\mu$, and so $\nu(E)=0$ as the limit of the constant zero sequence.
What we need to check is continuity. We know that it suffices to show that $\nu$ is continuous from above at $\emptyset$. So, let $\{E_m\}$ be a decreasing sequence of measurable sets whose limit is $\emptyset$. We must show that the limit of $\nu(E_m)$ is zero. But we know that the limit of $\mu(E_m)$ is zero, and thus for a large enough $m$ we can make $\mu(E_m)<\delta$ for any given $\delta$. And since $\nu\ll\mu$ we know that for any $\epsilon$ there is some $\delta$ so that if $\mu(E)<\delta$ then $\nu(E)<\epsilon$. Thus we can always find a large enough $m$ to guarantee that $\nu(E_m)<\epsilon$, and so the limit is zero, as asserted.
Finally, what happens if we remove the absolute continuity requirement from the $\nu_n$? That is: what can we say if $\{\nu_n\}$ is a sequence of finite signed measures on $X$ so that $\nu(E)=\lim_n\nu_n(E)$ exists and is finite for each $E\in\mathcal{S}$? I say that $\nu$ is still a signed measure. What we need is to find some measure $\mu$ so that all the $\nu_n\ll\mu$, and then we can use the above result.
Since $\nu_n$ is a finite signed measure, we can pick some upper bound $c_n\geq\lvert\nu_n(E)\rvert$. Then we define
$\displaystyle\mu(E)=\sum\limits_{n=1}^\infty\frac{1}{2^nc_n}\lvert\nu_n\rvert(E)$
If $\mu(E)=0$, then every term of the sum forces $\lvert\nu_n\rvert(E)=0$, and so $\lvert\nu_n\rvert\ll\mu$. Thus $\nu_n\ll\mu$ for all $n$, as desired.
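The construction is transparent on a finite space; a sketch (our toy example, with two measures standing in for the infinite sequence):

```python
# A finite toy model of the dominating-measure trick: on a finite space a
# signed measure is just a weight per point, |nu| is the sum of absolute
# weights, and mu = sum_n |nu_n| / (2^n c_n) vanishes only where every nu_n does.

nus = [
    {"a": 2.0, "b": -1.0, "c": 0.0},   # nu_1
    {"a": 0.0, "b": 3.0, "c": 0.0},    # nu_2
]

def total_variation(nu):
    return sum(abs(w) for w in nu.values())   # |nu|(X), an upper bound c_n

cs = [total_variation(nu) for nu in nus]
points = list(nus[0])

mu = {p: sum(abs(nu[p]) / (2**n * c)
             for n, (nu, c) in enumerate(zip(nus, cs), start=1))
     for p in points}

# mu(E) = 0 forces |nu_n|(E) = 0 for every n, i.e. nu_n << mu:
for p in points:
    if mu[p] == 0:
        assert all(nu[p] == 0 for nu in nus)

assert mu["c"] == 0               # no nu_n charges c, so mu doesn't either
assert mu["a"] > 0 and mu["b"] > 0
```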
Posted by John Armstrong | Analysis, Measure Theory
http://mathhelpforum.com/number-theory/48655-question-about-proof-prime-number-theorem.html
# Thread:
1. ## A question about proof of prime number theorem
The following are the basic steps in one proof of the prime number theorem (many steps are left out):
$\pi(x)\sim \frac{x}{\ln(x)}\quad\text{iff}\quad \psi(x)\sim x$
Now,
$\psi_0(x)=-\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\frac{\zeta'(s)}{\zeta(s)}\frac{x^s}{s}ds$
where:
$\psi_0(x)=\left\{\begin{array}{ccc} \psi(x) & \text{for} & x\ne p^m \\ \psi(x)-\frac{1}{2}\ln(p) & \text{for} & x=p^m \end{array}\right.$
That is, $\psi_0(x)$ differs from $\psi(x)$ only when x is a prime power, the difference being $1/2\ln(p)$. Now, via residue integration:
$\psi_0(x)=x-\sum_{\rho}\frac{x^{\rho}}{\rho}-\ln(2\pi)-1/2\ln\left(1-1/x^2\right)$
where the sum is over all the non-trivial zeros of the zeta function. Dividing through by x and letting x tend to infinity:
$\frac{\psi(x)}{x}\to 1-\lim_{x\to\infty}\frac{1}{x}\sum_{\rho}\frac{x^{\rho}}{\rho}$
I think it can be shown that $\sum_{\rho}\frac{x^{\rho}}{\rho}=\textbf{O}(\sqrt{x})$ and therefore:
$\frac{\psi(x)}{x}\sim 1$ and thus $\pi(x)\sim\frac{x}{\ln(x)}$
I'm not sure about the order of the sum. Can someone confirm this or explain further how this sum is bounded?
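For orientation, $\psi(x)\sim x$ itself is easy to observe numerically from the definition $\psi(x)=\sum_{p^m\le x}\ln p$; a Python sketch:

```python
# Direct numerical check that psi(x)/x approaches 1, computing Chebyshev's
# function psi(x) = sum over prime powers p^m <= x of ln(p) via a sieve.
from math import log

def chebyshev_psi(n):
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n**0.5) + 1):
        if is_prime[p]:
            for q in range(p * p, n + 1, p):
                is_prime[q] = False
    total = 0.0
    for p in range(2, n + 1):
        if is_prime[p]:
            pk = p
            while pk <= n:        # every prime power p^m <= n contributes ln p
                total += log(p)
                pk *= p
    return total

# psi(x)/x is already within a few percent of 1 at modest x
for x in (10**3, 10**4, 10**5):
    assert abs(chebyshev_psi(x) / x - 1) < 0.05
```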
2. Hello,
I am not an expert, this is what I found in books. (Mainly "The theory of the Riemann zeta-function" by S.J.Patterson).
The explicit formula for psi_0(x) is due to von Mangoldt.
If we let $S(x,T)=\sum_\rho \frac{x^{\rho}}{\rho}$ where $\rho$ runs over the zeros of the zeta function with $|\mathrm{Im}(\rho)|<T$, then $|x^{\rho}|\leq x$, $1/\rho=O(1/T)$, and there are $O(\log T)$ such zeros. Thus $S(x,T)=O((x\log T)/T)$.
I don't know where you got $\lim_{T\to\infty}S(x, T)=O(\sqrt{x})$.
Bye.
3. Ok. I was wrong (I thought it might be $\sqrt{x}$). Thanks a bunch.
I'll try to find that book. I got a question about the number of zeros: I thought the number of roots between $0$ and $T$ is approximately:
$\frac{T}{2\pi}\ln\left(\frac{T}{2\pi}\right)-\frac{T}{2\pi}$. Can someone explain to me how that's $\textbf{O}(\ln(T))$?
4. Hello,
Originally Posted by shawsend
I got a question about the number of zeros: I thought the number of roots between $0$ and $T$ is approximately:
$\frac{T}{2\pi}\ln\left(\frac{T}{2\pi}\right)-\frac{T}{2\pi}$. Can someone explain to me how that's $\textbf{O}(\ln(T))$?
Sorry, I was wrong. Forget my first post. I hope someone wiser might help.
Bye.
5. Hey guys, Wikipedia under Chebyshev function gives:
$\sum_{\rho}\frac{x^{\rho}}{\rho}=\textbf{O}(\sqrt{x}\ln^2 x)$
when this is substituted into the expression for $\frac{\psi(x)}{x}$, I get:
$\lim_{x\to\infty}\frac{\textbf{O}(\sqrt{x}\ln^2 x)}{x}\to 0$
which is what one expects.
Would be interesting to show how this order is determined. I'll try.
6. Hello,
Originally Posted by shawsend
Hey guys, Wikipedia under Chebyshev function gives:
$\sum_{\rho}\frac{x^{\rho}}{\rho}=\textbf{O}(\sqrt{x}\ln^2 x)$
The Wikipedia says that you can prove this estimate "if the Riemann Hypothesis is TRUE." (In fact, the estimate is equivalent to RH.) Prove it, and you get a prize!
Bye.
7. Hey Wisterville. I think the two are separate and Wikipedia is alluding to the fact the sum would not be of this order if other zeros outside the critical line were included.
I believe the sum can be considered completely independently of the Riemann Hypothesis like this: What is the order of the sum $\sum_{\rho}\frac{x^{\rho}}{\rho}$ assuming $\rho=1/2+it$ and the density of the set $\{\rho_n\}$ in the range $(0,T)$ is of order $\frac{T}{2\pi}\ln\frac{T}{2\pi}-\frac{T}{2\pi}$.
Note: The sum is taken symmetrically over the zeros:
$\sum_{\rho}\frac{x^{\rho}}{\rho}=\lim_{T\to\infty} \sum_{|t|\leq T}\frac{x^{\rho}}{\rho};\quad \rho=1/2+it$
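To reconcile the counts discussed above: the Riemann–von Mangoldt formula gives $N(T)=\frac{T}{2\pi}\ln\frac{T}{2\pi}-\frac{T}{2\pi}+\frac{7}{8}+O(\ln T)$ zeros with $0<\mathrm{Im}(\rho)<T$ in total, so the number in a unit window $[T,T+1]$ is only $O(\ln T)$; both statements are consistent. A quick check of the main term (the value $N(100)=29$ is a standard tabulated count):

```python
# Main term of the Riemann-von Mangoldt formula vs. the known zero count.
from math import log, pi

def N_main(T):
    return (T / (2 * pi)) * log(T / (2 * pi)) - T / (2 * pi) + 7.0 / 8.0

# There are exactly 29 nontrivial zeros with 0 < Im(rho) < 100 (tabulated),
# and the main term already lands on that value.
assert abs(N_main(100.0) - 29) < 1

# The count per unit height near T grows only logarithmically:
assert N_main(1001.0) - N_main(1000.0) < log(1000.0)
```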
http://mathhelpforum.com/calculus/2701-div-grad-curl.html
# Thread:
1. ## div, grad, curl
I have the following query:
I need to verify:
$\nabla\times(\mathbf{A}\times\mathbf{B}) = (\mathbf{B}\cdot\nabla)\mathbf{A} - \mathbf{B}(\nabla\cdot\mathbf{A}) - (\mathbf{A}\cdot\nabla)\mathbf{B} + \mathbf{A}(\nabla\cdot\mathbf{B})$
for A=(1,0,0) and B=(x,y,z).
I can do the left side and the div terms of the right side but I am unsure of how to go about working out the other two terms with the $\nabla$ in them.
Please could someone explain to me how to calculate these.
Thanks.
2. Originally Posted by jedoob
I have the following query:
I need to verify:
$\nabla\times(\mathbf{A}\times\mathbf{B}) = (\mathbf{B}\cdot\nabla)\mathbf{A} - \mathbf{B}(\nabla\cdot\mathbf{A}) - (\mathbf{A}\cdot\nabla)\mathbf{B} + \mathbf{A}(\nabla\cdot\mathbf{B})$
for A=(1,0,0) and B=(x,y,z).
I can do the left side and the div terms of the right side but I am unsure of how to go about working out the other two terms with the $\nabla$ in them.
Please could someone explain to me how to calculate these.
Thanks.
By definition $\nabla$ is a vector differential operator. So:
$\nabla \equiv \left ( \partial _x , \, \partial _y , \, \partial _z \right )$
That means:
$A \cdot \nabla = \left ( 1 \partial _x + 0 \partial _y + 0 \partial_z \right ) = \partial _x$
(Recall that the dot product always produces a scalar quantity.)
and
$B \cdot \nabla = \left ( x \partial _x + y \partial _y + z \partial_z \right )$
Note that the order of these is important: $A \cdot \nabla \neq \nabla \cdot A$.
That means that
$\left ( A \cdot \nabla \right ) B = \partial _x (x,y,z) = (1,0,0)$
and
$\left ( B \cdot \nabla \right ) A = \left ( x \partial _x + y \partial _y + z \partial_z \right ) (1,0,0) = 0$
since A is a constant vector.
-Dan
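The computation above is easy to double-check numerically; a sketch using central finite differences, with A and B as in the thread:

```python
# Verify curl(A x B) = (B.grad)A - B(div A) - (A.grad)B + A(div B)
# for A = (1,0,0), B = (x,y,z), at a sample point.

def A(x, y, z):
    return (1.0, 0.0, 0.0)

def B(x, y, z):
    return (x, y, z)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def AxB(x, y, z):
    return cross(A(x, y, z), B(x, y, z))

def partial(F, i, k, p, h=1e-5):
    """d(F_i)/d(x_k) at point p by central difference."""
    lo, hi = list(p), list(p)
    lo[k] -= h
    hi[k] += h
    return (F(*hi)[i] - F(*lo)[i]) / (2 * h)

def div(F, p):
    return sum(partial(F, i, i, p) for i in range(3))

def curl(F, p):
    return (partial(F, 2, 1, p) - partial(F, 1, 2, p),
            partial(F, 0, 2, p) - partial(F, 2, 0, p),
            partial(F, 1, 0, p) - partial(F, 0, 1, p))

p = (0.3, -0.7, 1.2)
Ap, Bp = A(*p), B(*p)
lhs = curl(AxB, p)
rhs = tuple(
    sum(Bp[k] * partial(A, i, k, p) for k in range(3))    # (B . grad) A
    - Bp[i] * div(A, p)                                   # - B (div A)
    - sum(Ap[k] * partial(B, i, k, p) for k in range(3))  # - (A . grad) B
    + Ap[i] * div(B, p)                                   # + A (div B)
    for i in range(3))

assert all(abs(l - r) < 1e-6 for l, r in zip(lhs, rhs))
# here A x B = (0, -z, y), whose curl is the constant field (2, 0, 0)
assert all(abs(l - e) < 1e-6 for l, e in zip(lhs, (2.0, 0.0, 0.0)))
```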
http://mathoverflow.net/questions/61404?sort=oldest
## Intersection probability for ‘N’ fixed-length rods in one- or two-dimensions
Please consider the case where I have 'N' rods of length L (and width W) placed on a one- or two-dimensional surface with dimensions [0, A] in 1D, and [ [0, A], [0, B] ] in 2D. For the two-dimensional case, L and W are << A or B.
As a function of the number of rods N, and the relative dimensions of the rods and the surface on which they are placed, is there a reasonably easy derivation for the number of expected intersections between rods / the probability of an intersection occurring? I feel like this should have been solved somewhere in the literature, but I was unable to find anything.
Edit - When I mentioned the rods should be 'placed', I failed to clarify that my treatment has been to assume that one end of each rod is placed with uniform random probability somewhere inside the specified dimensions of the one- or two-dimensional surface, the angle of the rod should be random, and any rod sections outside the bounded surface should either be treated or ignored depending on how easy it makes treating boundary conditions.
-
How are they placed? (Of course, somehow 'randomly', but I could imagine that, at least in the two-dimensional case, there are different natural ways one could think of placing them 'randomly', affecting the result. So it might be good to specify this, or at least to state that this is not so, or that you do not care about this aspect.) – quid Apr 12 2011 at 14:09
Thanks, I hopefully just clarified what I meant. I basically want the easiest possible treatment for the boundary of the surface. – Rob Grey Apr 12 2011 at 14:57
This isn't quite what you're asking about, but percolation of randomly placed rods often goes by the name "stick percolation" in the literature. See for instance this paper of Rahul Roy for some rigorous work math.bme.hu/~balint/oktatas/perkolacio/… , and this paper of Jiantong Li and Shi-Li Zhang for some numerical work link.aps.org/doi/10.1103/PhysRevE.80.040104 – jc Apr 12 2011 at 16:49
Thanks jc... I wouldn't have thought to use the word 'stick'. – Rob Grey Apr 12 2011 at 17:05
## 1 Answer
To compute the expected number of intersections, use the fact that expectation is additive. The expected number of intersections is just $\binom{N}{2}p$ where $p$ is the probability of an intersection.
To compute the probability of an intersection you can make your life easier (with essentially no cost to the accuracy) by assuming your surface/line wraps around. This means there are no special places on your surface. Now you need to compute the probability that two rods intersect.
In one dimension imagine the first rod is placed somewhere. What is the probability that the second rod intersects it? $2L/A$ (either the left end of the second rod lies inside the first rod, or its right end does).
In two dimensions, fix the position of the first rod. Then suppose the second rod is inclined at angle $\theta$ to the first rod (we can assume that $0<\theta<\pi/2$). We now need the area of the set of positions of the "top left" vertex of the second rod such that there is an intersection with the first rod. This set of positions is an octagon (whose area (as long as my geometry is correct) is $2L^2\sin\theta+2W^2\sin\theta+2WL(1+\cos\theta)$). (In the case where $W\ll L$ it reduces to a rhombus for which the area is just $L^2\sin\theta$). The intersection probability is then just $(2/\pi)$ times the integral of this area over $\theta$ divided by $AB$. i.e. $(4(L^2+W^2)/\pi+LW(2+4/\pi))/AB$
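The 1-D wrap-around claim is easy to spot-check by simulation (a sketch; the wrap-around convention matches the answer above):

```python
# Monte Carlo check of the 1-D wrap-around case: two rods of length L
# dropped uniformly on a circle of circumference A intersect with
# probability 2L/A (so N rods give C(N,2) * 2L/A expected intersections).
import random

def intersect_1d(a, b, L, A):
    """Do arcs [a, a+L] and [b, b+L] overlap on a circle of circumference A?"""
    d = abs(a - b) % A
    d = min(d, A - d)            # wrap-around distance between left endpoints
    return d < L

A_len, L_len, trials = 100.0, 3.0, 200_000
hits = sum(
    intersect_1d(random.uniform(0, A_len), random.uniform(0, A_len),
                 L_len, A_len)
    for _ in range(trials))
p_hat = hits / trials

assert abs(p_hat - 2 * L_len / A_len) < 0.005    # 2L/A = 0.06
```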
-
http://mathoverflow.net/questions/tagged/locales
## Tagged Questions
### On the openness of the map X^I -> X * X.
Hello! Let $X$ be a locale or a topological space. $I$ denotes the unit interval of the real numbers, and $X^I$ the space of functions from $I$ to $X$ (the locale exponential if …
### Are $\infty$-topoi determined by their localic points?
Hello! If $T$ is an infinity topos, then you can consider the infinity category of geometric morphisms from $Sh_{\infty}(\mathcal{L})$ to $T$ for any locale $\mathcal{L}$. This as …
### $\infty$-topos and localic $\infty$-groupoids?
Hello! It's known that every classical (Grothendieck) topos is equivalent to the topos of sheaves on a localic groupoid (a groupoid in the category of locales). For the record, …
### Given a Grothendieck topos, what does its localic groupoid look like? [closed]
Possible Duplicate: Toposes (topoi) as classifying toposes of groupoids. For example, if a topos E is the object classifier, or the presheaf topos on a small category C, is …
### Locales and Topology.
As someone more used to point-set topology, who is unfamiliar with the inner workings of lattice theory, I am looking to learn about the localic interpretation of topology, of whic …
### Intersection of open sublocales of a compact regular locale?
Hello! It's well known that any sublocale of a regular locale is the intersection of a family of open sublocales. Hence if $X$ is a regular locale, the map which to a sublocale \$Y \s …
### Counterexample to Urysohn's lemma in a topos without countable choice?
Hello! Urysohn's Lemma asserts that in every normal topological space two disjoint closed subsets may be separated by a real-valued function. Its proof uses the axiom of countable …
### Surjection of localic infinity toposes?
Hello! Is there a simple 'topological' condition to detect whenever a morphism of locales $f : X \rightarrow Y$ induces a surjection of infinity-toposes \$f : \mathrm{Sh}_{\infty} …
### Which complete Boolean algebras arise as the algebras of projections of commutative von Neumann algebras?
Projections in an arbitrary commutative von Neumann algebra form a complete Boolean algebra. Moreover, a morphism of commutative von Neumann algebras induces a continuous morphism …
### Localic locales? Towards very pointless spaces by iterated internalization.
One can think of locales as (generalizations of) topological spaces which don't necessarily have (enough) points. Of course when one studies locales, one "actually" studies frames, …
### Stone Spaces, Locales, and Topoi for the (relative) beginner
I am currently reading Vickers' text "Topology via Logic" and Peter Johnstone's "Stone Spaces", and I understand the material in both of these texts to pertain directly to construc …
### Do strict pro-sets embed in locales?
It is well-known that the category of profinite groups (by which I mean Pro(FiniteGroups), i.e. the category of formal cofiltered limits of finite groups) is equivalent to a full s …
### Definition of Category of Locales
In the Wikipedia entry for 'frames and locales', pains are taken to distinguish between the category of locales - defined to be the opposite of the category of frames - and the cat …
### Strong monics in the category of locales
Are there non-regular strong monics in the category of locales?
http://math.stackexchange.com/questions/81119/compare-light-volume-of-two-light-bulbs-with-different-degree-spreads
# Compare light volume of two light bulbs with different degree spreads
I have two light bulbs that are otherwise the same, but one spreads its light over an angle of 45 degrees while the other spreads it over 60 degrees.
I measured their lux from a height of 1 meter, starting right below the light bulb, and then measured again after walking 20 cm to the right, and again at 40 cm, and so on. By doing this I was able to measure how well each light bulb spreads its light. Here are the actual measurements.
````
        0 cm     20 cm    40 cm    60 cm    80 cm    100 cm
45°     624 lux  371 lux  82 lux   18 lux   6 lux    3 lux
60°     327 lux  307 lux  152 lux  54 lux   17 lux   7 lux
````
Now my question: how do I verify that the light volume is in fact the same, only spread differently?
I'm thinking about giving each measuring point a weight to reflect its importance: a lux value at a distance of 100 cm should count for more than one at 0 cm, because it represents a larger ring of floor area.
I hope it's clear what I want: I'm looking for a way to validate that the two light bulbs are in fact sending out the same amount of light, but with different spreads.
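One way to implement that weighting with the table above: treat each reading $E(r)$ as representative of a floor ring of radius $r$, so it contributes $E(r)\cdot 2\pi r\,\Delta r$ lumens. A rough sketch (light past 100 cm and the bulb-to-ring obliquity are ignored):

```python
# Estimate the lumens landing on the floor by weighting each lux reading
# E(r) with the area 2*pi*r*dr of the ring it represents (trapezoidal rule).
from math import pi

r = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]            # metres
lux_45 = [624, 371, 82, 18, 6, 3]
lux_60 = [327, 307, 152, 54, 17, 7]

def floor_flux(r, E):
    total = 0.0
    for i in range(len(r) - 1):
        g0 = 2 * pi * r[i] * E[i]
        g1 = 2 * pi * r[i + 1] * E[i + 1]
        total += 0.5 * (g0 + g1) * (r[i + 1] - r[i])
    return total                               # lumens on the floor

f45 = floor_flux(r, lux_45)
f60 = floor_flux(r, lux_60)
# If the bulbs emit equal light, these totals should be comparable; with the
# numbers above they come out noticeably different, which is exactly the kind
# of discrepancy the weighting is meant to reveal.
```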
-
For those unfamiliar with photometric units, "lux" is the same "as lumens per square meter", and "lumen" is a unit of photometric flux. So we can also think of the lamp as a device that shoots "bullets" randomly in different direction, where the density of bullets landing on different infinitesimal areas of the floor is as in the OP's table. The OP is then asking whether the two sources shoot the same number of bullets in total. – Henning Makholm Nov 11 '11 at 14:49
I don't understand the downvote. It seems like a good question to me. – Ross Millikan Nov 11 '11 at 15:14
## 1 Answer
It would be better to take the data over a sphere centered on the light bulb. You want to take your data far enough away that the size of the bulb doesn't matter, but close enough that "all" the light falling on your meter comes from the bulb. Then it shouldn't matter what distance you use. The solid angle only depends upon the central angle. You would hope the pattern is symmetric around the axis of the bulb, and all you need to do is map the intensity as a function of polar angle. Then if you integrate lux$(\theta)\cdot \sin\theta$ out to where the intensity goes to zero you should get the same value for each bulb. The factor of $\sin\theta$ reflects the fact that the solid angle between $\theta$ and $\theta + \Delta \theta$ is proportional to $\sin\theta$; it is like the circumference of a circle of radius $\sin\theta$.
-
Almost surely the angles are not the half cone. If the angles were the half cone, at 100cm, even if the light source is non-uniformly emitting the light in the solid angle, its intensity should be closer to 1/2 the value at 0cm, and not 1/40. At 1m below the bulb, a sharp half-cone of 22.5 degrees will be completely dark beyond ~42cm, and half-cone of 30 degree will be so beyond ~58cm, which I think corresponds better with the data. – Willie Wong♦ Nov 11 '11 at 15:54
@WillieWong: I didn't read carefully enough to see the way the data was taken. I'll update the answer. – Ross Millikan Nov 11 '11 at 15:59
http://en.wikipedia.org/wiki/Gauss's_law
# Gauss's law
This article is about Gauss's law concerning the electric field. For analogous laws concerning different fields, see Gauss's law for magnetism and Gauss's law for gravity. For Gauss's theorem, a mathematical theorem relevant to all of these laws, see Divergence theorem.
In physics, Gauss's law, also known as Gauss's flux theorem, is a law relating the distribution of electric charge to the resulting electric field.
The law was formulated by Carl Friedrich Gauss in 1835, but was not published until 1867.[1] It is one of the four Maxwell's equations which form the basis of classical electrodynamics, the other three being Gauss's law for magnetism, Faraday's law of induction, and Ampère's law with Maxwell's correction. Gauss's law can be used to derive Coulomb's law,[2] and vice versa.
## Qualitative description of the law
In words, Gauss's law states that:
The net outward normal electric flux through any closed surface is proportional to the total electric charge enclosed within that closed surface.[3]
Gauss's law has a close mathematical similarity with a number of laws in other areas of physics, such as Gauss's law for magnetism and Gauss's law for gravity. In fact, any "inverse-square law" can be formulated in a way similar to Gauss's law: For example, Gauss's law itself is essentially equivalent to the inverse-square Coulomb's law, and Gauss's law for gravity is essentially equivalent to the inverse-square Newton's law of gravity.
Gauss's law is something of an electrical analogue of Ampère's law, which deals with magnetism.
The law can be expressed mathematically using vector calculus in integral form and differential form; the two are equivalent, since they are related by the divergence theorem, also called Gauss's theorem. Each of these forms in turn can also be expressed two ways: in terms of a relation between the electric field E and the total electric charge, or in terms of the electric displacement field D and the free electric charge.[4]
## Equation involving E-field
Gauss's law can be stated using either the electric field E or the electric displacement field D. This section shows some of the forms with E; the form with D is below, as are other forms with E.
### Integral form
Gauss's law may be expressed as:[5]
$\Phi_E = \frac{Q}{\varepsilon_0}$
where ΦE is the electric flux through a closed surface S enclosing any volume V, Q is the total charge enclosed within S, and ε0 is the electric constant. The electric flux ΦE is defined as a surface integral of the electric field:
$\Phi_E = \oint_S \mathbf{E} \cdot \mathrm{d}\mathbf{A}$
where E is the electric field, dA is a vector representing an infinitesimal element of area,[note 1] and • represents the dot product of two vectors.
Since the flux is defined as an integral of the electric field, this expression of Gauss's law is called the integral form.
#### Applying the integral form
Main article: Gaussian surface
See also Capacitance (Gauss's law)
If the electric field is known everywhere, Gauss's law makes it quite easy, in principle, to find the distribution of electric charge: The charge in any given region can be deduced by integrating the electric field to find the flux.
However, much more often, it is the reverse problem that needs to be solved: The electric charge distribution is known, and the electric field needs to be computed. This is much more difficult, since if you know the total flux through a given surface, that gives almost no information about the electric field, which (for all you know) could go in and out of the surface in arbitrarily complicated patterns.
An exception is if there is some symmetry in the situation, which mandates that the electric field passes through the surface in a uniform way. Then, if the total flux is known, the field itself can be deduced at every point. Common examples of symmetries which lend themselves to Gauss's law include cylindrical symmetry, planar symmetry, and spherical symmetry. See the article Gaussian surface for examples where these symmetries are exploited to compute electric fields.
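As a concrete instance of the spherical-symmetry case: for a point charge $Q$, the field deduced this way is $E(r)=Q/(4\pi\varepsilon_0 r^2)$, and the flux through any concentric sphere comes out to $Q/\varepsilon_0$. A numerical sketch (the charge value is just an example):

```python
# Flux of a point charge's field through concentric spheres: always Q/eps0,
# independent of radius, as the integral form of Gauss's law requires.
from math import pi, sin

EPS0 = 8.8541878128e-12    # vacuum permittivity, F/m
Q = 1e-9                   # 1 nC (example value)

def E_radial(r):
    return Q / (4 * pi * EPS0 * r**2)

def flux_through_sphere(R, n=400):
    # On the sphere, E is radial with constant magnitude, so
    # E . dA = E(R) * R^2 sin(theta) dtheta dphi; the phi integral gives 2*pi.
    dtheta = pi / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * dtheta
        total += E_radial(R) * R**2 * sin(theta) * dtheta * 2 * pi
    return total

for R in (0.5, 1.0, 2.0):
    assert abs(flux_through_sphere(R) - Q / EPS0) / (Q / EPS0) < 1e-4
```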
### Differential form
By the divergence theorem Gauss's law can alternatively be written in the differential form:
$\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}$
where ∇•E is the divergence of the electric field, and ρ is the total electric charge density.
### Equivalence of integral and differential forms
Main article: Divergence theorem
The integral and differential forms are mathematically equivalent, by the divergence theorem. Here is the argument more specifically.
Outline of proof
The integral form of Gauss's law is:
$\oint_S \mathbf{E} \cdot \mathrm{d}\mathbf{A} = \frac{Q}{\varepsilon_0}$
for any closed surface S containing charge Q. By the divergence theorem, this equation is equivalent to:
$\iiint\limits_V \nabla \cdot \mathbf{E} \ \mathrm{d}V = \frac{Q}{\varepsilon_0}$
for any volume V containing charge Q. By the relation between charge and charge density, this equation is equivalent to:
$\iiint\limits_V \nabla \cdot \mathbf{E} \ \mathrm{d}V = \iiint\limits_V \frac{\rho}{\varepsilon_0} \ \mathrm{d}V$
for any volume V. In order for this equation to be simultaneously true for every possible volume V, it is necessary (and sufficient) for the integrands to be equal everywhere. Therefore, this equation is equivalent to:
$\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}.$
Thus the integral and differential forms are equivalent.
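The divergence-theorem step can be seen concretely on a toy field: for $\mathbf{F}=(xy,\,yz,\,zx)$ on the unit cube (our choice, not from the article), the outward surface flux equals the volume integral of $\nabla\cdot\mathbf{F}$, both $3/2$. A numerical sketch:

```python
# Divergence theorem on the unit cube [0,1]^3 for F = (x*y, y*z, z*x):
# surface flux of F equals the volume integral of div F = x + y + z.

def F(x, y, z):
    return (x * y, y * z, z * x)

def div_F(x, y, z):
    return x + y + z          # d(xy)/dx + d(yz)/dy + d(zx)/dz

n = 40
h = 1.0 / n
pts = [(i + 0.5) * h for i in range(n)]   # midpoint grid

# volume integral of div F over the cube
vol = sum(div_F(x, y, z) for x in pts for y in pts for z in pts) * h**3

# surface integral of F . n over the six faces (outward normals)
flux = 0.0
for a in pts:
    for b in pts:
        flux += (F(1.0, a, b)[0] - F(0.0, a, b)[0]) * h * h   # x = 1 and x = 0
        flux += (F(a, 1.0, b)[1] - F(a, 0.0, b)[1]) * h * h   # y faces
        flux += (F(a, b, 1.0)[2] - F(a, b, 0.0)[2]) * h * h   # z faces

# both equal 3/2 for this field
assert abs(vol - 1.5) < 1e-9
assert abs(flux - vol) < 1e-9
```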
## Equation involving D-field
See also: Maxwell's equations
### Free, bound, and total charge
Main article: Electric polarization
The electric charge that arises in the simplest textbook situations would be classified as "free charge"—for example, the charge which is transferred in static electricity, or the charge on a capacitor plate. In contrast, "bound charge" arises only in the context of dielectric (polarizable) materials. (All materials are polarizable to some extent.) When such materials are placed in an external electric field, the electrons remain bound to their respective atoms, but shift a microscopic distance in response to the field, so that they're more on one side of the atom than the other. All these microscopic displacements add up to give a macroscopic net charge distribution, and this constitutes the "bound charge".
Although microscopically, all charge is fundamentally the same, there are often practical reasons for wanting to treat bound charge differently from free charge. The result is that the more "fundamental" Gauss's law, in terms of E (above), is sometimes put into the equivalent form below, which is in terms of D and the free charge only.
### Integral form
Analogously to the total-charge form, this formulation of Gauss's law states:
$\Phi_D = Q_\text{free}\!$
where ΦD is the D-field flux through a surface S which encloses a volume V, and Qfree is the free charge contained in V. The flux ΦD is defined analogously to the flux ΦE of the electric field E through S:
$\Phi_{D} = \oint_S \mathbf{D} \cdot \mathrm{d}\mathbf{A}$
### Differential form
The differential form of Gauss's law, involving free charge only, states:
$\mathbf{\nabla} \cdot \mathbf{D} = \rho_\text{free}$
where ∇•D is the divergence of the electric displacement field, and ρfree is the free electric charge density.
## Equivalence of total and free charge statements
Proof that the formulations of Gauss's law in terms of free charge are equivalent to the formulations involving total charge.
In this proof, we will show that the equation
$\nabla\cdot \mathbf{E} = \rho/\epsilon_0$
is equivalent to the equation
$\nabla\cdot\mathbf{D} = \rho_{\mathrm{free}}$
Note that we're only dealing with the differential forms, not the integral forms, but that is sufficient since the differential and integral forms are equivalent in each case, by the divergence theorem.
We introduce the polarization density P, which has the following relation to E and D:
$\mathbf{D}=\epsilon_0 \mathbf{E} + \mathbf{P}$
and the following relation to the bound charge:
$\rho_{\mathrm{bound}} = -\nabla\cdot \mathbf{P}$
Now, consider the three equations:
$\rho_{\mathrm{bound}} = \nabla\cdot (-\mathbf{P})$
$\rho_{\mathrm{free}} = \nabla\cdot \mathbf{D}$
$\rho = \nabla \cdot(\epsilon_0\mathbf{E})$
The key insight is that the sum of the first two equations is the third equation. This completes the proof: The first equation is true by definition, and therefore the second equation is true if and only if the third equation is true. So the second and third equations are equivalent, which is what we wanted to prove.
## Equation for linear materials
In homogeneous, isotropic, nondispersive, linear materials, there is a simple relationship between E and D:
$\mathbf{D} = \varepsilon \mathbf{E}$
where ε is the permittivity of the material. For the case of vacuum (aka free space), ε = ε0. Under these circumstances, Gauss's law modifies to
$\Phi_E = \frac{Q_\text{free}}{\varepsilon}$
for the integral form, and
$\mathbf{\nabla} \cdot \mathbf{E} = \frac{\rho_\text{free}}{\varepsilon}$
for the differential form.
## Relation to Coulomb's law
### Deriving Gauss's law from Coulomb's law
Gauss's law can be derived from Coulomb's law.
Outline of proof
Coulomb's law states that the electric field due to a stationary point charge is:
$\mathbf{E}(\mathbf{r}) = \frac{q}{4\pi \epsilon_0} \frac{\mathbf{e_r}}{r^2}$
where
er is the radial unit vector,
r is the radius, |r|,
$\epsilon_0$ is the electric constant,
q is the charge of the particle, which is assumed to be located at the origin.
Using the expression from Coulomb's law, we get the total field at r by using an integral to sum the field at r due to the infinitesimal charge at each other point s in space, to give
$\mathbf{E}(\mathbf{r}) = \frac{1}{4\pi\epsilon_0} \int \frac{\rho(\mathbf{s})(\mathbf{r}-\mathbf{s})}{|\mathbf{r}-\mathbf{s}|^3} \, d^3 \mathbf{s}$
where $\rho$ is the charge density. If we take the divergence of both sides of this equation with respect to r, and use the known theorem[7]
$\nabla \cdot \left(\frac{\mathbf{r}}{|\mathbf{r}|^3}\right) = 4\pi \delta(\mathbf{r})$
where δ(r) is the Dirac delta function, the result is
$\nabla\cdot\mathbf{E}(\mathbf{r}) = \frac{1}{\varepsilon_0} \int \rho(\mathbf{s})\ \delta(\mathbf{r}-\mathbf{s})\, d^3 \mathbf{s}$
Using the "sifting property" of the Dirac delta function, we arrive at
$\nabla\cdot\mathbf{E}(\mathbf{r}) = \frac{\rho(\mathbf{r})}{\varepsilon_0},$
which is the differential form of Gauss's law, as desired.
Note that since Coulomb's law only applies to stationary charges, there is no reason to expect Gauss's law to hold for moving charges based on this derivation alone. In fact, Gauss's law does hold for moving charges, and in this respect Gauss's law is more general than Coulomb's law.
### Deriving Coulomb's law from Gauss's law
Strictly speaking, Coulomb's law cannot be derived from Gauss's law alone, since Gauss's law does not give any information regarding the curl of E (see Helmholtz decomposition and Faraday's law). However, Coulomb's law can be proven from Gauss's law if it is assumed, in addition, that the electric field from a point charge is spherically-symmetric (this assumption, like Coulomb's law itself, is exactly true if the charge is stationary, and approximately true if the charge is in motion).
Outline of proof
Taking S in the integral form of Gauss's law to be a spherical surface of radius r, centered at the point charge Q, we have
$\oint_{S}\mathbf{E}\cdot d\mathbf{A} = \frac{Q}{\varepsilon_0}$
By the assumption of spherical symmetry, the integrand is a constant which can be taken out of the integral. The result is
$4\pi r^2\hat{\mathbf{r}}\cdot\mathbf{E}(\mathbf{r}) = \frac{Q}{\varepsilon_0}$
where $\hat{\mathbf{r}}$ is a unit vector pointing radially away from the charge. Again by spherical symmetry, E points in the radial direction, and so we get
$\mathbf{E}(\mathbf{r}) = \frac{Q}{4\pi \varepsilon_0} \frac{\hat{\mathbf{r}}}{r^2}$
which is essentially equivalent to Coulomb's law. Thus the inverse-square law dependence of the electric field in Coulomb's law follows from Gauss's law.
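The surface-independence asserted by Gauss's law can also be checked numerically. The sketch below (an illustration with a unit charge, in units where $\varepsilon_0 = 1$; the cube is an arbitrary choice of non-spherical surface) integrates the flux of the Coulomb field through the cube $[-1,1]^3$ and recovers $Q/\varepsilon_0 = 1$:

```python
import numpy as np
from scipy import integrate

# Coulomb field of a unit point charge at the origin, in units with eps0 = 1:
#   E(r) = (1 / (4 pi)) * r / |r|^3
# Gauss's law predicts flux = Q/eps0 = 1 through ANY enclosing surface,
# here the (non-spherical) cube [-1, 1]^3.

def face_flux():
    # flux through the face z = 1, where E . n = E_z
    integrand = lambda y, x: (1 / (4 * np.pi)) / (x**2 + y**2 + 1) ** 1.5
    val, _ = integrate.dblquad(integrand, -1, 1, -1, 1)
    return val

total = 6 * face_flux()  # the six faces contribute equally by symmetry
print(round(total, 6))   # 1.0
```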
## Notes
1. More specifically, the infinitesimal area is thought of as planar and with area dA. The vector dA is normal to this area element and has magnitude dA.
## References
1. Bellone, Enrico (1980). A World on Paper: Studies on the Second Scientific Revolution.
2. Halliday, David; Resnick, Robert (1970). Fundamentals of Physics. John Wiley & Sons, Inc. pp. 452–53.
3. Serway, Raymond A. (1996). Physics for Scientists and Engineers with Modern Physics, 4th edition. p. 687.
4. I.S. Grant, W.R. Phillips (2008). Electromagnetism (2nd ed.). Manchester Physics, John Wiley & Sons. ISBN 978-0-471-92712-9.
5. I.S. Grant, W.R. Phillips (2008). Electromagnetism (2nd ed.). Manchester Physics, John Wiley & Sons. ISBN 978-0-471-92712-9.
6. Matthews, Paul (1998). Vector Calculus. Springer. ISBN 3-540-76180-2.
7. See, for example, Griffiths, David J. (1998). Introduction to Electrodynamics (3rd ed.). Prentice Hall. p. 50. ISBN 0-13-805326-X.
Jackson, John David (1999). Classical Electrodynamics, 3rd ed., New York: Wiley. ISBN 0-471-30932-X.
http://mathoverflow.net/questions/119830?sort=newest
## combinatorial lemma (is it well-known?)
The following should be something well-known, but I haven't seen it anywhere, nor have I met any references about it.
Let $M^{n}$ be an $n$-dimensional oriented closed manifold with a (sufficiently small) triangulation $\tau$. We "colour" the vertices of $\tau$ with $n+2$ colors: $v^{o}\rightarrow w(v^{o})\in\{1,2,\ldots,n+2\}$, and we shall say that the correspondence $w$ is a "coloring" of $\tau$. Take an arbitrary color $i\in\{1,2,\ldots,n+2\}$ and consider the $n$-simplices whose vertices are colored with exactly the colors $\{1,2,\ldots,n+2\}\setminus\{i\}$. Let $\Delta^{n}$ be such a simplex and $v_{1},\ldots,v_{n+1}$ be its vertices ordered according to the positive orientation of $\Delta^{n}$ induced by the orientation of $M^{n}$. Then we write $\sigma_{i}(\Delta^{n})=1$ if the permutation $(w(v_{1}),\ldots,w(v_{n+1}))$ is even, and $\sigma_{i}(\Delta^{n})=-1$ otherwise. Set $\sigma_{i}(\Delta^{n})=0$ if some vertex of $\Delta^{n}$ is colored $i$, or if there are two identically colored vertices. Let finally
$\sigma_{i}(w)=\sum\sigma_{i}(\Delta^{n})$,
where the sum is over all $n$-simplices.
The Claim: The number $\sigma_{i}(w)$ does not depend on $i$: $\sigma_{1}(w)=\sigma_{2}(w)=...=\sigma_{n+2}(w)$. So we have a global invariant $\sigma(w)$ of the coloring $w$.
This invariant has a geometrical meaning: Consider the dual cell complex of the triangulation $\tau$; then, since each cell corresponds to a vertex $v^{o}$ of $\tau$, we may color this cell by the color $w(v^{o})$. Let $F_{i}$ be the union of all cells colored $i$; then we get a covering $\lambda=\{F_{1},\ldots,F_{n+2}\}$ of $M^{n}$. It is easy to see that the intersection of all $F_{i}$ is empty, so the canonical map of $M^{n}$ into the nerve of $\lambda$ may be considered as a map of $M^{n}$ into the $n$-sphere $\mathbb{S}^{n}$: $\varphi:M^{n}\rightarrow\mathbb{S}^{n}$. Then the degree of $\varphi$ equals $\sigma(w)$:
$\deg\varphi=\sigma(w)$.
As the proofs are not sophisticated at all and the construction seems conceptual, maybe it is worth including this material in an elementary topology textbook. Note also that it gives a method for calculating the degree without smooth approximation.
Of course, I don't want to repeat well-known things without citation, so any references are welcome.
there is a much simpler way to say the geometrical meaning: consider the standard $n+1$-simplex, with vertices $1, \ldots, n+2$. Then a coloring of the vertices of a triangulation is just the same thing as a simplicial map to this simplex. now the statement about the degree is obvious. – Vivek Shende Jan 27 at 7:17
## 1 Answer
This is closely related to the Generalized Sperner's Lemma, which holds for all for simplicial manifolds with or without boundary. See my old survey for a quick introduction (Section 8.1). Classical references include A.B. Brown and S.S. Cairns, Strengthening of Sperner's lemma applied to homology theory, PNAS, 1960, and D.I.A Cohen, On the Sperner lemma, JCT (1967). I don't immediately see how your result follows from the lemma, but recall that many extensions and generalizations are known. I would start with these references and search forward to find your particular version.
http://mathoverflow.net/questions/86549?sort=oldest
## Schottky locus in genus 2
Let $\phi_g : \mathcal{M}_g \rightarrow \mathcal{A}_g$ be the period mapping from the open moduli space of genus $g$ Riemann surfaces to the moduli space of $g$-dimensional principally polarized abelian varieties over $\mathbb{C}$. Thus for a Riemann surface $S$ the image $\phi_g(S)$ is the Jacobian of $S$. The Schottky problem consists in determining the image of $\phi_g$.
It is classical that $\text{Im}(\phi_2)$ is exactly the set of abelian varieties that are not isomorphic to a product of elliptic curves. This is asserted in many places, but I have not been able to find a nice discussion of it in the literature. Does anyone know one? The more down-to-earth, the better.
## 5 Answers
This will need expansion by a more knowledgeable person, but as memory serves, it was proved by Mayer and Mumford that the closure in $\mathcal{A}_g$ of the locus of Jacobians is the set of products of Jacobians. This is probably first exposed in a talk in the 1964 Woods Hole notes on James Milne's site. (I see Mumford credits it there, on page 4 of his talk, in part three of the Woods Hole notes, to Matsusaka and Hoyt. Apparently Mayer and Mumford computed the closure in the Satake compactification.) But let us try to explain this more in dimension two.
A two-dimensional ppav is a compact 2-torus $A$ containing a curve $C$ carrying the homology class $a_1\times b_1 + a_2\times b_2$, where the $a_j, b_j$ are a symplectic homology basis of $H_1(A)$.
It follows from the topological Pontrjagin product that the induced map from the Albanese variety of $C$ to $A$ has topological degree one, hence is an isomorphism. (I.e., the map from the Cartesian product of $C$ with itself $g$ times to $A$ has image whose class is the $g$-fold Pontrjagin product of $[C]$, which equals $g!$ times the fundamental class of $A$. Hence the induced map from the $g$-fold symmetric product of $C$ has image with exactly the fundamental class of $A$. Hence this map has degree one, as does that induced from the Jacobian.)
Since it also induces the identity map on C, it also preserves the polarization.
Let me speculate on the special cases. If C is reducible it is known (Complex abelian varieties and theta functions, George Kempf, p. 89, Cor. 10.4) that A is a product of elliptic curves. If C is irreducible and singular then I guess the normalization map extends to a map of the Albanese of C to A. But that seems to imply the image of C in A does not span, a contradiction.
So it seems that any irreducible curve C contained in a two diml ppav A and carrying the class of a principal polarization, is smooth and induces an isomorphism from the Albanese (i.e. Jacobian) of the curve to the ppav.
I hope there is some useful information in this.
This was very helpful. Thanks! – G Fiori Jan 25 2012 at 20:21
In genus 2 there is NO Schottky problem.
$g(g+1)/2 = 3g-3$ for $g=2$, i.e.
$\dim(\text{abelian varieties}) = \dim(\text{moduli of curves}) = 3.$
PS: Also in $g=2$ any curve is hyperelliptic – again a dimension count.
Yes, this argument shows that the image of $\phi_2$ is dense in $\mathcal{A}_2$. But why is its complement as I described above (ie the Jacobians of curves of compact type)? – G Fiori Jan 24 2012 at 17:39
As far as I understand, the product of elliptic curves will have a decomposable period matrix $B$, i.e. $B = \operatorname{diag}(t_1, t_2)$, while it is not decomposable for any curve or any other abelian variety. So we see that the image is contained in the desired set. But you want more... To show that it coincides we need somehow to restore the curve from the abelian variety... Hmmm, how to do this? – Alexander Chervov Jan 24 2012 at 20:07
By (a possible) definition, a principal polarization on an abelian surface is a curve with self-intersection 2. So, if smooth, it is a genus two curve and the abelian surface is a jacobian. You have to rule out the case of a singular irreducible curve and the remaining possibility is a union of two elliptic curves meeting at a point and then you show that in this case, the surface is the product of the two curves.
Geoffrey Mess' paper "The Torelli group of genus 2 or 3 surfaces" provides two proofs of this fact---one cohomological and the other topological. But I understand neither. Maybe somebody could provide some commentary on his demonstrations?
I just looked at his paper, and I also understand neither of his proofs. But they look like exactly what I want! Hopefully an expert will come along and unpack them. – G Fiori Jan 24 2012 at 21:50
According to MR0364265 (51 #520) Oort, Frans; Ueno, Kenji "Principally polarized abelian varieties of dimension two or three are Jacobian varieties" (J. Fac. Sci. Univ. Tokyo Sect. IA Math. 20 (1973), 377–381) covers this (I don't have access to the paper and am going solely by the review).
This was very helpful to me, and I had trouble deciding whether to accept it or roy smith's answer. Thanks! – G Fiori Jan 25 2012 at 20:22
http://mathoverflow.net/questions/92996/conjugate-function-for-matrix-mixed-norm/93251
## conjugate function for matrix mixed norm
I am familiar with the conjugate function of the vector norm, which uses the concept of dual norm and is defined as follows:
$\|\mathbf{y}\|_p^*=\max_{\mathbf{x}}\left(\mathbf{x}^T\mathbf{y}-\|\mathbf{x}\|_p\right)=\begin{cases}0 & \|\mathbf{y}\|_q\leq 1 \\ \infty & \text{otherwise}\end{cases}$ where $\frac{1}{p}+\frac{1}{q}=1$ for $p\geq 1$.
My question is:
Is there an equivalent conjugate function for the mixed matrix norm $\|\mathbf{A}\|_{p,q}$ defined for matrix $\mathbf{A}$?
$\|\mathbf{A}\|_{p,q}=\left(\sum_i \|\mathbf{a}_i\|_p^q\right)^{1/q}$ where $\mathbf{a}_i$ is the $i^{\text{th}}$ column of matrix $\mathbf{A}$.
The conjugate function of a norm always exists. You should be able to compute it using its definition. Just observe that the maximum is achieved at the unique critical point of the formula within the parenthesis. So all you have to do is find the critical value. – Deane Yang Apr 3 2012 at 13:01
I defined / studied this function in a recent preprint (for computational purposes). The derivation is standard convex analysis, though slightly tedious to do explicitly. – S. Sra Apr 5 2012 at 20:09
## 1 Answer
Let `$p^*$` and `$q^*$` be the conjugate exponents. Some (slightly laborious) algebra shows that the dual-norm is `$\|A\|_{p^*,q^*}$`. The conjugate function is the indicator function for the (unit) dual-norm ball.
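A quick numerical sanity check of this duality claim (a sketch; the maximizing $X$ below is constructed by the standard column-wise Hölder equality argument, not taken from the cited preprint):

```python
import numpy as np

def mixed_norm(A, p, q):
    # ||A||_{p,q}: the q-norm of the vector of column p-norms
    return np.linalg.norm(np.linalg.norm(A, ord=p, axis=0), ord=q)

def dual_maximizer(A, p, q):
    # X with ||X||_{p*,q*} = 1 attaining <X, A> = ||A||_{p,q}
    col = np.linalg.norm(A, ord=p, axis=0)
    # column-wise Hoelder maximizers: sign(a) |a|^(p-1) / ||a||_p^(p-1)
    D = np.sign(A) * np.abs(A) ** (p - 1) / col ** (p - 1)
    # rescale columns so that the q*-norm of the scaling factors is 1
    return D * (col / mixed_norm(A, p, q)) ** (q - 1)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
p, q = 3.0, 1.5
p_star, q_star = p / (p - 1), q / (q - 1)

X = dual_maximizer(A, p, q)
print(np.isclose(np.sum(X * A), mixed_norm(A, p, q)))   # True
print(np.isclose(mixed_norm(X, p_star, q_star), 1.0))   # True
```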
http://math.stackexchange.com/questions/260969/find-the-number-of-homomorphisms/260971
# Find the number of homomorphisms
In each of the following examples determine the number of homomorphisms between the given groups:
$(a)$ from $\mathbb{Z}$ to $\mathbb{Z}_{10}$;
$(b)$ from $\mathbb{Z}_{10}$ to $\mathbb{Z}_{10}$;
$(c)$ from $\mathbb{Z}_{8}$ to $\mathbb{Z}_{10}$.
Could anyone just give me hints for the problem? Well, let $f:\mathbb{Z}\rightarrow \mathbb{Z}_{10}$ be a homomorphism; then $f(1)=[n]$ for any $[n]\in \mathbb{Z}_{10}$ will give a homomorphism, hence there are $10$ for (a)?
Yes, that's exactly right. f(1) is all that matters, because 1 generates the whole group. – Billy Dec 17 '12 at 20:49
## 1 Answer
Hint:
A homomorphism on a cyclic group is completely determined by its value on a generator of the group.
Edit:
Your thoughts on $(a)$ are indeed correct.
Can you apply similar reasoning to arrive at answers for $(b)$ and $(c)$?
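For the finite cyclic cases, the reasoning can be checked by brute force. A homomorphism $f:\mathbb{Z}_m\rightarrow\mathbb{Z}_n$ is determined by $k=f(1)$, and $k$ defines a homomorphism exactly when $mk\equiv 0 \pmod n$; the sketch below confirms that the count equals $\gcd(m,n)$:

```python
from math import gcd

def hom_count(m, n):
    # a homomorphism f: Z_m -> Z_n is determined by k = f(1);
    # it is well defined exactly when m*k is congruent to 0 mod n
    return sum(1 for k in range(n) if (m * k) % n == 0)

print(hom_count(10, 10))  # 10  (case (b))
print(hom_count(8, 10))   # 2   (case (c))
# in general the count is gcd(m, n):
print(all(hom_count(m, n) == gcd(m, n)
          for m in range(1, 13) for n in range(1, 13)))  # True
```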
Is it 10 for (b) and (c) as well? – Vivek Dec 18 '12 at 13:40
http://regularize.wordpress.com/category/math/regularization/
# regularize
Trying to keep track of what I stumble upon
### Regularization
Archived Posts from this Category
March 26, 2013
## OPTPDE – A Collection of Problems in PDE-Constrained Optimization
Posted by Dirk under Math, Optimization, Regularization | Tags: Basis pursuit denoising, ill-posed problems, Optimal control, optimization, PDE, software, toolboxes |
If you are working on optimization with partial differential equations as constraints, you may be interested in the website
“OPTPDE – A Collection of Problems in PDE-Constrained Optimization”, http://www.optpde.net.
If you have developed an algorithm which can handle a certain class of optimization problems you need to do evaluations and tests on how well the method performs. To do so, you need well-constructed test problems. These could be either problems where the optimal solution is known analytically, or problems where the solution is known with a rigorous error bound obtained with a bullet-proof solver. Both things are not always easy to obtain and OPTPDE shall serve as a resource for such problems. It has been designed by Roland Herzog, Arnd Rösch, Stefan Ulbrich and Winnifried Wollner.
The generation of test instances for optimization problems seems quite important to me and indeed, several things can go wrong if this is not done right. Frequently, one sees tests for optimization routines on problems where the optimal solution is not known. Since there are usually different ways to express optimality conditions it is not always clear how to check for optimality; even more so, if you only check for “approximate optimality”, e.g. up to machine precision. A frequently observed effect is a kind of “trusted method bias”. By this I mean that an optimal solution is calculated by some trusted method and the outcome of the tested routine is compared with this solution. However, the trusted method uses some stopping criterion usually based on some specific set of formulations of optimality conditions and these can be different from what the new method has been tuned to. And most often, the stopping criteria do not give a rigorous error bound for the solution or the optimal objective value.
For sparse reconstruction problems, I dealt with this issue in “Constructing test instances for Basis Pursuit Denoising” (preprint available here) but I think this methodology could be used for other settings as well.
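To make the point about rigorous optimality checks concrete, here is a small sketch (my illustration, not the construction from the preprint): for basis pursuit denoising the subgradient optimality condition can be verified exactly for any candidate solution, and for $A = I$ the exact minimizer is known to be soft thresholding.

```python
import numpy as np

def soft_threshold(b, lam):
    # componentwise soft thresholding
    return np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)

def bpdn_optimal(A, b, lam, x, tol=1e-10):
    """Check the subgradient optimality condition for
    min_x lam*||x||_1 + 0.5*||A x - b||_2^2:
    g = A^T (b - A x) must equal lam*sign(x_i) on the support of x
    and satisfy |g_i| <= lam off the support."""
    g = A.T @ (b - A @ x)
    on = np.abs(x) > tol
    return bool(np.allclose(g[on], lam * np.sign(x[on])) and
                np.all(np.abs(g[~on]) <= lam + tol))

# for A = I the exact BPDN minimizer is soft thresholding of b
b = np.array([2.0, -0.3, 1.5, 0.1])
lam = 0.5
x = soft_threshold(b, lam)
print(bpdn_optimal(np.eye(4), b, lam, x))  # True
```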
August 23, 2012
## ISMP – inverse problems with uniform noise and TV does not preserve edges
Posted by Dirk under Conference, Math, Regularization, Signal and image processing | Tags: conference, ill-posed problems, image processing, ismp, parameter choice, regularization, tikhonov |
Today there are several things I could blog on. The first is the plenary by Rich Baraniuk on Compressed Sensing. However, I don't think that I could reflect the content in a way which would be helpful for a potential reader. Just for the record: If you have the chance to visit one of Rich's talks: Do it!
The second thing is the talk by Bernd Hofmann on source conditions, smoothness and variational inequalities and their use in regularization of inverse problems. However, this would be too technical for now and I just did not take enough notes to write a meaningful post.
As a third thing I have the talk by Christian Clason on inverse problems with uniformly distributed noise. He argued that for uniform noise it is much better to use an ${L^\infty}$ discrepancy term instead of the usual ${L^2}$-one. He presented a path-following semismooth Newton method to solve the problem
$\displaystyle \min_x \frac{1}{p}\|Kx-y^\delta\|_\infty^p + \frac{\alpha}{2}\|x\|_2^2$
and showed examples with different kinds of noise. Indeed the examples showed that ${L^\infty}$ works much better than ${L^2}$ here. But in fact it works even better if the noise is not uniformly distributed but “impulsive”, i.e. it attains the bounds ${\pm\delta}$ almost everywhere. It seems to me that uniform noise would need a slightly different penalty but I don't know which one – probably you do? Moreover, Christian presented the balancing principle to choose the regularization parameter (without knowledge about the noise level) and this was the first time I really got what it's about. What one does here is to choose ${\alpha}$ such that (for some ${\sigma>0}$ which only depends on ${K}$, but not on the noise)
$\displaystyle \sigma\|Kx_\alpha^\delta-y^\delta\|_\infty = \frac{\alpha}{2}\|x_\alpha^\delta\|_2^2.$
The rationale behind this is that the left hand side is monotonically non-decreasing in ${\alpha}$, while the right hand side is monotonically non-increasing. Hence, there should be some ${\alpha}$ “in the middle” which makes both somewhat equally large. Of course, we want neither to “over-regularize” (which would usually “smooth too much”) nor to “under-regularize” (which would not eliminate noise). Hence, balancing seems to be a valid choice. From a practical point of view the balancing is also nice because one can use the fixed-point iteration
$\displaystyle \alpha^{n+1} = 2\sigma\frac{\|Kx_{\alpha^n}^\delta - y^\delta\|_\infty}{\|x_{\alpha_n}^\delta\|_2^2}$
which converges in a small number of iterations.
Then there was the talk by Esther Klann, but unfortunately, I was late so only heard the last half…
Last but not least we have the talk by Christiane Pöschl. If you are interested in Total-Variation-Denoising (TV denoising), then you probably have heard many times that “TV denoising preserves edges” (have a look at the Wikipedia page – it claims this twice). What Christiane showed (in a work with Vicent Caselles and M. Novaga) is that this claim is not true in general but only for very special cases. In the case of characteristic functions, the only functions for which the TV minimizer has sharp edges are the so-called calibrated sets, introduced by Caselles et al. Building on earlier works by Caselles and co-workers she calculated exact minimizers for TV denoising in the case that the image consists of characteristic functions of two convex sets or of a single star-shaped domain, that is, for a given set $B$ she calculated the solution of
$\displaystyle \min_u\int (u - \chi_B)^2dx + \lambda \int|Du|.$
This is not as easy as it may sound. Even for the minimizer for a single convex set one has to make some effort. She presented a nice connection of the shape of the obtained level-sets with the morphological operators of closing and opening. With the help of this link she derived a methodology to obtain the exact TV denoising minimizer for all parameters. I do not have the images right now but be assured that most of the time, the minimizers do not have sharp edges all over the place. Even for simple geometries (like two rectangles touching in a corner) strange things happen and only very few sharp edges appear. I'll keep you posted in case the paper comes out (or appears as a preprint).
Christiane has some nice images which make this much more clear:
For two circles, edges are preserved if they are far enough away from each other. If they are close, the area “in between” them is filled and, moreover, exhibits this fuzzy boundary. I remember seeing effects like this in the output of TV-solvers and thinking “well, it seems that the algorithm is either not good or not converged yet – TV should output sharp edges!”.
For a star-shaped shape (well, actually a star) the output looks like this. The corners are not only rounded but also blurred and this is true both for the “outer” corners and the “inner” corners.
So, if you have any TV-minimizing code, go ahead and check if your code actually does the right things on images like this!
Moreover, I would love to see similar results for more complicated extensions of TV like Total Generalized Variation, which I treated here.
May 21, 2012
## Problems solved: RIP and NSP are NP-hard, Homotopy for l1 has exponential complexity
Posted by Dirk under Math, Regularization, Sparsity
In this post I gladly announce that three problems that bothered me have been solved: The computational complexity of certifying RIP and NSP and the number of steps the homotopy method needs to obtain a solution of the Basis Pursuit problem.
1. Complexity of RIP and NSP
On this issue we have two papers:
• The Computational Complexity of RIP, NSP, and Related Concepts in Compressed Sensing by Andreas M. Tillmann and Marc E. Pfetsch, arXiv/1205.2081
• Certifying the restricted isometry property is hard by Afonso S. Bandeira, Edgar Dobriban, Dustin G. Mixon and William F. Sawin, arXiv/1204.1580
The first paper has the more general results and hence, we start with the second one: The main result of the second paper is this:
Theorem 1 Let a matrix ${A}$, a positive integer ${K}$ and some ${0<\delta<1}$ be given. It is hard for NP under randomized polynomial-time reductions to check if ${A}$ satisfies the ${(K,\delta)}$ restricted isometry property.
That does not yet say that it’s NP-hard to check if ${\delta}$ is an RIP constant for ${K}$-sparse vectors but it’s close. I think that Dustin Mixon has explained this issue better on his blog than I could do here.
In the first paper (which is, by the way, an outcome of the SPEAR-project in which I am involved…) the main result is indeed the conjectured NP-hardness of calculating RIP constants:
Theorem 2 For a given matrix ${A}$ and a positive integer ${K}$, it is NP-hard to compute the restricted isometry constant.
Moreover, this is just a corollary to the main theorem of that paper which reads as
Theorem 3 For a given matrix ${A}$ and a positive integer ${K}$, the problem to decide whether ${A}$ satisfies the restricted isometry property of order ${K}$ for some constant ${\delta<1}$ is coNP-complete.
They also provide a slightly strengthened version of Theorem~1:
Theorem 4 Let a matrix ${A}$, a positive integer ${K}$ and some ${0<\delta<1}$ be given. It is coNP-complete to check if ${A}$ satisfies the ${(K,\delta)}$ restricted isometry property.
Moreover, the paper by Pfetsch and Tillmann also proves something about the null space property (NSP):
Definition 5 A matrix ${A}$ satisfies the null space property of order ${K}$ if there is a constant ${\alpha>0}$ such that for all elements ${x}$ in the null space of ${A}$ it holds that the sum of the ${K}$ largest absolute values of ${x}$ is smaller than ${\alpha}$ times the 1-norm of ${x}$. The smallest such constant ${\alpha}$ is called the null space constant of order ${K}$.
Their main result is as follows:
Theorem 6 For a given matrix ${A}$ and a positive integer ${K}$, the problem to decide whether ${A}$ satisfies the null space property of order ${K}$ for some constant ${\alpha<1}$ is coNP-complete. Consequently, it is NP-hard to compute the null space constant of ${A}$.
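For intuition on the quantity involved, here is the (easy) special case of a one-dimensional null space, where the null space constant has a closed form; in general one has to optimize over the whole null space and over all supports, which is where the hardness enters. A small sketch with a made-up matrix:

```python
# Null space constant of order K when the null space of A is one-dimensional,
# spanned by a vector v: then alpha_K = (sum of the K largest |v_i|) / ||v||_1.
# Toy matrix (made up): A = [[1, 0, -1], [0, 1, -1]] has null space span{(1, 1, 1)}.

def null_space_constant_1d(v, K):
    a = sorted((abs(vi) for vi in v), reverse=True)
    return sum(a[:K]) / sum(a)

alpha_1 = null_space_constant_1d((1.0, 1.0, 1.0), 1)   # = 1/3 < 1: NSP of order 1 holds
```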
2. Complexity of the homotopy method for Basis Pursuit
The second issue is about the basis pursuit problem
$\displaystyle \min_x \|x\|_1\quad\text{s.t.}\ Ax=b.$
which can be approximated by the “denoising variant”
$\displaystyle \min_x \lambda\|x\|_1 + \tfrac12\|Ax-b\|_2^2.$
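To get a feeling for the objects involved: for a single variable (i.e. ${A=[1]}$) the denoising problem is solved in closed form by soft thresholding, and the map ${\lambda\mapsto x(\lambda)}$ is piecewise linear with a single kink. A minimal sketch:

```python
# Closed-form lambda-path for the scalar denoising problem
# min_x lambda*|x| + 1/2*(x - b)^2  (i.e. A = [1]): soft thresholding,
# x(lambda) = sign(b) * max(|b| - lambda, 0), piecewise linear with one kink at |b|.

def soft_threshold(b, lam):
    sign = 1.0 if b >= 0 else -1.0
    return sign * max(abs(b) - lam, 0.0)

# for b = 1 the path is 0 for lambda >= 1 and 1 - lambda below
path = [soft_threshold(1.0, lam / 10.0) for lam in range(13, -1, -1)]
```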
What is pretty interesting about the denoising variant is that the solution ${x(\lambda)}$ (if it is unique throughout) depends on ${\lambda}$ in a piecewise linear way and converges to the solution of basis pursuit for ${\lambda\rightarrow 0}$. This leads to an algorithm for the solution of basis pursuit: Start with ${\lambda=\|A^Tb\|_\infty}$ (for which the unique solution is ${x(\lambda)=0}$), calculate the direction of the “solution path”, follow it until you reach a “break point”, calculate the next direction and so on until ${\lambda}$ hits zero. This is, for example, implemented for MATLAB in L1Homotopy (the SPAMS package also seems to have this implemented, however, I haven’t used it yet). In practice, this approach (usually called the homotopy method) is pretty fast and moreover, only encounters a few break points. However, an obvious upper bound on the number of break points is exponential in the number of entries in ${x}$. Hence, it seemed that one was faced with a situation similar to the simplex method for linear programming: The algorithm performs great on average but the worst case complexity is bad. That this is really true for linear programming has been known for some time through the Klee-Minty example, an example for which the simplex method takes an exponential number of steps. What I asked myself for some time: Is there a Klee-Minty example for the homotopy method?
Now the answer is there: Yes, there is!
The denoising variant of basis pursuit is also known as LASSO regularization in the statistics literature and this explains the title of the paper which comes up with the example:
• Complexity Analysis of the Lasso Regularization Path by Julien Mairal and Bin Yu, arxiv.org/1205.0079
Julien and Bin investigate the number of linear segments in the regularization path and first observe that this is upper bounded by ${(3^p+1)/2}$ if ${p}$ is the number of entries in ${x}$ (i.e. the number of variables of the problem). Then they try to construct an instance that matches this upper bound. They succeed in a clever way: For a given instance ${(A,b)}$ whose path has ${k}$ linear segments they construct an instance with one more variable such that the number of linear segments is increased by a factor. Their result goes like this:
Theorem 7 Let ${A\in{\mathbb R}^{n\times p}}$ have full rank and let ${b\in{\mathbb R}^n}$ be in the range of ${A}$. Assume that the homotopy path has ${k}$ linear segments and denote by ${\lambda_1}$ the regularization parameter which corresponds to the smallest kink in the path. Now choose ${b_{n+1}\neq 0}$ and ${\alpha}$ such that
$\displaystyle 0<\alpha < \frac{\lambda_1}{2\|b\|_2^2 + b_{n+1}^2} \ \ \ \ \ (1)$
and define ${\tilde b\in{\mathbb R}^{n+1}}$ and ${\tilde A\in{\mathbb R}^{(n+1)\times (p+1)}}$ by
$\displaystyle \tilde b = \begin{bmatrix} b\\ b_{n+1} \end{bmatrix}, \quad \tilde A = \begin{bmatrix} A & 2\alpha b\\ 0 & \alpha b_{n+1} \end{bmatrix}.$
Then the homotopy path for the basis pursuit problem with matrix ${\tilde A}$ and right hand side ${\tilde b}$ has ${3k-1}$ linear segments.
With this theorem at hand, it is straightforward to recursively build a “Mairal-Yu” example which matches the upper bound for the number of linear segments. The idea is to start with a ${1\times 1}$ example and let it grow by one row and one column according to Theorem~7. We start with the simplest ${1\times 1}$ example, namely ${A = [1]}$ and ${b=[1]}$. To move to the next bigger example one can choose the next entry ${b_{n+1}}$ freely, and we always choose ${1}$ for convenience. Moreover, one needs the next ${\alpha}$ and the smallest kink in the current path. I calculated the paths and kinks with L1Packv2 by Ignace Loris because it is written in Mathematica, can use exact arithmetic with rational numbers (and you will see that accuracy will be an issue even for small instances) and seemed bulletproof to me. Let’s see where this idea brings us:
Example 1 (Mairal-Yu example)
• Stage 1: We start with ${n=p=1}$, ${b=[1]}$ and ${A=[1]}$. The homotopy path has one kink at ${\lambda_1=1}$ (with corresponding solution ${[0]}$) and hence, two linear segments. Now let’s go to the next larger instance:
• Stage 2: We can choose the entry ${b_2}$ as we like and choose it equal to 1, i.e. our new ${b}$ is
$\displaystyle b = \begin{bmatrix} 1\\1 \end{bmatrix}.$
Now we have to choose ${\alpha}$ according to (1), i.e.
$\displaystyle 0 < \alpha < \frac{\lambda_1}{2\|b\|_2^2 + b_{n+1}^2} = \frac{1}{2+1} = \frac{1}{3}$
and we can choose, e.g., ${\alpha = 1/4}$ which gives our new matrix
$\displaystyle A = \begin{bmatrix} 1 & \frac12\\ 0 & \frac14 \end{bmatrix}.$
The calculation of the new regularization path shows that it has exactly the announced number of 5 segments and the parameter of the smallest kink is ${\lambda_1 = \frac{1}{13}}$.
• Stage 3: Again we choose ${b_{n+1} = 1}$ giving
$\displaystyle b = \begin{bmatrix} 1\\1\\1 \end{bmatrix}$
For the choice of ${\alpha}$ we need that
$\displaystyle 0<\alpha < \frac{1}{13(4+1)} = \frac{1}{65}$
and we may choose
$\displaystyle \alpha = \frac1{80}.$
which gives the next matrix
$\displaystyle A = \begin{bmatrix} 1 & \frac12 & \tfrac{1}{40}\\ 0 & \frac14 & \tfrac{1}{40}\\ 0 & 0 & \tfrac{1}{80} \end{bmatrix}.$
We calculate the regularization path, observe that it has the predicted 14 segments and that the parameter of the smallest kink is ${\lambda_1 = \frac{1}{193}}$.
• Stage 4: Again we choose ${b_{n+1} = 1}$ giving
$\displaystyle b = \begin{bmatrix} 1\\1\\1\\1 \end{bmatrix}$
For the choice of ${\alpha}$ we need that
$\displaystyle 0<\alpha < \frac{1}{193(6+1)} = \frac{1}{1351}$
and we see that things are getting awkward here…
Proceeding in this way we increase the number of linear segments from ${k_n}$ to ${k_{n+1} = 3k_n-1}$ in each step for the ${n\times n}$-case, and one checks easily that this leads to ${k_n = (3^n+1)/2}$, which is the worst case! If you are interested in the regularization path: I produced pictures for the first three dimensions (well, I could not draw a 4d ${\ell^1}$-ball) and here they are:
1d Mairal-Yu example
2d Mairal-Yu example
3d Mairal-Yu example
It is not really easy to perceive the whole paths from the pictures because the magnitudes of the entries vary strongly. I’ve drawn the path in red, each kink marked with a small circle. Moreover, I have drawn the corresponding ${\ell^1}$-balls of the respective radii to provide more geometric information.
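The recursive construction behind the example is also easy to carry out in exact rational arithmetic (here a small Python sketch using the choices ${b_{n+1}=1}$, ${\alpha=1/4}$ and ${\alpha=1/80}$ from the stages above; it reproduces the matrices given there):

```python
# Recursive construction of the Mairal-Yu example from Theorem 7, in exact
# rational arithmetic (accuracy is an issue even for small instances).
from fractions import Fraction as F

def grow(A, b, alpha, b_next=F(1)):
    n = len(A)
    new_col = [2 * alpha * bi for bi in b]           # new column: 2*alpha*b on top
    A = [row + [new_col[i]] for i, row in enumerate(A)]
    A.append([F(0)] * n + [alpha * b_next])          # new last row: (0,...,0, alpha*b_{n+1})
    return A, b + [b_next]

A, b = [[F(1)]], [F(1)]                              # stage 1: A = [1], b = [1]
A, b = grow(A, b, F(1, 4))                           # stage 2: alpha = 1/4
A, b = grow(A, b, F(1, 80))                          # stage 3: alpha = 1/80
```

The resulting stage-3 matrix has the rows ${(1,\,1/2,\,1/40)}$, ${(0,\,1/4,\,1/40)}$ and ${(0,\,0,\,1/80)}$, matching the matrix displayed above.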
The paper by Mairal and Yu has more results of the paths if one looks for approximate solutions of the linear system but I will not go into detail about them here.
At least two questions come to mind:
• The Mairal-Yu example is ${n\times n}$. What is the worst case complexity for the true rectangular case? In other words: What is the complexity for ${p\times n}$ in terms of ${p}$ and ${n}$?
• The example and the construction lead to matrices that do not have normalized columns; moreover, the column norms are far from being equal. But matrices with normalized columns seem to be more “well behaved”. Does the worst case complexity decrease if we consider matrices with unit-norm columns? Probably one can construct a unit-norm example by a proper choice of ${b}$…
April 4, 2012
## The Augmented Lagrangian with variational inequalities and on necessary conditions for variational regularization
Posted by Dirk under Math, Regularization | Tags: ill-posed problems, regularization |
Today I’d like to blog about two papers which appeared on the arxiv.
1. Regularization with the Augmented Lagrangian Method – Convergence Rates from Variational Inequalities
The first one is the paper “Regularization of Linear Ill-posed Problems by the Augmented Lagrangian Method and Variational Inequalities” by Klaus Frick and Markus Grasmair.
Well, the title basically describes the content quite accurately. However, recall that the Augmented Lagrangian Method (ALM) is a method to calculate solutions to certain convex optimization problems. For a convex, proper and lower-semicontinuous function ${J}$ on a Banach space ${X}$, a linear and bounded operator ${K:X\rightarrow H}$ from ${X}$ into a Hilbert space ${H}$ and an element ${g\in H}$ consider the problem
$\displaystyle \inf_{u} J(u)\quad\text{s.t.}\quad Ku=g. \ \ \ \ \ (1)$
The ALM goes as follows: Start with an initial dual variable ${p_0}$, choose step-sizes ${\tau_k>0}$ and iterate
$\displaystyle u_k \in \text{argmin}\Big(\frac{\tau_k}{2}\|Ku-g\|^2 + J(u) + \langle p_{k-1},g-Ku\rangle\Big)$
$\displaystyle p_k = p_{k-1}+\tau_k(g-Ku_k).$
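To see the iteration in action, here is a toy run in the well-posed case where the inner minimization is explicit; note that this is a hypothetical sketch with the simple quadratic ${J(u)=\tfrac12\|u\|^2}$ (not the general convex ${J}$ of the paper), and with the sign convention in which the dual update is an ascent step:

```python
# Toy run of the augmented Lagrangian iteration for min J(u) s.t. Ku = g,
# with J(u) = 1/2 ||u||^2, K = [1 1] and g = 2 (all numbers made up);
# the solution is u* = (1, 1).  The inner argmin is explicit here:
# (I + tau K^T K) u = K^T (tau g + p), and by symmetry u_1 = u_2.

def alm(tau=1.0, iters=60):
    p = 0.0                                    # dual variable p_0
    u = (0.0, 0.0)
    g = 2.0
    for _ in range(iters):
        c = (tau * g + p) / (1.0 + 2.0 * tau)  # inner minimization, u = (c, c)
        u = (c, c)
        p = p + tau * (g - (u[0] + u[1]))      # dual update p_k = p_{k-1} + tau (g - K u_k)
    return u, p

u_star, p_star = alm()                         # converges to u = (1, 1), p = 1
```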
(These days one should note that this iteration is also known under the name Bregman iteration…). Indeed, it is known that the ALM converges to a solution of (1) if one exists. Klaus and Markus consider the ill-posed case, i.e. the range of ${K}$ is not closed and ${g}$ is replaced by some ${g^\delta}$ which fulfills ${\|g-g^\delta\|\leq\delta}$ (and hence, ${g^\delta}$ is generally not in the range of ${K}$). Then the ALM does not converge but diverges. However, one observes “semi-convergence” in practice, i.e. the iterates first approach an approximate “solution of ${Ku=g^\delta}$” (or even a true solution of ${Ku=g}$) but then start to diverge from some point on. It is then natural to ask if the ALM with ${g}$ replaced by ${g^\delta}$ can be used for regularization, i.e. can one choose a stopping index ${k^*}$ (depending on ${\delta}$ and ${g^\delta}$) such that the iterates ${u_{k^*}^\delta}$ approach the solution of (1) as ${\delta}$ vanishes? The question has been answered in the affirmative in previous work by Klaus (here and here) and also error estimates and convergence rates have been derived under an additional assumption on the solution of (1). This assumption is the so-called “source condition” and says that there should exist some ${p^\dag\in H}$ such that for a solution ${u^\dagger}$ of (1) it holds that
$\displaystyle K^* p^\dagger \in\partial J(u^\dagger).$
Under this assumption it has been shown that the Bregman distance ${D(u_{k^*}^\delta,u^\dag)}$ goes to zero linearly in ${\delta}$ under appropriate stopping rules. What Klaus and Markus investigate in this paper are different conditions which ensure slower convergence rates than linear. These conditions come in the form of “variational inequalities” which gained some popularity lately. As usual, these variational inequalities look some kind of messy at first sight. Klaus and Markus use
$\displaystyle D(u,u^\dag)\leq J(u) - J(u^\dag) + \Phi(\|Ku-g\|^2)$
for some positive functional ${D}$ with ${D(u,u)=0}$ and some non-negative, strictly increasing and concave function ${\Phi}$. Under this assumption (and special ${D}$) they derive convergence rates which again look quite complicated but can be reduced to simpler and more transparent cases which resemble the situation one knows for other regularization methods (like ordinary Tikhonov regularization).
In the last section Klaus and Markus also treat sparse regularization (i.e. with ${J(u) = \|u\|_1}$) and derive that a weak condition (like ${(K^*K)^\nu p^\dag\in\partial J(u^\dag)}$ for some ${0<\nu<1/2}$) already implies the stronger source condition above (with a different ${p^\dag}$). Hence, interestingly, it seems that for sparse regularization one either gets a linear rate or nothing (in this framework).
2. On necessary conditions for variational regularization
The second paper is “Necessary conditions for variational regularization schemes” by Nadja Worliczek and myself. I have discussed some parts of this paper already on this blog here and here. In this paper we tried to formalize the notion of “a variational method” for regularization with the goal of obtaining necessary conditions for a variational scheme to be regularizing. As expected, this goal is quite ambitious and we cannot claim that we came up with ultimate necessary conditions which describe what kind of variational methods are not possible. However, we could relate the three kinds of variational methods (which I called Tikhonov, Morozov and Ivanov regularization here) and moreover investigated the conditions on the data space a little closer. In recent years it turned out that one should not always use a term like ${\|Ku-g^\delta\|^2}$ to measure the noise or to penalize the deviation of ${Ku}$ from ${g^\delta}$. For several noise models (like Poisson noise or multiplicative noise) other functionals are better suited. However, these functionals raise several issues: They are often not defined on a linear space but on a convex set, sometimes with the nasty property that their interior is empty. They often do not have convenient algebraic properties (e.g. scaling invariance, triangle inequalities or the like). Finally, they are not necessarily (lower semi-)continuous with respect to the usual topologies. Hence, we approached the data space in a quite abstract way: The data space ${(Y,\tau_Y)}$ is a topological space which comes with an additional sequential convergence structure ${\mathcal{S}}$ (see e.g. here) and on (a subset of) which there is a discrepancy functional ${\rho:Y\times Y\rightarrow [0,\infty]}$. Then we analyzed the interplay of these three things: ${\tau_Y}$, ${\mathcal{S}}$ and ${\rho}$.
If you wonder why we use the additional sequential convergence structure, remember that in the (by now classical) setting for Tikhonov regularization in Banach spaces with a functional like
$\displaystyle \|Ku-g^\delta\|_Y^q + \alpha\|u\|_X^p$
with some Banach space norms ${\|\cdot\|_Y}$ and ${\|\cdot\|_X}$ there are also two kinds of convergence on ${Y}$: the weak convergence (which is replaced by ${\tau_Y}$ in our setting), which is, e.g., used to describe convenient (lower semi-)continuity properties of ${K}$ and the norm ${\|\cdot\|_Y}$, and the norm convergence, which is used to describe that ${g^\delta\rightarrow g^\dag}$ for ${\delta\rightarrow 0}$. Since we do not have a normed space ${Y}$ in our setting, and since the proofs of regularizing properties do not use any topological properties of the norm convergence, Nadja suggested to use a sequential convergence structure instead.
April 2, 2012
## The elastic-net as augmentation and super-resolution by semi-continuous compressed sensing
Posted by Dirk under Math, Regularization, Signal and image processing, Sparsity | Tags: Basis pursuit, regularization, sparsity |
Today I would like to comment on two arxiv-preprints I stumbled upon:
1. “Augmented L1 and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm” – The Elastic Net rediscovered
The paper “Augmented L1 and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm” by Ming-Jun Lai and Wotao Yin is another contribution to a field which is (or was?) probably the fastest growing field in applied mathematics: Algorithms for convex problems with non-smooth ${\ell^1}$-like terms. The “mother problem” here is as follows: Consider a matrix ${A\in{\mathbb R}^{m\times n}}$, ${b\in{\mathbb R}^m}$ try to find a solution of
$\displaystyle \min_{x\in{\mathbb R}^n}\|x\|_1\quad\text{s.t.}\quad Ax=b$
or, for ${\sigma>0}$
$\displaystyle \min_{x\in{\mathbb R}^n}\|x\|_1\quad\text{s.t.}\quad \|Ax-b\|\leq\sigma$
which appeared here on this blog previously. Although this is a convex problem and even has a reformulation as linear program, some instances of this problem are notoriously hard to solve and gained a lot of attention (because their applicability in sparse recovery and compressed sensing). Very roughly speaking, a part of its hardness comes from the fact that the problem is neither smooth nor strictly convex.
The contribution of Lai and Yin is that they analyze a slight perturbation of the problem which makes its solution much easier: They add another term in the objective; for ${\alpha>0}$ they consider
$\displaystyle \min_{x\in{\mathbb R}^n}\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2\quad\text{s.t.}\quad Ax=b$
or
$\displaystyle \min_{x\in{\mathbb R}^n}\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2\quad\text{s.t.}\quad \|Ax-b\|\leq\sigma.$
This perturbation does not make the problem smooth but renders it strongly convex (which usually makes the dual smoother). It turns out that this perturbation makes life with this problem (and related ones) much easier – recovery guarantees still exist and algorithms behave better.
I think it is important to note that the “augmentation” of the ${\ell^1}$ objective with an additional squared ${\ell^2}$-term goes back to Zou and Hastie from the statistics community. There, the motivation was as follows: They observed that the pure ${\ell^1}$ objective tends to “overpromote” sparsity in the sense that if there are two columns in ${A}$ which are almost equally good at explaining some component of ${b}$, then only one of them is used. The “augmented problem”, however, tends to use both of them. They coined the name “elastic net” for the method (for reasons which I never really got).
I also worked on elastic-net problems for problems in the form
$\displaystyle \min_x \frac{1}{2}\|Ax-b\|^2 + \alpha\|x\|_1 + \frac{\beta}{2}\|x\|_2^2$
in this paper (doi-link). Here it also turns out that the problem gets much easier algorithmically. I found it very convenient to rewrite the elastic-net problem as
$\displaystyle \min_x \frac{1}{2}\|\begin{bmatrix}A\\ \sqrt{\beta} I\end{bmatrix}x-\begin{bmatrix}b\\ 0\end{bmatrix}\|^2 + \alpha\|x\|_1$
which turns the elastic-net problem into just another ${\ell^1}$-penalized problem with a special matrix and right hand side. Quite convenient for analysis and also somehow algorithmically.
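This rewriting is easy to check numerically: the stacked least-squares term equals the sum of the data term and the squared ${\ell^2}$-penalty for every ${x}$. A small sketch with made-up data:

```python
# Check the stacking identity:
# 1/2 || [A; sqrt(beta) I] x - [b; 0] ||^2 = 1/2 ||Ax - b||^2 + (beta/2) ||x||^2,
# so the elastic net is an l1-penalized least-squares problem in disguise.
import math

def obj_direct(A, b, x, beta):
    # 1/2 ||Ax - b||^2 + (beta/2) ||x||^2
    r = [sum(row[j] * x[j] for j in range(len(x))) - bi for row, bi in zip(A, b)]
    return 0.5 * sum(ri * ri for ri in r) + 0.5 * beta * sum(xj * xj for xj in x)

def obj_stacked(A, b, x, beta):
    # 1/2 || [A; sqrt(beta) I] x - [b; 0] ||^2
    r = [sum(row[j] * x[j] for j in range(len(x))) - bi for row, bi in zip(A, b)]
    r += [math.sqrt(beta) * xj for xj in x]   # extra rows sqrt(beta)*I, right hand side 0
    return 0.5 * sum(ri * ri for ri in r)

# hypothetical data: the two objectives agree for any x
A = [[1.0, 2.0], [3.0, 4.0], [0.0, 1.0]]
b = [1.0, 0.0, 2.0]
gap = obj_direct(A, b, [0.5, -1.0], 0.3) - obj_stacked(A, b, [0.5, -1.0], 0.3)
```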
2. Towards a Mathematical Theory of Super-Resolution
The second preprint is “Towards a Mathematical Theory of Super-Resolution” by Emmanuel Candes and Carlos Fernandez-Granda.
The idea of super-resolution seems to be pretty old and, very roughly speaking, is to extract a higher resolution of a measured quantity (e.g. an image) than the measured data allows. Of course, in this formulation this is impossible. But often one can gain something by additional knowledge about the image. Basically, this is also the idea behind compressed sensing and hence, it does not come as a surprise that results from compressed sensing are used to try to explain when super-resolution is possible.
The paper by Candes and Fernandez-Granda seems to be pretty close in spirit to Exact Reconstruction using Support Pursuit on which I blogged earlier. They model the sparse signal as a Radon measure, especially as a sum of Diracs. However, different from the support-pursuit paper they use complex exponentials (in contrast to real polynomials). Their reconstruction method is basically the same as support pursuit: They try to solve
$\displaystyle \min_{x\in\mathcal{M}} \|x\|\quad\text{s.t.}\quad Fx=y, \ \ \ \ \ (1)$
i.e. they minimize over the set of Radon measures ${\mathcal{M}}$ under the constraint that certain measurements ${Fx\in{\mathbb R}^n}$ result in certain given values ${y}$. Moreover, they make a thorough analysis of what is “reconstructable” by their ansatz and obtain a lower bound on the distance of two Diracs (in other words, a lower bound in the Prokhorov distance). I have to admit that I do not share one of their claims from the abstract: “We show that one can super-resolve these point sources with infinite precision—i.e. recover the exact locations and amplitudes—by solving a simple convex program.” My point is that I cannot see to what extent the problem (1) is a simple one. Well, it is convex, but it does not seem to be simple.
I want to add that the idea of “continuous sparse modelling” in the space of signed measures is very appealing to me and appeared first in Inverse problems in spaces of measures by Kristian Bredies and Hanna Pikkarainen.
November 2, 2011
## Semi-continuous sparse reconstruction and compressed sensing
Posted by Dirk under Math, Regularization, Signal and image processing, Sparsity | Tags: Basis pursuit, compressed sensing, sparsity |
How many samples are needed to reconstruct a sparse signal?
Well, there are many, many results around, some of which you probably know (at least if you are following this blog or this one). Today I write about a neat result, which I found quite some time ago, on the reconstruction of nonnegative sparse signals from a semi-continuous perspective.
1. From discrete sparse reconstruction/compressed sensing to semi-continuous
The basic sparse reconstruction problem asks the following: Say we have a vector ${x\in{\mathbb R}^m}$ which only has ${s<m}$ non-zero entries and a fat matrix ${A\in{\mathbb R}^{n\times m}}$ (i.e. ${n<m}$) and consider that we are given measurements ${b=Ax}$. Of course, the system ${Ax=b}$ is underdetermined. However, we may add a little more prior knowledge on the solution and ask: Is it possible to reconstruct ${x}$ from ${b}$ if we know that the vector ${x}$ is sparse? If yes: How? Under what conditions on ${m}$, ${s}$, ${n}$ and ${A}$? This question created the expanding universe of compressed sensing recently (and this universe is expanding so fast that for sure there has to be some dark energy in it). As a matter of fact, a powerful method to obtain sparse solutions to underdetermined systems is ${\ell^1}$-minimization a.k.a. Basis Pursuit on which I blogged recently: Solve
$\displaystyle \min_x \|x\|_1\ \text{s.t.}\ Ax=b$
and the important ingredient here is the ${\ell^1}$-norm of the vector in the objective function.
In this post I’ll formulate semi-continuous sparse reconstruction. We move from an ${m}$-vector ${x}$ to a finite signed measure ${\mu}$ on a closed interval (which we assume to be ${I=[-1,1]}$ for simplicity). We may embed the ${m}$-vectors into the space of finite signed measures by choosing ${m}$ points ${t_i}$, ${i=1,\dots, m}$ from the interval ${I}$ and building ${\mu = \sum_{i=1}^m x_i \delta_{t_i}}$ with the point-masses (or Dirac measures) ${\delta_{t_i}}$. To be a bit more precise, we speak about the space ${\mathfrak{M}}$ of Radon measures on ${I}$, which are defined on the Borel ${\sigma}$-algebra of ${I}$ and are finite. Radon measures are not very scary objects and an intuitive way to think of them is to use Riesz representation: Every Radon measure arises as a continuous linear functional on a space of continuous functions, namely the space ${C_0(I)}$ which is the closure of the continuous functions with compact support in ${{]{-1,1}[}}$ with respect to the supremum norm. Hence, Radon measures act on these functions as ${\int_I fd\mu}$. It is also natural to speak of the support ${\text{supp}(\mu)}$ of a Radon measure ${\mu}$ and it holds for any continuous function ${f}$ that
$\displaystyle \int_I f d\mu = \int_{\text{supp}(\mu)}f d\mu.$
An important tool for Radon measures is the Hahn-Jordan decomposition which decomposes ${\mu}$ into a positive part ${\mu^+}$ and a negative part ${\mu^-}$, i.e. ${\mu^+}$ and ${\mu^-}$ are non-negative and ${\mu = \mu^+-\mu^-}$. Finally the variation of a measure, which is
$\displaystyle \|\mu\| = \mu^+(I) + \mu^-(I)$
provides a norm on the space of Radon measures.
Example 1 For the measure ${\mu = \sum_{i=1}^m x_i \delta_{t_i}}$ one readily calculates that
$\displaystyle \mu^+ = \sum_i \max(0,x_i)\delta_{t_i},\quad \mu^- = \sum_i \max(0,-x_i)\delta_{t_i}$
and hence
$\displaystyle \|\mu\| = \sum_i |x_i| = \|x\|_1.$
In this sense, the space of Radon measures provides a generalization of ${\ell^1}$.
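In code, such a discrete measure and its Hahn-Jordan parts are conveniently represented by a dictionary mapping positions to weights (a small sketch, with made-up numbers):

```python
# A discrete signed measure mu = sum_i x_i * delta_{t_i} as a dict t_i -> x_i;
# its Hahn-Jordan parts and its variation reduce to the l1 norm of the weights.

def jordan(mu):
    pos = {t: x for t, x in mu.items() if x > 0}    # mu^+
    neg = {t: -x for t, x in mu.items() if x < 0}   # mu^-
    return pos, neg

def variation(mu):
    pos, neg = jordan(mu)
    return sum(pos.values()) + sum(neg.values())    # ||mu|| = mu^+(I) + mu^-(I)

mu = {-0.5: 2.0, 0.1: -3.0, 0.7: 1.0}
# variation(mu) = |2| + |-3| + |1| = 6, the l1 norm of the weight vector
```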
We may sample a Radon measure ${\mu}$ with ${n+1}$ linear functionals and these can be encoded by ${n+1}$ continuous functions ${u_0,\dots,u_n}$ as
$\displaystyle b_k = \int_I u_k d\mu.$
This sampling gives a bounded linear operator ${K:\mathfrak{M}\rightarrow {\mathbb R}^{n+1}}$. The generalization of Basis Pursuit is then given by
$\displaystyle \min_{\mu\in\mathfrak{M}} \|\mu\|\ \text{s.t.}\ K\mu = b.$
This was introduced and called “Support Pursuit” in the preprint Exact Reconstruction using Support Pursuit by Yohann de Castro and Fabrice Gamboa.
More on the motivation and the use of Radon measures for sparsity can be found in Inverse problems in spaces of measures by Kristian Bredies and Hanna Pikkarainen.
2. Exact reconstruction of sparse nonnegative Radon measures
Before I talk about the results we may count the degrees of freedom a sparse Radon measure has: If ${\mu = \sum_{i=1}^s x_i \delta_{t_i}}$ for some ${s}$, then ${\mu}$ is determined by the ${s}$ weights ${x_i}$ and the ${s}$ positions ${t_i}$. Hence, we expect that at least ${2s}$ linear measurements should be necessary to reconstruct ${\mu}$. Surprisingly, this is almost enough if we know that the measure is nonnegative! We only need one more measurement, that is ${2s+1}$, and moreover, we can take fairly simple measurements, namely the monomials: ${u_i(t) = t^i}$, ${i=0,\dots, n}$ (with the convention that ${u_0(t)\equiv 1}$). This is shown in the following theorem by de Castro and Gamboa.
Theorem 1 Let ${\mu = \sum_{i=1}^s x_i\delta_{t_i}}$ with ${x_i\geq 0}$, ${n=2s}$ and let ${u_i}$, ${i=0,\dots n}$ be the monomials as above. Define ${b_i = \int_I u_i(t)d\mu}$. Then ${\mu}$ is the unique solution of the support pursuit problem, that is of
$\displaystyle \min \|\nu\|\ \text{s.t.}\ K\nu = b.\qquad \textup{(SP)}$
Proof: The following polynomial will be of importance: For a constant ${c>0}$ define
$\displaystyle P(t) = 1 - c \prod_{i=1}^s (t-t_i)^2.$
The following properties of ${P}$ will be used:
1. ${P(t_i) = 1}$ for ${i=1,\dots,s}$
2. ${P}$ has degree ${n=2s}$ and hence, is a linear combination of the ${u_i}$, ${i=0,\dots,n}$, i.e. ${P = \sum_{k=0}^n a_k u_k}$.
3. For ${c}$ small enough it holds for ${t\neq t_i}$ that ${|P(t)|<1}$.
Now let ${\sigma}$ be a solution of (SP). We have to show that ${\|\mu\|\leq \|\sigma\|}$. Due to property 2 we know that
$\displaystyle \int_I u_k d\sigma = (K\sigma)_k = b_k = \int_I u_k d\mu.$
Due to property 1 and non-negativity of ${\mu}$ we conclude that
$\displaystyle \begin{array}{rcl} \|\mu\| & = & \sum_{i=1}^s x_i = \int_I P d\mu\\ & = & \int_I \sum_{k=0}^n a_k u_k d\mu\\ & = & \sum_{k=0}^n a_k \int_I u_k d\mu\\ & = & \sum_{k=0}^n a_k \int_I u_k d\sigma\\ & = & \int_I P d\sigma. \end{array}$
Moreover, by Lebesgue’s decomposition we can decompose ${\sigma}$ with respect to ${\mu}$ such that
$\displaystyle \sigma = \underbrace{\sum_{i=1}^s y_i\delta_{t_i}}_{=\sigma_1} + \sigma_2$
and ${\sigma_2}$ is singular with respect to ${\mu}$. We get
$\displaystyle \begin{array}{rcl} \int_I P d\sigma = \sum_{i=1}^s y_i + \int P d\sigma_2 \leq \|\sigma_1\| + \|\sigma_2\|=\|\sigma\| \end{array}$
and we conclude that ${\|\sigma\| = \|\mu\|}$ and especially ${\int_I P d\sigma_2 = \|\sigma_2\|}$. This shows that ${\mu}$ is a solution to ${(SP)}$. It remains to show uniqueness. We show the following: If there is a ${\nu\in\mathfrak{M}}$ with support in ${I\setminus\{t_1,\dots,t_s\}}$ such that ${\int_I Pd\nu = \|\nu\|}$, then ${\nu=0}$. To see this, we build, for any ${r>0}$, the sets
$\displaystyle \Omega_r = [-1,1]\setminus \bigcup_{i=1}^s ]t_i-r,t_i+r[.$
and assume that there exists ${r>0}$ such that ${\|\nu|_{\Omega_r}\|\neq 0}$ (${\nu|_{\Omega_r}}$ denoting the restriction of ${\nu}$ to ${\Omega_r}$). However, it holds by property 3 of ${P}$ that
$\displaystyle \int_{\Omega_r} P d\nu < \|\nu|_{\Omega_r}\|$
and consequently
$\displaystyle \begin{array}{rcl} \|\nu\| &=& \int Pd\nu = \int_{\Omega_r} Pd\nu + \int_{\Omega_r^C} P d\nu\\ &<& \|\nu|_{\Omega_r}\| + \|\nu|_{\Omega_r^C}\| = \|\nu\| \end{array}$
which is a contradiction. Hence, ${\nu|_{\Omega_r}=0}$ for all ${r}$ and this implies ${\nu=0}$. Since ${\sigma_2}$ has its support in ${I\setminus\{t_1,\dots,t_s\}}$ we conclude that ${\sigma_2=0}$. Hence, the support of ${\sigma}$ is contained in ${\{t_1,\dots,t_s\}}$, i.e. ${\sigma = \sum_{i=1}^s y_i\delta_{t_i}}$. Since ${K\sigma = b = K\mu}$ we get ${K(\sigma-\mu) = 0}$. This can be written as a Vandermonde system
$\displaystyle \begin{pmatrix} u_0(t_1)& \dots &u_0(t_s)\\ \vdots & & \vdots\\ u_n(t_1)& \dots & u_n(t_s) \end{pmatrix} \begin{pmatrix} y_1 - x_1\\ \vdots\\ y_s - x_s \end{pmatrix} = 0$
which only has the zero solution, giving ${y_i=x_i}$. $\Box$
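The dual certificate ${P}$ from the proof is easy to inspect numerically. A small sketch with made-up support points, checking that ${P}$ equals 1 on the support and stays strictly inside ${(-1,1)}$ away from it (for a small enough ${c}$):

```python
# Numerical check of the dual certificate P(t) = 1 - c * prod_i (t - t_i)^2
# for hypothetical support points t_i in [-1, 1].

t_support = (-0.5, 0.2)
c = 0.1   # small enough so that c * prod_i (t - t_i)^2 < 2 on all of [-1, 1]

def P(t):
    prod = 1.0
    for ti in t_support:
        prod *= (t - ti) ** 2
    return 1.0 - c * prod

# |P(t)| < 1 on a grid, away from the support points (property 3 of the proof)
grid = [-1.0 + k / 500.0 for k in range(1001)]
ok = all(abs(P(t)) < 1.0 for t in grid
         if min(abs(t - ti) for ti in t_support) > 1e-9)
```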
3. Generalization to other measurements
The measurement by monomials may sound a bit unusual. However, de Castro and Gamboa show more. What really matters here is that the monomials form a so-called Chebyshev system (or Tchebyscheff system or T-system – by the way, have you ever tried to google for a T-system?). This is explained, for example, in the book “Tchebycheff Systems: With Applications in Analysis and Statistics” by Karlin and Studden. A T-system on ${I}$ is simply a set of ${n+1}$ functions ${\{u_0,\dots, u_n\}}$ such that any linear combination of these functions has at most ${n}$ zeros. These systems are named after Tchebyscheff since they obey many of the helpful properties of the Tchebyscheff polynomials.
What is helpful in our context is the following theorem of Krein:
Theorem 2 (Krein) If ${\{u_0,\dots,u_n\}}$ is a T-system for ${I}$, ${k\leq n/2}$ and ${t_1,\dots,t_k}$ are in the interior of ${I}$, then there exists a linear combination ${\sum_{j=0}^n a_j u_j}$ which is non-negative and vanishes exactly at the points ${t_i}$.
Now consider that we replace the monomials in Theorem~1 by a T-system. You recognize that Krein’s Theorem allows to construct a “generalized polynomial” which fulfills the same requirements as the polynomial ${P}$ in the proof of Theorem~1 as soon as the constant function 1 lies in the span of the T-system, and indeed the result of Theorem~1 is also valid in that case.
4. Exact reconstruction of ${s}$-sparse nonnegative vectors from ${2s+1}$ measurements
From the above one can deduce a reconstruction result for ${s}$-sparse vectors and I quote Theorem 2.4 from Exact Reconstruction using Support Pursuit:
Theorem 3 Let ${n}$, ${m}$, ${s}$ be integers such that ${s\leq \min(n/2,m)}$ and let ${\{1,u_1,\dots,u_n\}}$ be a complete T-system on ${I}$ (that is, ${\{1,u_1,\dots,u_r\}}$ is a T-system on ${I}$ for all ${r<n}$). Then it holds: For any distinct reals ${t_1,\dots,t_m}$ and ${A}$ defined as
$\displaystyle A=\begin{pmatrix} 1 & \dots & 1\\ u_1(t_1)& \dots &u_1(t_m)\\ \vdots & & \vdots\\ u_n(t_1)& \dots & u_n(t_m) \end{pmatrix}$
Basis Pursuit recovers all nonnegative ${s}$-sparse vectors ${x\in{\mathbb R}^m}$.
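Theorem~3 invites a numerical experiment: since basis pursuit for nonnegative vectors is the linear program ${\min \sum_i x_i}$ s.t. ${Ax=b}$, ${x\geq 0}$, one can check recovery from ${2s+1}$ monomial moments directly. A hypothetical sketch (points and weights made up), assuming numpy and scipy are available:

```python
# Recover a nonnegative 2-sparse vector from 2s + 1 = 5 monomial moments
# by the LP  min sum(x)  s.t.  Ax = b, x >= 0  (basis pursuit for x >= 0).
import numpy as np
from scipy.optimize import linprog

t = np.array([-0.9, -0.6, -0.3, 0.0, 0.3, 0.6, 0.9])     # m = 7 points in I
A = np.vander(t, 5, increasing=True).T                   # rows t^0, ..., t^4 (n = 2s = 4)
x_true = np.array([0.0, 0.5, 0.0, 0.0, 1.2, 0.0, 0.0])   # nonnegative, s = 2
b = A @ x_true                                           # the 5 moment measurements

res = linprog(c=np.ones(7), A_eq=A, b_eq=b, bounds=[(0, None)] * 7)
# res.x should coincide with x_true up to solver tolerance
```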
5. Concluding remarks
Note that Theorem~3 gives a deterministic construction of a measurement matrix.
Also note that nonnegativity is crucial in what we did here. This allowed us (in the monomial case) to work with squares and obtain the polynomial ${P}$ in the proof of Theorem~1 (which is also called a “dual certificate” in this context). This raises the question of how this method can be adapted to general sparse signals. One needs (in the monomial case) a polynomial which is bounded by 1 but matches the signs of the measure on its support. While this can be done (I think) for polynomials, it seems difficult to obtain a generalization of Krein’s Theorem to this case…
September 5, 2011
## Regularization with general similarity measure
Posted by Dirk under Math, Regularization | Tags: ill-posed problems, regularization, topological spaces |
On my way to ENUMATH 11 in Leicester I stumbled upon the preprint Multi-parameter Tikhonov Regularisation in Topological Spaces by Markus Grasmair. The paper deals with fairly general Tikhonov functionals and their regularizing properties. Markus considers (nonlinear) operators ${F:X\rightarrow Y}$ between two sets ${X}$ and ${Y}$ and analyzes minimizers of the functional
$\displaystyle T(x) = S(F(x),y) + \sum_k \alpha_k R_k(x).$
The functionals ${S}$ and ${R_k}$ play the roles of a similarity measure and regularization terms, respectively. While he also treats the issue of noise in the operator and the multiple regularization terms, I was mostly interested in his approach to the general similarity measure. The category in which he works is that of topological spaces, and he writes:
“Because anyway no trace of an original Hilbert space or Banach space structure is left in the formulation of the Tikhonov functional ${T}$ [...], we will completely discard all assumption of a linear structure and instead consider the situation, where both the domain ${X}$ and the co-domain ${Y}$ of the operator ${F}$ are mere topological spaces, with the topology of ${Y}$ defined by the distance measure ${S}$.”
The last part of the sentence is important since previous papers often worked the other way round: Assume some topology in ${Y}$ and then state conditions on ${S}$. Nadja Worliczek observed in her talk “Sparse Regularization with Bregman Discrepancy” at GAMM 2011 that it seems more natural to deduce the topology from the similarity measure and Markus took the same approach. While Nadja used the notion of “initial topology” (that is, take the coarsest topology that makes the functionals ${y\mapsto S(z,y)}$ continuous), Markus uses the following family of pseudo-metrics: For ${z\in Y}$ define
$\displaystyle d^{(z)}(y,\tilde y) = |S(z,y)-S(z,\tilde y)|.$
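The family ${d^{(z)}}$ really does consist of pseudo-metrics: symmetry and the triangle inequality are inherited from the absolute value on the reals. A quick numerical sanity check, with an assumed similarity measure ${S(z,y)=\|z-y\|^2}$ chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def S(z, y):
    """Assumed similarity measure, for illustration only: S(z, y) = ||z - y||^2."""
    return float(np.sum((z - y) ** 2))

def d(z, y1, y2):
    """The pseudo-metric d^(z)(y1, y2) = |S(z, y1) - S(z, y2)|."""
    return abs(S(z, y1) - S(z, y2))

# Symmetry and the triangle inequality hold automatically (they are inherited
# from the absolute value on the reals); check them on random samples.
for _ in range(1000):
    z, y1, y2, y3 = rng.standard_normal((4, 5))
    assert np.isclose(d(z, y1, y2), d(z, y2, y1))
    assert d(z, y1, y3) <= d(z, y1, y2) + d(z, y2, y3) + 1e-12

# d^(z) is only a *pseudo*-metric: distinct points can have distance zero.
y1, y2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
assert d(np.zeros(2), y1, y2) == 0.0    # both points have S(0, .) = 1
```

The last line shows why one needs the whole family over all ${z}$ (and not a single ${d^{(z)}}$) to separate points.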
Unfortunately, the preprint is a little bit too brief for me at this point and I did not totally get what he means with “the topology ${\sigma}$ induced by the uniformity induced by the pseudo-metric”. Also, I am not totally sure if “pseudo-metric” is unambiguous… However, the topology he has in mind seems to be well suited in the sense that ${y^n\rightarrow y}$ if ${S(z,y^n)\rightarrow S(z,y)}$ for all ${z}$. Moreover, the condition that ${S(z,y)=0}$ iff ${z=y}$ implies that ${\sigma}$ is Hausdorff. It would be good to have a better understanding of how the properties of the similarity measure are related to the properties of the induced topology. Are there examples in which the induced topology is both different from the usual norm and weak topologies and also interesting?
Moreover, I would be interested, in the relations of the two approaches: via “uniformities” and the initial topology…
August 24, 2011
## News from ILAS 2011
Posted by Dirk under Conference, Math, Regularization | Tags: conference, ILAS |
After a very refreshing and enjoyable summer vacation I am back to work and back to blogging. This week there is the ILAS 11 (where ILAS stands for International Linear Algebra Society) here at TU Braunschweig; and since the talks take place in the building just next to where I am, I enjoyed some of them. Two talks have been especially interesting to me.
The first one was Tikhonov Regularization for Large Scale Inverse Problems by Melina Freitag (maybe the first link is not working yet but Melina said that she is going to upload her slides under that address). She talked about the ways the weather forecast is done these days in the UK and in Europe and especially on the process of Data Assimilation where one uses the previous weather forecasts and newly arrived measurements to produce a better estimate of the state. As a matter of fact, two popular methods used in this field (3dVar and 4dVar) are equivalent to classical Tikhonov regularization. In her section on ${L^1}$-penalties (which I usually call ${\ell^1}$-penalties…) she actually introduced a kind of discrete ${TV}$-penalty as a replacement for the usual quadratic (${\ell^2}$) penalty. Her motivation was as usual: Tikhonov regularization smoothes too much and weather fronts are not smooth. She did not have results of this kind of ${TV}$ regularization as a replacement in 4dVar with real weather data but with smaller toy examples (with non-linear advection equations) since the resulting optimization problem is LARGE. However, her results look promising. I am not sure if she did, but one was tempted to arrive at the conclusion that “4dVar with ${TV}$ penalty gives a better resolution of weather fronts”. It happened that during her talk there was a thunderstorm with heavy rain in front of the windows which had not been predicted by the forecast (according to which, the thunderstorm should happen the next day). Now: Would a ${TV}$ penalty be able to predict this thunderstorm for the right time? I am not sure. While ${TV}$ penalties do enforce edges, the precise place of the edge is still not too sure. My feeling is that the accuracy of the position is better the smaller the curvature of the edge is, but in general this highly depends on the ill-posed problem at hand.
The second talk was Recent Progress in the Solution of Discrete Ill-Posed Problems by Michiel Hochstenbach. The talk was pretty amusing; I especially liked the slogan for discrete ill-posed problems:
How to wisely divide by zero.
Also he introduced the three forms of variational regularization which I usually call Tikhonov, Ivanov and Morozov regularization (on slide 34) and introduced the Pareto front (under the name it usually has in discrete ill-posed problems: L-curve).
Another appealing slogan was:
Discrete ill-posed problems are essentially underdetermined.
(Since we do not want to solve the respective equation exactly, we usually have a lot of approximate solutions of which we have to choose the one we like the most. Of course this is the same for continuous ill-posed problems.) As a consequence: One should use as much prior knowledge as possible.
In the rest of his talk he talked about improvements of the classical Tikhonov regularization (which is a linear method) by other linear methods with respect to both reconstruction quality and computational burden. He motivated his SRSVD (subspace restricted SVD) by the observation that the simple truncated SVD often fails due to the fact that the first singular vectors do not contain useful information about the solution. He proposed to use a selected set of (orthonormal) vectors and use a restricted SVD in the following. However, does this use the knowledge that the solution shall be a linear combination of these selected vectors? And wouldn’t it be beneficial to use a larger set of (non-orthonormal) vectors, put them into a (possibly overcomplete) dictionary and use a sparse reconstruction method such as ${\ell^1}$ regularization which automatically selects the relevant vectors? He also proposed the so called “linear combination approach” which basically takes several outputs of various methods and searches within the linear combinations of these outputs for a better solution. To do so he proposed to use another Ivanov-type regularization (slide 79). I still did not get why he uses the largest available norm as a constraint here… However there should be an answer somewhere in his papers.
Edit: Michiel sent me the following answer:
I used the largest available norm, since the norms of many solution approaches are often smaller than, or approximately equal to the true norm. In the paper Discrete ill-posed least-squares problems with a solution norm constraint we reach the conclusion that
“For the approach based on the solution norm constraint, it seems important that $\|x\|$ not be underestimated.”
Edit: The above mentioned paper Discrete ill-posed least-squares problems with a solution norm constraint is to be published in “Linear Algebra and its Applications”. It can be found via its doi.
July 22, 2011
## Linear convergence for “Pixel Sparsity”?
Posted by Dirk under Math, Regularization, Sparsity | Tags: regularization, sparsity |
1. A numerical experiment on sparse regularization
To start, I take a standard problem from the Regularization Tools Matlab toolbox: the problem deriv2. This problem generates a matrix ${A}$ and two vectors ${x}$ and ${b}$ such that the equation ${Ax=b}$ is a Galerkin discretization of the integral equation
$\displaystyle g(t) = \int_0^1 k(t,s)f(s) ds$
with a kernel ${k}$ such that the solution amounts to solving a boundary value problem. The Galerkin ansatz functions are simply orthonormal characteristic functions on intervals, i.e. ${\psi_i(x) = h^{-1/2}\chi_{[ih,(i+1)h]}(x)}$. Thus, I work with matrices ${A_h}$ and vectors ${x_h}$ and ${b_h}$.
I want to use sparse regularization to reconstruct spiky solutions, that is, I solve problems
$\displaystyle \min_{x_h} \tfrac{1}{2}\|A_h x_h - b_h\|^2 + \alpha_h\|x_h\|_1.$
Now, my first experiment goes as follows:
Experiment 1 (Discretization goes to zero)
I generate spiky data: I fix a point ${t_0}$ in the interval ${[0,1]}$, namely ${t_0 = 0.2}$, and a value ${a_0=1}$. Now I consider the data ${f}$ which is a delta peak of height ${a_0}$ at ${t_0}$ (which in turn leads to a right hand side ${g}$). I construct the corresponding ${x_h}$ and the right hand side ${b_h=A_hx_h}$. Now I aim at solving
$\displaystyle \min_f \tfrac{1}{2}\| g - \int_0^1 k(\cdot,s)f(s)ds\|_2^2 + \alpha \|f\|_1$
for different discretizations (${h\rightarrow 0}$). In the numerics, I have to scale ${\alpha}$ with ${h}$, i.e. I solve
$\displaystyle \min_{x_h} \tfrac{1}{2}\|A_h x_h - b_h\|^2 + h\,\alpha\|x_h\|_1.$
and I obtain the following results: In black I show the data ${x}$, ${b}$ and so on, and in blue I plot the minimizer and its image under ${A}$.
(Reconstruction figures for ${n=10}$, ${n=50}$, ${n=100}$, ${n=500}$ and ${n=1000}$ not reproduced here.)
Note that the scale varies in the pictures, except in the lower left one where I show the discretized ${g}$. As it should be, this converges nicely to a piecewise linear function. However, the discretization of the solution blows up, which is also as it should be, since I discretize a delta peak. Well, this basically shows that my scaling is correct.
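For readers without the Regularization Tools toolbox, the ${h}$-scaled ${\ell^1}$ problem above can be reproduced with a few lines of Python. The kernel below is the Green's function of the second-derivative boundary value problem (my assumption for what deriv2 discretizes), and the value of ${\alpha}$ is an arbitrary choice; the solver is plain iterative soft-thresholding:

```python
import numpy as np

def deriv2_matrix(n):
    """Midpoint Galerkin discretization of the kernel
    k(s, t) = s*(t - 1) for s <= t and t*(s - 1) otherwise
    (the Green's function of the second derivative; assumed to be
    the kernel behind the deriv2 test problem)."""
    h = 1.0 / n
    t = (np.arange(n) + 0.5) * h
    S, T = np.meshgrid(t, t, indexing="ij")
    K = np.where(S <= T, S * (T - 1), T * (S - 1))
    return h * K, t

def ista(A, b, alpha, iters=5000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + alpha*||x||_1."""
    tau = 1.0 / np.linalg.norm(A, 2) ** 2            # step size 1/L, L = ||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - tau * A.T @ (A @ x - b)              # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - tau * alpha, 0.0)  # shrinkage
    return x

n = 100
h = 1.0 / n
A, t = deriv2_matrix(n)
x_true = np.zeros(n)
x_true[int(0.2 * n)] = 1.0 / h         # discretized delta peak at t0 = 0.2
b = A @ x_true
alpha = 1e-5                            # arbitrary regularization parameter
x = ista(A, b, h * alpha)               # note the h-scaling of alpha
```

The iteration count and ${\alpha}$ are chosen only so that the sketch runs quickly; for a serious experiment one would tune both.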
From the paper Sparse regularization with ${\ell^q}$ penalty term one can extract the following result.
Theorem 1 Let ${K:\ell^2\rightarrow Y}$ be linear, bounded and injective and let ${u^\dagger \in \ell^2}$ have finite support. Moreover let ${g^\dagger = Ku^\dagger}$ and ${\|g^\dagger-g^\delta\|\leq \delta}$. Furthermore, denote with ${u_\alpha^\delta}$ the minimizer of
$\displaystyle \tfrac12\|Ku-g^\delta\|^2 + \alpha\|u\|_1.$
Then, for ${\alpha = c\delta}$ it holds that
$\displaystyle \|u_\alpha^\delta - u^\dagger\|_1 = \mathcal{O}(\delta).$
Now let’s observe these convergence rates in a second experiment:
Experiment 2 (Convergence rate ${\mathcal{O}(\delta)}$) Now we fix the discretization (i.e. ${n=500}$), and construct a series of ${g^\delta}$‘s for ${\delta}$ on a logscale between ${1}$ and ${10^{-6}}$. I scale ${\alpha}$ proportional to ${\delta}$ and calculate minimizers of
$\displaystyle \min_{x_h} \tfrac{1}{2}\|A_h x_h - b_h\|^2 + h\,\alpha\|x_h\|_1.$
Then I measure the error ${\|f_\alpha^\delta-f^\dagger\|_1}$ and plot it doubly logarithmically against ${\delta}$.
And there you see the linear convergence rate as predicted.
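The predicted rate is easiest to observe in the toy case ${K=\mathrm{Id}}$, where the minimizer is given in closed form by soft-thresholding. A small sketch (the sizes, the support and the constant ${c=2}$ in ${\alpha=c\delta}$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
u_true = np.zeros(200)
u_true[[3, 50, 120]] = [2.0, -1.5, 1.0]      # finitely supported u^dagger

deltas = np.logspace(0, -6, 13)
errors = []
for delta in deltas:
    noise = rng.standard_normal(u_true.size)
    g_delta = u_true + delta * noise / np.linalg.norm(noise)  # ||g^d - g|| <= delta
    alpha = 2.0 * delta                                       # alpha = c * delta
    # For K = Id the minimizer of 0.5*||u - g||^2 + alpha*||u||_1
    # is given exactly by soft-thresholding:
    u_rec = np.sign(g_delta) * np.maximum(np.abs(g_delta) - alpha, 0.0)
    errors.append(np.sum(np.abs(u_rec - u_true)))

# Slope of the doubly logarithmic error plot; O(delta) means slope ~ 1.
slope = np.polyfit(np.log(deltas), np.log(errors), 1)[0]
```

Even in this trivial case the rate only becomes clean once ${\delta}$ is small enough that the support entries survive the thresholding, which matches the observation in the next experiment.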
In a final experiment I vary both ${\delta}$ and ${n}$:
Experiment 3 (${\delta\rightarrow 0}$ and “${n\rightarrow\infty}$”) Now we repeat Experiment 2 for different ${n}$ and put all the loglog plots in one figure. This looks like this: You clearly observe the linear convergence rate in any case. But there is another important thing: The larger the ${n}$ (i.e. the smaller the ${h}$), the later the linear rate kicks in (i.e. for smaller ${\delta}$). You may wonder what the reason is. By looking at the reconstructions for varying ${n}$ and ${\delta}$ (which I do not show here) you see the following behavior: For large noise the regularized solutions consist of several peaks located all over the place, and with vanishing noise one peak close to the original one gets dominant. However, this peak is not at the exact position, but at a slightly larger ${t}$; moreover, it is slightly smaller. Then this peak moves towards the right position and is also getting larger. Finally, the peak arrives at the exact position and remains there while approaching the correct height.
Hence, the linear rate kicks in, precisely when the accuracy is higher than the discretization level.
Conclusion:
• The linear convergence rate is only present in the discrete case. Moreover, it starts at a level which can not be resolved by the discretization.
• “Sparsity penalties” in the continuous case are a different and delicate matter. You may consult the preprint “Inverse problems in spaces of measures” which formulates the sparse recovery problem in a continuous setting but in the space of Radon measures rather than in ${L^1}$ (which is simply not working). There Kristian and Hanna show weak* convergence of the minimizers.
• Finally, for “continuous sparsity” also some kind of convergence is true, however, not in norm (which really should be the variation norm in measure space). Weak* convergence can be quantified by the Prokhorov metric or the Wasserstein metric (which is also called earth movers distance in some communities). Convergence with respect to these metrics should be true (under some assumptions) but seems hard to prove. Convergence rates would be cool, but seem even harder.
July 11, 2011
## Reweight or Threshold? Iterative solution with l^p penalties
Posted by Dirk under Math, Optimization, Regularization, Sparsity | Tags: non-convex optimization, sparsity |
I used to work on “non-convex” regularization with ${\ell^p}$-penalties, that is, studying the Tikhonov functional
$\displaystyle \frac12 \|Ax-b\|_2^2 + \alpha\sum_{i}|x_i|^p \ \ \ \ \ (1)$
with a linear operator ${A}$ and ${0<p<1}$.
The regularization properties are quite nice as shown by Markus Grasmair in “Well-posedness and convergence rates for sparse regularization with sublinear ${l^q}$ penalty term” and “Non-convex sparse regularisation” and Kristian Bredies and myself in “Regularization with non-convex separable constraints”.
The next important issue is, to have some way to calculate global minimizers for~(1). But, well, this task may be hard, if not hopeless: Of course one expects a whole lot of local minimizers.
It is quite instructive to consider the simple case in which ${A}$ is the identity first:
Example 1 Consider the minimization of
$\displaystyle F(x) = \frac12\|x-b\|_2^2 + \alpha\sum_i |x_i|^p. \ \ \ \ \ (2)$
This problem separates over the coordinates and hence, can be solved by solving the one-dimensional minimization problem
$\displaystyle s^*\in\textup{arg}\min_s \frac12 (s-b)^2 + \alpha|s|^p. \ \ \ \ \ (3)$
We observe:
• For ${b\geq 0}$ we get ${s^*\geq 0}$.
• Replacing ${b}$ by ${-b}$ leads to ${-s^*}$ instead of ${s^*}$.
Hence, we can reduce the problem to: For ${b\geq 0}$ find
$\displaystyle s^* \in\textup{arg}\min_{s\geq 0} \frac12 (s-b)^2 + \alpha\, s^p. \ \ \ \ \ (4)$
One local minimizer is always ${s^*=0}$ since near zero the growth of the ${p}$-th power beats the term ${(\cdot-b)^2}$. When ${b}$ is large enough, there are two more extrema for~(4) which are given as the solutions to
$\displaystyle s + \alpha p s^{p-1} = b$
one of which is a local maximum (the one which is smaller in magnitude) and the other is a local minimum (the one which is larger in magnitude). This is illustrated in the following “bifurcation” picture:
Now the challenge is to find out which local minimum has the smaller value. And here a strange thing happens: The “switching point” for ${b}$ at which the global minimizer jumps from ${0}$ to the upper branch of the (multivalued) inverse of ${s\mapsto s + \alpha p s^{p-1}}$ is not at the place at which the second local minimum occurs. It is a little bit larger: In “Convergence rates and source conditions for Tikhonov regularization with sparsity constraints” I calculated this “jumping point” as the weird expression
$\displaystyle b^* = \frac{2-p}{2-2p}\Bigl(2\alpha(1-p)\Bigr)^{\frac{1}{2-p}}.$
This leads to the following picture of the mapping
$\displaystyle b\mapsto \textup{arg}\min_s \frac12 (s-b)^2 + \alpha|s|^p$
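The jump point can also be checked numerically: locate the ${b}$ at which the value at the nonzero local minimum drops below the value at ${0}$ and compare with the formula above. A sketch for ${p=1/2}$ and ${\alpha=1}$, where the formula gives exactly ${b^*=3/2}$:

```python
import numpy as np
from scipy.optimize import brentq

p, alpha = 0.5, 1.0

def F(s, b):
    """The one-dimensional objective 0.5*(s - b)^2 + alpha*s^p for s >= 0."""
    return 0.5 * (s - b) ** 2 + alpha * s ** p

def upper_branch(b):
    """Larger solution of s + alpha*p*s^(p-1) = b (the nonzero local minimum)."""
    s_min = (alpha * p * (1 - p)) ** (1 / (2 - p))   # where the left side is minimal
    return brentq(lambda s: s + alpha * p * s ** (p - 1) - b, s_min, b)

# Numerically locate the b where the nonzero local minimum overtakes s = 0 ...
b_star = brentq(lambda b: F(upper_branch(b), b) - F(0.0, b), 1.2, 3.0)

# ... and compare with the closed-form jump point from the text.
b_formula = (2 - p) / (2 - 2 * p) * (2 * alpha * (1 - p)) ** (1 / (2 - p))
```

At ${b=3/2}$ both candidates have the same objective value (the nonzero minimizer there is ${s^*=1}$, with value ${9/8}$), which is exactly the tie the formula describes.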
1. Iterative re-weighting
One approach to calculate minimizers of~(1) is the so called iterative re-weighting, which appeared at least in “An unconstrained ${\ell^q}$ minimization for sparse solution of underdetermined linear systems” by Ming-Jun Lai and Jingyue Wang but is probably older. I think for the problem with equality constraints
$\displaystyle \min \|x\|_q\ \textup{ s.t. }\ Ax=b$
the approach at least dates back to the 80s but I forgot the reference… For the minimization of (1) the approach goes as follows: For ${0<p<1}$ choose a ${q\geq 1}$ and a small ${\epsilon>0}$ and rewrite the ${p}$-quasi-norm as
$\displaystyle \sum_i |x_i|^p \approx \sum_i (\epsilon + |x_i|^q)^{\frac{p}{q}}.$
The necessary condition for a minimizer of
$\displaystyle \frac12\|Ax-b\|_2^2 + \alpha\sum_i (\epsilon + |x_i|^q)^{\frac{p}{q}}$
is (formally take the derivative)
$\displaystyle 0 = \alpha \Big[\frac{p}{q} (\epsilon + |x_i|^q)^{\frac{p}{q}-1} q \textup{sgn}(x_i) |x_i|^{q-1}\Big]_i + A^*(Ax-b)$
Note that the exponent ${\frac{p}{q}-1}$ is negative (which is also a reason for the introduction of the small ${\epsilon}$). Aiming at an iteration, we fix some of the ${x}$‘s and try to solve for others: If we have a current iterate ${x^k}$ we try to find ${x^{k+1}}$ by solving
$\displaystyle 0 = \alpha \Big[\frac{p}{q} (\epsilon + |x_i^k|^q)^{\frac{p}{q}-1} q \textup{sgn}(x_i) |x_i|^{q-1}\Big]_i + A^*(Ax-b)$
for ${x}$. This is the necessary condition for another minimization problem which involves a weighted ${q}$-norm: Define (non-negative) weights ${w^k_i = \frac{p}{q} (\epsilon + |x^k_i|^q)^{\frac{p}{q}-1}}$ and iterate
$\displaystyle x^{k+1}\in \textup{arg}\min_x \frac12\|Ax-b\|_2^2 + \alpha\sum_i w_i^k |x_i|^q. \ \ \ \ \ (5)$
Lai and Wang do this for ${q=2}$ which has the benefit that each iteration can be done by solving a linear system. However, for general ${1\leq q\leq 2}$ each iteration is still a convex minimization problem. The paper “Convergence of Reweighted ${\ell^1}$ Minimization Algorithms and Unique Solution of Truncated ${\ell^p}$ Minimization” by Xiaojun Chen and Weijun Zhou uses ${q=1}$ and both papers deliver some theoretical results of the iteration. Indeed in both cases one can show (subsequential) convergence to a critical point.
Of course the question arises if there is a chance that the limit will be a global minimizer. Unfortunately this is not probable as a simple numerical experiment shows:
Example 2 We apply the iteration (5) to the one dimensional problem (3) in which we know the solution. And we do this for many values of ${b}$ and plot the value of ${b}$ against the limit of the iteration. Good news first: Everything converges nicely to critical points as deserved. Even better: ${\epsilon}$ can be really small—machine precision works. The bad news: The limit depends on the initial value. Even worse: The method frequently ends on “the wrong branch”, i.e. in the local minimum which is not global. I made the following experiment: I took ${p=1/2}$, set ${\alpha=1}$ and chose ${q=2}$. First I initialized for all values of ${b}$ with ${s^0=1}$. This produced the following output (I plotted every fifth iteration):
Well, the iteration always chose the upper branch… In a second experiment I initialized with a smaller value, namely with ${s^0=0.1}$ for all ${b}$. This gave:
That’s interesting: I ended at the upper branch for all values below the point where the lower branch (the one with the local maximum) crosses the initialization line. This seems to be true in general. Starting with ${s^0=0.05}$ gave
Well, probably this is not too interesting: Starting “below the local maximum” will bring you to the local minimum which is lower and vice versa. Indeed Lai and Wang proved in their Theorem 2.5 that for a specific setting (${A}$ of completely full rank, sparsity high enough) there is an ${\alpha}$ small enough such that the method will pick the global minimizer. But wait—they do not say anything about initialization… What happens if we initialize with zero? I have to check…
By the way: A similar experiment as in this example with different values of ${q\geq 1}$ showed the same behavior (getting the right branch if the initialization is ok). However: smaller ${q}$ gave much faster convergence. But remember: For ${q=1}$ (experimentally the fastest) each iteration is an ${\ell^1}$ penalized problem while for ${q=2}$ one has to solve a linear system. So there seems to be a tradeoff between “small number of iterations in IRLP” and “complexity of the subproblems”.
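The behavior seen in Example~2 is easy to reproduce; a minimal sketch of the one-dimensional reweighting iteration with ${q=2}$ (the values of ${b}$, ${\epsilon}$ and the two initializations are arbitrary choices):

```python
import numpy as np

p, alpha, q, eps = 0.5, 1.0, 2.0, 1e-12
b = 2.0        # above the jump point b* = 1.5, so the global minimizer is nonzero

def irls_limit(s0, iters=200):
    """Reweighted iteration (q = 2) for min_s 0.5*(s - b)^2 + alpha*|s|^p."""
    s = s0
    for _ in range(iters):
        w = (p / q) * (eps + abs(s) ** q) ** (p / q - 1.0)   # weight from s^k
        s = b / (1.0 + 2.0 * alpha * w)   # minimizer of 0.5*(s-b)^2 + alpha*w*s^2
    return s

s_good = irls_limit(1.0)     # converges to the upper branch (the global minimum)
s_bad = irls_limit(0.01)     # collapses towards the spurious minimizer s = 0
```

Starting above the local maximum lands on the nonzero branch (which satisfies ${s + \alpha p s^{p-1} = b}$), while starting below it collapses to zero even though ${b>b^*}$ makes zero the wrong answer.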
2. Iterative thresholding
Together with Kristian Bredies I developed another approach to these nasty non-convex minimization problems with ${\ell^p}$-quasi-norms. We wrote a preprint back in 2009 which is currently under revision. Moreover, we always worked in a Hilbert space setting that is ${A}$ maps the sequence space ${\ell^2}$ into a separable Hilbert space.
Remark 1 When showing result for problems in separable Hilbert space I sometimes get the impression that others think this is somehow pointless since in the end one always works with ${{\mathbb R}^N}$ in practice. However, I think that working directly in a separable Hilbert space is preferable since then one can be sure that the results will not depend on the dimension ${N}$ in any nasty way.
Basically our approach was, to use one of the most popular approaches to the ${\ell^1}$-penalized problem: Iterative thresholding aka forward-backward splitting aka generalized gradient projection. I prefer the last motivation: Consider the minimization of a smooth function ${F}$ over a convex set ${C}$
$\displaystyle \min_{x\in C} F(x)$
by the projected gradient method. That is: do a gradient step and use the projection ${P_C}$ to project back onto ${C}$:
$\displaystyle x^{n+1} = P_C(x^n - s_n \nabla F(x^n)).$
Now note that the projection is characterized by
$\displaystyle P_C(x) = \textup{arg}\min_{y\in C}\frac{1}{2}\|y-x\|^2.$
Now we replace the “convex constraint” ${C}$ by a penalty function ${\alpha R}$, i.e. we want to solve
$\displaystyle \min_x F(x) + \alpha R(x).$
Then, we just replace the minimization problem for the projection with
$\displaystyle P_s(x) = \textup{arg}\min_{y}\frac{1}{2}\|y-x\|^2 + s\alpha R(y)$
and iterate
$\displaystyle x^{n+1} = P_{s_n}(x^n - s_n \nabla F (x^n)).$
The crucial thing is that one needs global minimizers to obtain ${P_s}$. However, for these ${\ell^p}$ penalties with ${0<p<1}$ these are available, as we have seen in Example~1. Hence, the algorithm can be applied in the case
$\displaystyle F(x) = \tfrac{1}{2}\|Ax-y\|^2,\qquad R(x) = \sum_i |x_i|^p.$
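A minimal sketch of this generalized gradient projection for the ${\ell^p}$ case (random toy data; the prox is computed exactly as in Example~1 by comparing ${s=0}$ with the larger positive critical point); the objective values decrease monotonically for step sizes ${0<s<1/L}$, which matches the descent estimate proven next:

```python
import numpy as np
from scipy.optimize import brentq

p, alpha = 0.5, 0.1
rng = np.random.default_rng(2)
A = rng.standard_normal((20, 40))
b = rng.standard_normal(20)

def prox_scalar(v, t):
    """Global minimizer of 0.5*(s - v)^2 + t*|s|^p, as in Example 1:
    compare s = 0 with the larger positive critical point."""
    v_abs, sgn = abs(v), np.sign(v)
    s_min = (t * p * (1.0 - p)) ** (1.0 / (2.0 - p))    # minimum of g below
    g = lambda s: s + t * p * s ** (p - 1.0) - v_abs
    if g(s_min) >= 0.0:              # no nonzero critical points at all
        return 0.0
    s = brentq(g, s_min, v_abs)      # the nonzero local minimum
    return sgn * s if 0.5 * (s - v_abs) ** 2 + t * s ** p < 0.5 * v_abs ** 2 else 0.0

L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient of F
step = 0.9 / L                       # any 0 < s < 1/L guarantees descent
obj = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + alpha * np.sum(np.abs(x) ** p)

x = np.zeros(40)
vals = [obj(x)]
for _ in range(100):
    v = x - step * A.T @ (A @ x - b)                          # forward step
    x = np.array([prox_scalar(vi, step * alpha) for vi in v])  # backward step
    vals.append(obj(x))
```

Of course, monotone descent says nothing about global optimality here; the limit is only a fixed point of the iteration, exactly as discussed below.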
One easily proves that one gets descent of the objective functional:
Lemma 1 Let ${F}$ be weakly lower semicontinuous and differentiable with Lipschitz continuous gradient ${\nabla F}$ with Lipschitz constant ${L}$ and let ${R}$ be weakly lower semicontinuous and coercive. Furthermore let ${P_s(x)}$ denote any solution of
$\displaystyle \min_y \tfrac{1}{2}\|y-x\|^2 + s\alpha R(y).$
Then for ${y = P_s(x - s\nabla F(x))}$ it holds that
$\displaystyle F(y) + \alpha R(y) \leq F(x) + \alpha R(x) - \tfrac{1}{2}\big(\tfrac{1}{s} - L\big)\|y-x\|^2.$
Proof: Use the minimizing property of ${y}$, namely

$\displaystyle \tfrac{1}{2}\|y - (x- s\nabla F(x))\|^2 + s\alpha R(y) \leq \tfrac{1}{2}\|s\nabla F(x)\|^2 + s\alpha R(x).$
and conclude (by rearranging, expanding the norm-square, canceling terms and adding ${F(y) - F(x)}$ to both sides) that
$\displaystyle (F+\alpha R)(y) - (F+\alpha R)(x) \leq F(y) - F(x) - \langle \nabla F(x),y-x\rangle - \tfrac{1}{2s}\|y-x\|^2.$
Finally, use Lipschitz-continuity of ${\nabla F}$ to conclude
$\displaystyle F(y) - F(x) - \langle \nabla F(x),y-x\rangle \leq \tfrac{L}{2}\|x-y\|^2.$
$\Box$
This gives descent of the functional value as long as ${0< s < 1/L}$. Now starts the hard part of the investigation: Under what circumstances do we get convergence and what are possible limits?
To make a long story short: For ${\ell^p}$-penalties (and also other non-convex penalties which leave the origin with infinite slope) and fixed step-size ${s_n=s}$ one gets that every subsequence of the iterates has a strong accumulation point which is a fixed point of the iteration for that particular ${s}$ as long as ${0< s< 1/L}$. Well that’s good, but here’s the bad news: As long as ${s<1/L}$ we do not obtain the global minimizer. That’s for sure: Consider ${F(x) = \tfrac{1}{2}\|x-b\|^2}$ and any ${0<s<1}$…
However, with considerably more effort one can show the following: For the iteration ${x^{n+1} = P_{s_n}(x^n - s_n \nabla F(x^n))}$ with ${s_n = (L + 1/n)^{-1}\rightarrow 1/L}$ (and another technical condition on the Lipschitz constant of ${\nabla F}$) the iterates have a strong accumulation point which satisfies ${x = P_{1/L}(x - \tfrac{1}{L}\nabla F(x))}$ and hence fulfills necessary conditions for a global minimizer.
That’s not too bad yet. Currently Kristian and I, together with Stefan Reiterer, work to show that the whole sequence of iterates converges. Funny enough: This seems to be true for ${F(x) = \tfrac{1}{2}\|Ax-b\|^2}$ and ${R(x) = \sum_i |x_i|^p}$ with rational ${p}$ in ${]0,1[}$… Basically, Stefan was able to show this with the help of Gröbner bases and this has been my first contact with this nice theory. We hope to finalize our revision soon.
http://physics.stackexchange.com/questions/10220/can-the-speed-of-light-become-complex-inside-a-metamaterial/10900
# Can the speed of light become complex inside a metamaterial?
The speed of light in a material is defined as $c = \frac{1}{\sqrt{\epsilon \mu}}$. There are metamaterials with negative permittivity $\epsilon < 0$ and permeability $\mu < 0$ at the same time. This leads to a negative refractive index of these materials.
But do (meta-) materials exist with only negative $\epsilon < 0$ and positive $\mu > 0$ or vice versa? This would lead to a complex speed of light inside such materials.
What would be the consequences of a complex speed of light? Could particles reach unlimited speed inside these materials? Would there still be Cherenkov radiation?
-
I don't know about metamaterials, but it seems to me that if $\epsilon\mu<0$, it would mean that the phase velocity was purely imaginary. That would mean that waves in the material would die away exponentially rather than oscillating. The solutions to the wave equation are of the form $e^{i(kx-\omega t)}$ with $\omega=ck$. If $c$ is imaginary, then $k=i\kappa$ is imaginary for real $\omega$, and the solution looks like $e^{\pm\kappa x-i\omega t}$. The physically useful solution in these cases is the decaying one. – Ted Bunn May 21 '11 at 21:25
Keep in mind that the maximum speed of a particle, even in a material, is 299792458 m/s, regardless of how light behaves in the material. So particles wouldn't be able to reach unlimited speed even if the speed of light did become complex. – David Zaslavsky♦ May 22 '11 at 2:51
@Ted Bunn Within/close to absorption bands/lines n can become < 1, but difficult to observe due to absorption. "Anomalous Dispersion" In the far IR or Microwave band paramagnetic substances have "bands", eg ruby (+ magnetic field) is used in circulators and similar devices. – Georg May 22 '11 at 11:06
@Ted Bunn If waves in the material die away exponentially, would that imply that also Cherenkov radiation is damped exponentially? So I guess one couldn't measure the speed of a fast charged particle in such a material based on the emitted Cherenkov radiation? – asmaier May 29 '11 at 10:26
## 3 Answers
Complex quantities always denote loss, so if the velocity is imaginary, it is impossible for a wave to travel from one point to another. If you look at the Drude model, for frequencies above the plasma frequency the signal passes, so the material behaves like a dielectric; for frequencies lower than the plasma frequency it behaves like a metal where no transmission is possible, the permittivity is less than zero, and the velocity of the wave is imaginary.
So, in my opinion, imaginary velocity means no transmission.
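A quick numerical check of this (assuming the plane-wave ansatz $e^{i(kx-\omega t)}$ and units with $\epsilon_0=\mu_0=1$; the parameter values are arbitrary):

```python
import numpy as np

# Assumed material parameters: eps < 0, mu > 0, in units with eps0 = mu0 = 1.
eps, mu, omega = -2.0, 1.0, 1.0

k = omega * np.sqrt(eps * mu + 0j)      # dispersion relation k = omega*sqrt(eps*mu)
# The square root of a negative real is purely imaginary, so the plane wave
# e^{i(kx - wt)} does not oscillate in x but decays exponentially:
x = np.linspace(0.0, 10.0, 100)
amplitude = np.abs(np.exp(1j * k * x))  # equals e^{-Im(k) * x}
```

The amplitude is strictly decreasing in $x$, i.e. the wave is evanescent rather than propagating.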
-
Technically, all of these materials will have (effective) complex dielectric and/or magnetic properties. Thus, you'll be dealing with complex wave speeds.
Complex speed means lossy transmission. The amplitude of the wave will decay with a decay constant that is inversely proportional to the imaginary part of the square root of the speed.
Note that if this term is small, or the material is thin enough, then you can have transmission.
-
Since there's only one square root, one would end up with an imaginary velocity rather than the more general case of a complex velocity.
In physics, one occasionally deals with complex energies. I don't know what imaginary velocities would mean and there are not many references in the literature. One paper is "Velocities" in Quantum Mechanics
-
– Qmechanic♦ Jun 8 '11 at 11:47
Yes, I'll modify my answer. (Embarrassingly, I took plasma classes in college.) – Carl Brannen Jun 8 '11 at 22:08
http://letterstonature.wordpress.com/category/statistics-and-metrics/
# Letters to Nature
## How to win at the races
I’ve rambled about this before, but with the Melbourne Cup (“the race that stops a nation”) a few days away and Tom Waterhouse’s annoying face on TV too often, it’s worth repeating.
Don’t bet on the horse you think will win!
More precisely, don’t necessarily bet on the horse you think will win. Here is the only betting system that works:
1. For each horse in the race, and before you look at the price offered by the bookmaker, write what you think the probability (as a percentage) is that the horse will win. I.e. if the race was run 100 times, how many times would this horse win? You’ll have to do your homework on the field.
2. For each horse, take your probability and multiply it by the bookmakers price. Call that the magic number.
3. If any of the horses have a magic number greater than 100, bet on the horse with the highest magic number.
4. If none of the horses have a magic number greater than 100, don’t bet. Go home.
The magic number is how much (on average) you would make if you bet $1 on the horse 100 times, so it had better be more than 100. The way the bookmaker guarantees a profit in the long run is to ensure that no magic numbers are greater than 100. Because of the bookmaker's slice (the overround), the odds are stacked against the average punter. You will only end up with a magic number greater than 100 if either you have made a mistake in step 1, or the bookmaker has made a mistake on his price. This leads to the following advice.
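The recipe above amounts to a one-line expected-value calculation; here is a minimal sketch (the horse names, probabilities and prices are invented for illustration):

```python
def magic_numbers(estimates):
    """For each horse, multiply your win probability (as a percentage)
    by the bookmaker's decimal price. Bet only if the best exceeds 100."""
    return {horse: prob_pct * price for horse, (prob_pct, price) in estimates.items()}

def decide_bet(estimates):
    magic = magic_numbers(estimates)
    best = max(magic, key=magic.get)
    return best if magic[best] > 100 else None  # None means: go home

# Our probabilities (%) and the bookmaker's decimal prices (made-up numbers)
field = {"Shergar": (30, 3.0), "Phar Lap": (25, 4.5), "Red Rum": (10, 8.0)}
print(decide_bet(field))  # Phar Lap: 25 * 4.5 = 112.5 > 100
```

Note that the favourite by our own probabilities (Shergar, at 30%) is not the horse to back: its magic number is only 90.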
You should only bet on a horse if
a) You know more than the bookmaker, and
b) The bookmaker has significantly underestimated one of the horses.
Thus, the better the bookmaker, the more reason not to bet. And so, we come to Tom Waterhouse’s online betting business:
“I’ve got four generations of betting knowledge in my blood. … Bet with me, and that knowledge can be yours.”
This is exactly the information you need to conclude that you should never bet with Tom Waterhouse. The ad might as well say "bet with me; I know how to take your money". You don’t want a bookmaker who knows horse racing inside and out, from horse racing stock, armed with all the facts, knowing all the right people. You don’t want a professional in a sharp suit surrounded by analysts at computer screens. You want an idiot. You want someone who doesn’t know which end of the horse is the front, armed with a broken abacus and basing his prices on a combination of tea-leaf-reading, a lucky 8-ball and "the vibe". You want a bookmaker that is going out of business.
The more successful the bookmaker, the further you should stay away. The TAB was established in 1964, has over a million customers and 2,500 retail outlets, and made a profit of $534.8 million in 2011, up 14%. Translation: never bet with the TAB. Betfair’s profits were $600 million; SportingBet made $2 billion in 2009. With those resources, they’ll always know more than you. If you’ve heard of them, don’t bet with them. Go home.
Hopefully you’re getting my point. Don’t bet on sports. If you go to the races, put on a nice outfit, drink a few beers and give the money to charity. If you must bet, have a random sweepstakes with your friends. You’ll get much better odds that way.
Read Full Post »
## Coincidences and the Lottery
Coincidences happen surprisingly often. Yet they are often not meaningful, i.e. they are “just a coincidence” and do not imply that we should change our worldview. For example, suppose there are a million people in contention for a lottery, and John Smith is found to win. Before knowing this, our probability for it is $10^{-6}$:
$P(\textnormal{John Smith wins} | \textnormal{fair lottery}) = 10^{-6}$
People often balk at this tiny probability, and proclaim something like "it's not the probability of John Smith winning the lottery that is relevant, but the probability that someone wins". However, this is anti-Bayesian nonsense. This tiny probability is, by Bayes' rule, relevant to the posterior probability of $\textnormal{fair lottery}$. So how is it that we often still believe in the fair lottery (or that a coincidence is not meaningful)?
The answer is quite simple: the likelihood for the alternative, $\textnormal{unfair lottery}$ hypothesis, is just as small:
$P(\textnormal{John Smith wins} | \textnormal{unfair lottery}) = 10^{-6}$.
The reason is that before we knew who won, we had no reason to single out John Smith, and had to spread the total probability (1) over a million minus one alternatives (that the lottery was rigged in favor of one of the other entrants). Using analogous reasoning, yes, coincidences have tiny probability, but they also have tiny probability given the hypothesis of a mysterious force operating, because before the coincidence happened we didn’t know which of the multitude of coincidences were going to occur.
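The cancellation can be made explicit with a toy two-hypothesis Bayes update (a sketch; the 0.99 prior is an arbitrary illustration):

```python
# Posterior probability that the lottery is fair, after seeing John Smith win.
# Both likelihoods are 1e-6: under "unfair" we must spread our belief over
# roughly a million possible beneficiaries before knowing the winner, so
# the evidence cancels and the prior survives essentially unchanged.
def posterior_fair(prior_fair, n_entrants=10**6):
    like_fair = 1 / n_entrants
    like_unfair = 1 / n_entrants  # no reason to single out John beforehand
    prior_unfair = 1 - prior_fair
    num = like_fair * prior_fair
    return num / (num + like_unfair * prior_unfair)

print(posterior_fair(0.99))  # 0.99 -- the win moves our belief not at all
```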
For more on this topic, you may be interested in this paper (by myself and Matt).
Read Full Post »
## Conjecture of the evening
Posted in Amusing, Creativity, Mathematics, Statistics and Metrics, tagged h-index, polygons on November 15, 2010 | Leave a Comment »
Especially for Cusp, I note the following (proof left for undergraduates):
(Convex h-index conjecture) For n chronologically distinct papers, each of which cites all previous papers, the corresponding h-index is the number of non-congruent diagonals in a regular polygon with number of sides 2 greater than n.
As a corollary, academics engaging in such cheeky behaviour may be indexed with the dimension of their corresponding polygon.
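For the lazier undergraduate, the conjecture also survives a brute-force check; a minimal sketch, assuming the standard count of ⌊m/2⌋ − 1 non-congruent diagonals in a regular m-gon:

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(ranked, 1) if c >= i)

def noncongruent_diagonals(m):
    # A regular m-gon has floor(m/2) - 1 diagonal lengths up to congruence.
    return m // 2 - 1

# n chronologically distinct papers, each citing all previous papers:
# paper i is cited by the n - i papers that follow it.
for n in range(1, 50):
    cites = [n - i for i in range(1, n + 1)]
    assert h_index(cites) == noncongruent_diagonals(n + 2)
print("conjecture holds for n = 1..49")
```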
Read Full Post »
## Surprising Statistic of the Day
Posted in Statistics and Metrics, tagged alcohol, crime, statistics, violence on October 16, 2010 | Leave a Comment »
From the Sydney Morning Herald:
Alcohol plays a role in 50 to 60 per cent of the nearly 300,000 criminal cases that come before the state’s Local Courts each year, [New South Wales] Chief Magistrate Graeme Henson said.
That’s about twice as high as I’d have guessed. I tried to track down the source of this statistic, but the closest I could find was a report called “Alcohol related crime for each NSW Local Government Area: Numbers, proportions, rates, trends and ratios” from the NSW Bureau of Crime Statistics and Research. The report gives the percentage of “incidents of non-domestic violence related assault recorded by NSW Police” that are alcohol related as 45%.
I’d love to know what that number is for the United Kingdom, as well as European countries like France or Germany who seem to have an alcohol culture without having as much of a binge drinking culture. I’d expect that the percentage of alcohol related crime was lower for the UK and even lower for most of Europe. I’ll try to track those down.
As to what should be done about the problem, I have no idea. Perhaps nothing – it may be a correlation without causation. Perhaps it's an alpha male thing: put too many young men in a nightclub with available women and testosterone will cause friction. The alcohol just happened to be there as well. On the other hand, the anecdotal evidence that certain people are more likely to "kick off after having a few" is well known.
Read Full Post »
## A Tale of Two Entropies
Posted in logic, Statistics and Metrics on July 14, 2010 | 2 Comments »
For those of us who work with degree-of-plausibility ("Bayesian") probabilities, two situations regularly arise. The first is the need to update probabilities to take into account new information. This is usually done using Bayes' Rule, when the information comes in the form of a proposition that is known to be true. An example of such a proposition is "The data are 3.444, 7.634, 1.227".
More generally, information is any justified constraint on our probabilities. For example, "P(x > 3) should be 0.75" is information. If our current probability distribution $q(x)$ doesn't satisfy the constraint, then we had better change to a new distribution $p(x)$ that does. This doesn't mean that any old $p(x)$ will do – our $q(x)$ contained hard-won information and we want to preserve that. To proceed, we choose the $p(x)$ that is as close as possible to $q(x)$ but satisfies the constraint. Various quite persuasive arguments (see here) suggest that the correct notion of closeness, which we should maximise, is the relative entropy:
$H(p; q) = -\int p(x) \log \frac{p(x)}{q(x)} dx$
With no constraints, the best possible $p(x)$ is equal to $q(x)$.
Another situation that arises often is the need to simplify complex problems. For example, we might have some probability distribution $q(x)$ that is non-Gaussian, but for some reason we only want to use Gaussians for the rest of the calculation, perhaps for presentation or computational reasons. Which Gaussian should we choose to become our $p(x)$? Many people recommend maximising the relative entropy for this also: in the literature, this is known as a variational approximation, variational Bayes, or the Bogoliubov approximation (there are also variations (pun not intended) on this theme).
There are known problems with this technique. For instance, as David MacKay notes, the resulting probability distribution $p(x)$ is usually narrower than the original $q(x)$. This makes sense, since the variational approximation basically amounts to pretending you have information that you don’t actually have. This issue raises the question of whether there is something better that we could do.
I suggest that the correct functional to maximise in the case of approximating one distribution by another is actually the relative entropy, but with the two distributions reversed:
$H(q; p) = -\int q(x) \log \frac{q(x)}{p(x)} dx$
Why? Well, for one, it just works better in extreme examples I’ve concocted to magnify (a la Ed Jaynes) the differences between using $H(p; q)$ and $H(q; p)$. See the figure below:
If the blue distribution represented your actual state of knowledge, but out of necessity you could only use the red or the green distribution, which would you prefer? I find it very hard to imagine an argument that would make me choose the red distribution over the green. Another argument supporting the use of this ‘reversed’ entropy is that it is equivalent to generating a large number of samples from q, and then doing a maximum likelihood fit of p to these samples. I know maximum likelihood isn’t the best, most principled thing in the world, but in the limit of a large number of samples it’s pretty hard to argue with.
A further example supporting the 'reversed' entropy is what happens if $q(x)$ is zero at some points. According to the regular entropy, any distribution $p(x)$ that is nonzero where $q(x)$ is zero is infinitely bad. I don't think that's true in the case of approximations – some leakage of probability to values we know are impossible is no catastrophe. This is manifestly different from the case where we have legitimate information – if $q(x)$ is zero somewhere then of course we want $p(x)$ to be zero there as well. If we're updating probabilities, we're trying to narrow down the possibilities, and resurrecting some is certainly unwarranted – but the goal in doing an approximation is different.
Maximising the reversed entropy also has some pretty neat properties. If the approximating distribution is a Gaussian, then the first and second moments should be chosen to match the moments of $q(x)$. If the original distribution is over many variables, but you want to approximate it by a distribution where the variables are all independent, just take all of the marginal distributions and product them together, and there’s your optimal approximation.
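The moment-matching property is easy to verify numerically; a minimal sketch using a made-up bimodal q(x) (the particular mixture is an arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A bimodal q(x): equal mixture of N(-3, 1) and N(3, 1).
samples = np.concatenate([rng.normal(-3, 1, 50000), rng.normal(3, 1, 50000)])

# Maximising the 'reversed' relative entropy H(q; p) over Gaussians p
# is equivalent to moment matching: set p's mean and variance to q's.
mu, var = samples.mean(), samples.var()
print(mu, var)  # near 0 and 10 (between-mode variance 9 + within-mode 1)

# A variational fit maximising H(p; q) would instead lock onto one mode
# (mean near +/-3, variance near 1), ignoring half the probability mass --
# the narrow red distribution of the figure, rather than the broad green one.
```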
If $H(p; q)$ isn’t the best thing to use for approximations, that means that something in the derivation of $H(p; q)$ applies to legitimate information but does not apply to approximations. Most of the axioms (coordinate independence, consistency for independent systems, etc) make sense, and both entropies discussed in this post satisfy those. It is only at the very end of the derivation that the reversed entropy is ruled out, and by some pretty esoteric arguments that I admit I don’t fully understand. I think the examples I’ve presented in this post are suggestive enough that there is room here for a proof that the reversed entropy $H(q; p)$ is the thing to use for approximations. This means that maximum relative entropy is a little less than universal, but that’s okay – the optimal solutions to different problems are allowed to be different!
Read Full Post »
## Homogeneity, features
Posted in Statistics and Metrics, The Universe, tagged gammaLambda, homogeneity, integral constraint on March 20, 2010 | Leave a Comment »
Yesterday I read a few of the recent papers of Francesco Sylos Labini, who has pursued a distinction between the common or garden type statistical homogeneity in the Universe that one reads about in textbooks, and a stronger form (‘super-homogeneity’) in which the mass fluctuations follow a behaviour that is sub-Poisson as a function of scale. This implies a sort of anti-correlation—a lattice of points is, for instance, sub-Poisson, as the points are deliberately avoiding one another—and has consequences for the form of the two-point correlation function:
$\int \xi(r) d^3 r = 0$
that look remarkably similar to those imposed by the integral constraint, but which are, in fact, quite different—the super-homogeneity condition affects the actual correlation function, while the correction usually referred to as the integral constraint affects estimators of the correlation function. I started writing a summary document on this topic for the reference of myself and others.
After DARK’s infamous $\gamma\Lambda$ session, I hit a sweet spot in coding productivity and wrote a bunch of scripts to extract spatial features from galaxy images, along lines suggested to me a week or so ago by Andrew Zirm. These features are extracted from a matrix that encodes the frequency of adjacency between threshold intensity levels in the image. It’s the sort of thing best shown with pictures, which perhaps I can post once Andrew has decided which direction to pursue next.
Read Full Post »
## WMAP 7 cosmological parameter set
Posted in Statistics and Metrics, The Universe on January 29, 2010 | 1 Comment »
Your Universe ca. 2010, per the WMAP+BAO+H0[1] maximum likelihood parameter set:
| Parameter | Symbol | WMAP+BAO+H0 ML | Derived | Value |
|---|---|---|---|---|
| Hubble parameter | h | 0.702 | H0 | 70.2 km/s/Mpc |
| Dark matter density | Ωch² | 0.1120 | Ωc | 0.227 |
| Baryonic matter density | Ωbh² | 0.02246 | Ωb | 0.0455 |
| Total matter density | Ωmh² | 0.1344 | Ωm | 0.272 |
| Vacuum tension[2] | ΩΛ | 0.728 | | |
| Amplitude of curvature perturbation at k = 0.002/Mpc | Δ²R | 2.45 × 10⁻⁹ | | |
| Spectral index of density perturbations | ns | 0.961 | | |
| Size of linear density fluctuation at 8 Mpc/h | σ8 | 0.807 | | |
| Redshift of matter–radiation equality | zeq | 3196 | | |
| Age of the Universe | t0 | 13.78 Gyr | | |
Parameters fit directly from the data are shown in a slightly different colour; all the others have been derived from the fit parameters using the usual definitions. The determination of zeq is carried out using the WMAP 7-year data on its own. The two papers in which these figures are given are:
Larson et al. (2010), arXiv:1001.4635
Komatsu et al. (2010), arXiv:1001.4538
These papers contain many other numbers: in particular, for extensions to ΛCDM cosmology, such as neutrino species, non-zero spatial curvature and dark energy that is not the cosmological constant. I expect some of the parameters mentioned there and not here – particularly the fNL statistics of non-Gaussianity – to gain more public attention in the next decade as observations begin to determine the properties of the cosmological inflation that occurred in the very early Universe.
A final note: I’ve written this post only because these numbers are not written on an actual webpage—they are all in pdf or postscript files. But, it also gives me a chance to congratulate the WMAP team on their ongoing achievement.
Footnotes
1. Riess, A. et al. (2009), ApJ 699 539, arXiv:0905.0695
2. Dark energy, or, as assumed here, the cosmological constant.
Read Full Post »
## Genus analogues
Posted in Mathematics, Physics, Statistics and Metrics, The Universe on January 26, 2010 | 6 Comments »
N.b. This is a technical post, written to illustrate a question I believe to be interesting to some colleagues outside my particular discipline. I am acutely aware of its shortcomings as expository work, and pedagogical criticism is almost as welcome as an attempt to engage with the question at hand.
(more…)
Read Full Post »
## Small coincidences
Posted in Science, Statistics and Metrics, The Universe on January 16, 2010 | Leave a Comment »
It’s a new decade, & I’m well rested after a week locked inside the Iberostar Resortcatraz1, so there is no better time for a rejuvenation of the compact between blogger and blogosphere – the mathematical space of readers and writers of blogs.
But I’ll ease myself back in with a trivium amusing to perhaps one person only. As we know, I enjoy Andrew Sullivan’s writing, & one gimmick of his blog is the View From Your Window2 snapshot series. I enjoy looking at all these different images, but am yet to bother sending anything in because of a well-known aversion to using cameras myself (Luke is much, much better at that sort of thing). But wait: here is one from Jan 8:
Copenhagen, Denmark, around ten in the morning on January 8, 2010.
Awww. It’s nice to know that there are at least two people in Denmark reading Andrew’s blog. The round tower in the background is none other than the Round Tower of Tycho Brahe, who built it after he was voted off the island where his more famous observations were made. The inside is an ascending spiral—not of steps, but a smooth cobbled road, apparently so that the astronomical instruments could be carted up by horse. It’s very interesting!
I left Copenhagen two days after this photo was taken. I’ve had a nice time here is Mexico, although I’ve been mostly cut off from that taproot of Western Civilization you and I call the Internet. My flight into SFO is in a few hours—and what do I find today on Andrew’s blog?
Berkeley Hills, California, a bit before five in the afternoon; I'm on to you, Sullivan!
So, I welcome the Internet back to my inertial frame. I’ll be staying at a place in these very same Berkeley Hills for the next little while and working in the Astronomy Department at UCB. May the bright colours of a new place forming its first impression on the mind provide much for me to write about.
1. I would have thrown myself into the carnivorous tortoise pen after the first day had there not been a cosmology conference to attend—and it was a very good meeting indeed. So a shout out to everyone who made it along. As per the request of the organisers, I draw everyone’s attention to it. There’ll be another one next year!
2. Andrew, Patrick & Chris have put together a book of these windows, selected by the readership from the many, many photos that have been sent in since 2005.
Read Full Post »
## Live Blogging: 20/20 Australia v. South Africa Match 2
Posted in Sport, Statistics and Metrics on January 13, 2009 | 2 Comments »
Well that was just as enjoyable as the first game and it was nice to have Brendon along for a small part of it. Perhaps a bit less statistics than I’d intended, and certainly less on bowling after I concluded that bowling average is pretty good just the way it is. I’m convinced, though, of the need for a much better measure of immediate batting performance for a team than run rate. Expect further advocacy of this idea in the future.
(more…)
Read Full Post »
Older Posts »
http://en.wikipedia.org/wiki/Weighing
# Weight
(Redirected from Weighing)
This page is about the physical concept. In law, commerce, and in colloquial usage weight may also refer to mass. For other uses see weight (disambiguation).
A spring scale measures the weight of an object.
SI unit: newton (N)
Derivations from other quantities: W = m · g
In science and engineering, the weight of an object is usually taken to be the force on the object due to gravity.[1][2] Its magnitude (a scalar quantity), often denoted by an italic letter W, is the product of the mass m of the object and the magnitude of the local gravitational acceleration g;[3] thus: W = mg. The terms weight and mass are often confused with each other in everyday discourse, but they are distinct quantities.[4] The unit of measurement for weight is that of force, which in the International System of Units (SI) is the newton. For example, an object with a mass of one kilogram has a weight of about 9.8 newtons on the surface of the Earth, and about one-sixth as much on the Moon. In this sense of weight, a body can be weightless only if it is far away from any gravitating mass. There is also a rival tradition within Newtonian physics and engineering which sees weight as that which is measured when one uses scales. There the weight is a measure of the magnitude of the reaction force exerted on a body. Typically, in measuring someone's weight, the person is placed on scales at rest with respect to the earth, but the definition can be extended to other states of motion. Thus in a state of free fall, the weight would be zero. In this second sense of weight, terrestrial objects can be weightless: ignoring air resistance, an apple on its way to meet Newton's head is weightless.
Further complications in elucidating the various concepts of weight have to do with the theory of relativity, according to which gravity is reduced to a space-time curvature. In the teaching community, a considerable debate has existed for over half a century on how to define weight for students. The current situation is that multiple concepts co-exist and find use in their various contexts.[2]
## History
Ancient Greek official bronze weights dating from around the 6th century BC, exhibited in the Ancient Agora Museum in Athens, housed in the Stoa of Attalos.
Weighing grain, from the Babur-namah
Discussion of the concepts of heaviness (weight) and lightness (levity) date back to the ancient Greek philosophers. These were typically viewed as inherent properties of objects. Plato described weight as the natural tendency of objects to seek their kin. To Aristotle weight and levity represented the tendency to restore the natural order of the basic elements: air, earth, fire and water. He ascribed absolute weight to earth and absolute levity to fire. Archimedes saw weight as a quality opposed to buoyancy, with the conflict between the two determining if an object sinks or floats. The first operational definition of weight was given by Euclid, who defined weight as: "weight is the heaviness or lightness of one thing, compared to another, as measured by a balance."[2] Operational balances (rather than definitions) had, however, been around much longer.[5]
According to Aristotle, weight was the direct cause of the falling motion of an object: the speed of the falling object was supposed to be directly proportional to the weight of the object. As medieval scholars discovered that in practice the speed of a falling object increased with time, this prompted a change to the concept of weight to maintain this cause-effect relationship. Weight was split into a "still weight" or pondus, which remained constant, and the actual gravity or gravitas, which changed as the object fell. The concept of gravitas was eventually replaced by Jean Buridan's impetus, a precursor to momentum.[2]
The rise of the Copernican view of the world led to the resurgence of the Platonic idea that like objects attract but in the context of heavenly bodies. In the 17th century, Galileo made significant advances in the concept of weight. He proposed a way to measure the difference between the weight of a moving object and an object at rest. Ultimately, he concluded weight was proportionate to the amount of matter of an object, and not the speed of motion as supposed by the Aristotelean view of physics.[2]
### Newton
The introduction of Newton's laws of motion and the development of Newton's law of universal gravitation led to considerable further development of the concept of weight. Weight became fundamentally separate from mass. Mass was identified as a fundamental property of objects connected to their inertia, while weight became identified with the force of gravity on an object and therefore dependent on the context of the object. In particular, Newton considered weight to be relative to another object causing the gravitational pull, e.g. the weight of the Earth towards the Sun.[2]
Newton considered time and space to be absolute. This allowed him to consider concepts as true position and true velocity.[clarification needed] Newton also recognized that weight as measured by the action of weighing was affected by environmental factors such as buoyancy. He considered this a false weight induced by imperfect measurement conditions, for which he introduced the term apparent weight as compared to the true weight defined by gravity.[2]
Although Newtonian physics made a clear distinction between weight and mass, the term weight continued to be commonly used when people meant mass. This led the 3rd General Conference on Weights and Measures (CGPM) of 1901 to officially declare "The word weight denotes a quantity of the same nature as a force: the weight of a body is the product of its mass and the acceleration due to gravity", thus distinguishing it from mass for official usage.
### Relativity
In the 20th century, the Newtonian concepts of absolute time and space were challenged by relativity. Einstein's principle of equivalence put all observers, moving or accelerating, on the same footing. This led to an ambiguity as to what exactly is meant by the force of gravity and weight. A scale in an accelerating elevator cannot be distinguished from a scale in a gravitational field. Gravitational force and weight thereby became essentially frame-dependent quantities. This prompted the abandonment of the concept as superfluous in the fundamental sciences such as physics and chemistry. Nonetheless, the concept remained important in the teaching of physics. The ambiguities introduced by relativity led, starting in the 1960s, to considerable debate in the teaching community as to how to define weight for students, choosing between a nominal definition of weight as the force due to gravity and an operational definition defined by the act of weighing.[2]
## Definitions
This top-fuel dragster can accelerate from zero to 160 kilometres per hour (99 mph) in 0.86 seconds. This is a horizontal acceleration of 5.3 g. Combined with the vertical g-force in the stationary case the Pythagorean theorem yields a g-force of 5.4 g. It is this g-force that causes the driver's weight if one uses the operational definition. If one uses the gravitational definition, the driver's weight is unchanged by the motion of the car.
Several definitions exist for weight, not all of which are equivalent.[3][6][7][8]
### Gravitational definition
The most common definition of weight found in introductory physics textbooks defines weight as the force exerted on a body by gravity.[1][8] This is often expressed in the formula W = mg, where W is the weight, m the mass of the object, and g gravitational acceleration.
In 1901, the 3rd General Conference on Weights and Measures (CGPM) established this as their official definition of weight:
"The word weight denotes a quantity of the same nature[Note 1] as a force: the weight of a body is the product of its mass and the acceleration due to gravity."
— Resolution 2 of the 3rd General Conference on Weights and Measures[10][11]
This resolution defines weight as a vector, since force is a vector quantity. However, some textbooks also take weight to be a scalar by defining:
"The weight W of a body is equal to the magnitude Fg of the gravitational force on the body."[12]
The gravitational acceleration varies from place to place. Sometimes, it is simply taken to have a standard value of 9.80665 m/s2, which gives the standard weight.[10]
The force whose magnitude is equal to mg newtons is also known as the m kilogram weight (which term is abbreviated to kg-wt)[13]
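A minimal numerical illustration of the gravitational definition W = mg (the lunar value of g is approximate):

```python
# Weight under the gravitational definition, W = m * g.
def weight(mass_kg, g=9.80665):  # standard gravity in m/s^2
    return mass_kg * g  # newtons

print(weight(1.0))           # ~9.81 N: a 1 kg mass on the Earth's surface
print(weight(1.0, g=1.62))   # ~1.62 N: the same mass on the Moon, about 1/6
```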
Measuring weight versus mass
Left: A spring scale measures weight, by seeing how much the object pushes on a spring (inside the device). On the Moon, an object would give a lower reading. Right: A balance scale measures mass,[dubious ] by comparing an object to references. On the Moon, an object would give the same reading, because the object and references would both become lighter.
### Operational definition
In the operational definition, the weight of an object is the force measured by the operation of weighing it, which is the force it exerts on its support.[6] This can make a considerable difference, depending on the details; for example, an object in free fall exerts little if any force on its support, a situation that is commonly referred to as weightlessness. However, being in free fall does not affect the weight according to the gravitational definition. Therefore, the operational definition is sometimes refined by requiring that the object be at rest.[citation needed] However, this raises the issue of defining "at rest" (usually being at rest with respect to the Earth is implied by using standard gravity[citation needed]). In the operational definition, the weight of an object at rest on the surface of the Earth is lessened by the effect of the centrifugal force from the Earth's rotation.
The operational definition, as usually given, does not explicitly exclude the effects of buoyancy, which reduces the measured weight of an object when it is immersed in a fluid such as air or water. As a result, a floating balloon or an object floating in water might be said to have zero weight.
### ISO definition
In the ISO International standard ISO 80000-4(2006),[14] describing the basic physical quantities and units in mechanics as a part of the International standard ISO/IEC 80000, the definition of weight is given as:
Definition
$F_g = mg$,
where m is mass and g is local acceleration of free fall.
Remarks
• It should be noted that, when the reference frame is Earth, this quantity comprises not only the local gravitational force, but also the local centrifugal force due to the rotation of the Earth, a force which varies with latitude.
• The effect of atmospheric buoyancy is excluded in the weight.
• In common parlance, the name "weight" continues to be used where "mass" is meant, but this practice is deprecated.
— ISO 80000-4 (2006)
The definition is dependent on the chosen frame of reference. When the chosen frame is co-moving with the object in question then this definition precisely agrees with the operational definition.[7] If the specified frame is the surface of the Earth, the weight according to the ISO and gravitational definitions differ only by the centrifugal effects due to the rotation of the Earth.
### Apparent weight
Main article: Apparent weight
In many real world situations the act of weighing may produce a result that differs from the ideal value provided by the definition used. This is usually referred to as the apparent weight of the object. A common example of this is the effect of buoyancy, when an object is immersed in a fluid the displacement of the fluid will cause an upward force on the object, making it appear lighter when weighed on a scale.[15] The apparent weight may be similarly affected by levitation and mechanical suspension. When the gravitational definition of weight is used, the operational weight measured by an accelerating scale is often also referred to as the apparent weight.[16]
## Weight and mass
A force diagram showing the forces acting on an object at rest on a surface. Notice that the amount of force that the table is pushing upward on the object (the N vector) is equal to the downward force of the object's weight (shown here as mg, as weight is equal to the object's mass multiplied by the acceleration due to gravity): because these forces are equal, the object is in a state of equilibrium (all the forces acting on it balance to zero).
Main article: Mass versus weight
In modern scientific usage, weight and mass are fundamentally different quantities: mass is an "extrinsic" (extensive) property of matter, whereas weight is a force that results from the action of gravity on matter: it measures how strongly the force of gravity pulls on that matter. However, in most practical everyday situations the word "weight" is used when, strictly, "mass" is meant.[4][17] For example, most people would say that an object "weighs one kilogram", even though the kilogram is a unit of mass.
The scientific distinction between mass and weight is unimportant for many practical purposes because the strength of gravity is almost the same everywhere on the surface of the Earth. In a uniform gravitational field, the gravitational force exerted on an object (its weight) is directly proportional to its mass. For example, if object A weighs 10 times as much as object B, then the mass of object A is 10 times that of object B. This means that an object's mass can be measured indirectly by its weight, and so, for everyday purposes, weighing (using a weighing scale) is an entirely acceptable way of measuring mass. Similarly, a balance measures mass indirectly by comparing the weight of the measured item to that of an object of known mass. Since the measured item and the comparison mass are in virtually the same location, and so experience the same gravitational field, the effect of varying gravity does not affect the comparison or the resulting measurement.
The Earth's gravitational field is not uniform but can vary by as much as 0.5%[18] at different locations on Earth (see Earth's gravity). These variations alter the relationship between weight and mass, and must be taken into account in high precision weight measurements that are intended to indirectly measure mass. Spring scales, which measure local weight, must be calibrated at the location at which the objects will be used to show this standard weight, to be legal for commerce.[citation needed]
This table shows the variation of acceleration due to gravity (and hence the variation of weight) at various locations on the Earth's surface.[19]
| Location | Latitude | m/s² |
|---|---|---|
| Equator | 0° | 9.7803 |
| Sydney | 33°52′ S | 9.7968 |
| Aberdeen | 57°9′ N | 9.8168 |
| North Pole | 90° N | 9.8322 |
The historic use of "weight" for "mass" also persists in some scientific terminology – for example, the chemical terms "atomic weight", "molecular weight", and "formula weight", can still be found rather than the preferred "atomic mass" etc.
In a different gravitational field, for example, on the surface of the Moon, an object can have a significantly different weight than on Earth. The gravity on the surface of the Moon is only about one-sixth as strong as on the surface of the Earth. A one-kilogram mass is still a one-kilogram mass (as mass is an extrinsic property of the object) but the downward force due to gravity, and therefore its weight, is only one-sixth of what the object would have on Earth. So a man of mass 180 pounds weighs only about 30 pounds-force when visiting the Moon.
### SI units
In most modern scientific work, physical quantities are measured in SI units. The SI unit of weight is the same as that of force: the newton (N) – a derived unit which can also be expressed in SI base units as kg·m/s2 (kilograms times meters per second squared).[17]
In commercial and everyday use, the term "weight" is usually used to mean mass, and the verb "to weigh" means "to determine the mass of" or "to have a mass of". Used in this sense, the proper SI unit is the kilogram (kg).[17]
### Pound and other non-SI units
In United States customary units, the pound can be either a unit of force or a unit of mass. Related units used in some distinct, separate subsystems of units include the poundal and the slug. The poundal is defined as the force necessary to accelerate an object of one-pound mass at 1 ft/s2, and is equivalent to about 1/32.2 of a pound-force. The slug is defined as the amount of mass that accelerates at 1 ft/s2 when one pound-force is exerted on it, and is equivalent to about 32.2 pounds (mass).
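As an illustration of these relationships, here is a small sketch of the conversion arithmetic (the constant name is an invention for this example; 32.174 ft/s² is standard gravity expressed in feet per second squared):

```python
# Standard gravity in ft/s^2 (9.80665 m/s^2 converted to feet).
G0_FT_S2 = 32.174

# One pound-force accelerates a one-pound mass at g0, while one poundal
# accelerates it at 1 ft/s^2, so:
poundals_per_lbf = G0_FT_S2        # 1 lbf is about 32.2 poundals
poundal_in_lbf = 1.0 / G0_FT_S2    # 1 poundal is about 1/32.2 lbf

# One slug accelerates at 1 ft/s^2 under one pound-force; a one-pound
# mass would accelerate at g0 under the same force, so:
slug_in_lb = G0_FT_S2              # 1 slug is about 32.2 lb (mass)
```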
The kilogram-force is a non-SI unit of force, defined as the force exerted by a one kilogram mass in standard Earth gravity (equal to 9.80665 newtons exactly). The dyne is the cgs unit of force and is not a part of SI, while weights measured in the cgs unit of mass, the gram, remain a part of SI.
## Sensation of weight
See also: Apparent weight
The sensation of weight is caused by the force exerted by fluids in the vestibular system, a three-dimensional set of tubes in the inner ear.[dubious ] It is actually the sensation of g-force, regardless of whether this is due to being stationary in the presence of gravity, or, if the person is in motion, the result of any other forces acting on the body such as in the case of acceleration or deceleration of a lift, or centrifugal forces when turning sharply.
## Measuring weight
Main article: Weighing scale
A weighbridge, used for weighing trucks
Weight is commonly measured using one of two methods. A spring scale or hydraulic or pneumatic scale measures local weight, the local force of gravity on the object (strictly apparent weight force). Since the local force of gravity can vary by up to 0.5% at different locations, spring scales will measure slightly different weights for the same object (the same mass) at different locations. To standardize weights, scales are always calibrated to read the weight an object would have at a nominal standard gravity of 9.80665 m/s2 (approx. 32.174 ft/s2). However, this calibration is done at the factory. When the scale is moved to another location on Earth, the force of gravity will be different, causing a slight error. So to be highly accurate, and legal for commerce, spring scales must be re-calibrated at the location at which they will be used.
A balance on the other hand, compares the weight of an unknown object in one scale pan to the weight of standard masses in the other, using a lever mechanism – a lever-balance. The standard masses are often referred to, non-technically, as "weights". Since any variations in gravity will act equally on the unknown and the known weights, a lever-balance will indicate the same value at any location on Earth. Therefore, balance "weights" are usually calibrated and marked in mass units, so the lever-balance measures mass by comparing the Earth's attraction on the unknown object and standard masses in the scale pans. In the absence of a gravitational field, away from planetary bodies (e.g. space), a lever-balance would not work, but on the Moon, for example, it would give the same reading as on Earth. Some balances can be marked in weight units, but since the weights are calibrated at the factory for standard gravity, the balance will measure standard weight, i.e. what the object would weigh at standard gravity, not the actual local force of gravity on the object.
If the actual force of gravity on the object is needed, this can be calculated by multiplying the mass measured by the balance by the acceleration due to gravity – either standard gravity (for everyday work) or the precise local gravity (for precision work). Tables of the gravitational acceleration at different locations can be found on the web.
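A minimal sketch of that multiplication, using the local g values tabulated earlier (the function name is illustrative, not a standard API):

```python
STANDARD_GRAVITY = 9.80665  # m/s^2, the conventional standard value

# Local gravitational accelerations (m/s^2) from the table above.
LOCAL_G = {
    "Equator": 9.7803,
    "Sydney": 9.7968,
    "Aberdeen": 9.8168,
    "North Pole": 9.8322,
}

def weight_newtons(mass_kg, g=STANDARD_GRAVITY):
    """Force of gravity, in newtons, on an object of the given mass."""
    return mass_kg * g

# A 10 kg mass weighs about 98.07 N at standard gravity, and roughly
# 0.5% less at the equator than at the pole.
w_equator = weight_newtons(10.0, LOCAL_G["Equator"])
w_pole = weight_newtons(10.0, LOCAL_G["North Pole"])
```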
Gross weight is a term that is generally found in commerce or trade applications, and refers to the total weight of a product and its packaging. Conversely, net weight refers to the weight of the product alone, discounting the weight of its container or packaging; and tare weight is the weight of the packaging alone.
## Relative weights on the Earth and other celestial bodies
Main articles: Earth's gravity and Surface gravity
The table below shows comparative gravitational accelerations at the surface of the Sun, the Moon, and each of the planets in the solar system. The "surface" is taken to mean the cloud tops of the gas giants (Jupiter, Saturn, Uranus and Neptune); for the Sun, it is taken to mean the photosphere. The values in the table have not been de-rated for the centrifugal effect of planet rotation (and cloud-top wind speeds for the gas giants) and therefore, generally speaking, are similar to the actual gravity that would be experienced near the poles.
| Body | Multiple of Earth gravity | Surface gravity (m/s²) |
|---|---|---|
| Sun | 27.90 | 274.1 |
| Mercury | 0.3770 | 3.703 |
| Venus | 0.9032 | 8.872 |
| Earth | 1 (by definition) | 9.8226 |
| Moon | 0.1655 | 1.625 |
| Mars | 0.3895 | 3.728 |
| Jupiter | 2.640 | 25.93 |
| Saturn | 1.139 | 11.19 |
| Uranus | 0.917 | 9.01 |
| Neptune | 1.148 | 11.28 |
## Notes
1. The phrase "quantity of the same nature" is a literal translation of the French phrase grandeur de la même nature. Although this is an authorized translation, VIM 3 of the International Bureau of Weights and Measures recommends translating grandeurs de même nature as quantities of the same kind.
## References
1. ^ a b Richard C. Morrison (1999). "Weight and gravity - the need for consistent definitions". 37: 51. Bibcode:1999PhTea..37...51M. doi:10.1119/1.880152.
2. Igal Galili (2001). "Weight versus gravitational force: historical and educational perspectives". International Journal of Science Education 23: 1073. Bibcode:2001IJSEd..23.1073G. doi:10.1080/09500690110038585.
3. ^ a b Gat, Uri (1988). "The weight of mass and the mess of weight". In Richard Alan Strehlow. Standardization of Technical Terminology: Principles and Practice – second volume. ASTM International. pp. 45–48. ISBN 978-0-8031-1183-7.
4. ^ a b The National Standard of Canada, CAN/CSA-Z234.1-89 Canadian Metric Practice Guide, January 1989:
• 5.7.3 Considerable confusion exists in the use of the term "weight." In commercial and everyday use, the term "weight" nearly always means mass. In science and technology "weight" has primarily meant a force due to gravity. In scientific and technical work, the term "weight" should be replaced by the term "mass" or "force," depending on the application.
• 5.7.4 The use of the verb "to weigh" meaning "to determine the mass of," e.g., "I weighed this object and determined its mass to be 5 kg," is correct.
5. ^ a b Allen L. King (1963). "Weight and weightlessness". 30: 387. Bibcode:1962AmJPh..30..387K. doi:10.1119/1.1942032.
6. ^ a b A. P. French (1995). "On weightlessness". 63: 105–106. Bibcode:1995AmJPh..63..105F. doi:10.1119/1.17990.
7. ^ a b Galili, I.; Lehavi, Y. (2003). "The importance of weightlessness and tides in teaching gravitation". 71 (11): 1127–1135. Bibcode:2003AmJPh..71.1127G. doi:10.1119/1.1607336.
8. Working Group 2 of the Joint Committee for Guides in Metrology (JCGM/WG 2) (2008). International vocabulary of metrology — Basic and general concepts and associated terms (VIM) — Vocabulaire international de métrologie — Concepts fondamentaux et généraux et termes associés (VIM) (JCGM 200:2008) (in English and French) (3rd ed.). BIPM. Note 3 to Section 1.2.
9. ^ a b
10. Barry N. Taylor and Ambler Thompson, ed. (2008). The International System of Units (SI). NIST Special Publication 330 (2008 ed.). NIST. p. 52.
11. Halliday, David; Resnick, Robert; Walker, Jearl (2007). Fundamentals of Physics, Volume 1 (8th ed.). Wiley. p. 95. ISBN 978-0-470-04473-5.
12. ISO 80000-4:2006, Quantities and units - Part 4: Mechanics
13. Bell, F. (1998). Principles of mechanics and biomechanics. Stanley Thornes Ltd. pp. 174–176. ISBN 978-0-7487-3332-3.
14. Galili, Igal (1993). "Weight and gravity: teachers’ ambiguity and students’ confusion about the concepts". International Journal of Science Education 15 (2): 149–162. Bibcode:1993IJSEd..15..149G. doi:10.1080/0950069930150204.
15. ^ a b c A. Thompson and B. N. Taylor (July 2, 2009 (last updated: March 3, 2010)). "The NIST Guide for the use of the International System of Units, Section 8: Comments on Some Quantities and Their Units". Special Publication 811. NIST. Retrieved 2010-05-22.
16. Hodgeman, Charles, Ed. (1961). Handbook of Chemistry and Physics, 44th Ed. Cleveland, USA: Chemical Rubber Publishing Co. p.3480-3485
17. Clark, John B (1964). Physical and Mathematical Tables. Oliver and Boyd.
18. This value excludes the adjustment for centrifugal force due to Earth’s rotation and is therefore greater than the 9.80665 m/s2 value of standard gravity.
http://physics.stackexchange.com/questions/2672/calculation-of-the-cross-section
# Calculation of the cross section
Why, when we calculate the total cross section, do we take the average over initial states and the sum over final states?
-
## 2 Answers
The summing over final states and the averaging over initial states is a good observation that I always emphasize as the origin of the arrow of time. As soon as one considers mathematical logic, this asymmetry has to arise.
Why are we summing over final states? Because "we don't care" about which of them occurs (and no one knows). We're calculating the probability - or cross section (which is the same thing, up to unproblematic coefficients) - of getting a final state $F_1$ or $F_2$. Both of them are OK so I use "or".
But the probability $$P(F_1 \text{ or } F_2) = P(F_1)+P(F_2)$$ is simply the sum if the two final states are mutually exclusive - e.g. orthogonal.
Things are very different for the initial states. Now, it's not quite true that "we don't care" whether the state was $I_1$ or $I_2$. Instead, "we don't know" which of them it was (but we know that one of them was the right one). We have to make probabilistic assumptions about the initial state. The most "balanced" one is that $I_1$ and $I_2$ each appear with 50 percent probability. There could also be an asymmetric choice of the "priors" - leading to a weighted average - but it's important that $$P(I_1)+P(I_2)=1.$$ The probabilities of the evolution include the calculated probabilities of transitions from $I_k$ to $F_l$ - given by the Feynman diagrams etc. - but for each $I_k$, we must also multiply this by the probability that $I_k$ occurred in the first place; for two equally likely initial states, that is a factor of $1/2$. The total, averaged probability is $$P(I\to F) = \frac{1}{N_{\text{initial}}} \sum_{i,j} P(I_i \to F_j).$$ The very asymmetry here - the extra factor $1/N_{\text{initial}}$ without a "final" counterpart - is the reason why low-entropy states are favored as initial states while high-entropy ones are favored as final states. In particular, you may compute the ratio of the probabilities from $I$ to $F$ and from $F$ to $I$, and they will differ by the factor $$\frac{N_{\text{final}}}{N_{\text{initial}}} = \exp(S_f-S_i),$$ the exponential of the entropy difference between the final and initial states. This number's being very different from one is the reason why transitions to lower-entropy states can't occur in practice.
Many people - including professional physicists - remain extremely confused about these points when they suggest that the arrow of time remains mysterious. It doesn't. The second law of thermodynamics may be proved and it all boils down to your question and the right answer.
The only assumptions I made are those about the addition of probabilities of assumptions and their effects - and these logical rules are fundamentally asymmetric when it comes to the role of the assumptions and their consequences. This logical arrow of time can't be removed from any reasoning about a world that depends on time - time only copies the logical relationship of implication. And this logical arrow of time is the source of the thermodynamic arrow of time as well.
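To make the asymmetry concrete, here is a toy numerical sketch (my own illustration, not part of the answer above): with time-symmetric microscopic transition probabilities, averaging over initial microstates while summing over final ones makes the macroscopic rate into the larger macrostate exceed the reverse rate by exactly $N_{final}/N_{initial}$.

```python
# Toy model: every microstate-to-microstate transition has the same
# symmetric probability p. A macrostate is characterized only by its
# number of microstates.

def macro_transition_probability(p, n_initial, n_final):
    # Average over the unknown initial microstates, sum over the
    # uncared-about final microstates.
    return (1.0 / n_initial) * (n_initial * n_final * p)

p = 1e-6
N_small, N_large = 4, 1024  # microstate counts of two macrostates

forward = macro_transition_probability(p, N_small, N_large)  # low -> high entropy
reverse = macro_transition_probability(p, N_large, N_small)  # high -> low entropy

# forward / reverse = N_large / N_small, i.e. exp(S_f - S_i) in units
# where S = log(number of microstates).
ratio = forward / reverse
```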
-
thank you very much for your answer, very complete and relevant – Andrea Amoretti Jan 14 '11 at 15:36
I disagree with "we don't know". Everything is dictated by a particular experimental situation. We can determine what fraction of the incident (projectile) flux $j$ represents this or that particular polarization state so we take the corresponding weight into account according to our measurements, if and only if both polarizations contribute. As well, summation over final states is not always necessary, especially if we can experimentally distinguish different final states. This all may be necessary in some calculations if all different initial and final states contribute (inclusive picture). – Vladimir Kalitvianski Jan 18 '11 at 23:32
I was thinking about this question but decided to look it up before asking. As usual, Lubos and his great answers. Many thanks for your answer. I learned a lot from you in this forum. – stupidity May 31 '12 at 13:42
We do all the "cross section business" because we want to predict results of experiments.
Let's take for example some particle with two polarization states: "+" and "-". You know that experimentalists will collide 1 000 000 pairs of particles, with the polarization of the initial particles being unknown. The best thing you can do is to hope that in the experiment the polarization will be evenly distributed, so that you will have around 250000 "++" collisions, 250000 "+-", and the same for "-+" and "--". And in order to predict the result you will need to do your calculations for each polarization configuration and average over them.
If in the experiment the polarization is fixed, then you don't need to do the averaging. And if you know the polarization density matrix or some spectrum for the initial state, then you have to average according to it.
Next. You know that after the collision they will detect some resulting particles. Let's say that your calculations predict 1000 "+" polarized particles and 500 of "-" polarization. But it turns out that the detector does not detect the polarization of final particles. The best thing you can do is sum over final polarizations and predict 1500 for the final result.
I made an example with polarization, but it extends to any properties/degrees of freedom in the experiment. If you do not know some property of the initial state -- average over it. If you do not know some property of the final state -- sum over it.
This works also for total/differential cross sections. If the detector can tell where the particle flies -- then you can work with the differential cross section. But if the detector just tells you that "there is a particle", without any info on the direction -- then you will need to integrate the differential cross section (sum over final states) to get the total cross section.
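As a concrete sketch of this recipe (all numbers invented for illustration):

```python
# Hypothetical cross sections sigma[(i, f)] for initial polarization i
# and final polarization f.
sigma = {
    ("+", "+"): 2.0, ("+", "-"): 1.0,
    ("-", "+"): 1.0, ("-", "-"): 4.0,
}
states = ["+", "-"]

# Unpolarized beam, polarization-blind detector: average over initial
# states, sum over final states.
unpolarized = sum(sigma[(i, f)] for i in states for f in states) / len(states)

# Known, non-uniform initial composition: weight by the given
# probabilities instead of averaging uniformly.
weights = {"+": 0.7, "-": 0.3}
weighted = sum(weights[i] * sigma[(i, f)] for i in states for f in states)
```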
-
Ok, but, for example, if you have a completely unpolarised incident beam but you know for some reason its composition (the density matrix), do you in this case simply average over initial states, or is the weight of each initial state given in accordance with the density matrix? – Andrea Amoretti Jan 10 '11 at 13:21
Of course if you know the polarisation density matrix or some spectrum for the initial state, then you got to average according to them. (Inserted in main answer.) – Kostya Jan 10 '11 at 13:26
ok thank you for the answer – Andrea Amoretti Jan 10 '11 at 13:37
Good answer. And while we're at it, we might as well point out that this is precisely the reason why there is second law of thermodynamics. We average over initial states (assuming a priori Bayesian uniform density) but have to sum over final states. This induces the time asymmetry into the reversible microscopic dynamics. – Marek Jan 10 '11 at 17:08
@Marek: Can you and/or Lubosh tell more about this "irreversibility", please? If we speak of polarization states, I do not feel the difference between averaging and summation. – Vladimir Kalitvianski Jan 19 '11 at 9:40
http://math.stackexchange.com/questions/280879/hypercube-problem?answertab=active
Hypercube problem
$B$ is an $n$-dimensional hypercube, considered as an undirected graph. Let $A$ be a subset of the vertices of $B$ such that $|A| \gt 2^{n-1}$.
Let $H$ be the subgraph of $B$ induced by $A$. Prove that $H$ has at least $n$ edges.
Any help, would be greatly appreciated.
-
Why does it seem like a Ramsey-theory-type problem? – Jan Dvorak Jan 17 at 18:07
can you explain a bit more :(... – DiscreteMath'sFan Jan 17 at 18:08
Basically, the original problem in Ramsey theory says that if you have a large enough complete graph and color its edges by $n$ colors, you are destined to find a single-colored complete subgraph of a given size in the graph, for every such size. Namely, every edge-2-colored $K_6$ contains a single-colored $K_3$ (triangle). – Jan Dvorak Jan 17 at 18:14
I can't find a way how to solve the problem with this... – DiscreteMath'sFan Jan 17 at 18:14
2 Answers
Temporarily consider one of the $n$ coordinate axes. Your hypercube has $2^{n-1}$ edges parallel to that axis, and those edges partition the $2^n$ vertices into $2^{n-1}$ pairs. Since $A$ has more than $2^{n-1}$ vertices, by pigeonhole it must contain two vertices on the same one of those edges. So $H$ has an edge parallel to the coordinate axis under consideration. Apply this to all $n$ of the axes; since edges parallel to different axes are distinct, $H$ has at least $n$ edges.
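The claim can also be checked by brute force for small $n$ (a verification sketch of mine, not part of the proof): encode vertices as the integers $0,\dots,2^n-1$, with edges between vertices at Hamming distance 1.

```python
from itertools import combinations

def induced_edges(vertices):
    # An edge of the n-cube joins two vertices differing in exactly one bit.
    return sum(1 for u, v in combinations(vertices, 2)
               if bin(u ^ v).count("1") == 1)

def min_induced_edges(n):
    # Minimum number of induced edges over all vertex sets A
    # with |A| = 2^(n-1) + 1, the smallest size exceeding 2^(n-1).
    size = 2 ** (n - 1) + 1
    return min(induced_edges(A)
               for A in combinations(range(2 ** n), size))
```

For $n = 2$ and $n = 3$ the minimum comes out as exactly $n$, so the bound is tight there.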
-
+1 Nice answer! Looks like you wrote $B$ instead of $A$ above. – polkjh Jan 17 at 18:21
thanks, I will try to solve it with it. If anyone has another ideas, please write them :) – DiscreteMath'sFan Jan 17 at 18:23
there is a simpler combinatoric solution I'll post tomorrow (I cannot post the solution now since this problem was homework for me and my fellow students) – dudelgrincen Jan 21 at 16:20
We can build an $n$-dimensional hypercube from two $(n-1)$-dimensional hypercubes. This gives $|V_n| = 2^n$ vertices for the $n$-dimensional hypercube $G_n(V_n, E_n)$. Knowing $|V_{n-1}|$ and $|E_{n-1}|$ for the $(n-1)$-dimensional hypercube, we get the recurrence $|E_n| = 2|E_{n-1}| + 2^{n-1}$.
Solving this recurrence relation gives $|E_n| = n \cdot 2^{n-1}$ for any dimension.
Then we choose at least $2^{n-1}+1$ vertices to be in the subset $A$ by choosing at most $2^{n-1} - 1$ vertices that won't be in $A$. Every vertex in an $n$-dimensional hypercube is incident to $n$ edges (or fewer once some vertices are removed). So removing at most $2^{n-1} - 1$ vertices removes at most $r = (2^{n-1} - 1) \cdot n = |E_n| - n$ edges.
It follows that the induced graph $H$ has at least $|E_n| - r = |E_n| - (|E_n| - n) = n$ edges. We can conclude that $H$ has at least $n$ edges.
Sorry if there is something unclear. English is not my native language. I would be glad if someone improves my post.
-
http://math.stackexchange.com/questions/258879/how-many-fractional-digits-do-i-need-to-represent-a-number-of-base-m-in-base
# How many fractional digits do I need to represent a number of base $m$ in base $n$?
I have authored a web application RADIX, which for further optimisation needs to calculate the maximum number of places necessary to precisely represent the fraction part of a number of base $m$ in base $n$ (assuming a precise conversion is possible).
For example, assuming $f$ represents $15$ in base $64$, how many fraction digits are required to represent the number $f.f$ in base $10$?
I know that the maximum number of digits needed to represent the integer part can be calculated by taking the ceiling of $\log_{10}(64)$ * the number of digits (correct me if I'm wrong), but what about the fractional part of the number?
$f.f$ is $15.234375$ in base $10$, so one fraction numeral in base $64$ seems to require up to $6$ fraction digits in base $10$ to represent it, but is there a way I can calculate that in advance for any two bases?
At the moment I am using $\log_2(m)$ * the number of fraction digits of the number in base m, which happens to give just the right answer for the example above, i.e. $\log_2(64)$ is $6$, but it causes me to calculate to an unnecessarily high number of places for other conversions.
Update:
Example code, based on ShreevatsaR's expression for d in terms of m and n using prime factorisation.
```python
import math

m = 288  # base of the number to be converted
n = 270  # base the number is to be converted to
i = 2
d = 0
finite = True  # stays True only if every prime factor of m also divides n
while m > 1:
    e = 0  # exponent of i in m
    f = 0  # exponent of i in n
    while m % i == 0:
        e += 1
        m //= i
    while n % i == 0:
        f += 1
        n //= i
    if e != 0:
        if f == 0:
            # i is a prime factor of m but not of n
            finite = False
        else:
            # i is a prime factor of both m and n
            d = max(d, math.ceil(e / f))
    i += 1
if not finite:
    print("Some base-m fractions have no finite representation in base n")
else:
    print("At most d * r fraction digits are needed to represent the")
    print("fraction part of a base-m number in base n, where r is the")
    print("number of fraction digits of the number in base m; d =", d)
```
-
## 3 Answers
This is an elaboration of Ross Millikan's answer.
• First we'll answer the question: to represent a fraction in base $n$, how many digits are needed (after the decimal point)?
If a fraction can be written in base $n$ with $d$ digits afer the decimal point, then it means that it can be written as $a/n^d$, for some integer $a$. (For instance, the fraction $975/64$ can be written as $15.234375 = \frac{15234375}{1000000}$.) Thus, if the fraction is $p/q$ in lowest terms, then the fact that $a/n^d$ is $p/q$ in lowest terms means that $q$ divides $n^d$.
Conversely, if $q$ divides $n^d$, then $p/q = a/n^d$ for some $a$ (to be precise, $a = pn^d/q$), and so the fraction can be written in base $n$ with $d$ digits after the decimal point.
So the number of digits $d$ needed after the decimal point is the smallest $d$ for which $n^d$ is divisible by $q$.
• Second, to answer the question: when a number written in base $m$ with 1 digit after the decimal point is reduced to fraction $p/q$ in lowest terms, what are the possible values of $q$?
If a number $x$ is written in base $m$ as $x = a.b$ where $a$ is any integer (has any number of digits) and $b$ is a single digit in base $m$, then $x = a + b/m = (ma+b)/m$. So when reduced to lowest terms $p/q$ (which we do by cancelling common factors from $(ma+b)$ and $m$) it must be the case that $q$ is a divisor of $m$.
And in fact it can happen that $q=m$, e.g. when $b = 1$, or $b = q-1$ or more generally $\gcd(b,m) = 1$. (This is because any common factor of $(ma+b)$ and $m$ must also divide $b$, so if $\gcd(b,m) = 1$ then the only common factor is $1$, so we cannot reduce the fraction $(ma+b)/m$ further.)
Similarly, if a number is written in base $m$ with $r$ digits after the decimal point, then when it is reduced to lowest terms, the denominator could be up to $m^r$.
Putting them together, if a number is written in base $m$ with one digit (respectively $r$ digits) after the decimal point, then the number of digits needed after the decimal point to write it in base $n$ is at most the smallest $d$ for which $n^d$ is divisible by $m$ (respectively $m^r$).
Examples:
• If you use "f" to represent $15$, then "f.ff" in base $64$ represents $15 + (15\times64 + 15)/64^2$, so $q = 64^2$. If you want to now write this in base $10$, then the smallest $d$ for which $10^d$ is divisible by $64^2$ is $d = 12$, so that's how many digits you need. (And indeed, "f.ff" is $15.238037109375$.)
• If further $c$ represents $12$, then "f.c" represents $15 + 12/64 = 15 + 3/16$, so $q = 16$. Now $10^4$ is divisible by $16$, so you only need $4$ digits for this particular number ($15.1875$).
To actually calculate the smallest $d$ for which $n^d$ is divisible by $m$, the simplest algorithm is to keep trying successive $d$ until one works (you will never need more than $m$ tries). (You could do a binary search over $d$, but this is overkill unless your $m$ and $n$ are, say, over $10000$ and your program is slow because of this pre-computation step.)
You can do a couple of optimizations:
• If you already know that $m$ is a power of $n$ (e.g. going from base $64$ to base $2$) then $d = \log_{n}m$.
• When calculating powers of $n$, you can reduce the number modulo $m$ at each step. Something like
```python
def smallest_d(m, n):
    N = n
    d = 1
    while N % m > 0:
        d += 1
        N = (N * n) % m
        if d > m:
            # m has a prime factor that n lacks: no finite representation
            raise ValueError("no such d exists")
    return d
```
If you insist on an expression for $d$ in terms of $m$ and $n$ (beyond $\min\left\{d\colon m|n^d\right\}$ say), then we must look at the prime factorisation of $m$ and $n$. If $m$ has prime factorization $p_1^{e_1}p_2^{e_2}\dots$ and $n$ has prime factorization $p_1^{f_1}p_2^{f_2}\dots q_1^{g_1} \dots$ (all the same primes $p_1, p_2 \dots$, plus other primes $q_1, q_2 \dots$), then $$d = \max(\left\lceil\frac{e_1}{f_1}\right\rceil, \left\lceil\frac{e_2}{f_2}\right\rceil, \dots)$$ For example with $m = 288 = 2^5 3^2$ and $n = 270 = 2^1 3^3 5^1$, we have $d = \max(\lceil\frac51\rceil, \lceil\frac23\rceil) = 5$.
-
Thank you for this very clear and comprehensive answer. I'll experiment to see if the cost of calculating d is worth it. – MikeM Dec 15 '12 at 20:13
The above algorithm works fine but is unusable with more than a handful of fraction digits because the value of $m^r$ quickly exceeds floating-point accuracy. Is it the case though that I can use it instead just to efficiently determine whether any number in base m can be exactly represented in base n? – MikeM Dec 17 '12 at 11:41
@MikeM: I think you mean powers of $n$, not $m$, and the solution to overflow is already mentioned above: when calculating powers of $n$, reduce mod $m$ at each step (see the Python code above). (And for a given number in base $m$, you write it as a fraction $p/q$, reduced to lowest terms, and calculate powers of $n$ modulo $q$ instead of mod $m$.) – ShreevatsaR Dec 17 '12 at 11:46
I was thinking that the algorithm you gave finds d of n for a single digit of base m, and that I could substitute $m^r$ for m to find d for r digits... I am unable to write the fraction parts as fractions reduced to lowest terms as I'm working with arbitrary-precision values and the cost of doing that would defeat the purpose... I would still like to know whether the algorithm is an efficient way to determine whether a base m number can be exactly represented in base n at all, i.e. if d > m return false. – MikeM Dec 17 '12 at 12:08
@MikeM: Ah I see. Well you can do the calculation (find the number of digits needed in base $n$) for 1 digit in base $m$, and then simply multiply by the number of digits — that will be an upper bound, and not off by much. Also, exact representation is possible for many digits if and only if it's possible for 1 digit. – ShreevatsaR Dec 17 '12 at 13:14
As Cameron Buie has said, you may have a number that terminates in one base and does not terminate in the other. Assuming it does terminate, represent the fraction as $\frac pq$ in lowest terms. The number of digits needed after the point in base $b$ is the smallest $n$ such that $b^n$ is divisible by $q$. So in base $10$, if $q=32=2^5$, you need $5$ decimals to represent it exactly. In base $4$, you would need three places beyond the point. Note that we don't need base information about where we are coming from, just where we are going.
-
I can't quite follow your reasoning above, but, regardless, the base from where we are coming from has got to be taken into account. For example, converting from base 4096 to base 2 each fraction numeral may require up to 12 base 2 digits, while going the other way only one digit is required. – MikeM Dec 15 '12 at 10:33
@MikeM: The base you are coming from is accounted for by the $q$. If you are already given the number as a fraction $p/q$ in lowest terms, then only the value of $q$ (as an integer, nevermind the base) matters, not the base you started with. – ShreevatsaR Dec 15 '12 at 12:01
@MikeM: I'd advise you to read this answer more carefully, since it does fully answer your question. The mention of "base 10" is in an example about going from base 10 to 2 (and even there, just to clarify what "32" means). To give another example: when going from base 64 to base 10 (as in your question), the number "f.f" can be written in lowest terms as $15 + 15/64 = 975/64$, hence $q = 64$ and the smallest power of $10$ divisible by $64$ is $10^6$ (the smallest $k$ for which $q$ divides $b^k$ is $6$), so you need $6$ decimal digits. – ShreevatsaR Dec 15 '12 at 13:36
@MikeM: Similarly, if you want to convert some "a.b" in base $m$ to base $n$, then in the worst case $q = m$, so the answer is the smallest power of $n$ divisible by $m$. If you want to convert $a.bc$ (written in base $m$) into base $n$, then in the worst case $q = m^2$, so the answer is the smallest $k$ for which $n^k$ is divisible by $m^2$. Etc. – ShreevatsaR Dec 15 '12 at 13:37
@ShreevatsaR. Thank you for your further explanation. I now understand Ross's answer. Please write your own answer to this question and I will accept it. If possible please write k as a function of m and n (assuming m = q) and include any advice you may have as regards an algorithm. – MikeM Dec 15 '12 at 16:30
I doubt there is a nice formula for it. For example, $\frac43$ in base $3$ is $1.1$, but in base $2$ it's $1.\overline{01}$, so we go from finitely many fractional digits to infinitely many.
-
Perhaps this is more a comment than an answer, Cameron; I've already stated 'assuming a precise conversion is possible'. – MikeM Dec 15 '12 at 10:26
I noticed that, yes. Apparently, I use a different definition of "precise" than you do--namely, "not approximate". Do you mean "finite" instead? – Cameron Buie Dec 15 '12 at 13:51
@Cameron Buie: Generally when dealing with computery things, "precise" and "finite" are equivalent, because computers don't play nicely with infinities. – Eric Stucky Dec 18 '12 at 12:34
@EricStucky: I hadn't considered that. I'll try to bear that in mind in the future. Thanks! – Cameron Buie Dec 18 '12 at 16:34
http://math.stackexchange.com/questions/138584/multiplication-of-taylor-series-expanding-2x-sinx
# Multiplication of Taylor series - expanding $2x\sin(x)$
I'm working on a problem for university Calculus 2. We're talking about Taylor series right now and I need to approximate an integral using one of a function that I think it should be easy to produce a series for, but I'm not 100% sure. This is the function:
$$f(x) = 2x\sin(x)$$
I know the expansion for $\sin(x)$, which is in a reference table in the book. To give the first few terms it looks like this:
$$x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \;\cdots$$
I'm pretty sure I can just multiply the whole polynomial by $2x$, giving:
$$2x^2 - \frac{2x^4}{3!} + \frac{2x^6}{5!} - \frac{2x^8}{7!} + \;\cdots$$
What's odd is that I can't find any examples quite like this in either of the textbooks or on the internet... which makes me wonder if this isn't actually a valid manipulation. Additionally, I can't seem to get an answer that matches this from Wolfram Alpha.
I could work out the Taylor series by hand, but the derivatives of the function start getting a bit ugly (by which I mean long), so I think I'm supposed to manipulate a known series since this should be an easy problem.
So, am I doing this right or am I on the wrong track?
Thanks in advance.
-
## 2 Answers
Your approach is correct. WolframAlpha returns the same series for the following query: `Series[2 x Sin[x],{x,0,8}]`
-
In order to estimate the integral of $2x\sin x$, you use the series for $\sin$ given above, multiplied by $2x$ as you did.
Thus
$$2x\sin x =2\sum_{n=0}^{\infty} \frac{(-1)^nx^{2n+2}}{(2n+1)!}= 2x^2 - \frac{2x^4}{3!} + \frac{2x^6}{5!} - \frac{2x^8}{7!} \cdots$$
We then integrate term-by-term to get
$$2\int x\sin x\, dx= 2\sum_{n=0}^{\infty} \int\frac{(-1)^n x^{2n+2}}{(2n+1)!}\,dx= 2\sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!}\int x^{2n+2}\,dx= 2\sum_{n=0}^{\infty}\frac{(-1)^n x^{3+2 n}}{(3+2 n) (1+2 n)!}$$
This sum can be used to estimate the integral of $2x\sin x$ to a desired accuracy.
$$2\int x\sin x\,dx=-2 (x \cos(x)-\sin(x))$$
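The series and the closed form agree numerically, which is a quick way to check the manipulation (a sketch; the helper names are mine):

```python
import math

def series_integral(x, terms=10):
    """Partial sum of 2 * sum_n (-1)^n x^(2n+3) / ((2n+3) * (2n+1)!)."""
    return 2 * sum(
        (-1) ** n * x ** (2 * n + 3) / ((2 * n + 3) * math.factorial(2 * n + 1))
        for n in range(terms)
    )

def closed_form(x):
    # Definite integral of 2*t*sin(t) from 0 to x
    return -2 * (x * math.cos(x) - math.sin(x))

assert abs(series_integral(1.0) - closed_form(1.0)) < 1e-12
```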
-
It is important to note that you can integrate termwise because the series is absolutely convergent in $\mathbb R$ – Peter Tamaroff Apr 29 '12 at 22:03
http://physics.stackexchange.com/questions/10495/could-extra-dimensions-be-or-become-clustered/10505
# Could extra dimensions be or become clustered?
String theory - for example - requires extra spatial dimension. Say for example in 10 dimensional string theory, what theoretically prevents clustering of the extra 6 dimensions in 2 timeless 3 dimensional (infinite) spaces?
-
## 1 Answer
I will here only comment on the traditional superstring theory story, say, from the first superstring revolution in the 1980s, and leave it to others to include more recent developments.
Traditionally, the $10$-dimensional target space $(M^{10},g^{(10)})$ with a metric $g^{(10)}$ is viewed as a product $M^{10}=M^4 \times K^6$ with metric $g^{(10)}=g^{(4)}\oplus g^{(6)}$, where $(M^4,g^{(4)})$ is the $4$-dimensional spacetime with a $4$-metric $g^{(4)}$, which we see and observe; and $(K^6,g^{(6)})$ is a compact $6$-dimensional Riemannian manifold, whose characteristic length scales are so small that it has avoided experimental detection so far.
I will assume that the word clustering in the question (v2) essentially asks whether $(K^6,g^{(6)})$ could be a product $K^6=K^3\times L^3$, with metric $g^{(6)}=g^{(3)}\oplus h^{(3)}$, of two $3$-dimensional manifolds $(K^3,g^{(3)})$ and $(L^3,h^{(3)})$.
Again, to have avoided experimental detection, the two $3$-dimensional manifolds $K^3$ and $L^3$ must both be compact. Now, another bit of traditional string wisdom is that to have unbroken $N=1$ supersymmetry in $4$ dimensions, the holonomy group of $(K^6,g^{(6)})$ must be the $8$-dimensional Lie group $SU(3)$, see e.g., Green, Schwarz and Witten, "Superstring theory", chap. 15. See also this question.
On the other hand, the biggest holonomy group that a $3$-dimensional Riemannian manifold can have is the $3$-dimensional Lie group $O(3)$, so $K^6=K^3\times L^3$ can at most have holonomy group $O(3)\times O(3)$, which is $6$-dimensional, and therefore too small to be $SU(3)$. Hence a product manifold $K^6=K^3\times L^3$ is ruled out.
-
I downvoted this for a stupid reason, and realized my mistake too late--- the downvote is already locked in. My apologies, the argument is completely fine. – Ron Maimon Sep 1 '11 at 4:10
Nice answer, +2 (to compensate for Ron s accidental downvote) :-) – Dilaton Nov 30 '11 at 19:20
http://crypto.stackexchange.com/questions/6020/many-time-pad-attack/6026
# Many time pad attack
I've already sent my correct solution to a programming question on an online class:
Let us see what goes wrong when a stream cipher key is used more than once. Below are eleven hex-encoded ciphertexts that are the result of encrypting eleven plaintexts with a stream cipher, all with the same stream cipher key. Your goal is to decrypt the last ciphertext, and submit the secret message within it as solution.
Hint: XOR the ciphertexts together, and consider what happens when a space is XORed with a character in [a-zA-Z].
I managed to get the key xoring cipher 1 and cipher 2, then xoring the result with the string ' the ' at each possible positions and with a lot of guesses I get the first plaintext.
The question is: what should I have to do if I want to follow the hint? I know that if I xor a space with a letter I change the case of the letter, and then what?
I can't understand how can I recognize spaces! Suppose in $c_1 \oplus c_2$, I see a letter, how can I say if there was some space in one of the plaintexts?
-
– poncho Jan 18 at 17:50
@poncho I can't understand how can I recognize spaces! Suppose in $c_1 \oplus c_2$, I see a letter, how can I say if there was some space in one of the plaintexts? I've asked here because in that question there isn't the explanation of my doubt! – sunrise Jan 18 at 18:05
Consider bit 6 of $c_1 \oplus c_2$. If the corresponding plaintext characters are both letters, bit 6 will be clear (because bit 6 is set in both of them). If one is a letter and one is a space, then bit 6 of $c_1 \oplus c_2$ will be set. Hence, we have a good guess that there is a space in one of the two plaintexts. Now, consider the case where we have 11 ciphertexts; if 3 of the plaintexts have a space at position 7, then that fact should be obvious (because bit 6 of those ciphertexts will be different, and all those ciphertexts will have the same value there). – poncho Jan 18 at 19:02
@poncho thanks. suppose that I manage to know that the first word of the first ciphertext is 5 letters. How should I go on? How can I discover the word? – sunrise Jan 18 at 19:11
If you know that the 6th character of the first plaintext is a space, you can then deduce the 6th character of every other plaintext. Extending this observation, you can deduce any character of any plaintext where there's another plaintext with a space at that position. That should give you a large majority of the plaintexts; guessing (based on context) and verifying (based on whether it makes the other plaintexts make sense) the little that is left should be straight-forward. – poncho Jan 18 at 19:21
## 2 Answers
Let's assume that the plaintexts consist only of spaces and ASCII letters. Given the hint, that seems like a reasonable assumption to start with, even if it might turn out to be only mostly correct.
Now, take one of the ciphertexts and XOR it with each of the others. Of course, the XOR operation cancels out the keystream, so you end up with the plaintext corresponding to the chosen ciphertext XORed with each of the other plaintexts.
Now look at each character position in turn. By assumption, the character at that position in the chosen plaintext might be either a letter or a space.
• If it's a space, then the characters at that position in the pairwise XORed plaintexts will be either letters (if the character at that position in the other plaintext is a letter) or nulls (if both of the characters are spaces).
• If it's a letter, then the characters at that position in the pairwise XORed plaintexts will be random control characters (if the character at that position in the other plaintext is a letter with the same case), numbers or punctuation (if the other character is a letter with different case) or that particular letter with the case flipped (if the other character is a space).
Those two cases should be pretty easy to tell apart. Furthermore, in the first case, you can easily get the actual characters at that position in all the plaintexts just by flipping the case of all the letters you obtained by XORing the ciphertexts together.
In this manner, you can decode all the characters at the positions where the plaintext corresponding to the chosen ciphertext has spaces. Once you've done that, choose another ciphertext and repeat the process. Hopefully, by the time you've done this with all the ciphertexts in turn, you will have solved most of the character positions and can easily fill in the rest.
Ps. To see why this works, it helps to know that the 7-bit ASCII character set can be divided into four 32-character blocks, like this:
```
   Bit 4:  0000000000000000 1111111111111111 |
 Bits 0-3: 0123456789ABCDEF 0123456789ABCDEF | Block:
---------+-----------------------------------+---------------------------
Bits  00 | ................ ................ | Control characters
5-6:  01 |  !"#$%&'()*+,-./ 0123456789:;<=>? | Numbers and punctuation
      10 | @ABCDEFGHIJKLMNO PQRSTUVWXYZ[\]^_ | Uppercase letters (mostly)
      11 | `abcdefghijklmno pqrstuvwxyz{|}~. | Lowercase letters (mostly)
```
In particular, a consequence of this structure is that, if you XOR two ASCII characters in the same row (e.g. two uppercase letters or two lowercase letters), the result will be a control character. Similarly, XORing an uppercase and a lowercase letter will produce a character in the second row, i.e. a number or a punctuation character. Also, as pointed out in the hint, the position of the space character at the beginning of the second row means that XORing it with any other character just flips the fifth bit of the character code, moving the character one row up or down.
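To make the space trick concrete, here is a toy simulation with hypothetical plaintexts of my own (not the actual course ciphertexts):

```python
import os

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Three same-length plaintexts encrypted under one reused keystream:
plaintexts = [b"the quick brown fox", b"a key used twice is", b"attack at dawn okay"]
key = os.urandom(19)
cts = [xor_bytes(p, key) for p in plaintexts]

# Guess where plaintext 0 has spaces: at such a position, XORing its
# ciphertext with every other ciphertext yields a letter (space ^ letter
# just flips the case) or 0x00 (both characters are spaces).
space_positions = []
for k in range(19):
    diffs = [cts[0][k] ^ c[k] for c in cts[1:]]
    if all(d == 0 or chr(d).isalpha() for d in diffs):
        space_positions.append(k)

# Wherever the guess holds, the keystream byte falls out immediately:
recovered = {k: cts[0][k] ^ ord(" ") for k in space_positions}
```

With eleven ciphertexts instead of three, the vote is far more reliable, and repeating the scan with each ciphertext playing the role of `cts[0]` recovers most of the keystream.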
-
Hint: Are you familiar with frequency analysis (for breaking classical ciphers)?
Hint: If you have a guess/hypothesis that there is a space at a particular position in one of the ciphertexts, can you think of any way to test whether your guess/hypothesis seems likely to be correct or not?
-
http://www.physicsforums.com/showthread.php?p=4191522
Physics Forums
## Can anyone give a common-sense explanation of singular values?
I've been reading a few mathematical definitions and I'm still not sure I grasp their significance. If Ax = b, then what do the singular values of A represent in terms of x and b? Why are they important?
I'm not looking for a formal definition, just a conceptual description or even some rambling about what they're used for, so I can get an intuitive idea of their purpose/significance.
Thanks for any help
The simplest description of "what singular values are" is "the square roots of the eigenvalues of ##A^TA##" (plus some zeros, if A is not a square matrix).

That suggests one practical use for them, which is solving over- or under-determined sets of equations in a similar way to finding a least squares approximation - in other words, by solving ##A^T Ax = A^Tb## instead of ##Ax = b##.

The singular values are useful in numerical work because they can decompose the problem into parts that are more or less numerically "important". For example, if ##A^TA## has less than full rank, a numerical calculation is unlikely to show that it is exactly singular, but it will have some very small singular values, and they can be ignored by setting them to zero. This leads to ideas like the "pseudo-inverse" of a matrix: if the singular values of the original matrix were ##s_i##, the pseudo-inverse has singular values ##1/s_i## when ##s_i > 0## and ##0## when ##s_i = 0##.

Apart from numerical applications, the SVD is very useful theoretically, because the decomposition ##A = U\Sigma V^T## exists for any matrix A (unlike an eigenvalue decomposition, which doesn't exist for rectangular matrices and doesn't even exist for some square matrices), and the square matrices ##U## and ##V## satisfy ##U^T U = V^T V = I##. For example, ##U## and ##V## are related to choosing basis vectors for interpreting ##A## as a transformation between two vector spaces.
Just passing through, not checking definitions, but maybe we could say that finding the singular values helps find the eigenvectors, and if we can decompose b in terms of the eigenbasis, it could be seen as a method for finding x. Not sure if this is practical. But you might be able to rephrase finding singular values as finding eigenvalues. The term singular values might be too general a term for where you're at in your studies. So if we rephrase to what is the point of eigenvalues, um... Anyways, singular values sounds like a bit of an advanced perspective, appearing in for instance functional analysis or quantum physics. Not really sure though (there's my disclaimer).
"Singular values" and "eigenvalues" are two different concepts (though they are related).
Both terms are well defined and commonly used, but mixing them up is NOT a good idea.
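The relation (and the difference) is easy to check numerically; a small NumPy sketch with a matrix of my own choosing:

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

svals = np.linalg.svd(A, compute_uv=False)            # singular values, descending
eig_AtA = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]  # eigenvalues of A^T A

# Singular values are the square roots of the eigenvalues of A^T A...
assert np.allclose(svals ** 2, eig_AtA)
# ...but they are not the eigenvalues of A itself (which are 3 and 5 here).
assert not np.allclose(np.sort(svals), [3.0, 5.0])
```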
I think I'm understanding it slightly. My problem is mostly concerned with numerical solutions to under-determined problems, and I think I do have the issue that if A is rank-deficient, then rounding errors would appear to make A'*A "singular to working precision" as MATLAB phrases it. But why do small singular values matter? What's the link between the eigenvalues of A'*A and the invertibility? What makes a small singular value a bad thing? What's to stop me dividing my entire matrix by 10, then surely the singular values are all divided by ten, why should this affect the fundamental maths of inversion?
Quote by MikeyW But why do small singular values matter? What's the link between the eigenvalues of A'*A and the invertibility? What makes a small singular value a bad thing? What's to stop me dividing my entire matrix by 10, then surely the singular values are all divided by ten, why should this affect the fundamental maths of inversion?
I'll answer your last question first. A singular value is deemed "small" if the ratio of that singular value to the largest singular value is less than some ε. Multiplying the entire matrix by some scalar doesn't change that ratio.
What makes a small singular value a "bad thing" is that unless you were using infinite precision arithmetic, that small singular value is probably garbage. Things work much better if you treat those small values as if they were zero -- except when you invert your matrix, that is. Then you use the apparently kludgy operation 1/0->0.
One way to look at it is that you are designating the first n diagonal elements and the first n rows of your U and W matrices (I look at SVD as A = U*V*W^T) as the only ones that count. These are the "principal components" of your system. In fact, SVD and principal component analysis are very closely allied.
Can you elaborate on why "that small singular value is probably garbage"? Is the singular value decomposition an approximation?
Quote by MikeyW Can you elaborate on why "that small singular value is probably garbage"? Is the singular value decomposition an approximation?
Singular value decomposition is limited by the tools at hand. The problem is that the standard representations of floating point numbers (e.g., float, double, long double in C-based languages, real, real*8, real*16 in Fortran) are but approximations of the real numbers. In the reals, addition and multiplication are associative and distributive. This is not the case for finite precision numbers such as IEEE floats and doubles. Example: -1+(1+1e-16) is zero, (-1+1)+1e-16 is 1e-16.
Suppose you get a singular value that is equal to 10-16 times the largest singular value. This value is almost certainly pure garbage, a number with zero bits of precision. You can't even trust the sign to be correct. There's a whole lot of repetitive addition and multiplication in an SVD implementation. Those small singular values result because of cancelations of positive and negative values during summations.
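The non-associativity example above runs exactly as stated in any IEEE-754 double environment:

```python
# 1e-16 is below half an ulp of 1.0, so it vanishes when added to 1.0 first;
# grouped the other way, it survives untouched.
a = -1.0 + (1.0 + 1e-16)
b = (-1.0 + 1.0) + 1e-16
assert a == 0.0
assert b == 1e-16
```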
Quote by MikeyW I think I'm understanding it slightly. My problem is mostly concerned with numerical solutions to under-determined problems, and I think I do have the issue that if A is rank-deficient, then rounding errors would appear to make A'*A "singular to working precision" as MATLAB phrases it.
If you are solving an underdetermined linear problem $Ax=b$, then by definition your $A$ matrix is rank deficient. If $rank(A)=r$ then you only have $r$ nonzero singular values. Now, as indicated by D H, when you use a numerical method to determine the singular values, you may not find any of them that are exactly zero due to the finite-precision arithmetic that is used in the solution - instead you will find $r$ "large" ones, and the rest will be "small."
I'm assuming you are doing rank-deficient least-squares? If so then the SVD is one useful tool. I think of it this way, if $U^\dagger A V = \Sigma$ is the SVD of a matrix $A$ with $rank(A)=r$, then you can write
[tex]
A = \sum_{i=1}^{r} \sigma_i u_i v_i^\dagger,
[/tex]
where $\sigma_i$ is the $i^{th}$ singular value, $u_i$ is the $i^{th}$ left singular vector, and $v_i$ is the $i^{th}$ right singular vector. Since your problem is rank deficient, you can add any vector in the nullspace of $A$ to any least-squares solution $x_{LS}$ and maintain the same residual - that is, the rank-deficient problem has an infinite number of solutions. Sometimes you want the solution that has the smallest 2-norm. In these cases the SVD can be used to give it to you, and the resulting solution is:
[tex]
x_{LS} = \sum_{i=1}^{r} \sigma_i^{-1} \left( u_i^\dagger b\right) v_i.
[/tex]
This is related to the so-called pseudo-inverse (http://en.wikipedia.org/wiki/Moore%E..._pseudoinverse). Note that other "complete orthogonal factorizations" can also give you the minimum 2-norm solution. I wrote my own QR based code that does this (see Golub and Van-Loan).
Since you have access to matlab, here is some simple code that solves an underdetermined system via least-squares:
A = randn(15,20);
b = ones(15,1);
x_mldivide = A\b;
x_pinv = pinv(A)*b;
you will find that x_mldivide has 5 zeros, as it provides the solution with the fewest number of non-zero elements (often this is called a "basic" solution), and that x_pinv is actually the minimum norm solution that the svd would give you. A more explicit computation is,
x_minnorm = 0;
[U,S,V] = svd(A);
for ii = 1:rank(A)
    x_minnorm = x_minnorm + (U(:,ii)'*b)*V(:,ii)/S(ii,ii);
end
where I have used the formula above. You should find that x_minnorm=x_pinv. Note that the command rank(A) is likely using an SVD to estimate rank - I think it looks to find the number of singular values that are above some numerical error tolerance.
enjoy!
jason
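For readers without MATLAB, roughly the same computation in NumPy (my translation, keeping the 15x20 dimensions from the example above):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((15, 20))   # underdetermined: 15 equations, 20 unknowns
b = np.ones(15)

# Minimum-2-norm least-squares solution via the pseudo-inverse:
x_pinv = np.linalg.pinv(A) @ b

# The same solution assembled explicitly from the SVD, as in the formula above:
U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = int(np.sum(s > s[0] * 1e-12))   # numerical rank, by singular-value threshold
x_svd = sum((U[:, i] @ b) * Vt[i] / s[i] for i in range(r))

assert np.allclose(x_pinv, x_svd)
assert np.allclose(A @ x_svd, b)    # full row rank here, so the residual is ~0
```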
Oh wow, I had no idea MATLAB actually computed pinv. I'll have a go at the computations as soon as my PC is free! Can I ask one more thing that is probably really basic but still confusing me a little- you say an underdetermined problem is by definition rank deficient, but wikipedia has a seemingly different definition: "The rank of an m × n matrix cannot be greater than m nor n. A matrix that has a rank as large as possible is said to have full rank; otherwise, the matrix is rank deficient." What if in our underdetermined system m < n, but rank(A) = m? Surely by the above definition A is full-rank, since the rank of A is as large as it could possibly be?
http://quant.stackexchange.com/questions/1903/what-is-the-connection-between-default-probabilities-calculated-using-the-credit/4526
# What is the connection between default probabilities calculated using the credit rating and the price of a CDS?
I'm working on a tool to price Credit Default Swaps. I've already built the standard pricing tools. Now I'm working on a pricing tool which uses the credit rating to generate the default probabilities used in pricing the CDS. What is the relation between these probabilities and the price?
-
Hi David, welcome to quant.SE and thank you for posting your question. Please consider registering so that the site can track your contributions regardless of where you log in from. Also, can you please elaborate on what kind of relationship between probability and price you are looking for? If you have a set of default probabilities as a function of time, then finding the price is simply a matter of setting the NPVs of premium payments equal to default payments. – Tal Fishman Sep 13 '11 at 14:42
Hi Tal. Thank you for your answer. I've registered now, sorry. I've a set of default probabilities per rating as a function of time. But can you give more details to find the price using these probabilities please? Many thanks. – David Sep 13 '11 at 14:51
Because I'd like to use also the cds spread. Because if I don't, the price of the CDS will be the same for corporates with the same rating... – David Sep 13 '11 at 14:53
I do not understand what is your distinction between the CDS price and the CDS spread. CDS are typically quoted in terms of either a spread or points upfront + 500 bp running. The value of a CDS, like any other swap, is zero at inception. Are you looking to find the current value of an existing CDS contract should it be unwound? – Tal Fishman Sep 13 '11 at 15:09
Exactly, that's what I want to do. I've used the paper of Hull & White for the standard pricing using the cds spread curve to have my price. Now I've calculated my default probabilities using the credit rating and I want to price existing CDS but I don't know the relation between the price and these probabilities – David Sep 13 '11 at 15:13
## 4 Answers
One could say that a CDS price is determined by the physical default probability and the risk premium.
The physical PD (PPD) is the actual probability of company defaulting within the given period of time. It is purely a theoretical concept as no one really knows what this probability is. We could estimate it using some models or credit ratings, but those are just guesses.
In other words, if you knew the PPD precisely, you'd be able to calculate a break-even CDS price. If you write a lot of CDSs at break-even prices (on different underlyings), some will be triggered, some won't - but on average you will not make or lose money.
Of course, there is no point in doing this. So you would actually add a margin to each break-even price, so that you will make money on average (again assuming that PPDs are perfectly known to you). This margin is exactly the risk premium.
In reality, you are financially constrained. If all underlyings default you'll have to default yourself. If you are responsible you'll make your best to ensure this risk is tiny.
In the real world there's peer pressure. If you are a CEO of a company A and you see the company B is getting tons of money by writing CDSs you can start thinking along the lines of "Why are we not doing it already?". Best case you'll make money and become a great CEO, worst case you'll get your golden parachute...
Getting back to the original question. If you use credit ratings to calculate physical PDs you will find that a lot of variation in CDS prices is due to changes in risk premiums. Is it actually the case or is it that the credit ratings-based PDs are inaccurate and do not reflect all the information available up to date?
Risk premia filtered out in this way are not easily explained by other macroeconomic or financial indicators. For instance, they are highly correlated with the VIX, yet remain quite distinct from it.
This is an active area of academic research in finance. You could check "Measuring Default Risk Premia from Default Swap Rates and EDFs" by Berndt, Douglas et al. Both the slides and the paper itself are online (look for the latest versions).
-
You should use the ratings-based default probabilities to derive the "fair" spreads on a set of hypothetical new contracts and compare this result to the market spreads. Each could then be used independently to also derive the price for an existing CDS. There is no set way to combine the two prices, as these are two completely different and independent approaches to solving the same problem.
-
Well, I'm afraid there is a little bit of confusion here. Ratings, as used by rating agencies, imply a definite, fixed-once-for-all default probability (or transition matrix to some other rating), and issuers are then classified among those ratings, usually by using some historical data. When using CDS spreads, you get the market-implied default probability for some period, which you bootstrap from CDS quotes using a standard procedure; you can have a look at Brigo and Mercurio's book for the details. Hope this helps. Regards
-
Thanks for your answer. The point is that I've already made a pricer using the market implied default probabilities. Now I'm working on a different version of the pricing using a transition matrix (from AAA to default) to generate default probabilities. And what I want to do now is a way to use these probabilities to have a price for a CDS. I know that it's not the way the CDS are priced on the market, but I'd like to try this method. Perhaps you can help me. I'd like to use the default probabilities as a function of time and also the CDS curve to price the CDS please. Many thanks – David Sep 14 '11 at 14:47
Your transition matrix $M$ has a time horizon associated with it, typically one year but sometimes 3 months or 5 years. Assume for convenience the horizon is 3 months. If it is not, you may wish to take a matrix square root to turn it into a 3 month matrix.
Now the 6 month transition probabilities are formed by multiplying the matrix with itself, $M \cdot M$ and the process can be repeated. So $N$ quarters into the future, the appropriate matrix is $M^N$. Let us take the convention that the first row $\{d_{1,j}\}_{j=1}^R$ of $M$ represents default. Let's say the current rating corresponds to row $I$.
A CDS has 2 kinds of cashflows, coupon payments $c_n$ and a default payment $D_n$ (normally $c_n$ and $D_n$ are constant). The coupon payment $n$ quarters from now has probability $p_n$ of occurring, which you can read off as the sum of the non-default entries in the initial rating's row, $p_n=\sum_{j=2}^R (M^n)_{I,j}$, or more easily as $p_n=1-(M^n)_{I,1}$ since the probabilities must sum to 1.
The coupon leg $L_C = \sum_{n=1}^N PV_n p_n c_n$ of the CDS has value corresponding to the present value of these cashflows times their probabilities of occurring.
The default leg is priced similarly. A default payment $D_n$ occurs only on a fresh default at iteration $n$. The probability of this is the sum, over the non-default ratings attainable at iteration $n-1$, of the probability of being in that rating times the probability of freshly defaulting in one more iteration. That is to say, the total probability is $q_n=\sum_{j=2}^R (M^{n-1})_{I,j} M_{j,1}$. A fresh default may happen at any iteration $n$ before the contract expiration, so you have to total up the default value contributions for all such $n$.
The default leg $L_D = \sum_{n=1}^N PV_n q_n D_n$ has value equal to the sum of these payments times their probabilities of occurring.
The overall contract value can now be written as $L_D-L_C$.
Technically, this is all known as a ratings migration model, and is used a lot for risk control. The ratings paths form something called a Markov chain.
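The two legs above are easy to compute numerically. Below is a minimal sketch of the ratings migration pricing; the 3-state quarterly transition matrix and all contract parameters are made-up illustrative values, not market data:

```python
import numpy as np

# Hypothetical quarterly transition matrix, states: [Default, A, B].
# Row/column 0 is the (absorbing) default state; the numbers are illustrative only.
M = np.array([
    [1.00, 0.00, 0.00],   # default is absorbing
    [0.01, 0.94, 0.05],   # from rating A
    [0.04, 0.06, 0.90],   # from rating B
])

def cds_legs(M, initial_rating, n_quarters, coupon, default_pay, rate=0.03):
    """Coupon leg and default leg of a CDS priced off a ratings-migration chain."""
    L_C = L_D = 0.0
    Mn_prev = np.eye(M.shape[0])                    # M^0
    for n in range(1, n_quarters + 1):
        Mn = Mn_prev @ M                            # M^n
        pv = np.exp(-rate * 0.25 * n)               # discount factor for quarter n
        p_n = 1.0 - Mn[initial_rating, 0]           # survival: not in default at n
        q_n = Mn_prev[initial_rating, 1:] @ M[1:, 0]  # fresh default during quarter n
        L_C += pv * p_n * coupon
        L_D += pv * q_n * default_pay
        Mn_prev = Mn
    return L_C, L_D

L_C, L_D = cds_legs(M, initial_rating=1, n_quarters=20, coupon=0.0025, default_pay=0.6)
print(L_C, L_D, L_D - L_C)   # protection-buyer value is L_D - L_C
```

Because the default state is absorbing, the fresh-default probabilities $q_n$ accumulate to the cumulative default probability $(M^N)_{I,1}$, which is a handy internal consistency check.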
-
Hello Brian. First of all, thanks a lot for your answer. The point is that since 2009 the coupon payments are fixed (for example 100bps) and don't take into account the value of the 5y CDS spread curve. So the problem is that for a given rating the price will always be the same, and that's my question: is there a way to bring the 5Y CDS level into my calculation (for example to price a 5Y CDS)? Many thanks – David Sep 19 '11 at 6:26
You will have to alter your transition matrix to achieve that. For example, you could take probability off columns 2...K and put it onto column 1 (the default column) -- or vice versa as necessary -- until the transition matrix price agrees with the 5Y CDS rate. – Brian B Sep 19 '11 at 18:07
http://math.stackexchange.com/questions/114310/how-to-find-error-bounds-of-trapezoidal-rule/114314
# How to find Error Bounds of Trapezoidal Rule?
I'm stuck on error bounds for the Trapezoidal Rule. The question says
How large should $n$ be to guarantee the Trapezoidal Rule approximation for $\int_{0}^{\pi}x\cos x\,dx$ be accurate to within 0.0001 ?
I used $|E_{T}| \le \frac{K(b-a)^3}{12n^2}$. In the process of applying this formula, I took the third derivative of the given function, $x\cos x$, to find the maximum of the second derivative. However, I got some strange number, so I got stuck there. Is there an easy way to find $K$?
If you have any idea, please post it. Thank you!
-
## 2 Answers
The $K$ in your formula is the largest possible absolute value of the second derivative of your function. So let $f(x)=x\cos x$. We calculate the second derivative of $f(x)$.
We have $f'(x)=-x\sin x+\cos x$. Differentiate again. We get $$f''(x)=-x\cos x-\sin x-\sin x=-(2\sin x+x\cos x).$$
Now in principle, to find the best value of $K$, we should find the maximum of the absolute value of the second derivative. But we won't do that, it is too much trouble, and not really worth it.
So how big can the absolute value of the second derivative be? Let's be very pessimistic. The number $x$ could be as large as $\pi$. The absolute value of $\cos x$ and $\sin x$ is never bigger than $1$, so for sure the absolute value of the second derivative is $\le 2+\pi$. Thus, if we use $K=2+\pi$, we can be sure that we are taking a pessimistically large value for $K$.
Note that at $\pi$, the cosine is $-1$ and the sine is $0$, so the absolute value of the second derivative can be as large as $\pi$.
We can be less pessimistic. In the interval from $0$ to $\pi/2$, our second derivative is less than $2+\pi/2$. We can do better than that by looking at the second derivative in more detail, say between $0$ and $\pi/4$, and between $\pi/4$ and $\pi/2$.
In the interval from $\pi/2$ to $\pi$, the cosine is negative, while the sine is positive. The $2\sin x$ term is definitely $\le 2$. The $x\cos x$ term is negative, so in the interval $[\pi/2,\pi]$, the absolute value of the second derivative is less than or equal to the larger of $2$ and $\pi$, which is $\pi$.
So we have reduced our upper bound on the absolute value of the second derivative to $2+\pi/2$, say about $3.6$. We could do a bit better by graphing the second derivative on a graphing calculator, and eyeballing the largest absolute value.
It's not worth it. Use $K=3.6$ (or even $2+\pi$). Then we know that the error has absolute value which is less than or equal to $$\frac{3.6\pi^3}{12n^2}.$$ We want to make sure that the above quantity is $\le 0.0001$. Equivalently, we want $$n^2\ge \frac{3.6\pi^3}{(12)(0.0001)}.$$ Finally, calculate. I get something like $n=305$.
Remark: There are many reasons not to work too hard to find the largest possible absolute value of the second derivative. If we are using numerical integration on $f$, it is probably because $f$ is at least a little unpleasant. Usually then, $f''$ will be more unpleasant still, and finding the maximum of its absolute value could be very difficult.
In addition, using the maximum of $|f''(x)|$ usually gives a needlessly pessimistic error estimate. I am certain that for the Trapezoidal Rule with your function, in reality we only need an $n$ much smaller than $305$ to get error $\le 0.0001$. The error estimate for the Trapezoidal Rule is close to the truth only for some really weird functions. For "nice" functions, the error bound you were given is unduly pessimistic.
The usual procedure is to calculate say $T_2$, $T_4$, $T_8$, and so on until successive answers change by less than one's error tolerance. This is theoretically not good enough, but works well in practice, particularly if you cross your fingers.
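To see just how pessimistic the bound is, here is a short sketch that computes the guaranteed $n$ and then searches for the smallest $n$ that actually meets the tolerance (the exact value of the integral is $\int_0^\pi x\cos x\,dx = [x\sin x+\cos x]_0^\pi = -2$):

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    return (b - a) / n * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

f = lambda x: x * np.cos(x)
exact = -2.0                      # [x sin x + cos x] from 0 to pi = -2

K, tol = 3.6, 1e-4                # pessimistic bound on |f''|
n_bound = int(np.ceil(np.sqrt(K * np.pi**3 / (12 * tol))))
print(n_bound)                    # 305, matching the hand calculation

err = abs(trapezoid(f, 0, np.pi, n_bound) - exact)
print(err < tol)                  # True: the guarantee holds

# The error bound is pessimistic: a much smaller n already suffices
first_n = next(n for n in range(2, n_bound)
               if abs(trapezoid(f, 0, np.pi, n) - exact) <= tol)
print(first_n)                    # considerably smaller than 305
```

This confirms the remark above: the worst-case bound guarantees $n=305$, while the actual error drops below $0.0001$ for a much smaller $n$.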
-
Hint: You don't say what $K$ is, nor $n$. The absolute value of the first derivative of $x \cos (x)$ is bounded by $|x \sin(x)|+|\cos(x)|\le|x \sin (x)|+1$
-
Thank you for posting the hint!! But I still can't see the next step, and why $|\cos(x)|$ became $1$... Would you mind explaining more? – Ryu Feb 28 '12 at 5:47
@Ryu: André Nicolas has done a very good job, so I will refer you to his answer. – Ross Millikan Feb 28 '12 at 14:08
http://math.stackexchange.com/questions/291663/mean-value-theorem-help
# Mean value theorem help?
So I am learning this chapter "mean value theorem", and there is an exercise.
Prove the inequality $e^{x} \gt x+1$ for $x$ different from zero, and the inequality $2x \arctan x \ge \ln (x^2+1)$... I feel very retarded every time I read this exercise, because I don't understand which theorem I should use.
Can you help me just a little and I'll do the rest by myself?
-
Hey, don't feel "retarded" man, it is just a matter of experience. After a little while, you will be solving these like butter and bread. – Anon Jan 31 at 21:03
For the one with $e^x$, how about checking the Taylor series for $e^x$? – Anon Jan 31 at 21:04
homework should not be used as a standalone tag; see tag-wiki and meta. – Martin Sleziak Feb 5 at 9:27
## 2 Answers
I wouldn't use the mean value theorem there. For the first one:
$e^x>x+1$, since $y=x+1$ is the tangent line to the graph $y=e^x$ at the point $(0,1)$. Since $e^x$ is convex, its graph lies above the tangent line.
For the second one I would define $$f(x) = 2x \arctan x - \log(x^2+1)$$ and study the sign of the derivative $f'(x)$ to prove that $0$ is a global minimum.
addendum Just to see how you get a rigorous proof.
Consider $f(x)=e^x-x-1$. Notice that $f'(x) = e^x-1$. For $x> 0$ we have $f'(x)>0$ hence $f(x)$ is increasing in $[0,+\infty)$. For $x<0$ we have $f'(x)<0$ hence $f(x)$ is decreasing on $(-\infty,0]$. Hence for all $x\neq 0$ we have $f(x) > f(0) = 0$. This means $e^x > x+1$.
For the second one take $g(x) = 2x\arctan x - \log(x^2+1)$. We have $g'(x) = 2\arctan x$, and the proof is the same as before.
edit So now I realize what you mean by "mean value theorem". In fact to prove that $f'>0$ implies $f$ increasing you use that theorem.
-
But rigorous proofs of things like those rely on the mean value theorem. – Michael Hardy Jan 31 at 21:08
My proofs are rigorous :-) – Emanuele Paolini Jan 31 at 21:08
"Convex" is defined by saying the secant line segments lie above the graph. How does one prove that if the second derivative is positive, then the function is convex, and if it's convex then it lies above its tangent lines? The answer is that one uses the mean value theorem to do that. – Michael Hardy Jan 31 at 21:10
@manu-fatto Since you can easily edit your answer, I suggest not using the same letter ($f$) for both functions. – Git Gud Jan 31 at 21:17
@MichaelHardy: You are right! What you (anglophone) mean with "mean value theorem" is not what I thought. In Italy this is called "Lagrange Theorem" – Emanuele Paolini Jan 31 at 21:19
For establishing the inequality $e^x>1+x$, consider two cases: $x>0$ and $x<0$.
For the case $x>0$, apply the Mean Value Theorem to the function $f(x)=e^x$ over the interval $[0,x]$. This gives a $c$ with $0<c<x$ satisfying $$e^x-e^0 =(x-0)\cdot e^c=xe^c.$$ But $e^0=1$ and $e^c>1$ (strict inequality, since $c>0$), so $$e^x-1>x,$$ which implies the result for $x>0$.
I'll leave the other case for you...
I'll just give a hint for your second inequality:
For the second inequality, break things up into two cases: $x\ge 0$ and $x<0$.
For the case $x\ge0$, apply the Mean Value Theorem to the function $g(x)=2x\arctan x -\ln(x^2+1)$ over the interval $[0,x]$ and use the fact that $\arctan(c)\ge 0$ for all $c\ge0$. (Note: the second case is easier here, since $g$ is an even function.)
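Neither inequality is hard to check numerically. This is no substitute for the Mean Value Theorem argument, but it is a useful sanity check on a grid:

```python
import numpy as np

x = np.linspace(-5, 5, 2001)      # includes x = 0 exactly (step 0.005)
nz = x[x != 0]

# e^x > x + 1 strictly, for every x != 0
assert np.all(np.exp(nz) > nz + 1)

# 2x*arctan(x) >= ln(x^2 + 1) everywhere, with equality only at x = 0
lhs = 2 * x * np.arctan(x)
rhs = np.log(x**2 + 1)
assert np.all(lhs >= rhs)
assert np.all(lhs[x != 0] > rhs[x != 0])
print("both inequalities hold on the grid")
```

Near zero the two sides of the second inequality behave like $2x^2$ versus $x^2$, which is why the strict inequality survives even at the grid points closest to the origin.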
-
http://mathoverflow.net/questions/70339?sort=votes
## Stone-Weierstrass theorem applied to Fourier series
This is a question on Fourier series convergence. The problem is that, in the applications of the Stone-Weierstrass approximation theorem on Wikipedia, it is stated that as a consequence of the theorem the space of trigonometric polynomials is dense (with the sup norm) in the space of continuous functions on [0,1], i.e. for every continuous function its Fourier series converges. This boggles me: isn't continuity not enough for the convergence (let alone uniform convergence) of a Fourier series? What about du Bois-Reymond's [and many others'] example of a continuous function whose Fourier series fails to converge at a point?
-
Nothing tells you that the sequence of trigonometrical polynomials you can choose for approximating your function uniformly is in any way related to the Fourier series, and indeed it can't be in general, as you argue. – Theo Buehler Jul 14 2011 at 16:03
The Stone-Weierstrass theorem also tells us that the space of ordinary polynomials is dense in $C[0,1]$. "But wait - aren't there smooth functions whose Taylor series doesn't converge pointwise?" – Paul Siegel Jul 14 2011 at 16:21
@Paul: Yes. A great advantage of the Fejer kernel is that it is nonnegative, making estimates easy. That bad old Dirichlet kernel is much worse to work with. – Gerald Edgar Jul 14 2011 at 16:26
en.wikipedia.org/wiki/Fej%C3%A9r%27s_theorem – Terry Tao Jul 14 2011 at 16:32
A natural analogue is the classical Weierstrass theorem. Do the Taylor polynomials necessarily approximate even a fairly smooth function? No, but the Bernstein polynomials do! (But not terribly well, admittedly.) – Nikita Sidorov Jul 14 2011 at 18:27
## 2 Answers
The key point is that you're confusing uniform convergence and $L^2$ convergence; indeed, as $\mathcal{C}([0;1])$ is both a subspace of $\mathcal{B}([0;1])$ with $|.|_\infty$ and of $L^2([0;1])$ with $|.|_2$, you get two norms on the same vector space.
But as it isn't a finite-dimensional space, it can have non-equivalent norms; and indeed, those two norms definitely aren't equivalent, which in particular means that a sequence which has good behaviour for the $L^2$ norm (the partial sums of the Fourier series) doesn't necessarily have good $|.|_\infty$ behaviour.
EDIT: I should have said a little more; there's an obvious inequality between the two norms (the mean inequality), so they are not that unrelated. But there is no reverse inequality, as can be shown by considering a sequence of piecewise linear functions: for $n\in\mathbb N$, define $f_n$ by $t\mapsto n^\alpha-n^{\alpha+\beta}t$ on $[0;n^{-\beta}]$ and zero elsewhere; if you choose $\alpha,\beta>0$ carefully, you'll get a sequence which converges to zero in the $L^2$ norm but does not converge uniformly.
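With $\alpha=1$, $\beta=3$ (so that $2\alpha-\beta<0$), the norms of the piecewise-linear sequence above come out in closed form: $\|f_n\|_\infty=n^\alpha$ while $\|f_n\|_2^2=n^{2\alpha-\beta}/3$. A quick numerical illustration:

```python
import numpy as np

alpha, beta = 1.0, 3.0            # any alpha, beta > 0 with 2*alpha < beta works

for n in (1, 2, 4, 8, 16):
    sup = n**alpha                              # f_n(0) = n^alpha
    l2 = np.sqrt(n**(2*alpha - beta) / 3.0)     # exact: integral of f_n^2 = n^(2a-b)/3
    print(n, sup, l2)             # sup norm grows while the L2 norm shrinks

# cross-check the closed form for the L2 norm by numerical integration
n = 8
t = np.linspace(0.0, n**(-beta), 100_001)
fvals = n**alpha - n**(alpha + beta) * t        # f_n on its support [0, n^-beta]
h = t[1] - t[0]
l2_num = np.sqrt(h * (fvals[0]**2 / 2 + (fvals[1:-1]**2).sum() + fvals[-1]**2 / 2))
print(l2_num)                     # approx sqrt(1/24) = 0.204...
```

So the sequence converges to zero in $L^2$ while its sup norm blows up: the two norms cannot be equivalent.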
-
The density of trigonometric polynomials in $C[0,1]$ with respect to the sup norm does not imply that the Fourier Series of some $f\in{}C[0,1]$ must converge pointwise.
Let $e_k:=e^{2\pi{}ikx}$ for $k\in{}\mathbb{Z}$. Then it can be shown that
1. $\{e_k\mid k\in\mathbb{Z}\}$ is an orthonormal basis for $L^2[0,1]$ with respect to the $L^2$ norm. That is, $\langle e_j,e_k\rangle=\delta_{jk}$ and the span of the $e_k$ is dense in $L^2[0,1]$.
2. If $f\in{}L^2[0,1]$ and $V_n:=\operatorname{span}\{e_k\mid k=-n,\dots,n\}$, then the $n$th partial sum of the Fourier series of $f$, $P_{n}f:=\sum_{k=-n}^{n}\langle f,e_k\rangle e_k$, is the $L^2$ projection of $f$ onto $V_n$; i.e., for any $g\in{}V_n$ we have $\|P_{n}f-f\|_{2} \leq \|g-f\|_{2}$.
So the partial sums of a Fourier Series are a good approximation of a general $L^2$ function, and hence of $C[0,1]$ function, but only in the $L^2$ sense. To get pointwise convergence, one needs a stronger condition than continuity (e.g. differentiability), as you pointed out.
What goes wrong in an attempted proof? One would like to argue that
1. if for some trigonometric polynomial $p\in{}V_n$ and $f\in{}C[0,1]$ we have $||p-f||<\varepsilon$ (sup norm), then for the nth partial sum $P_{n}f$, $||P_{n}f-f||<\varepsilon$.
2. $||P_{n+1}f-f||\leq{}||P_{n}f-f||$ for all $n$
The fact (2) above facilitates these arguments in the case of the $L^2$ norm, but not for the sup norm.
I hope this was helpful.
-
http://math.stackexchange.com/questions/202113/proof-of-chinese-remainder-theorem-for-ring
# proof of chinese remainder theorem for ring
Let $R$ be a ring (not necessarily with identity) and let $I,J$ be ideals of $R$ such that $I+J=R$. I want to prove that for any $r,s\in R$ there is an $x\in R$ such that $$x\equiv r \ ({\rm mod}\ I), \quad x \equiv s \ ({\rm mod}\ J).$$
Here is my attempt: since $I+J=R$, $$r=r_i+r_j,\quad s=s_i+s_j \quad \mbox{for some}~~~r_i,s_i\in I,\ r_j,s_j\in J.$$ Let $x=r_j+s_i$. Then $$x-r=s_i-r_i\in I, \quad x-s=r_j-s_j\in J.$$ Thus, $$x\equiv r \ ({\rm mod}\ I), \quad x \equiv s \ ({\rm mod}\ J).$$
Is this proof wrong? I can't find a mistake.
-
## 2 Answers
You did everything correctly. The proof seems right to me.
-
You're not in a vector space. The hypothesis that $I$ and $J$ are comaximal ideals, i.e. that $I+J=R$, does not imply that $\forall x \in R, \exists x_i \in I, x_j \in J$ such that $x=x_i+x_j$. What it [only] implies is $\exists a_i \in I, b_j \in J$ such that $a_i+b_j=1$ (assuming $R$ has an identity). If you multiply both sides by $x$, each term ends up in $I$ or $J$ because of the very definition of an ideal and because of the property $I+J=R$.
-
Let $a_i + b_j = 1$. Multiply both sides by $x$, you get $xa_i+xb_j=x$. Now $xa_i$ is in $I$ and $xb_j$ is in $J$, because $I$ and $J$ are ideals. That is, for every $x\in R$ you get $x_i\in I$ and $x_j\in J$ such that $x_i+x_j=x$. – Gregor Bruns Sep 25 '12 at 10:38
my mistake, you are right. – mak Sep 25 '12 at 10:40
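For $R=\mathbb{Z}$ with $I=(m)$, $J=(n)$ and $\gcd(m,n)=1$, the element $a_i+b_j=1$ from the comment above is exactly what the extended Euclidean algorithm delivers, and the question's construction $x = s\,a_i + r\,b_j$ becomes concrete. A small sketch:

```python
def ext_gcd(a, b):
    """Return (g, u, v) with u*a + v*b = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, u, v = ext_gcd(b, a % b)
    return g, v, u - (a // b) * v

def crt(r, m, s, n):
    """Find x with x = r (mod m) and x = s (mod n), for coprime m, n."""
    g, u, v = ext_gcd(m, n)
    assert g == 1, "ideals (m) and (n) must be comaximal, i.e. gcd(m, n) = 1"
    a_i, b_j = u * m, v * n       # a_i in (m), b_j in (n), a_i + b_j = 1
    return (s * a_i + r * b_j) % (m * n)

x = crt(2, 5, 3, 7)
print(x)   # 17: indeed 17 = 2 (mod 5) and 17 = 3 (mod 7)
```

Note the crossover in the formula: the $(m)$-part of $1$ multiplies $s$ and the $(n)$-part multiplies $r$, mirroring the choice $x=r_j+s_i$ in the original question.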
http://cms.math.ca/10.4153/CJM-2006-042-0
Canadian Mathematical Society
www.cms.math.ca
# Partial $*$-Automorphisms, Normalizers, and Submodules in Monotone Complete $C^*$-Algebras
http://dx.doi.org/10.4153/CJM-2006-042-0
Canad. J. Math. 58(2006), 1144-1202
Published:2006-12-01
Printed: Dec 2006
• Masamichi Hamana
## Abstract
For monotone complete $C^*$-algebras $A\subset B$ with $A$ contained in $B$ as a monotone closed $C^*$-subalgebra, the relation $X = AsA$ gives a bijection between the set of all monotone closed linear subspaces $X$ of $B$ such that $AX + XA \subset X$ and $XX^* + X^*X \subset A$ and a set of certain partial isometries $s$ in the "normalizer" of $A$ in $B$, and similarly for the map $s \mapsto \mathrm{Ad}\,s$ between the latter set and a set of certain "partial $*$-automorphisms" of $A$. We introduce natural inverse semigroup structures in the set of such $X$'s and the set of partial $*$-automorphisms of $A$, modulo a certain relation, so that the composition of these maps induces an inverse semigroup homomorphism between them. For a large enough $B$ the homomorphism becomes surjective and all the partial $*$-automorphisms of $A$ are realized via partial isometries in $B$. In particular, the inverse semigroup associated with a type ${\rm II}_1$ von Neumann factor, modulo the outer automorphism group, can be viewed as the fundamental group of the factor. We also consider the $C^*$-algebra version of these results.
MSC Classifications: 46L05 - General theory of $C^*$-algebras 46L08 - $C^*$-modules 46L40 - Automorphisms 20M18 - Inverse semigroups
http://physics.stackexchange.com/questions/53742/frequency-of-an-electron
# Frequency of an Electron
My question is very simple. If frequency is defined as the number of cycles per unit time, then what is meant by the "frequency of an electron"? If the rotation of the electron around a nucleus is considered, then which phenomenon do we consider for a free electron, i.e. a lone electron in a force field?
EDIT: Is "Frequency of an Electron" a experimental quantity?
EDIT AFTER HAVING 2 ANSWERS:
My teacher told me how to calculate the frequency of an electron. We started by finding the energy of the electron, then the difference in energies; then, from Bohr's model of the hydrogen atom and his postulates, we get this equation:

$$f = \frac{2\pi^2 m z^2 e^4}{h^3} \left(\frac{1}{n_1^2} - \frac{1}{n_2^2}\right)$$
Where:
• $z =$ atomic number
• $e =$ charge of the proton
• $m =$ mass of the electron
• $h =$ Planck's constant
• $n =$ orbit number
The last part of the equation confuses me. Do $n_1$ and $n_2$ mean that this frequency is the frequency of the emitted energy, or of the electron itself?
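For scale, plugging SI constants into the formula above (in SI units, $e^4$ must be read as $(e^2/4\pi\varepsilon_0)^2$) gives the Rydberg transition frequencies, i.e. the frequency of the radiation emitted in the jump $n_2 \to n_1$, not a rotation rate of the electron:

```python
import math

m = 9.1093837e-31       # electron mass, kg
e = 1.602176634e-19     # elementary charge, C
h = 6.62607015e-34      # Planck's constant, J*s
k = 8.987551787e9       # Coulomb constant 1/(4 pi eps0); e^4 in SI reads (k e^2)^2

def transition_frequency(z, n1, n2):
    """Frequency of the radiation emitted in the Bohr-model jump n2 -> n1."""
    return (2 * math.pi**2 * m * z**2 * (k * e**2)**2 / h**3) * (1/n1**2 - 1/n2**2)

f = transition_frequency(1, 1, 2)
print(f)   # about 2.47e15 Hz, the hydrogen Lyman-alpha line
```

The prefactor $2\pi^2 m z^2 (ke^2)^2/h^3$ evaluates to about $3.29\times10^{15}\,$Hz for $z=1$, which is the Rydberg frequency.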
-
I don't think the term "frequency of an electron" has any intrinsic meaning. You would need to consider the context to work out what it means. Can you give us a link to the document where you found the phrase? – John Rennie Feb 12 at 16:32
Hi @devWaleed: Instead of flagging your question for deletion as you did, you can delete it yourself. – Qmechanic♦ Feb 12 at 16:39
@JohnRennie I thought, "Frequency of an Electron" is a property or quantity which is available. But if you say there is no such thing, Now I am clear now. -Thanks. – devWaleed Feb 12 at 16:41
@devWaleed: Do you mean the Compton frequency of the electron or the Rydberg frequency of the Hydrogen atom? – Qmechanic♦ Feb 12 at 17:08
Frequency is a physical thing, but is hard for our feeble minds to interpret. If psi = e^(i(kx - wt)), then the electron will oscillate through time and space, and will oscillate in the complex plane. Impossible to visualize, but if you combine electrons with different such phases, one can calculate and observe physically destructive and constructive interference occurring, after one looks at the observable quantity, the amplitude. – Chris Feb 12 at 23:29
## 2 Answers
Since you used the tag wave-particle-duality, I imagine you mean the frequency $f$ that corresponds to an electron's energy $E$ via Planck's relation, $$E=hf,$$ where $h$ is Planck's constant. That is a valuable question and nothing to get picked on for. After all, if the electron is a wave with wavelength and so on, it surely has a frequency, right?
It turns out that this frequency is not very easy to measure. The reason for this is that the electron "wave" is usually complex-valued. That is, the thing that oscillates is a complex number $\psi=a+ib$, usually called its wavefunction. The real and imaginary parts of this wavefunction "rotate" into each other: $\psi$ will be real, then imaginary, then negative real, then negative imaginary, then real again, and so on and so on, in a continuous fashion. The frequency you're asking about is the frequency at which this happens.
Unfortunately, we are only ever able to directly measure the modulus of $\psi$, i.e. quantities of the form $|\psi|^2=a^2+b^2$, and this is constant even though $a$ and $b$ are oscillating. Schemes to try and measure $\psi$ in some (indirect) way are some of the most interesting measurements in quantum mechanics.
In this case there is a second problem which is also quite interesting: the fact that only differences in energy can have physical meaning. Thus, to ever measure the frequency$\leftrightarrow$energy of a particle, we need to compare it with a second particle with a different frequency$\leftrightarrow$energy, and then measure the difference in frequencies$\leftrightarrow$energies. This will be present as a "beat" in the wavefunction, as we add together two complex numbers that are rotating at different frequencies, and it is in principle possible (though damned hard!) to measure.
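A tiny numerical illustration of this "beat": each phasor alone has constant modulus, but the modulus of their sum oscillates at the difference frequency. The energies here are arbitrary illustrative values in natural units with $\hbar=1$:

```python
import numpy as np

hbar = 1.0                         # natural units for the illustration
E1, E2 = 1.0, 1.5                  # two energies; only E2 - E1 is observable
t = np.linspace(0, 50, 5000)

psi1 = np.exp(-1j * E1 * t / hbar)
psi2 = np.exp(-1j * E2 * t / hbar)

# each wavefunction alone has constant modulus: the rotation is invisible
assert np.allclose(np.abs(psi1)**2, 1.0)

# the superposition beats at angular frequency (E2 - E1)/hbar
prob = np.abs(psi1 + psi2)**2      # = 2 + 2*cos((E2 - E1) t / hbar)
expected = 2 + 2 * np.cos((E2 - E1) * t / hbar)
print(np.max(np.abs(prob - expected)))   # ~ 1e-15: the identity holds
```

Shifting both energies by the same constant leaves `prob` unchanged, which is exactly the statement that only energy differences are observable.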
-
you mean 4 values for 4 dimensions? – devWaleed Feb 12 at 17:54
No. The process is continuous and $\psi$ describes a circle in the complex plane. There is nothing special about those four values. – Emilio Pisanty Feb 13 at 11:06
I am not sure I understand your question clearly, but here are some ideas trying to cover as many cases as possible:
For the electron in the first Bohr orbit in the hydrogen atom: The frequency of its rotational motion is the number of times it rotates around the proton in one second, and it is about
$f=6.58\times 10^{15}\ s^{-1}.$
In a uniform magnetic field: For an electron that has entered a uniform magnetic field of flux density B, depending on the speed $v$ of the electron, the magnetic field can put it into a circular orbit with frequency that can be found using these two equations
$Bev=\frac{mv^2}{r}$
which is the balance equation between the magnetic and centripetal forces, and
$v=2\pi fr$
which is from the circular motion of the electron at uniform speed $v$. These two lead to the equation
$f={\frac {Be}{2\pi m}}$.
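Numerically, $f=Be/2\pi m$ works out to roughly 28 GHz per tesla for an electron; a one-line check:

```python
import math

e = 1.602176634e-19   # elementary charge, C
m = 9.1093837e-31     # electron mass, kg

def cyclotron_frequency(B):
    """f = Be/(2 pi m): revolutions per second of an electron in a field of B tesla."""
    return B * e / (2 * math.pi * m)

print(cyclotron_frequency(1.0))   # about 2.8e10 Hz, i.e. roughly 28 GHz per tesla
```

Note that the frequency is independent of the electron's speed: a faster electron simply moves on a larger circle, as the two equations above show.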
For an electron in a piece of wire carrying an electric current at 50Hz, say: the electron oscillates at 50Hz (i.e. it goes back and forth, and it does this 50 times per second).
For a free electron: The frequency is of a quantum mechanical nature. It relates to the wave function of the electron
$\psi(\mathbf r,t)=u(p)e^{i({\bf p}\cdot{\bf r}-Et)/\hbar}$.
Note that in the above equation $E/h$ is the frequency of rotation of the phasor (the exponential part); it does not mean the electron goes back and forth that many times per second. So the larger the energy, the larger the frequency of rotation of the phasor, and hence of the wave function of the electron. For a relativistic electron, the energy is
$E=c\sqrt{p^2+m_o^2c^2}$
so that the frequency is given by
$f=c\sqrt{p^2+m_o^2c^2}/h$,
hence the origin of the phase $\omega t$, with $E=\hbar\omega$, in the phasor of the wave function representing an electron.
I hope this helps.
-
ok, you understood what I was asking, but the real question is: if a particle is vibrating at 50 hertz, it means it will go back and forth 50 times in a second. Then what is meant by the frequency of an electron? Does the electron vibrate? Or is its revolving around the nucleus considered its frequency? – devWaleed Feb 12 at 17:35
@devWaleed Well, still I am not sure I understand your question completely, but I have edited my answer to cover as many possibilities as possible, and you need to decide which one of these fits your real question. – JKL Feb 12 at 19:32
http://crypto.stackexchange.com/questions/6114/ggm-prg-construction-why-do-we-need-to-change-keys-all-the-time-with-underlying/6141
# GGM PRG construction: Why do we need to change keys all the time with underlying PRG
When constructing a PRF with an $n$-bit input using the GGM PRG, why do we always have to run the PRG recursively $n$ times, using its previous output as the seed key? Instead, why don't we run the PRG $n$ times and use that as the output?
Any answers would be great. My assumption is that it has something to do with producing independent random blocks, but surely a secure PRG would satisfy this?
-
"why don't we run the PRG n times, and use that as the output." It's not clear what you mean by this. What exactly is the input to the PRG each time? Remember that the input & output lengths of the PRG are different. And how are you proposing to incorporate the $n$ bits (i.e., $2^n$ possibilities) of the input into the computation? – Mikero Jan 25 at 6:24
## 2 Answers
Short answer: I'm pretty sure that your suggestion would "work," but I don't think that it would be better than the GGM approach from either a performance or a security point of view.
Long answer: As Mikero suggests, we have to be careful about the input and output lengths of the PRGs. The GGM construction starts with a family of pseudorandom generators whose output length is 1 more than its input length:
$$G_n : \{0,1\}^n \to \{0,1\}^{n+1}$$
The goal is to produce a family of PRGs with bigger expansion. For the sake of concreteness, let's say that you want a new family of functions that doubles the length of its input:
$$F_n : \{0,1\}^n \to \{0,1\}^{2n}$$
The GGM construction accomplishes this by using $G_n$ a total of $2n$ times to produce the corresponding $F_n$. It sounds like you want to do things slightly differently. If I understand your question correctly, you want to apply $G_n$ to the input in order to get $n+1$ bits, and then apply $G_{n+1}$ to that in order to get to $n+2$ bits, and so on. Thus, you'll set
$$F_n (x) = G_{2n-1}(G_{2n-2}(\cdots G_n(x)\cdots)).$$
I'm pretty sure that you could prove this secure through a simple hybrid argument. However, from a theoretical point of view, I think that I'd prefer the GGM method for a few reasons:
1. The PRGs with bigger-input-lengths, like $G_{2n-1}$, could be much slower than the initial PRG $G_n$. Remember, each function in the family runs in time polynomial in its own input length.
2. The hybrid argument will have a rather large error term in the indistinguishability claim. With the GGM approach, you're using $n$ pseudorandom bits each time and you're claiming that they are indistinguishable from $n$ random bits. With your suggestion, things are worse: you're using more pseudorandom bits each time ($n+1$, $n+2$, and so on) and claiming that they are indistinguishable from the corresponding number of random bits.
In practice, it is unlikely that you'd want to use either the GGM construction or your own when building a PRG. However, if for some strange reason you wanted to do so, I think the GGM method is better again, but for a more subtle reason. I'm not sure what initial PRG you'd use in the construction, but I'm assuming it would be something like the assumption "I'm willing to believe that SHA1 is a PRG when used on 159-bit inputs." (HUGE caveat: I'm not suggesting that you should actually make this assumption! It just seems like the most logical way to use a GGM-style approach in practice.) What this really means is that you think
$$SHA1 : \{0,1\}^{159} \to \{0,1\}^{160}$$
somehow is part of a mythical family of PRGs, even if you don't know what the other members of this family are. Under this assumption, I suppose you could use the GGM construction on SHA1 in order to make bigger PRGs. Your method wouldn't work, though, because we don't know the other members of this "mythical family", so we cannot use them in a construction.
Sorry for the long-winded answer; I hope it made sense.
-
Yes, I came to this conclusion myself. Thank you so much for putting so much effort into the answer btw, I really appreciate this. – Barry Steyn Jan 26 at 18:33
Okay, I think I know why. If you have an $n$-bit input and you wanted to precompute the PRG outputs directly, you would need to run it up to $2^n$ times (I made a mistake by thinking you could run the PRG $n$ times). Whereas if you change the key and run the PRG in a recursive manner, using the output of the last PRG as input, then you only need to run it $n$ times, once per input bit.
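To make the counting concrete, here is a minimal sketch of the GGM tree walk. SHA-256 with a domain-separation byte stands in for a length-doubling PRG purely for illustration (it is not a proven PRG). Evaluating the PRF at one $n$-bit input costs $n$ PRG calls, even though the tree has $2^n$ leaves:

```python
import hashlib

def prg(seed: bytes) -> tuple:
    # Length-doubling PRG stand-in: a 32-byte seed -> two 32-byte halves.
    return (hashlib.sha256(seed + b"\x00").digest(),
            hashlib.sha256(seed + b"\x01").digest())

def ggm_prf(key: bytes, input_bits: str) -> bytes:
    # Walk the GGM tree from the root: one PRG call per input bit.
    k = key
    for bit in input_bits:
        left, right = prg(k)
        k = left if bit == "0" else right
    return k
```

The same key with the same input path is deterministic, while any change to the path lands on a different leaf.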
-
http://mathoverflow.net/questions/51294/supervenience-in-mathematics/51295
## Supervenience in mathematics
I'm not quite sure if this is the right place to ask, and if this is the right way to ask, but I dare.
In philosophy (of mind, e.g.) the concept of supervenience is used:
"Supervenience [is] used to describe relationships between sets of properties in a manner which does not imply a strong reductive relationship."
That means an object might possess higher properties that depend on some base properties, but cannot be reduced to (defined by) them.
My question is: Can this situation occur in mathematics?
As I see it, for every mathematical object - be it a set with a structure or a vertex in an abstract graph or an object in a category - all its (relevant) properties are determined by its inner structure or its relations/morphisms to other objects. To me it seems inconceivable that a mathematical object can have any extra properties, let alone in a supervenient manner, that is, two isomorphic objects would have to share them.
But maybe I'm wrong. Can anyone point me to an example?
-
Disregarding the quote in your post, which provides more of an example for how philosophers' language could profit from some clarity or at least some well-defined terminology, the notion of "supervenience" as defined in Wikipedia has plenty of applications in mathematics. For example, Tannaka duality is about in how far an algebraic group is determined by its category of representations. – darij grinberg Jan 6 2011 at 12:14
What about the answers to mathoverflow.net/questions/10993/… ? Isn't it supervenience if you can prove an equivalence abstractly (and so definable in terms of low-level properties) but not calculate anything? – Daniel Moskovich Jan 6 2011 at 12:59
I don't see the point of having this discussion until someone provides a clearer and more precise definition of supervenience. – Qiaochu Yuan Jan 6 2011 at 14:00
@Joel: all the more reason for this discussion not to take place on MO, then! – Qiaochu Yuan Jan 6 2011 at 15:19
Well, I'm sorry you feel that way. I view philosophical questions about the foundations of mathematics, particularly those involving a technical concept, as on-topic for MO. My objection to your remark above is that there are dozens of competing precise proposals for the meaning of supervenience. Perhaps the situation is like the use of the terms "explicit" or "canonical", often used on MO in vague ways, even though these terms have a variety of competing precise formulations. But these terms, like "supervenience", have a mathematical nature that can be usefully discussed by mathematicians. – Joel David Hamkins Jan 6 2011 at 15:49
## 5 Answers
I want to try another answer, not because I think you will necessarily accept it, but because if you don't then I think your reasons for not doing so will clarify the question.
One definition of supervenience is that A supervenes on B if you can't have a change to A without a change to B. For instance, some people hold that the mind supervenes on the physical properties of the brain because you could not have two distinct mental states arising from brains that were physically identical. (Others dispute this, but that does not matter here.)
Now consider the halting property. That clearly supervenes on the specification of the given Turing machine (encoded as a sequence of 0s and 1s, say), since if one Turing machine halts and another doesn't, then they can't have identical specifications. But it's not clear that there's any sense in which the property of halting or otherwise is reducible to the specification of the Turing machine.
-
To support your final point, consider that one may write down a specific Turing machine program for which the question of whether that program halts or not depends on the set-theoretic universe in which this question is asked. Thus, the question of halting or not depends not just on the program itself, but on the nature of the set-theoretic universe. The program is simply the program that searches for a contradiction in ZFC. (Or some other strong theory, such as ZFC+large cardinals, if one prefers.) – Joel David Hamkins Jan 8 2011 at 18:25
@Timothy: I have to accept this answer, because it hits the bull's eye. That's exactly the direction in which I proceeded thinking. I'll come up with a related question, soon. Thank you for having tried to understand! (Also to Joel and Todd!) – Hans Stricker Jan 8 2011 at 21:51
@Timothy: Please have a look at this question: mathoverflow.net/questions/51506/… – Hans Stricker Jan 9 2011 at 0:23
@Joel: Whether such a program halts or not does not depend on the set-theoretic universe in which this question is asked in the nontrivial sense you seem to be implying; "T is true of the ambient set-theoretic universe" and "T has no contradictions (according to the ambient set-theoretic universe)" are hardly the same thing. – Sridhar Ramesh Jun 23 2011 at 21:18
To my way of thinking, the most natural example of supervenience in mathematics---and the most similar to how this term is used in the philosophy of mind, where one uses it to describe the relation of the higher order properties, such as features of the mind, to the lower order properties, such as molecular structures in the brain---is provided by the sense in which set theory is viewed as forming a foundation for mathematics.
On that view of the foundations of mathematics (and there are many other views), the set-theoretic universe is seen to provide an ontological foundation for mathematics, in the sense that every mathematical object is regarded fundamentally as a set. One builds the natural numbers from sets as ordinals and then the integers and the rationals and the reals in any of the usual set-theoretic constructions; a group is a set with a binary operation (a set) having certain properties; a topological space is a set together with a set of subsets having a certain nature; and so on. On this view, every mathematical object is regarded as a set and the context of set theory is taken to provide a common forum in which to treat mathematical objects and constructions from what would otherwise be diverse forums. The existence of such a common forum allows us sensibly to apply knowledge from one area of mathematics to arguments in a distantly related area, and this is important.
So the view is that the basic features of the reals or of any mathematical object ultimately reduce to set theory in the sense that that object is fundamentally a set. But meanwhile, although this reduction of mathematics to set theory is important foundationally (and a number of intriguing or even startling conclusions about ZFC-independence and paradox in non-set-theoretic contexts result), the main view is also that the set-theoretic reduction is largely irrelevant for ordinary mathematics. We don't want to undertake most arguments in number theory or algebraic geometry or whatever with constant reference to the complete set-theoretic reduction of the subject, for example, by speaking of the "elements" of $\pi$. Thus, mathematics can be seen to reduce to set theory, but for most higher level mathematics, this reduction is either very complicated or not seen as illuminating of the interesting mathematical phenomenon at hand.
This relation seems very similar to the relation between our current understanding of mental properties and molecular structures in the brain. In principle, we believe that there is a reduction, but that reduction is either very complicated or not particularly illuminating of the mental phenomenon. We seem to fulfill the following analogy:
``` Higher-order Higher-order
mental features mathematical objects
and properties and relations
----------------- ------------------
molecular structure sets and the
of the brain membership relation
```
So this situation seems to accord accurately with your description of supervenience.
Addendum. Let me also mention another sense of supervenience, related to the point made by Gowers, in his second paragraph. The truth of a universal statement $\forall n\ \varphi(n)$ in arithmetic, say, reduces to the instances $\varphi(0), \varphi(1),\varphi(2)$, and so on. But by the Compactness theorem, one cannot prove the universal statement merely from those assertions in first order logic. Thus, the truth of $\forall n\ \varphi(n)$ would seem to supervene on those instances in the sense of the question. We don't prove a universal statement by proving each instance separately.
-
Please see my answer to your comment on my question. – Hans Stricker Jan 6 2011 at 13:35
@Joel: Please have a look at this question: mathoverflow.net/questions/51506/… – Hans Stricker Jan 9 2011 at 0:24
It seems to me that even if the exact philosophical notion doesn't quite apply to mathematics, there are other notions, similar but a bit more precise, that do. For example, mathematical structures can have high-level properties that are definable in terms of low-level properties but are not easily computable. In that case, it may be that the reduction, even though it exists, is not useful. This seems to me to be fairly like a physical example such as the difficulty of defining what a liquid is in microscopic terms.
What I'm getting at (but also slightly struggling with) is that reductionism has its defects in mathematics just as it does in philosophy. To give an example, to prove the prime number theorem one doesn't break it up into lots of small statements, prove those, and put them together again. That would be quite impossible, given that the largest known prime is very finite. Rather, one somehow looks at all the primes at once. It's tempting to say that the true meaning of the prime number theorem is not that all those numbers out there are prime, but rather a more global statement about density that supervenes on the properties of the numbers themselves.
-
I guess supervenience means more than "not easily computable", but "not computable at all". – Hans Stricker Jan 6 2011 at 12:07
That's why I wrote "similar" rather than "analogous". I haven't quite given up hope of thinking of an example of true mathematical supervenience but I don't have one at the moment. – gowers Jan 7 2011 at 9:16
I see something like the Prime Number Theorem - quite simply - as a property of an object, e.g. of $\mathbb{N}$ (with its structure). Since the theorem is statable and provable, the property is definable and decidable and thus not supervenient (in the narrower sense of "irreducible"). – Hans Stricker Jan 7 2011 at 9:30
Perhaps Goodstein's theorem (which asserts that every Goodstein sequence eventually hits zero) is an example of the phenomenon, in the following sense. The universal claim would seem to be supervenient on the individual instances, because the universal claim amounts to the sum total of all those instances, but although PA proves all the individual instances, it does not prove the universal claim. – Joel David Hamkins Jan 8 2011 at 18:21
When I read
To me it seems unconceivable that a mathematical object can have any extra properties, let alone in a supervenient manner, that is, two isomorphic objects would have to share them.
I first of all agreed that mathematics should be like that, but the potential counterexample that sprang to mind is set theory, specifically membership-based set theory with urelements.
By "membership-based set theory", I mean the traditional sort of set theory like ZF which is based on a membership relation $\in$; I do not mean a structural or categorical set theory like Lawvere's Elementary Theory of the Category of Sets. This comes to mind because two sets that are "isomorphic", which in the first instance might mean there is a bijection between them, may have very different set-theoretic properties; for example they may have different ranks.
Upon further reflection, I felt this notion of isomorphism as bijection could be a loaded way to interpret "relation to other objects" and that one should look further to the structure of membership trees. Are two sets with the same inner structure (i.e., that have isomorphic membership trees) distinguishable as sets? In ordinary ZF, I believe one can prove by recursion that two sets with isomorphic membership trees are in fact equal. But this is not the case in a set theory with urelements. If we permit ordinary objects to be urelements of sets, then I can't think of any mathematical properties based on $\in$ alone which might distinguish two sets with three urelements each (thinking of say a box with three cats and another with three dogs), although we'd certainly want to distinguish them. My admittedly cursory reading leads me to believe that set theorists who work with New Foundations take this possibility seriously; I was glancing in particular at this Wikipedia article. I'd be happy to hear from set theorists who derive a different conclusion.
-
@Todd: Please have a look at this question: mathoverflow.net/questions/51506/… – Hans Stricker Jan 9 2011 at 0:24
I think that an interesting phenomenon analogous to the relation of informal mathematics to its set theoretical foundations described by Joel David Hamkins is the relation between those meta-arithmetical notions, theorems, and proofs that we formulate with the help of Gödel numbering. Actually, it is a perplexing fact that we can establish the truth of arithmetical theorems without having the faintest idea of their arithmetical content. For example, we know that the Gödel sentence of a consistent theory is true. The statement carrying this metamathematical content is actually a sentence of the language of arithmetic. But, obviously, its arithmetical content is incomprehensible to any human being.
-
http://physics.stackexchange.com/questions/43771/find-the-quantity-of-charge-given-potential-function?answertab=active
Find the quantity of charge - given potential function
A potential function is given by $V(r)=\frac{Ae^{-\lambda r}}{r}$. Find the charge density and hence the charge.
I first took the negative gradient of the potential, $\vec{E}=-\nabla V$, to get $\vec{E}(r)=\frac{Ae^{-\lambda r}}{r}[\lambda+\frac{1}{r}] \hat{r}$
Now using Gauss's law $\nabla \cdot \vec{E}=\frac{\rho}{\epsilon_0}$
$\implies \rho=-\epsilon_0 \frac{Ae^{-\lambda r}}{r}[\frac{1}{r^2}+\frac{\lambda}{r}+\lambda^2]$
Now to find the charge $Q=\int \rho 4\pi r^2 dr$ .
Did I do it correctly? What limits should I choose for the integration?
-
2 Answers
Gauss' law relates the integrated flux through a closed surface to the integrated charge inside it.
The symmetry of the situation lets you do the flux integral trivially, so I assume that you mean the volume one. Well, you integrate over the full angular ranges and from the center to the bounding surface.
-
At least in the first $r$-derivative, you missed some $\lambda$'s. Generally, always check your units when you derive solutions like that, as for example "$1+\frac{1}{r}$" just must be flawed.
And you'll solve your problem more directly using the Poisson equation $\Delta V(r)=-\frac{\varrho(r)}{\varepsilon_0}$, with the $\Delta$ written explicitly in spherical coordinates.
To take a shortcut here, your $V(r)$ looks like a Yukawa potential, i.e. $$G(r)=\frac{\text{e}^{-\lambda r}}{4\pi\ r}$$ solves $(\Delta-\lambda^2)G(r)=-\delta(\vec r)$ and hence $$\Delta V(r)=4\pi A\left(-\delta(\vec r)+\frac{\lambda^2}{4\pi}\frac{\text{e}^{-\lambda r}}{r}\right)\equiv -\frac{\varrho(r)}{\varepsilon_0}.$$
Integration over all of space, if you want to find the total charge. If you have a density like $\propto r^{-n}$, then clearly there are charges everywhere. Remark: For integration, notice that you're dealing with a three dimensional delta-function here (but from the Coulomb potential, you know that it gives you a single charge anyway). For the exp-integration, up to some small numbers, you can figure out the result just by power counting in $\lambda$'s.
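As a cross-check of the smooth part (a sketch; the delta function at the origin must be handled by hand, since the naive radial Laplacian misses it), one can let sympy do the $r>0$ computation and the charge integral:

```python
import sympy as sp

r, lam, A, eps0 = sp.symbols('r lambda A epsilon_0', positive=True)
V = A * sp.exp(-lam * r) / r

# Radial Laplacian, valid for r > 0 only (misses the delta at the origin):
lapV = sp.diff(r**2 * sp.diff(V, r), r) / r**2
rho_smooth = sp.simplify(-eps0 * lapV)   # Poisson: rho = -eps0 * Laplacian(V)

# Smooth part of the total charge, integrated over all space:
Q_smooth = sp.integrate(rho_smooth * 4 * sp.pi * r**2, (r, 0, sp.oo))
# Q_smooth = -4*pi*eps0*A, exactly cancelled by the point charge
# +4*pi*eps0*A sitting in the delta at the origin, so the total charge is 0.
```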
-
Sorry I missed out $\lambda$ in my equation I've now edited that – Prakash Gautam Nov 9 '12 at 15:25
Sorry I am not much aware of delta function. Keeping $\delta(\vec r)$ in the result keeps the work of taking Laplacian still intact. So why should I prefer using $\delta(\vec r)$ in my answer of charge density? – Prakash Gautam Nov 9 '12 at 15:30
– Nick Kidman Nov 9 '12 at 18:17
http://mathbabe.org/2012/11/15/columbia-data-science-course-week-11-estimating-causal-effects/
# mathbabe
Exploring and venting about quantitative issues
## Columbia Data Science course, week 11: Estimating causal effects
November 15, 2012
This week in Rachel Schutt’s Data Science course at Columbia we had Ori Stitelman, a data scientist at Media6Degrees.
We also learned last night of a new Columbia course: STAT 4249 Applied Data Science, taught by Rachel Schutt and Ian Langmore. More information can be found here.
Ori’s background
Ori got his Ph.D. in Biostatistics from UC Berkeley after working at a litigation consulting firm. He credits that job with allowing him to understand data through exposure to tons of different data sets, since his job involved creating stories out of data so that experts could testify at trials, e.g. about asbestos. In this way Ori developed his data intuition.
Ori worries that people ignore this necessary data intuition when they shove data into various algorithms. He thinks that when their method converges, they are convinced the results are therefore meaningful, but he’s here today to explain that we should be more thoughtful than that.
It’s very important when estimating causal parameters, Ori says, to understand the data-generating distributions and that involves gaining subject matter knowledge that allows you to understand if you necessary assumptions are plausible.
Ori says the first step in a data analysis should always be to take a step back and figure out what you want to know, write that down, and then find and use the tools you’ve learned to answer those directly. Later of course you have to decide how close you came to answering your original questions.
Thought Experiment
Ori asks: how do you know if your data may be used to answer your question of interest? Sometimes people think that because they have data on a subject matter, they can answer any question.
Students had some ideas:
• You need coverage of your parameter space. For example, if you’re studying the relationship between household income and holidays but your data is from poor households, then you can’t extrapolate to rich people. (Ori: but you could ask a different question)
• Causal inference with no timestamps won’t work.
• You have to keep in mind what happened when the data was collected and how that process affected the data itself
• Make sure you have the base case: compared to what? If you want to know how politicians are affected by lobbyists money you need to see how they behave in the presence of money and in the presence of no money. People often forget the latter.
• Sometimes you’re trying to measure weekly effects but you only have monthly data. You end up using proxies. Ori: but it’s still good practice to ask the precise question that you want, then come back and see if you’ve answered it at the end. Sometimes you can even do a separate evaluation to see if something is a good proxy.
• Signal to noise ratio is something to worry about too: as you have more data, you can more precisely estimate a parameter. You’d think 10 observations about purchase behavior is not enough, but as you get more and more examples you can answer more difficult questions.
Ori explains confounders with a dating example
Frank has an important decision to make. He’s perusing a dating website and comes upon a very desirable woman – he wants her number. What should he write in his email to her? Should he tell her she is beautiful? How do you answer that with data?
You could have him select a bunch of beautiful women and, for half of them chosen at random, tell them they’re beautiful. Being random allows us to assume that the two groups have similar distributions of various features (note that’s an assumption).
Our real goal is to understand the future under two alternative realities, the treated and the untreated. When we randomize we are making the assumption that the treated and untreated populations are alike.
OK Cupid looked at this and concluded:
But note:
• It could say more about the person who says “beautiful” than the word itself. Maybe they are otherwise ridiculous and overly sappy?
• The recipients of emails containing the word “beautiful” might be special: for example, they might get tons of email, which would make it less likely for Frank to get any response at all.
• For that matter, people may be describing themselves as beautiful.
Ori points out that this fact, that she’s beautiful, affects two separate things:
1. whether Frank uses the word “beautiful” or not in his email, and
2. the outcome (i.e. whether Frank gets the phone number).
For this reason, the fact that she’s beautiful qualifies as a confounder. The treatment is Frank writing “beautiful” in his email.
Causal graphs
Denote by $W$ the list of all potential confounders. Note it’s an assumption that we’ve got all of them (and recall how unreasonable this seems to be in epidemiology research).
Denote by $A$ the treatment (so, Frank using the word “beautiful” in the email). We usually assume this to have a binary (0/1) outcome.
Denote by $Y$ the binary (0/1) outcome (Frank getting the number).
We are forming the following causal graph:
In a causal graph, each arrow means that the ancestor is a cause of the descendent, where ancestor is the node the arrow is coming out of and the descendent is the node the arrow is going into (see this book for more).
In our example with Frank, the arrow from beauty means that the woman being beautiful is a cause of Frank writing “beautiful” in the message. Both the man writing “beautiful” and the woman being beautiful are direct causes of her probability to respond to the message.
Setting the problem up formally
The building blocks in understanding the above causal graph are:
1. Ask a question of interest.
2. Make causal assumptions (denote these by $P$).
3. Translate question into a formal quantity (denote this by $\Psi(P)$).
4. Estimate quantity (denote this by $\Psi(P_n)$).
We need domain knowledge in general to do this. We also have to take a look at the data before setting this up, for example to make sure we may make the Positivity Assumption: we need treatment (i.e. data) in all strata of things we adjust for. So if we think gender is a confounder, we need to make sure we have data on women and on men. If we also adjust for age, we need data in all of the resulting bins.
What is the effect of ___ on ___?
This is the natural form of a causal question. Here are some examples:
1. What is the effect of advertising on customer behavior?
2. What is the effect of beauty on getting a phone number?
3. What is the effect of censoring on outcome? (censoring is when people drop out of a study)
4. What is the effect of drug on time until viral failure?, and the general case
5. What is the effect of treatment on outcome?
Look, estimating causal parameters is hard. In fact, the effectiveness of advertising is almost always ignored because it’s so hard to measure. Typically people choose metrics of success that are easy to estimate but that don’t measure what they want! Everyone makes decisions based on them anyway because it’s easier. This results in people being rewarded for finding people online who would have converted anyway.
Accounting for the effect of interventions
Thinking about that, we should be concerned with the effect of interventions. What’s a model that can help us understand that effect?
A common approach is the (randomized) A/B test, which involves the assumption that two populations are equivalent. As long as that assumption is pretty good, which it usually is with enough data, then this is kind of the gold standard.
But A/B tests are not always possible (or they are too expensive to be plausible). Often we instead need to estimate the effects in the natural environment, but then the problem is that the people in the different groups are actually quite different from each other.
So, for example, you might find you showed ads to more people who are hot for the product anyway; it wouldn’t make sense to test the ad that way without adjustment.
The game is then defined: how do we adjust for this?
The ideal case
Similar to how we did this last week, we pretend for now that we have a “full” data set, which is to say we have god-like powers and we know what happened under treatment as well as what would have happened if we had not treated, as well as vice-versa, for every agent in the test.
Denote this full data set by $X:$
$X = (W, A, Y^*(1), Y^*(0)),$ where
• $W$ denotes the baseline variables (attributes of the agent) as above,
• $A$ denotes the binary treatment as above,
• $Y^*(1)$ denotes the binary outcome if treated, and
• $Y^*(0)$ denotes the binary outcome if untreated.
As a baseline check: if we observed this full data structure, how would we measure the effect of A on Y? In that case we'd be all-powerful and we would just calculate:
$E(Y^*(1)) - E(Y^*(0)).$
Note that, since $Y^*(0)$ and $Y^*(1)$ are binary, the expected value $E(Y^*(0))$ is just the probability of a positive outcome if untreated. So in the case of advertising, the above is the conversion rate change when you show someone an ad. You could also take the ratio of the two quantities:
$E(Y^*(1))/E(Y^*(0)).$
This would be calculating how much more likely someone is to convert if they see an ad.
Note these are outcomes you can really do stuff with. If you know people convert at 30% versus 10% in the presence of an ad, that’s real information. Similarly if they convert 3 times more often.
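Here's what that god-mode calculation looks like on invented full data, using the conversion rates from the example (30% if treated, 10% if not); all numbers are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical "full" data: we see BOTH potential outcomes for every agent.
y1 = rng.random(n) < 0.30   # outcome if shown the ad
y0 = rng.random(n) < 0.10   # outcome if not shown the ad

additive_effect = y1.mean() - y0.mean()   # E(Y*(1)) - E(Y*(0)), about 0.20
relative_effect = y1.mean() / y0.mean()   # E(Y*(1)) / E(Y*(0)), about 3.0
print(additive_effect, relative_effect)
```

Both numbers are directly interpretable: showing the ad adds about 20 points of conversion, or equivalently triples the conversion rate.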
In reality people use silly stuff like log odds ratios, which nobody understands or can interpret meaningfully.
The ideal case with functions
In reality we don’t have god-like powers, and we have to make do. We will make a bunch of assumptions. First off, denote by $U$ exogenous variables, i.e. stuff we’re ignoring. Assume there are functions $f_1, f_2,$ and $f_3$ so that:
• $W = f_1(U_W),$ i.e. the attributes $W$ are just functions of some exogenous variables,
• $A = f_2(W, U_A),$ i.e. the treatment depends in a nice way on some exogenous variables as well the attributes we know about living in $W$, and
• $Y = f_3(A, W, U_Y),$ i.e. the outcome is just a function of the treatment, the attributes, and some exogenous variables.
Note the various $U$‘s could contain confounders in the above notation. That’s gonna change.
But we want to intervene on this causal graph as though we were making the intervention we actually care about, i.e. to ask: what's the effect of treatment $A$ on outcome $Y$?
Let’s look at this from the point of view of the joint distribution $P(W, A, Y) = P(W)P(A|W)P(Y|A,W).$ These terms correspond to the following in our example:
1. the probability of a woman being beautiful,
2. the probability that Frank writes her an email saying that she's beautiful, and
3. the probability that Frank gets her phone number.
What we really care about though is the distribution under intervention:
$P_a = P(W) P(Y_a| W),$
i.e. the probability knowing someone either got treated or not. To answer our question, we manipulate the value of $A,$ first setting it to 1 and doing the calculation, then setting it to 0 and redoing the calculation.
Assumptions
We are making a “Consistency Assumption / SUTVA” which can be expressed like this:

$Y = Y^*(A),$

i.e. the outcome we actually observe is the counterfactual outcome corresponding to the treatment actually received.
We have also assumed that we have no unmeasured confounders, which can be expressed thus:

$Y^*(a) \perp A \mid W \textrm{ for } a \in \{0,1\},$

i.e. within strata of the measured attributes $W$, treatment assignment is independent of the counterfactual outcomes.
We are also assuming positivity, which we discussed above:

$0 < P(A=1 \mid W=w) < 1 \textrm{ for all } w.$
Down to brass tacks
We only have half the information we need. We need to somehow map the stuff we have to the full data set as defined above. We make use of the following identity:

$E(Y^*(a)) = E_W(E(Y \mid A=a, W)).$
Recall we want to estimate $\Psi(P) = E(Y^*(1))/E(Y^*(0)),$ which by the above can be rewritten
$E_W(E(Y|A=1, W))/ E_W(E(Y|A=0, W)).$
We’re going to discuss three methods to estimate this quantity, namely:
1. MLE-based substitution estimator (MLE),
2. Inverse probability estimators (IPTW),
3. Double robust estimating equations (A-IPTW)
For the above models, it’s useful to think of there being two machines, called $g$ and $Q$, which generate estimates of the probability of the treatment knowing the attributes (that’s machine $g$) and the probability of the outcome knowing the treatment and the attributes (machine $Q$).
IPTW
In this method, which is also called importance sampling, we weight individuals who were unlikely to be shown an ad more heavily than those who were likely to see one. In other words, we up-sample the under-represented group in order to recover the distribution we would see under intervention, and with it an estimate of the actual effect.
To make sense of this, imagine that you’re doing a survey of people to see how they’ll vote, but you happen to do it at a soccer game where you know there are more young people than elderly people. You might want to up-sample the elderly population to make your estimate.
This method can be unstable if there are really small sub-populations that you’re up-sampling, since you’re essentially multiplying by a reciprocal.
The formula in IPTW looks like this:

$\Psi(P_n) = \frac{1}{n}\sum_{i=1}^n \frac{I(A_i = 1)\,Y_i}{g(1 \mid W_i)} - \frac{1}{n}\sum_{i=1}^n \frac{I(A_i = 0)\,Y_i}{g(0 \mid W_i)}.$
Note the formula depends on the $g$ machine, i.e. the machine that estimates the treatment probability based on attributes. The problem is that people get the $g$ machine wrong all the time, which makes this method fail.
In words, when $a=1$ we are taking the sum of terms whose numerators are zero unless we have a treated, positive outcome, and we’re weighting them in the denominator by the probability of getting treated so each “population” has the same representation. We do the same for $a=0$ and take the difference.
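Here's a minimal IPTW sketch on simulated, confounded toy data; the data-generating numbers are invented for illustration, and the $g$ machine is just the empirical treatment rate in each stratum of $W$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Simulated confounded data: the attribute W raises both ad exposure and conversion.
W = rng.random(n) < 0.5
true_g1 = np.where(W, 0.8, 0.2)          # hot prospects see ads far more often
A = rng.random(n) < true_g1
Y = (rng.random(n) < 0.05 + 0.10 * A + 0.20 * W).astype(float)

# The naive treated-vs-untreated comparison is confounded (true effect is 0.10):
naive = Y[A].mean() - Y[~A].mean()

# g machine: estimate P(A=1|W) empirically within each stratum of W.
g1 = np.where(W, A[W].mean(), A[~W].mean())

# IPTW estimate of E(Y*(1)) - E(Y*(0)): up-weight the rarely-treated.
psi = np.mean(A * Y / g1) - np.mean((1 - A) * Y / (1 - g1))
print(naive, psi)  # naive overstates the effect; psi is close to 0.10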
MLE
This method is based on the $Q$ machine, which as you recall estimates the probability of a positive outcome given the attributes and the treatment, i.e. the $P(Y|A,W)$ values.
This method is straightforward: shove everyone into the machine, predict how the outcome would look under both treatment and non-treatment, and take the difference.
Note we don't know anything about the underlying machine $Q$. It could be a logistic regression, for example.
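A minimal sketch of the substitution estimator on simulated toy data; the "$Q$ machine" here is simply the empirical outcome rate in each $(A, W)$ cell, a stand-in assumption rather than the class's actual model:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Same style of simulated confounded data as in the IPTW discussion.
W = rng.random(n) < 0.5
A = (rng.random(n) < np.where(W, 0.8, 0.2)).astype(int)
Y = (rng.random(n) < 0.05 + 0.10 * A + 0.20 * W).astype(float)

# Q machine: here simply the empirical outcome rate in each (A, W) cell.
def Q(a, w):
    return Y[(A == a) & (W == w)].mean()

# Substitution estimator: predict everyone under treatment and under control.
Q1 = np.where(W, Q(1, True), Q(1, False))
Q0 = np.where(W, Q(0, True), Q(0, False))
psi = Q1.mean() - Q0.mean()
print(psi)  # close to the true additive effect, 0.10
```

Swapping in a fitted logistic regression for `Q` changes nothing structurally; the estimator just plugs predictions into both counterfactual worlds and averages.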
Get ready to get worried: A-IPTW
What if our machines are broken? That’s when we bring in the big guns: double robust estimators.
They adjust for confounding through the two machines we have on hand, $Q$ and $g,$ and one machine augments the other depending on how well it works. Here's the functional form written in two ways to illustrate the hedge:

$\Psi(P_n) = \frac{1}{n}\sum_{i=1}^n \left[ Q(1,W_i) + \frac{I(A_i=1)}{g(1 \mid W_i)}\bigl(Y_i - Q(1,W_i)\bigr) \right] - \frac{1}{n}\sum_{i=1}^n \left[ Q(0,W_i) + \frac{I(A_i=0)}{g(0 \mid W_i)}\bigl(Y_i - Q(0,W_i)\bigr) \right]$

and

$\Psi(P_n) = \frac{1}{n}\sum_{i=1}^n \left[ \frac{I(A_i=1)\,Y_i}{g(1 \mid W_i)} - \frac{I(A_i=1) - g(1 \mid W_i)}{g(1 \mid W_i)}\,Q(1,W_i) \right] - \frac{1}{n}\sum_{i=1}^n \left[ \frac{I(A_i=0)\,Y_i}{g(0 \mid W_i)} - \frac{I(A_i=0) - g(0 \mid W_i)}{g(0 \mid W_i)}\,Q(0,W_i) \right].$

The two forms are algebraically identical: the first reads as the substitution ($Q$) estimator plus a weighted correction of its residuals, the second as the IPTW ($g$) estimator plus a $Q$-based correction. The estimate is consistent if either machine is right.
Note: you are still screwed if both machines are broken. In some sense with a double robust estimator you’re hedging your bet.
“I’m glad you’re worried because I’m worried too.” – Ori
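To see the hedge concretely, here's a sketch on simulated toy data where the $Q$ machine is deliberately broken but the $g$ machine is correct; all data-generating numbers are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
W = rng.random(n) < 0.5
g1 = np.where(W, 0.8, 0.2)           # correct g machine (the true P(A=1|W))
A = (rng.random(n) < g1).astype(float)
Y = (rng.random(n) < 0.05 + 0.10 * A + 0.20 * W).astype(float)

# A deliberately broken Q machine that ignores A and W entirely:
Q1 = np.full(n, Y.mean())
Q0 = np.full(n, Y.mean())

# A-IPTW: the g-weighted residual term corrects the broken Q term.
psi = np.mean(Q1 + A / g1 * (Y - Q1)) - np.mean(Q0 + (1 - A) / (1 - g1) * (Y - Q0))
print(psi)  # still close to the true 0.10, despite the useless Q machine
```

Symmetrically, a broken $g$ with a correct $Q$ would also survive; only breaking both machines leaves you screwed.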
Simulate and test
I've shown you 3 distinct methods that estimate effects in observational studies, but they often come up with different answers. We set up huge simulation studies with known functions, i.e. where we know the functional relationships between everything, and then tried to infer them using the above three methods as well as a fourth method called TMLE (targeted maximum likelihood estimation).
As a side note, Ori encourages everyone to simulate data.
We wanted to know, which methods fail with respect to the assumptions? How well do the estimates work?
We started to see that IPTW performs very badly when you're adjusting by a very small sub-population. For example, we found an estimated probability of someone getting sick of 132. That's not between 0 and 1, which is not good. But people use these methods all the time.
Moreover, as things get more complicated with lots of nodes in our causal graph, calculating stuff over long periods of time, populations get sparser and sparser and it has an increasingly bad effect when you’re using IPTW. In certain situations your data is just not going to give you a sufficiently good answer.
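The instability is easy to reproduce in a toy simulation: give one stratum a tiny treatment probability and watch the IPTW estimate of a probability blow up. The setup below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

def iptw_treated_mean(n):
    """IPTW estimate of E(Y*(1)) when one stratum is almost never treated."""
    W = rng.random(n) < 0.5
    g1 = np.where(W, 0.5, 0.005)     # the W=0 stratum is treated 0.5% of the time
    A = rng.random(n) < g1
    Y = (rng.random(n) < 0.3).astype(float)   # true E(Y*(1)) is 0.3
    # A treated W=0 unit gets weight 1/0.005 = 200 and can dominate the estimate.
    return np.mean(A * Y / g1)

estimates = [iptw_treated_mean(500) for _ in range(200)]
print(min(estimates), max(estimates))  # some runs overshoot the true 0.3 wildly
```

With the rare stratum up-weighted by 200, one or two lucky treated units swing the whole estimate; this is exactly how "probabilities" outside $[0,1]$ show up in practice.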
Causal analysis in online display advertising
An overview of the process:
1. We observe people taking actions (clicks, visits to websites, purchases, etc.).
2. We use this observed data to build a list of “prospects” (people with a liking for the brand).
3. We subsequently observe the same user over the next few days.
4. The user visits a site where a display ad spot exists and bid requests are made.
5. An auction is held for display spot.
6. If the auction is won, we display the ad.
7. We observe the user’s actions after displaying the ad.
But here's the problem: we've introduced confounders. If you target people who convert at high rates anyway, it looks like you've done a good job. In other words, we are looking at the treated without looking at the untreated.
We’d like to ask the question, what’s the effect of display advertising on customer conversion?
As a practical concern, people don’t like to spend money on blank ads. So A/B tests are a hard sell.
We performed some what-if analysis predicated on the assumption that the group of users that sees an ad is different from the group that doesn't. Our process was as follows:
1. Select prospects that we got a bid request for on day 0.
2. Observe whether they were treated on day 1. For those treated set $A=1$ and for those not treated set $A=0.$ Collect attributes $W.$
3. Create an outcome window consisting of the five days following treatment; observe whether the outcome event occurs (a visit to the website whose ad was shown).
4. Estimate model parameters using the methods previously described (our three methods plus TMLE).
Here are some results. Note the results vary depending on the method, and there's no way to know which method is working best. Moreover, this is when we've capped the size of the correction in the IPTW method; if we don't, we see ridiculous results.
Categories: data science, math education, open source tools, statistics
1. November 15, 2012 at 12:41 pm | #1
Brava! You’ve succeeded in making sex boring (or, at least, merely equivalent to sliced bread).
• November 15, 2012 at 12:42 pm | #2
That hurt!
• November 15, 2012 at 1:03 pm | #3
Sorry, not meant to. Just trying to push the topic forward (without having an answer).
2. November 24, 2012 at 11:06 pm | #4
Thank you for blogging the class! Do you have suggestions on supplemental reading to fill in gaps such as meaning of the “g machine, i.e. the machine that estimates the treatment probability based on attributes”? I am looking forward to your book.
|
http://mathhelpforum.com/pre-calculus/69800-tricky-question.html
|
Thread:
1. Tricky question
Morning Forum I am having issues with the following question any help would be appreciated:
[Attached thumbnail: image of the original question]
2. Originally Posted by AlgebraicallyChallenged
Morning Forum I am having issues with the following question any help would be appreciated:
It might help if you write
$f\circ g(x)$ as $f(g(x))$
and
$g \circ f(x)$ as $g(f(x))$.
Can you see what's happening now?
3. I can't remember how is that sign called between the f and g..
$f\circ g(x)$
compo.. something?
4. Originally Posted by metlx
I can't remember how is that sign called between the f and g..
$f\circ g(x)$
compo.. something?
Yes it's a composition of functions.
Let me put it this way.
Say you had a function $f(x)$.
If you evaluated it at say, point $x = a$ you would have to replace all the x's with a's.
Here, if we had $f \circ g(x) = f(g(x))$, instead of substituting x with a, we'd see what happens if we replaced all the x's with whatever $g(x)$ is.
Does that make sense?
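To make the idea concrete in code, using two arbitrary example functions (not the ones from the attached question):

```python
# Two arbitrary example functions:
def f(x):
    return x + 3

def g(x):
    return 2 * x

def compose(outer, inner):
    """Return the composition outer ∘ inner, i.e. x -> outer(inner(x))."""
    return lambda x: outer(inner(x))

f_after_g = compose(f, g)  # (f ∘ g)(x) = f(g(x)): apply g first, then f
g_after_f = compose(g, f)  # (g ∘ f)(x) = g(f(x)): apply f first, then g

print(f_after_g(5))  # f(g(5)) = f(10) = 13
print(g_after_f(5))  # g(f(5)) = g(8) = 16
```

Note that order matters: $f \circ g$ and $g \circ f$ generally give different answers.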
Copyright © 2005-2013 Math Help Forum. All rights reserved.
|
http://mathoverflow.net/questions/32623?sort=votes
|
## behavior of places of a function field under automorphism
If $P_{1}$ and $P_{2}$ are distinct places of equal degree of the function field $F/K$, and $\sigma$ is a $K$-automorphism of $F$ such that $\sigma(P_{1})=P_{2}$, does $\deg (P_{1}\cap K(x))=\deg (P_{2}\cap K(x))$, where $K(x)$ is the rational function field? In particular, is this true over the Hermitian function field?
## 1 Answer
No, not in general, that is not without particular requirements for $x$:
take $F=\mathbb{R}(y)$, the rational function field in one variable over the reals. Then the equation $\sigma (y)=y+1$ determines an automorphism of $F/\mathbb{R}$.
Let $P_1$ be the place associated to the polynomial $y^2+1$; then $\deg (P_1)=2$.
Let $P_2 := \sigma (P_1)$; then $P_2$ is associated to the polynomial $y^2+2y+2$ and (automatically) $\deg (P_2)=2$.
Let $x := y^2+1$; then $[F:\mathbb{R}(x)]=2$ and $P_1|_{\mathbb{R}(x)}$ has degree $1$.
On the other hand $yP_2$ either equals $i-1$ or $-i-1$. In both cases $xP_2$ is non-real and thus $\deg (P_2|_{\mathbb{R}(x)})=2$.
H
ok, thank you. Do you know what these "particular requirements" might be? – y_kaplan Jul 20 2010 at 18:49
Things are working in the case $\sigma (x)=x$ or more generally if $\sigma |_{K(x)}$ is an automorphism of $K(x)$. H – Hagen Jul 21 2010 at 7:33
|
http://math.stackexchange.com/questions/21260/lim-limits-x-to-infty-frac-ln1000-xx5/21269
|
$\lim\limits_{x \to \infty}\frac{\ln^{1000} x}{x^5}$
I'm trying to solve $$\lim\limits_{x \to \infty}\frac{\ln^{1000} x}{x^5}$$ Here's what I get: $$e^{\lim\limits_{x \to \infty}\ln{\frac{\ln^{1000}x}{x^5} }}$$ Dropping the $e$ for ease, $$\lim\limits_{x \to \infty} 1000\ln{(\ln{(x)})} - 5 \ln{x}$$
Now I have $\infty - \infty$. I know there must be a next step, but I don't know what it would be.
Is $\ln^{1000}x$ intended to mean $(\ln x)^{1000}$? That's usual with trigonometric functions, but I had never seen it used with logarithms... – Mariano Suárez-Alvarez♦ Feb 9 '11 at 22:03
The limit after "Here's what I get" it not at all the same as what you started with. – Hans Lundmark Feb 10 '11 at 7:28
@Hans: Why? It was writing $e^{ln{y}}=y$ – bobobobo Feb 13 '11 at 18:26
Sorry, that was just me doing some sloppy reading! I thought it said "limit, as $e^x \to \infty$, of ..." instead of "e to the limit of ...". – Hans Lundmark Feb 13 '11 at 18:57
3 Answers
HINT $\$ Changing variables $\rm\ Z = ln\ X\$ yields $\displaystyle\rm\ \lim_{\ Z\ \to\ \infty}\ \frac{Z^{1000}}{e^{5\:Z}}\$ which is easily handled either by power series, L'Hopital or related techniques.
I'm interested in this path, but it takes to $\lim\limits_{x \to \infty} 5(200 \ln(z) - \ln(x) )$ – bobobobo Feb 9 '11 at 22:56
@bobobobo: I've added further details - see above. – Gone Feb 9 '11 at 23:11
Oh man! What a good answer. Now I have 1000 derivatives of $\lim\limits_{z \to \infty} \frac{z^{1000}}{e^{5z}}$ via L'Hos which gives $\lim\limits_{z \to \infty} \frac{1000! z^0}{1000 \times 5 e^{5z} } = 0$ – bobobobo Feb 9 '11 at 23:18
Of course, professional limit-takers know when "changes of variables" in limits is a valid step to perform, but it's surprisingly subtle to formulate it in a way that encompasses all the variants we do without thinking (and that also is still true!). Students are generally not taught changes of variables in limits very early on, if I'm not mistaken. – Greg Martin Oct 28 '11 at 20:33
Be careful using limit operations.
First, let us show that $\lim_{x \rightarrow +\infty} \dfrac{\ln x}{x} = 0$. For $t \geq 1$ we have $t \geq \sqrt{t}$, which implies for $x \geq 1$
$$0 \leq \ln x = \int_1^x \dfrac{dt}{t} \leq \int_1^x \dfrac{dt}{\sqrt{t}} = 2 \sqrt{x} - 2 \leq 2 \sqrt{x}$$.
Then, for any $a,b > 0$ and $x > 1$, we have $$\dfrac{\ln^b x}{x^a} = \left( \dfrac{\ln x}{x^{\frac{a}{b}}} \right)^b = \left( \dfrac{b}{a} \right)^b \left( \dfrac{\ln (x^\frac{a}{b})}{x^{\frac{a}{b}}} \right)^b.$$ Since $x^{\frac{a}{b}} \to \infty$ as $x \to \infty$, the first limit applied at $x^{\frac{a}{b}}$ shows the bracketed term tends to $0$, and hence so does the whole expression. In particular, $a=5$ and $b=1000$ answers the question.
+1: I have to say, I like the second part very much, since it completely avoids L'Hopital's Rule. I'm not so happy with the first part (most people would encounter the limit of $\ln x/x$ well before encountering integrals), but the second part makes up for it. – Arturo Magidin Feb 10 '11 at 5:18
Taking $\ln^{1000}x=(\ln x)^{1000}$ you can apply L'Hopital's rule 999 times to reduce to
$$\lim_{x\to\infty}\frac{1000!\ln x}{5^{999}x^5}.$$
Then one more application gives
$$\lim_{x\to \infty}\frac{1000!}{5^{1000}x^5}=0.$$
You used L'Hopitals' rule once too many times. Each application reduces the exponent of $\ln(x)$ by $1$, so applying it $1000$ times reduces it to $(\ln x)^{1000-1000} = (\ln x)^0$. – Arturo Magidin Feb 9 '11 at 22:40
Thanks. I fixed it. – Joe Johnson 126 Feb 9 '11 at 22:42
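A quick numeric sanity check of the hint's substitution $Z = \ln X$; the sample points are chosen for illustration:

```python
import math

# Substituting z = ln(x), the limit becomes z**1000 / e**(5*z).
# z**1000 overflows floats, so examine the log of the ratio: 1000*ln(z) - 5*z.
vals = []
for z in [10, 100, 1_000, 10_000, 100_000]:
    vals.append((z, 1000 * math.log(z) - 5 * z))
    print(*vals[-1])

# The log-ratio is still positive at z = 1000 (the power is still winning),
# but eventually 5*z dominates and the log-ratio heads to -infinity,
# so the original ratio heads to 0.
```

This also shows why numerically "testing" the limit at ordinary values of $x$ is misleading: the ratio doesn't start shrinking until $z = \ln x$ is in the thousands, i.e. astronomically large $x$.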
|
http://math.stackexchange.com/questions/159169/flipping-cards-probability
|
# Flipping Cards Probability
You have a deck of cards, 26 red, 26 black. These are turned over, and at any point you may stop and exclaim "The next card is red.". If the next card is red you win £10.
What's the optimal strategy? Prove this is the optimal strategy.
I feel like the optimal strategy is to call out "red" whenever you are in a state where you have flipped more black cards than red ones, but I do not know how to prove it, nor can I think of why it must be the absolute best strategy.
Thanks for any help.
The following seems persuasive to me. Let Alice play the optimal strategy for red, and let Bob simultaneously play the optimal strategy for black. If random variables $X$ and $Y$ represent their winnings, then $E(X+Y)=10$, so by symmetry $E(X)=5$. – André Nicolas Jun 16 '12 at 19:24
Indeed. It doesn't seem there's anything you can do to increase your odds beyond 50/50, so the optimal strategy is to predict the first card red and be done with it. – Théophile Jun 16 '12 at 19:32
I have replaced [brainteaser] by [puzzle], but I'm not sure either is needed. – Asaf Karagila Jun 16 '12 at 19:40
## 2 Answers
There is no optimal strategy (or rather, every strategy is optimal). Assuming your strategy involves exclaiming before the last card is drawn, your expected winnings are 5 pounds.
If you don't exclaim, your winnings are obviously zero.
Here's a reference: "Games People Don't Play" (Peter Winkler)
This is a really fun paper. Thanks! – MJD Jun 17 '12 at 17:45
We deal first with deterministic strategies, adding the obvious rule that a call must be made on or before the $52$nd card.
The first step is an existence proof: there are optimal deterministic strategies. For among the finitely many deterministic strategies, there must exist one or more that maximizes expectation.
Suppose that Alice chooses an optimal strategy, and plays it for red, and independently Bob chooses and plays an optimal strategy for black. Let random variable $X$ denote Alice's winnings, and let $Y$ denote Bob's winnings. (The sample space is the set of permutations of the cards, assumed all equally likely.)
Then $E(X+Y)=10$. So by linearity of expectation, $E(X)+E(Y)=10$, and by symmetry $E(X)=E(Y)$, so $E(X)=5$.
Suppose that under the same conditions, we selflessly try to minimize the expected return. The above argument shows that again the expectation is $5$. So all deterministic "strategies" lead to the same expected return.
Now we briefly consider probabilistic strategies, for which the probability of what one does next (call or not call) is a function of the sequence of cards already seen, but nothing else. It is not hard to show that again in this case there are optimal strategies, and the rest of the argument is the same.
There may be more general notions of strategy. The above arguments are silent about these.
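A Monte Carlo check of the claim that every strategy has expectation £5; the simulation and the two example strategies are written for illustration:

```python
import random

random.seed(42)

def play(strategy, trials=100_000):
    """Average winnings (in pounds) of a calling strategy over random shuffles."""
    total = 0
    for _ in range(trials):
        deck = ["R"] * 26 + ["B"] * 26
        random.shuffle(deck)
        reds_seen, blacks_seen = 0, 0
        for i, card in enumerate(deck):
            # The strategy decides BEFORE seeing deck[i];
            # a call is forced on the last card.
            if strategy(reds_seen, blacks_seen) or i == 51:
                total += 10 if card == "R" else 0
                break
            if card == "R":
                reds_seen += 1
            else:
                blacks_seen += 1
    return total / trials

call_now = lambda r, b: True           # call on the very first card
wait_for_surplus = lambda r, b: b > r  # call once more black than red has appeared

e1, e2 = play(call_now), play(wait_for_surplus)
print(e1, e2)  # both hover around 5.00, as the symmetry argument predicts
```

The "wait until black is ahead" strategy feels like it should win more often, but the simulation agrees with the proof: no calling rule moves the expectation off £5.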
|
http://mathoverflow.net/revisions/21999/list
|
2 know--> no; currently known--> known
One way to answer this question is as follows: Ramanujan's conjecture is a special case of a much more general conjecture that any cuspidal automorphic representation of $GL_n$ over a number field is tempered. This is a technical but fundamental notion, which in the special case of the automorphic representation of $GL_2$ attached to the $\Delta$ function, reduces to Ramanujan's original conjecture. In fact, many people working in the theory of automorphic representations refer to this very general conjecture simply as the Ramanujan conjecture.
When applied to other cuspforms on $GL_2$ (namely Maass forms) it includes Selberg's conjecture that on congruence quotients of the upper half-plane, the spectrum of the hyperbolic Laplacian is bounded below by $1/4$.
The appearance of hyperbolic geometry can be understood in the following way: the quotient of $SL_n(\mathbb R)$ by $SO(n)$ is a non-compact symmetric space, which in the particular case of $SL_2$ is the hyperbolic plane. So, while the particular appearance of hyperbolic geometry may be a bit of a red herring, the appearance of highly symmetric geometry is a reflection of the group representation theory that is underlying the picture.
As of the current moment, no purely representation-theoretic approach to the (general form of) Ramanujan's conjecture is known. (Or rather, a proof strategy involving what is called symmetric power functoriality is known, but the requisite results on symmetric power functoriality seem very much out of reach at the moment.) The only cases that are proved at the moment are cases when one can relate the group-theoretic picture of automorphic forms to algebraic geometry (first over $\mathbb C$, then over a number field, and then ultimately over finite fields, so that the Weil conjectures apply). This is how Deligne's proof proceeds. This connection between the geometry of symmetric spaces and arithmetic and geometry over finite fields is one of the profound points of investigation of modern number theory, but despite many positive results related to it, it remains essentially mysterious, even to experts.
|