url
stringlengths 17
172
| text
stringlengths 44
1.14M
| metadata
stringlengths 820
832
|
---|---|---|
http://www.nag.com/numeric/CL/nagdoc_cl23/html/C09/c09ecc.html
|
# NAG Library Function Documentnag_mldwt_2d (c09ecc)
## 1 Purpose
nag_mldwt_2d (c09ecc) computes the two-dimensional multi-level discrete wavelet transform (DWT). The initialization function nag_wfilt_2d (c09abc) must be called first to set up the DWT options.
## 2 Specification
#include <nag.h>
#include <nagc09.h>
void nag_mldwt_2d (Integer m, Integer n, const double a[], Integer lda, Integer lenc, double c[], Integer nwl, Integer dwtlvm[], Integer dwtlvn[], Integer icomm[], NagError *fail)
## 3 Description
nag_mldwt_2d (c09ecc) computes the multi-level DWT of two-dimensional data. For a given wavelet and end extension method, nag_mldwt_2d (c09ecc) will compute a multi-level transform of a matrix $A$, using a specified number, ${n}_{l}$, of levels. The number of levels specified, ${n}_{l}$, must be no more than the value ${l}_{\mathrm{max}}$ returned in nwl by the initialization function nag_wfilt_2d (c09abc) for the given problem. The transform is returned as a set of coefficients for the different levels (packed into a single array) and a representation of the multi-level structure.
The notation used here assigns level $0$ to the input matrix, $A$. Level 1 consists of the first set of coefficients computed: the vertical (${v}_{1}$), horizontal (${h}_{1}$) and diagonal (${d}_{1}$) coefficients are stored at this level while the approximation (${a}_{1}$) coefficients are used as the input to a repeat of the wavelet transform at the next level. This process is continued until, at level ${n}_{l}$, all four types of coefficients are stored. The output array, $C$, stores these sets of coefficients in reverse order, starting with ${a}_{{n}_{l}}$ followed by ${v}_{{n}_{l}},{h}_{{n}_{l}},{d}_{{n}_{l}},{v}_{{n}_{l}-1},{h}_{{n}_{l}-1},{d}_{{n}_{l}-1},\dots ,{v}_{1},{h}_{1},{d}_{1}$.
None.
## 5 Arguments
1: m – IntegerInput
On entry: number of rows, $m$, of data matrix $A$.
Constraint: this must be the same as the value m passed to the initialization function nag_wfilt_2d (c09abc).
2: n – IntegerInput
On entry: number of columns, $n$, of data matrix $A$.
Constraint: this must be the same as the value n passed to the initialization function nag_wfilt_2d (c09abc).
3: a[${\mathbf{lda}}×{\mathbf{n}}$] – const doubleInput
Note: the $\left(i,j\right)$th element of the matrix $A$ is stored in ${\mathbf{a}}\left[\left(j-1\right)×{\mathbf{lda}}+i-1\right]$.
On entry: the $m$ by $n$ data matrix, $A$.
4: lda – IntegerInput
On entry: the stride separating matrix row elements in the array a.
Constraint: ${\mathbf{lda}}\ge {\mathbf{m}}$.
5: lenc – IntegerInput
On entry: the dimension of the array c. c must be large enough to contain, ${n}_{\mathrm{ct}}$, wavelet coefficients. The maximum value of ${n}_{\mathrm{ct}}$ is returned in nwct by the call to the initialization function nag_wfilt_2d (c09abc) and corresponds to the DWT being continued for the maximum number of levels possible for the given data set. When the number of levels, ${n}_{l}$, is chosen to be less than the maximum, ${l}_{\mathrm{max}}$, then ${n}_{\mathrm{ct}}$ is correspondingly smaller and lenc can be reduced by noting that the vertical, horizontal and diagonal coefficients are stored at every level and that in addition the approximation coefficients are stored for the final level only. The number of coefficients stored at each level is given by $3×⌈\stackrel{-}{m}/2⌉×⌈\stackrel{-}{n}/2⌉$ for ${\mathbf{mode}}=\mathrm{Nag_Periodic}$ in nag_wfilt_2d (c09abc) and $3×⌊\left(\stackrel{-}{m}+{n}_{f}-1\right)/2⌋×⌊\left(\stackrel{-}{n}+{n}_{f}-1\right)/2⌋$ for ${\mathbf{mode}}=\mathrm{Nag_HalfPointSymmetric}$, $\mathrm{Nag_WholePointSymmetric}$ or $\mathrm{Nag_ZeroPadded}$, where the input data is of dimension $\stackrel{-}{m}×\stackrel{-}{n}$ at that level and ${n}_{f}$ is the filter length nf provided by the call to nag_wfilt_2d (c09abc). At the final level the storage is $4/3$ times this value to contain the set of approximation coefficients.
Constraint: ${\mathbf{lenc}}\ge {n}_{\mathrm{ct}}$, where ${n}_{\mathrm{ct}}$ is the total number of coefficients that correspond to a transform with nwl levels.
6: c[lenc] – doubleOutput
On exit: the coefficients of a multi-level wavelet transform of the dataset.
Let $q\left(\mathit{i}\right)$ denote the number of coefficients (of each type) at level $\mathit{i}$, for $\mathit{i}=1,2,\dots ,{n}_{l}$, such that $q\left(i\right)={\mathbf{dwtlvm}}\left[{n}_{l}-i\right]×{\mathbf{dwtlvn}}\left[{n}_{l}-i\right]$. Then, letting ${k}_{1}=q\left({n}_{l}\right)$ and ${k}_{\mathit{j}+1}={k}_{\mathit{j}}+q\left({n}_{l}-⌈\mathit{j}/3⌉+1\right)$, for $\mathit{j}=1,2,\dots ,3{n}_{l}$, the coefficients are stored in c as follows:
${\mathbf{c}}\left[\mathit{i}-1\right]$, for $\mathit{i}=1,2,\dots ,{k}_{1}$
Contains the level ${n}_{l}$ approximation coefficients, ${a}_{{n}_{l}}$.
${\mathbf{c}}\left[\mathit{i}-1\right]$, for $\mathit{i}={k}_{j}+1,\dots ,{k}_{j+1}$
Contains the level ${n}_{l}-⌈j/3⌉+1$ vertical, horizontal and diagonal coefficients. These are:
• vertical coefficients if ;
• horizontal coefficients if ;
• diagonal coefficients if ,
for $j=1,\dots ,3{n}_{l}$.
7: nwl – IntegerInput
On entry: the number of levels, ${n}_{l}$, in the multi-level resolution to be performed.
Constraint: $1\le {\mathbf{nwl}}\le {l}_{\mathrm{max}}$, where ${l}_{\mathrm{max}}$ is the value returned in nwl (the maximum number of levels) by the call to the initialization function nag_wfilt_2d (c09abc).
8: dwtlvm[nwl] – IntegerOutput
On exit: the number of coefficients in the first dimension for each coefficient type at each level. ${\mathbf{dwtlvm}}\left[\mathit{i}-1\right]$ contains the number of coefficients in the first dimension (for each coefficient type computed) at the (${n}_{l}-\mathit{i}+1$)th level of resolution, for $\mathit{i}=1,2,\dots ,{n}_{l}$. Thus for the first ${n}_{l}-1$ levels of resolution, ${\mathbf{dwtlvm}}\left[{n}_{l}-\mathit{i}\right]$ is the size of the first dimension of the matrices of vertical, horizontal and diagonal coefficients computed at this level; for the final level of resolution, ${\mathbf{dwtlvm}}\left[0\right]$ is the size of the first dimension of the matrices of approximation, vertical, horizontal and diagonal coefficients computed.
9: dwtlvn[nwl] – IntegerOutput
On exit: the number of coefficients in the second dimension for each coefficient type at each level. ${\mathbf{dwtlvn}}\left[\mathit{i}-1\right]$ contains the number of coefficients in the second dimension (for each coefficient type computed) at the (${n}_{l}-\mathit{i}+1$)th level of resolution, for $\mathit{i}=1,2,\dots ,{n}_{l}$. Thus for the first ${n}_{l}-1$ levels of resolution, ${\mathbf{dwtlvn}}\left[{n}_{l}-\mathit{i}\right]$ is the size of the second dimension of the matrices of vertical, horizontal and diagonal coefficients computed at this level; for the final level of resolution, ${\mathbf{dwtlvn}}\left[0\right]$ is the size of the second dimension of the matrices of approximation, vertical, horizontal and diagonal coefficients computed.
10: icomm[$180$] – IntegerCommunication Array
On entry: contains details of the discrete wavelet transform and the problem dimension as setup in the call to the initialization function nag_wfilt_2d (c09abc).
On exit: contains additional information on the computed transform.
11: fail – NagError *Input/Output
The NAG error argument (see Section 3.6 in the Essential Introduction).
## 6 Error Indicators and Warnings
NE_ALLOC_FAIL
Dynamic memory allocation failed.
NE_BAD_PARAM
On entry, argument $〈\mathit{\text{value}}〉$ had an illegal value.
NE_INITIALIZATION
Either the initialization function has not been called first or icomm has been corrupted.
Either the initialization function was called with ${\mathbf{wtrans}}=\mathrm{Nag_SingleLevel}$ or icomm has been corrupted.
NE_INT
On entry, ${\mathbf{m}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{m}}=〈\mathit{\text{value}}〉$, the value of m on initialization (see nag_wfilt_2d (c09abc)).
On entry, ${\mathbf{n}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{n}}=〈\mathit{\text{value}}〉$, the value of n on initialization (see nag_wfilt_2d (c09abc)).
On entry, ${\mathbf{nwl}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{nwl}}\ge 1$.
NE_INT_2
On entry, ${\mathbf{lda}}=〈\mathit{\text{value}}〉$ and ${\mathbf{m}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{lda}}\ge {\mathbf{m}}$.
On entry, ${\mathbf{lenc}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{lenc}}\ge 〈\mathit{\text{value}}〉$, the total number of coefficents to be generated.
On entry, ${\mathbf{nwl}}=〈\mathit{\text{value}}〉$ and ${\mathbf{nwl}}=〈\mathit{\text{value}}〉$ in nag_wfilt_2d (c09abc).
Constraint: ${\mathbf{nwl}}\le {\mathbf{nwl}}$ in nag_wfilt_2d (c09abc).
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
## 7 Accuracy
The accuracy of the wavelet transform depends only on the floating point operations used in the convolution and downsampling and should thus be close to machine precision.
## 8 Further Comments
The wavelet coefficients at each level can be extracted from the output array c using the information contained in dwtlvm and dwtlvn on exit (see the descriptions of c, dwtlvm and dwtlvn in Section 5). For example, given an input data set, $A$, denoising can be carried out by applying a thresholding operation to the detail (vertical, horizontal and diagonal) coefficients at every level. The elements ${\mathbf{c}}\left[{k}_{1}\right]$ to ${\mathbf{c}}\left[{k}_{{n}_{l}+1}-1\right]$, as described in Section 5, contain the detail coefficients, ${\stackrel{^}{c}}_{ij}$, for $\mathit{i}={n}_{l},{n}_{l}-1,\dots ,1$ and $\mathit{j}=1,2,\dots ,3q\left(i\right)$, where $q\left(i\right)$ is the number of each type of coefficient at level $i$ and ${\stackrel{^}{c}}_{ij}={c}_{ij}+\sigma {\epsilon }_{ij}$ and $\sigma {\epsilon }_{ij}$ is the transformed noise term. If some threshold parameter $\alpha $ is chosen, a simple hard thresholding rule can be applied as
$c- ij = 0, if c^ij ≤ α c^ij , if c^ij > α,$
taking ${\stackrel{-}{c}}_{ij}$ to be an approximation to the required detail coefficient without noise, ${c}_{ij}$. The resulting coefficients can then be used as input to nag_imldwt_2d (c09edc) in order to reconstruct the denoised signal.
See the references given in the introduction to this chapter for a more complete account of wavelet denoising and other applications.
## 9 Example
This example performs a multi-level resolution transform of a dataset using the Daubechies wavelet (see ${\mathbf{wavnam}}=\mathrm{Nag_Daubechies2}$ in nag_wfilt_2d (c09abc)) using half-point symmetric end extensions, the maximum possible number of levels of resolution, where the number of coefficients in each level and the coefficients themselves are not changed. The original dataset is then reconstructed using nag_imldwt_2d (c09edc).
### 9.1 Program Text
Program Text (c09ecce.c)
### 9.2 Program Data
Program Data (c09ecce.d)
### 9.3 Program Results
Program Results (c09ecce.r)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 104, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7246508598327637, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/calculus/82615-two-antiderivative-problems.html
|
# Thread:
1. ## Two antiderivative problems!!!
1. Evaluate the indefinite integral.
2. A stone is thrown straight up from the edge of a roof, feet above the ground, at a speed of feet per second.
A. Remembering that the acceleration due to gravity is , how high is the stone seconds later?
B. At what time does the stone hit the ground?
C. What is the velocity of the stone when it hits the ground?
2. Originally Posted by Kayla_N
1. Evaluate the indefinite integral.
Let $u=5+14x^4\Rightarrow du=56x^3\,dx.$
2. A stone is thrown straight up from the edge of a roof, feet above the ground, at a speed of feet per second.
A. Remembering that the acceleration due to gravity is , how high is the stone seconds later?
Integrate the acceleration function to get the velocity function $v.$ You can solve for the constant of integration by noting that $v(0)=20.$
Integrate $v$ to get the stone's position function $s.$ Set $s(0)=775$ and you can solve for the constant of integration. Then just evaluate $s(3).$
B. At what time does the stone hit the ground?
That is, for what value of $t$ is $s(t)=0?$
C. What is the velocity of the stone when it hits the ground?
Evaluate the velocity function at the time found in part (B).
3. I still dont understand what you wrote for number 2..can you explain a little bit more.???
4. Originally Posted by Kayla_N
I still dont understand what you wrote for number 2..can you explain a little bit more.???
Velocity is the derivative of position; acceleration is the derivative of velocity.
The stone is undergoing a constant $-32\text{ ft./s}^2$ acceleration due to gravity, so the acceleration function is
$a(t)=-32.$
The velocity function is therefore
$v(t)=\int a(t)\,dt=\int(-32)\,dt=-32t+C_0.$
We know that the initial velocity is 20 ft./s, so
$v(0)=20\Rightarrow -32\cdot0+C_0=20\Rightarrow C_0=20$
and
$v(t)=20-32t.$
Work similarly to find the position function. Can you continue?
5. Originally Posted by Reckoner
Velocity is the derivative of position; acceleration is the derivative of velocity.
The stone is undergoing a constant $-32\text{ ft./s}^2$ acceleration due to gravity, so the acceleration function is
$a(t)=-32.$
The velocity function is therefore
$v(t)=\int a(t)\,dt=\int(-32)\,dt=-32t+C_0.$
We know that the initial velocity is 20 ft./s, so
$v(0)=20\Rightarrow -32\cdot0+C_0=20\Rightarrow C_0=20$
and
$v(t)=20-32t.$
Work similarly to find the position function. Can you continue?
since v(t)=20-32t, so t= .625.
Then to find s(t)= \int (-32+20)dt
s(t)=-16t^2+20t+C2
3=-16(0)+20(0)+c2
C2=3??
Then i dont know what else to do!!!
6. Originally Posted by Kayla_N
since v(t)=20-32t, so t= .625.
At $t=0.625,$ the velocity is zero (i.e., the stone is not moving). This will occur when the stone reaches its maximum height. Were you asked to find this?
Then to find s(t)= \int (-32+20)dt
s(t)=-16t^2+20t+C2
3=-16(0)+20(0)+c2
What are you doing? The initial height is 775 ft., not 3. After you solve for $C_2,$ evaluate $s(3)$ to get the position after 3 seconds.
7. So the s(t) -16t^2+20t+739 right???
8. Originally Posted by Kayla_N
So the s(t) -16t^2+20t+739 right???
Where did you get 739 from?
9. Originally Posted by Reckoner
Where did you get 739 from?
well 739 is wrong, i did again 775=-16t^2+20t+C
then i plug in t=3 and i got C=859?
Jeeze i'm so lost.
10. Originally Posted by Kayla_N
well 739 is wrong, i did again 775=-16t^2+20t+C
then i plug in t=3 and i got C=859?
Think about what you are doing. The stone's initial height (at 0 seconds) is 775 ft., so when $t=0,\;s(t)=775.$ So why are you setting $t=3?$
11. i'm trying to calculate the height at 3 seconds.. Can you please show me how to get part A??
12. Originally Posted by Kayla_N
i'm trying to calculate the height at 3 seconds.. Can you please show me how to get part A??
Do as I said:
We have $s(t)=-16t^2+20t+C_2,$ and $s(0)=775$ (as I mentioned above). Therefore,
$s(0)=-16\cdot0^2+20\cdot0+C_2=775$
$\Rightarrow C_2=775$
and
$s(t)=-16t^2+20t+775.$
Now find $s(3).$
13. ok and i got 691. From there i went on to second part...s(t)=0
0=-16t^2+20t+775. I used quadratic formala and got 7.6127 sec. then moved on to 3 part, i used V= (7.6127)(-32)+20= -223.6068.
14. Originally Posted by Kayla_N
ok and i got 691. From there i went on to second part...s(t)=0
0=-16t^2+20t+775. I used quadratic formala and got 7.6127 sec. then moved on to 3 part, i used V= (7.6127)(-32)+20= -223.6068.
Looks good!
15. Thanks...just part A that i'm stuck on..Thanks again for you help. I really really appreciated.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 30, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8892301917076111, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/89/is-newtons-law-of-gravity-consistent-with-general-relativity
|
# Is Newton's Law of Gravity consistent with General Relativity?
By 'Newton's Law of Gravity', I am referring to
The magnitude of the force of gravity is proportional to the product of the mass of the two objects and inversely proportional to their distance squared.
Does this law of attraction still hold under General Relativity's Tensor Equations?
I don't really know enough about mathematics to be able to solve any of Einstein's field equations, but does Newton's basic law of the magnitude of attraction still hold?
If they are only approximations, what causes them to differ?
-
3
If you're really interested in this stuff, check out Carroll's book "Spacetime and Geometry" for a pretty good intro to G.R. and the math behind it. – j.c. Nov 3 '10 at 14:21
## 6 Answers
Yes, in the appropriate limit. Roughly, the study of geodesic motion in the Schwarzschild solution (which is radially symmetric) reduces to Newtonian gravity at sufficiently large distances and slow speeds. To see how this works exactly, one must look more specifically at the equations.
-
1
What exactly is geodesic motion, and the Schwarzschild solution? I'm sorry, I don't really have much of a background in physics past the nineteenth century. – Justin L. Nov 3 '10 at 6:04
1
The geodesic motion are the orbits. The Schwartzschild solution represents the gravitational field in free space where all the mass is concentrated in a spherical region. – Sklivvz♦ Nov 3 '10 at 9:45
In fact, if I recall correctly, you can derive from the Newton's law of gravity the Schwarzschild radius (radius of the event horizont) of the Schwarzschild solution for Einstein equations. – user52 Nov 3 '10 at 14:48
1
It's worth noting that "sufficiently large distance" is really pretty small. Experiments at the University of Washington have shown that gravity follows an inverse-square law for separations as small as about 50 microns (0.05 mm, on the thin side of the diameter of a human hair). – Chad Orzel Nov 13 '10 at 13:16
1
@Chad: the distance where Newton works depends on the mass, so the right measure is the ratio of the separation to the Schwartschild radius. The deviations from Newton's law fall as one over this ratio, so even when this ratio is relatively large, like Mercury's orbit, you can see deviations from Newton over the centuries. – Ron Maimon Jan 17 '12 at 4:50
Eric's answer is not really correct (or at least not complete). For instance, it doesn't tell you anything about the motion of two comparably heavy bodies (and indeed this problem is very hard in GR, in stark contrast to the Newtonian case). So let me make his statements a bit more precise.
The correct approach is to treat the Newtonian gravity as a perturbation of the flat Minkowski space-time. One writes $g = \eta + h$ for the metric of this space-time ($\eta$ being Minkowski metric and $h$ being the perturbation that encodes curvature of the space-time) and linearize the theory in $h$. By doing this one actually obtains a lot more than just Newtonian gravity, namely gravitomagnetism, in which one can also investigate dynamical properties of the space-time not included in the Newtonian picture. In particular the propagation of gravitational waves.
Now, to recover Newtonian gravity we have to make one more approximation. Just realize that Newtonian gravity is not relativistic, i.e. it violates finite speed of light. But if we assume that $h$ changes only slowly and make calculations we will find out that the perturbation metric $h$ encodes the Newtonian field potential $\Phi$ and that the space-time is curved in precisely the way to reproduce the Newtonian gravity. Or rather (from the modern perspective): Newtonian picture is indeed a correct low-speed, almost-flat description of GR.
-
The main problem here is this: Newton gives us formulas for a force, or a field, if you like. Einstein gives us more generic equations from which to derive gravitational formulas. In this context, one must first find a solution to Einstein's equations. This is represented by a formula. This formula is what might, or may not, be approximately equal to Newton's laws.
This said, as answered elsewhere, there is one solution which is very similar to Newton's. It's a very important solution which describes the field in free space.
The fact that they are approximations fundamentally arises from different factros: the fact that they are invariant laws under a number of transformations, but mostly special relativity concerns - in other words, no action at a distance - is a big one.
-
All four answers agree in saying « no ». Newton's Law is not consistent with General Relativity. But all four answers point out that Newton's Law is sometimes a reasonable approximation and can be derived from Eintein's Equations by neglecting some terms and introducing some approximations.
-
Newton's Law of Gravity is consistent with General Relativity at high speed too :)
Lets consider Newton equation of energy conservation for free fall from the infinity with initial speed of object equal to zero:
$\large {mc^2=E-\frac{GMm}{R}}$
or
$\large {mc^2=E-\frac{R_{g*}}{R}\;mc^2}$ where $\large {R_{g*}=GM/c^2}$
so
$\large {E=mc^2\left(1+\frac{R_{g*}}{R}\right)=mc^2\left(\frac{R+R_{g*}}{R}\right)}$
Now
$\large {mc^2=E\;\frac{R}{R+R_{g*}}=E\left(1-\frac{R_{g*}}{R+R_{g*}}\right)}$
and as the result
$\bf\large {mc^2=E-\frac{GM}{R+R_{g*}}\;\frac{E}{c^2}}$
Compare to
$\bf\large {mc^2=E-\frac{GMm}{R}}$
In the resulting equation energy ($E/c^2$) is attracted, not mass ($m$). That's why gravitational redshift is the same in Newton Gravity and in General Relativity (for $R>>R_g$).
Slight modification of Newton equation describes radial movement of an object at any speed with different initial conditions in the same way as General Relativity. Not only free fall from infinity with initial speed equal to zero.
$\bf\large {E_1\left(1-\frac{GM}{c^2(R_1+R_{gm}+R_{gM})}\right)=E_2\left(1-\frac{GM}{c^2(R_2+R_{gm}+R_{gM})}\right)}$
And it has no any singularity! So I like it :)
-
@voix LaTeX markup works here, just put code inside two dollar signs, like $E=mc^2$. – mbq♦ Nov 17 '10 at 11:34
1
Einstein proved $E = mc^2$ (therefore making $m$ redundant and indeed, this symbol is used differently nowadays to mean invariant mass). I don't see any mathematical nor physical content in these equations (or more precisely one equation written out seven times). – Marek Nov 17 '10 at 17:05
1
@voix: then it makes even less sense because $E$ in your equation is just a sum of gravitational energy $\Phi$ and rest energy $mc^2$. Now, where did the kinetic energy go? You can't just make equations up like this. Besides $m$ in this sense is invariant (it doesn't depend on from where you look at it) but $E$ clearly isn't (you can see in your formula that $E$ depends on $v$). So your equation also doesn't obey the laws of relativity. – Marek Nov 17 '10 at 17:48
1
@voix: I still don't think the answer is great. But at least it's readable and quite interesting now. Anyway, thank you for trying to improve the answer; I appreciate that :-) – Marek Nov 18 '10 at 19:41
1
-1: Come on! The General relativistic energy conservation equation holds, but it has an additional potential contributions which you didn't consider. The answer above it giving wrong formulas, and I don't know why it is upvoted at all. – Ron Maimon Jan 17 '12 at 4:52
show 14 more comments
May be the case that Gerber could not give an exact explanation for his formula, 18 years before GR, on the advance of Mercury's perihelium as we can see at mathpages. After reading the fine explanation on Lienard & Wiechert retarded potentials in the Hans de Vries online book I think that the treatment of the subject is not correct in the mathpages.
It appears to me that Walter Orlov, 2011 has a nice way to explain why Gerber's formula is correct to explain Mercury's orbit.
The answer is that they are mutually consistent because Gerber'gravity (post-Newtonian treatment with delayed potentials) is consistent with observations, the same as with GR's formulation.
Before I can ask 'Do I need GR to explain the observations?' I need to be sure that Orlov got it right.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9349004030227661, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/geometry/106875-vector-points-angle-problem.html
|
# Thread:
1. ## Vector-points-angle problem
Hi all,
I have a 2 points - 1 vector - 1 angle problem in 3D:
- There are two points, S and N.
- The x,y,z-coordinates of S are known.
- Only the z-coordinate of N is known.
- The distance between N and S is known!
- there is a vector U which is the direction of a line going through S
- the vector V is the direction vector (N-S) of the points mentioned
- the angle between vector U and V is known!
Problem: calculate the other coordinates (x,y) of point N. If anybody could help me solve it, it would be greatly appreciated!!
2. ## More information please
Hello hobbyist
Welcome to Math Help Forum!
Sorry, but your question doesn't make sense. What does this mean?
Originally Posted by hobbyist
- there is a vector U which is the direction of a line going through S
Grandad
3. ## Trying to make it more clear
point S is on a line. The direction vector of the line is U. So the line's equation would be $S + U$. Is it clear ? There's another line which is the line going through point S and point N. The director vector of that line is V. That line's equation is $S+V$. So there are two lines which are at an angle with each other. The angle is known. THe only thing that is not known is the X and Y coordinaties of point S , or (the same) the x + y vector components of direction vector V. If you have Vx and Vy, then you can calculate N (which is the goal), because you know the distance between S and N.
4. Hello hobbyist
Originally Posted by hobbyist
point S is on a line. The direction vector of the line is U. So the line's equation would be $S + U$. Is it clear ? There's another line which is the line going through point S and point N. The director vector of that line is V. That line's equation is $S+V$. So there are two lines which are at an angle with each other. The angle is known. THe only thing that is not known is the X and Y coordinaties of point S , or (the same) the x + y vector components of direction vector V. If you have Vx and Vy, then you can calculate N (which is the goal), because you know the distance between S and N.
Sorry, but it still doesn't make sense. The information supplied about U being the direction of a line through S tells us nothing about the point S itself - its direction will be the same no matter where it is, and so will the angle between U and V. How does this supply any more information about the point S?
(Incidentally, the vector equation of the line with direction given by the vector $\vec{u}$ that passes through the point with position vector
$\vec{s}$ is $\vec{r}=\vec{s} + \lambda\vec{u}$, not $\vec{r} = \vec{s} + \vec{u}$.)
Grandad
The coordinates of points S are known! U is also known, and you're right, the equation is $<br /> \vec{r} = \vec{s} + \lambda\vec{u}$ . What else do you want to know about S? I think the information about S and U are complete ?
The goal is to calculate point N. Only the z-coordinate of N is known.
To recapulate, there are two lines:
$<br /> \vec{r} = \vec{s} + \lambda\vec{u}$
and
$\vec{v} = \vec{s} + \mu(\vec{n}-\vec{s})$
Assumptions:
• line r is known
• the angle between r and v is known.
• the length of vector v is known
• Only the x and y components of n are not known.
Goal: calculate x and y components of vector $\vec{v}$.
I was thinking about using two equations to solve for Nx and Ny:
1) the formula for the angle between vectors $\vec{r}$ and $\vec{v}$ :
$\alpha = \arccos (\vec{r} \cdot \vec{v} ) \div ( \mid \vec{r} \mid \ast \mid \vec{r} \mid )$
2) and the formula for the length of the vector v :
$<br /> \mid \vec{v} \mid = Vx^2 + Vy^2 + Vz^2$
by re-arranging (2) you get the expression for Vy, which you can fill in in the formula of (1) to solve for Vx. Once you have Vx, ofcourse you can use equation (2) to find Vy. What do you think?
6. there's a small mistake iny my previous reply, formula 1 should be:
$<br /> \alpha = \arccos ( (\vec{r} \cdot \vec{v}) \div (\mid \vec{r} \mid \ast \mid \vec{v} \mid))<br />$
you can also take the normalized versions of r and v to make it simpler:
$<br /> \alpha = \arccos (\vec{rnorm} \cdot \vec{vnorm}) <br />$
but then you also have to change the 2nd formula to
$<br /> 1= vnorm_{x}^2 + vnorm_{y}^2 + vnorm_z^2<br />$
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 19, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9662911891937256, "perplexity_flag": "head"}
|
http://mathhelpforum.com/pre-calculus/40528-how-can-get-anything.html
|
# Thread:
1. ## how can get anything from this?....
Three forces AB and C act at, and away from, the orgin of a three-dimensional coordinate system. Force acts along the x axis and has a magnitude of 3 N; force B acts along the y axis and has a magnitude of 5 N; force C acts along the z axis and has a magnitude of 2 N. Evaluate the magnitude of the resultant force and specify the angle to the xy plane at which it acts. The resultant force and its projection both lie in a plane at an angle X to the xz plane. Find this angle between these two planes.
thanx
x
2. Originally Posted by steph21
Three forces AB and C act at, and away from, the orgin of a three-dimensional coordinate system. Force acts along the x axis and has a magnitude of 3 N; force B acts along the y axis and has a magnitude of 5 N; force C acts along the z axis and has a magnitude of 2 N. Evaluate the magnitude of the resultant force and specify the angle to the xy plane at which it acts. The resultant force and its projection both lie in a plane at an angle X to the xz plane. Find this angle between these two planes.
thanx
x
Hello steph21,
Lets write each of your vectors in component notation.
$F_1=3 \vec i \\\ F_2=5 \vec j \\\ F_3=2 \vec k$
Now our resultant force is the sum of all the forces.
$F_r=3\vec i+ 5 \vec j +2 \vec k$
The magnitude of the force is the vector's length
$|F_r|=\sqrt{3^2+5^2+2^2}=\sqrt{38}$
To find the angle between the vector and the xy plane lets draw a right triangle. Lets start by drawing a line from the origin to the tip of the vector, and from the tip of the vector drop a perpendicular into the xy plane.
We know both of these distances. The length of the vector is its magnitude and the perpendicular is the z component of the vector. We can find the angle by using the $\sin^{-1}\left( \frac{2}{\sqrt{38}}\right) \approx .33 rad \approx 18.9^\circ$
For the last part try to find a different triangle. Good luck.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9211198687553406, "perplexity_flag": "head"}
|
http://unapologetic.wordpress.com/2008/11/07/hopf-algebras/
|
# The Unapologetic Mathematician
## Hopf Algebras
One more piece of structure we need. We take a bialgebra $H$, and we add an “antipode”, which behaves sort of like an inverse operation. Then what we have is a Hopf algebra.
An antipode will be a linear map $S:H\rightarrow H$ on the underlying vector space. Here’s what we mean by saying that an antipode “behaves like an inverse”. In formulas, we write that:
$\mu\circ(S\otimes1_H)\circ\Delta=\iota\circ\epsilon=\mu\circ(1_H\otimes S)\circ\Delta$
On either side, first we comultiply an algebra element to split it into two parts. Then we use $S$ on one or the other part before multiplying them back together. In the center, this is the same as first taking the counit to get a field element, and then multiplying that by the unit of the algebra.
By now it shouldn’t be a surprise that the group algebra $\mathbb{F}[G]$ is also a Hopf algebra. Specifically, we set $S(e_g)=e_{g^{-1}}$. Then we can check the “left inverse” law:
$\begin{aligned}\mu\left(\left[S\otimes1_H\right]\left(\Delta(e_g)\right)\right)=\mu\left(\left[S\otimes1_H\right](e_g\otimes e_g)\right)=\\\mu(e_{g^{-1}}\otimes e_g)=e_{g^{-1}g}=e_1=\iota(1)=\iota\left(\epsilon(e_g)\right)\end{aligned}$
One thing that we should point out: this is not a group object in the category of vector spaces over $\mathbb{F}$. A group object needs the diagonal we get from the finite products on the target category. But in the category of vector spaces we pointedly do not use the categorical product as our monoidal structure. There is no “diagonal” for the tensor product.
Instead, we move to the category of coalgebras over $\mathbb{F}$. Now each coalgebra $C$ comes with its own comultiplication $\Delta:C\rightarrow C\otimes C$, which stands in for the diagonal. In the case of $\mathbb{F}[G]$ we’ve been considering, this comultiplication is clearly related to the diagonal on the underlying set of the group $G$. In fact, it’s not going too far to say that “linearizing” a set naturally brings along a coalgebra structure on top of the vector space structure we usually consider. But many coalgebras, bialgebras, and Hopf algebras are not such linearized sets.
In the category of coalgebras over $\mathbb{F}$, a Hopf algebra is a group object, so long as we use the comultiplications and counits that come with the coalgebras instead of the ones that come from the categorical product structure. Dually, we can characterize a Hopf algebra as a cogroup object in the category of algebras over $\mathbb{F}$, subject to a similar caveat. It is this cogroup structure that will be important moving forwards.
### Like this:
Posted by John Armstrong | Algebra
## 8 Comments »
1. Thank you! This is a subject which I’ve had a hard time learning from books and journal articles. I’m driven to try because of the special cases known as “called quantum groups” and because of categories of representations as a motivation for Hopf algebras.
PlanetMath says, for instance:
“The category of commutative Hopf algebras is anti-equivalent to the category of affine group schemes. The prime spectrum of a commutative Hopf algebra is an affine group scheme of multiplicative units. And going in the opposite direction, the algebra of natural transformations from an affine group scheme to its affine 1-space is a commutative Hopf algebra, with coalgebra structure given by dualising the group structure of the affine group scheme. Further, a commutative Hopf algebra is a cogroup object in the category of commutative algebras.”
Polynomial functions on a Lie group looks like it should have PHysics applications that I never understood when presented to me by Physicists. Nothing person, my wife being a Physics professor, after all…
Comment by | November 7, 2008 | Reply
2. Polynomial functions on a Lie group looks like it should have PHysics applications
Well, most Lie groups you might be thinking of are also algebraic groups, which is why polynomial functions would be apropos. The algebra of such functions will be another Hopf algebra — an algebro-geometric analogue of the group algebra I talk about above.
Comment by | November 7, 2008 | Reply
3. So how does this count as mathematics for the interested outsider?
It’s interesting, but not for the outsider!
You have to use less technical language to appeal to the masses, who don’t know very much.
Comment by | November 8, 2008 | Reply
4. If the outsider starts at the beginning, they’d have a considerably easier time. Some things just don’t translate to “lay language”.
Comment by | November 8, 2008 | Reply
5. (Hmmm, the beginning was actually before the post to which I linked, which was more like the beginning of the middle.)
Comment by | November 8, 2008 | Reply
6. Try following back the links I give that refer to earlier posts. And then follow those links, and so on until you find illumination.
It’s postmodern satori
Comment by | November 8, 2008 | Reply
7. [...] if we can swap the outputs from the comultiplication. That is, if . Similarly, bialgebras and Hopf algebras can be [...]
Pingback by | November 19, 2008 | Reply
8. [...] let’s say we have a group . This gives us a cocommutative Hopf algebra. Thus the category of representations of is monoidal — symmetric, even — and has [...]
Pingback by | November 21, 2008 | Reply
« Previous | Next »
## About this weblog
This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”).
I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 15, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9216378927230835, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/tagged/galaxy-rotation-curve+gravity
|
# Tagged Questions
1answer
162 views
### Does conformal gravity explain the Bullet cluster lensing effects?
Conformal gravity is an "alternative" theory of gravity, where instead of using the Einstein-Hilbert action composed of the Ricci scalar, the square of the conformal Weyl tensor is used. It was ...
4answers
323 views
### What makes the stars that are farther from the nucleus of the galaxy go faster than those in the middle?
It has no sense that stars that have a bigger radius and apparently less angular speed($\omega$) goes faster than the ones near the center.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9276666045188904, "perplexity_flag": "middle"}
|
http://en.m.wikibooks.org/wiki/Complex_Analysis/Complex_Numbers/Introduction
|
# Complex Analysis/Complex Numbers/Introduction
This book assumes you have some passing familiarity with the complex numbers. Indeed much of the material in the book assumes your already familiar with the multi-variable calculus. If you have not encountered the complex numbers previously it would be a good idea to read a more detailed introduction which will have many more worked examples of arithmetic of complex numbers which this book assumes is already familiar. Such an introduction can often be found in an Algebra (or "Algebra II") text, such as the Algebra wikibook's section on complex numbers.
Intuitively a complex number z is a number written in the form:
$z=x+iy$,
where x and y are real number and i is an imaginary number that satisfies $i^2 = -1$. We call x the real part and y the imaginary part of z, and denote them by $\text{Re }z$ and $\text{Im }z$, respectively. Note that for the number $z=3-2i$, $\text{Im }z=y=-2$, not $-2i$. Also, to distinguish between complex and purely real numbers, we will often use the letters z and w for the complex numbers. It is useful to have a more formal definition of the complex numbers. For example, one frequently encounters treatments of the complex numbers that state that $i$ is the number so that $i=\sqrt{-1}$, and we then operate with $i$ using many of our usual rules for arithmetic. Unfortunately if one is not careful this will lead to difficulties. Not all of the usual rules for algebra carry through in the way one might expect. For example, there is a flaw in the following calculation: $i=\sqrt{-1}=\sqrt{\frac{1}{-1}}=\frac{\sqrt{1}}{\sqrt{-1}}=\frac{1}{i}=-i$, but is very difficult to point out the flaw without first being clear about what a complex number is, and what operations are allowed with complex numbers.
Mathematically the complex numbers are defined as an ordered pair, endowed with algebraic operations.
Definition
A complex number z is an ordered pair of real numbers. That is $z=(x,y)$ where x and y are real numbers. The collection of all complex numbers is denoted by the symbol $\mathbb{C}$.
The most immediate consequence of this definition is that we may think of a complex number as a point lying the the plane. Comparing this definition with the intuitive definition above, it is easy to see that the imaginary number i simply acts as a place holder for denoting which number belongs in the second coordinate.
Definition
We define the following two functions on the complex plane. Let $z=(x,y)$ be a complex number. We define the real part is as function $\text{Re}:\mathbb{C}\to \mathbb{R}$ given by $\textrm{Re}(z)=x$. Similarly we define the imaginary part as a function $\textrm{Im}:\mathbb{C}\to \mathbb{R}$ given by $\textrm{Im}(z)=y$.
We say two complex numbers are equal if and only if they are equal as ordered pairs. That is if $z=(x,y)$ and $w=(u,v)$ then z = w if and only if x = u and y = v. Put more succinctly, two complex numbers are equal iff their real parts and imaginary parts are equal.
If complex numbers were simply ordered pairs there would not really be much to say about them. But the complex numbers are ordered pairs together with several algebraic operations, and it is these operations that make the complex numbers so interesting.
Definition
Let z = (x, y) and w = (u, v) then we define addition as:
z + w = (x + u, y + v)
and multiplication as:
z · w = (x · u − y · v, x · v + y · u)
Of course, we can view any real number r as being a complex number. Using our intuitive model for the complex numbers it is clear that the real number r should correspond to the complex number (r, 0), and with this identification the above operations correspond exactly to the usual definitions of addition and multiplication of real numbers. For the remainder of the text we will freely refer to a real number r as being a complex number, where the above identification is understood.
The following facts about addition and multiplication follow easily from the corresponding operators for the real numbers. Their verification is left as an exercise to the reader. Let z, w and v be complex numbers, then:
• z + (w + v) = (z + w) + v (Associativity of addition);
• z · (w · v) = (z · w) · v (Associativity of multiplication);
• z + w = w + z (Commutativity of addition);
• z · w = w · z (Commutativity of multiplication);
• z · (w + v) = z · w + z · v (Distributive Property).
One nice feature of complex addition and multiplication is that 0 and 1 play the same role in the real numbers as they do in the complex numbers. That is 0 is the additive identity for the complex numbers (meaning z + 0 = 0 + z = z) and 1 is the multiplicative identity (meaning z · 1 = 1 · z = z).
Of course it is natural at this point to ask about subtraction and division. But stating the formula's for subtraction and division outright, we instead follow the usual course for other subjects of algebra and first discuss inverses.
Definition
Let z = (x, y) be any complex number, then we define the additive inverse −z as:
−z = (−x, −y)
Then it is immediate to verify that z + −z = 0.
Now for any two complex numbers z and w we define z − w to be z + −w. We now turn to doing the same for multiplication.
Definition
Let z = (x, y) be any non-zero complex number, then we define the multiplicative inverse, $\tfrac{1}{z}$ as:
$\frac{1}{z}=\Big(\frac{x}{x^2+y^2}, -\frac{y}{x^2+y^2}\Big)$
It is left to the reader to verify that $z\cdot\tfrac{1}{z}=1$.
We may now of course define division as $\tfrac{z}{w}=z\cdot\tfrac{1}{w}$. Just as with the real numbers, division by zero remains undefined. In order for this last definition to make more sense it helps to introduce two more operations on the complex numbers. The first is the absolute value.
Definition
Let z = (x, y) be any complex number, then we define the complex absolute value, denoted |z| as:
$|z|=\sqrt{x^2+y^2}$
Notice that |z| is always a real number and |z| ≥ 0 for any z.
Of course with this definition of the absolute value, if z = (x, y) then |z| is exactly the same as the norm of the vector (x, y).
Before introducing the second definition, notice that our intuitive definition simply required us to find a number whose square was −1. Of course i2 = (−i)2 = −1, so for a starting point one could have chosen -i as the most basic imaginary number. This idea motivates the following definition.
Definition
Let z = (x, y) be any complex number, then we define the conjugate of z, denoted $\bar z$ as:
$\bar z=(x,-y).$
With this definition it is an easy exercise to check that $z\cdot\bar z=|z|^2$, so dividing both sides by |z|2 we arrive at $z\cdot\tfrac{\bar z}{|z|^2}=1$. Compare this with the definition of the multiplicative inverse above.
Recall that, every point in the plane can be written using rectangular coordinates such as (x, y) where of course the numbers denote the distance from the x and y axes respectively. But the point could equally well be described using polar coordinates (r, θ), where the first number represents the distance from the origin, and the second is the angle that is made with the positive x axis when you connect the origin and the point with a line segment. Since complex numbers may be thought of simply as points in the plane, we can immediately derive a polar representation of a complex number. As usual we can let a point z = (x, y) = (r cos θ, r sin θ) where $\textstyle r=\sqrt{x^2+y^2}$. The choice of θ is not unique because sine and cosine are 2π periodic. A value θ for which z = (r cos θ, r sin θ) is called an argument of z. If we restrict our choice of θ so that 0 ≤ θ < 2π then the choice of θ is unique provided that z ≠ 0. This is often called the principle branch of the argument.
As a shorthand, we may write $\operatorname{cis}\,\theta = \cos \theta + i\sin \theta$, so $z=r\operatorname{cis} \theta$. This notation simplifies multiplication and taking powers, because
$z_1 z_2 \,\!$ $=(r_1 \operatorname{cis} \theta_1)(r_2 \operatorname{cis} \theta_2)$
$=r_1 r_2 \left [ \left ( \cos \theta_1 + i \sin \theta_1 \right )\left ( \cos \theta_2 + i \sin \theta_2 \right ) \right ]$
$=r_1 r_2 \left [ \left ( \cos \theta_1 \cos \theta_2 - \sin \theta_1 \sin \theta_2 \right ) + i \left ( \sin \theta_1 \cos \theta_2 + \cos \theta_1 \sin \theta_2 \right ) \right ]$
$=r_1 r_2 \left ( \cos (\theta_1 + \theta_2) + i \sin (\theta_1 + \theta_2) \right )$
$=r_1 r_2 \operatorname{cis} (\theta_1 + \theta_2)$
by elementary trigonometric identities. Applying this formula can therefore simplify many calculations with complex numbers.
Using induction we can show that
$z^n=r^n \operatorname{cis} (n \theta)$,
holds for all positive integers $n$.
Now that we have set up the basic concept of a complex number, we continue to topological properties of the complex plane.
## Exercises
1. Determine $\overline{(z_1+z_2)}$ in terms of $\bar z_1$ and $\bar z_2$.
2. Determine $\overline{z_1z_2}$ in terms of $\bar z_1$ and $\bar z_2$.
3. Show that the absolute value on the complex plane obeys the triangle inequality. That is show that:
$|z_1+z_2|\leq |z_1|+|z_2|.$
4. Show that the absolute value on the complex plane obeys the reverse triangle inequality. That is show that:
$|z_1+z_2|\geq \big||z_1|-|z_2|\big|.$
5. Given a non-zero complex number $z=r\operatorname{cis}(\theta)$ determine $r'$ and $\theta'$ so that $\frac{1}{z}=r'\operatorname{cis}(\theta')$.
6. Determine formulas for $\text{Re }z$ and $\text{Im }z$ in terms of $z$ and $\bar z$.
7. Find $n$ distinct complex numbers $z_k$, $k = 0, \ldots, n-1$ so that $z_k^n=z$. Hint: Use the formula given above for $z_k^n$ and the $2\pi$ periodicity of $\cos(\theta)$ and $\sin(\theta)$.
Next
↑Jump back a section
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 64, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9066497683525085, "perplexity_flag": "head"}
|
http://mathhelpforum.com/pre-calculus/1646-determine-equation-median-vertex.html
|
# Thread:
1. ## Determine the equation of the median from vertex
ABC has the following co-ordinates: A(3,7)B(-1,-6) and C(-5,3). Determine the equation of the median from vertex C.
This question is giving me a bit of trouble. If someone could please help me out i would greatly appreciate it.
2. Originally Posted by Scott9909
ABC has the following co-ordinates: A(3,7)B(-1,-6) and C(-5,3). Determine the equation of the median from vertex C.
This question is giving me a bit of trouble. If someone could please help me out i would greatly appreciate it.
Part the First, find the coordinates of midpoint: By definition, the median from vertex $C=(-5,3)$ is a line which joins the midpoint of side $AB$. But by the midpoint formula the midpoint of $AB$ is $(\frac{3-1}{2},\frac{7-6}{2})=(1,1/2)$. Thus, the median passes through points $C=(-5,3)$ and $(1,1/2)$.
Part the Second, find the equation of median: Using the slope-point formula which states that the equation of a line passing through point $(x_0,y_0)$ having slope $m$ is $y-y_0=m(x-x_0)$. Thus, the slope of $(1,1/2),(-5,3)$ is $m=-5/12$. Thus, the equation of line is (use any point for $(x_0,y_0)$)
$y+5=-5/12(x-3)$ Open and simplify,
$y=-\frac{5}{12}x-\frac{15}{4}$
Q.E.D.
3. Im a bit confused on part 2.
Do you have to find the slope of the line? Im not furmilur with the formula you put up. Ive been taught to do it Y=X2-X1/Y2-y1
and i dont seem to be getting the same slope.
4. That is exactly what I did $(y_2-y_1)/(x_2-x_1)$. You mean the formula for the equation of the line?
5. Im not exactly sure. I really dont understand math that well.
are you supposed to do y2-y/x2-x1 with your midpoint and C(-5,3)?
And also if it is what is considered y2 and x2? C or the midpoint.
Sorry if these questions are stupid.
6. Originally Posted by Scott9909
Im not exactly sure. I really dont understand math that well.
are you supposed to do y2-y/x2-x1 with your midpoint and C(-5,3)?
And also if it is what is considered y2 and x2? C or the midpoint.
Hello,
you've got the answer to your problem already. I'll give you only a few additional informations:
1. If you have 2 points $P_1,\ P_2$ with the coordinates $P_1(x_1,\ y_1),\ P_2(x_2,\ y_2)$ then you'll get the midpoint $M \left( \frac{x_1+x_2}{2},\ \frac{y_1+y_2}{2}\right)$
2. A line through 2 points is described completely by the following equation: $\frac{y-y_1}{x-x_1} = \frac{y_2 - y_1}{x_2 - x_2}$
Solve this equation for y and you'll get: $y = \frac{y_2 - y_1}{x_2 - x_2}\cdot (x-x_1) + y_1$ where $\frac{y_2 - y_1}{x_2 - x_2}$ is the slope of the line.
I hope that these additional remarks helped a little bit.
Greetings
EB
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 21, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9520488381385803, "perplexity_flag": "head"}
|
http://solution-nine.com/Singly_linked_list
|
Singly Linked List Research Materials
This page contains a list of user images about Singly Linked List which are relevant to the point and besides images, you can also use the tabs in the bottom to browse Singly Linked List news, videos, wiki information, tweets, documents and weblinks.
connect() timed out!
Rihanna - Take A Bow
Music video by Rihanna performing Take A Bow. YouTube view counts pre-VEVO: 66288884. (C) 2008 The Island Def Jam Music Group.
THE LEGEND OF ZELDA RAP [MUSIC VIDEO]
WATCH BLOOPERS & MORE: http://bit.ly/ZELDAxtras DOWNLOAD THE SONG: http://smo.sh/13NrBp8 DOWNLOAD UNCENSORED SONG: http://smo.sh/WMYpsf GET LEGEND OF SMOSH T...
Key & Peele: Substitute Teacher
A substitute teacher from the inner city refuses to be messed with while taking attendance.
FIRETRUCK! (Official Music Video)
BLOOPERS: http://bit.ly/FiretruckBloopers GET THE SONG: http://smo.sh/WMZv7l MILKSHAKE MUSIC VIDEO: http://bit.ly/MilkyMilkshake CHECK OUT THIS FIRETRUCK TEE...
Draw My Life - Ryan Higa
So i was pretty hesitant to make this video... but after all of your request, here is my Draw My Life video! Check out my 2nd Channel for more vlogs: http://...
Assassin's Creed Meets Parkour in Real Life
Watch the Behind The Scenes in this link below: http://youtu.be/36CLFOyaml0 Make sure to subscribe to this channel for new vids each week! http://youtube.com...
Taylor Swift - Back To December
Music video by Taylor Swift performing Back To December. (C) 2011 Big Machine Records, LLC.
Adele - Rolling In The Deep
Music video by Adele performing Rolling In The Deep. (C) 2010 XL Recordings Ltd. #VEVOCertified on July 25, 2011. http://www.vevo.com/certified http://www.yo...
Avril Lavigne - When You're Gone
Music video by Avril Lavigne performing When You're Gone. YouTube view counts pre-VEVO: 696566 (C) 2007 RCA/JIVE Label Group, a unit of Sony Music Entertain...
David Guetta - Just One Last Time ft. Taped Rai
"Just One Last Time" feat. Taped Rai. Available to download on iTunes including remixes of : Tiësto, HARD ROCK SOFA & Deniz Koyu http://smarturl.it/DGJustOne...
YOLO (feat. Adam Levine & Kendrick Lamar)
YOLO is available on iTunes now! http://smarturl.it/lonelyIslandYolo New album coming soon... Check out the awesome band the music in YOLO is sampled from Th...
PEOPLE ARE AWESOME 2011
Most Annoying People On The Internet
Don't be these people. Mapoti See Bloopers and Behind-The-Scenes Here!: http://youtu.be/dfpo7uXwJnM Huge thank you and shout out to Dtrix: http://www.youtube...
In computer science, a linked list is a data structure consisting of a group of nodes which together represent a sequence. Under the simplest form, each node is composed of a datum and a reference (in other words, a link) to the next node in the sequence; more complex variants add additional links. This structure allows for efficient insertion or removal of elements from any position in the sequence.
A linked list whose nodes contain two fields: an integer value and a link to the next node. The last node is linked to a terminator used to signify the end of the list.
Linked lists are among the simplest and most common data structures. They can be used to implement several other common abstract data types, including lists (the abstract data type), stacks, queues, associative arrays, and S-expressions, though it is not uncommon to implement the other data structures directly without using a list as the basis of implementation.
The principal benefit of a linked list over a conventional array is that the list elements can easily be inserted or removed without reallocation or reorganization of the entire structure because the data items need not be stored contiguously in memory or on disk. Linked lists allow insertion and removal of nodes at any point in the list, and can do so with a constant number of operations if the link previous to the link being added or removed is maintained during list traversal.
On the other hand, simple linked lists by themselves do not allow random access to the data, or any form of efficient indexing. Thus, many basic operations — such as obtaining the last node of the list (assuming that the last node is not maintained as separate node reference in the list structure), or finding a node that contains a given datum, or locating the place where a new node should be inserted — may require scanning most or all of the list elements.
History
Linked lists were developed in 1955-56 by Allen Newell, Cliff Shaw and Herbert A. Simon at RAND Corporation as the primary data structure for their Information Processing Language. IPL was used by the authors to develop several early artificial intelligence programs, including the Logic Theory Machine, the General Problem Solver, and a computer chess program. Reports on their work appeared in IRE Transactions on Information Theory in 1956, and several conference proceedings from 1957 to 1959, including Proceedings of the Western Joint Computer Conference in 1957 and 1958, and Information Processing (Proceedings of the first UNESCO International Conference on Information Processing) in 1959. The now-classic diagram consisting of blocks representing list nodes with arrows pointing to successive list nodes appears in "Programming the Logic Theory Machine" by Newell and Shaw in Proc. WJCC, February 1957. Newell and Simon were recognized with the ACM Turing Award in 1975 for having "made basic contributions to artificial intelligence, the psychology of human cognition, and list processing". The problem of machine translation for natural language processing led Victor Yngve at Massachusetts Institute of Technology (MIT) to use linked lists as data structures in his COMIT programming language for computer research in the field of linguistics. A report on this language entitled "A programming language for mechanical translation" appeared in Mechanical Translation in 1958.
LISP, standing for list processor, was created by John McCarthy in 1958 while he was at MIT and in 1960 he published its design in a paper in the Communications of the ACM, entitled "Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I". One of LISP's major data structures is the linked list. By the early 1960s, the utility of both linked lists and languages which use these structures as their primary data representation was well established. Bert Green of the MIT Lincoln Laboratory published a review article entitled "Computer languages for symbol manipulation" in IRE Transactions on Human Factors in Electronics in March 1961 which summarized the advantages of the linked list approach. A later review article, "A Comparison of list-processing computer languages" by Bobrow and Raphael, appeared in Communications of the ACM in April 1964.
Several operating systems developed by Technical Systems Consultants (originally of West Lafayette Indiana, and later of Chapel Hill, North Carolina) used singly linked lists as file structures. A directory entry pointed to the first sector of a file, and succeeding portions of the file were located by traversing pointers. Systems using this technique included Flex (for the Motorola 6800 CPU), mini-Flex (same CPU), and Flex9 (for the Motorola 6809 CPU). A variant developed by TSC for and marketed by Smoke Signal Broadcasting in California, used doubly linked lists in the same manner.
The TSS/360 operating system, developed by IBM for the System 360/370 machines, used a double linked list for their file system catalog. The directory structure was similar to Unix, where a directory could contain files and/or other directories and extend to any depth. A utility flea was created to fix file system problems after a crash, since modified portions of the file catalog were sometimes in memory when a crash occurred. Problems were detected by comparing the forward and backward links for consistency. If a forward link was corrupt, then if a backward link to the infected node was found, the forward link was set to the node with the backward link. A humorous comment in the source code where this utility was invoked stated "Everyone knows a flea collar gets rid of bugs in cats".
Basic concepts and nomenclature
Each record of a linked list is often called an element or node.
The field of each node that contains the address of the next node is usually called the next link or next pointer. The remaining fields are known as the data, information, value, cargo, or payload fields.
The head of a list is its first node. The tail of a list may refer either to the rest of the list after the head, or to the last node in the list. In Lisp and some derived languages, the next node may be called the cdr (pronounced could-er) of the list, while the payload of the head node may be called the car.
Bob (bottom) has the key to box 201, which contains the first half of the book and a key to box 102, which contains the rest of the book.
Post office box analogy
The concept of a linked list can be explained by a simple analogy to real-world post office boxes. Suppose Alice is a spy who wishes to give a codebook to Bob by putting it in a post office box and then giving him the key. However, the book is too thick to fit in a single post office box, so instead she divides the book into two halves and purchases two post office boxes. In the first box, she puts the first half of the book and a key to the second box, and in the second box she puts the second half of the book. She then gives Bob a key to the first box. No matter how large the book is, this scheme can be extended to any number of boxes by always putting the key to the next box in the previous box.
In this analogy, the boxes correspond to elements or nodes, the keys correspond to pointers, and the book itself is the data. The key given to Bob is the head pointer, while those stored in the boxes are next pointers. The scheme as described above is a singly linked list (see below).
Singly linked list
Singly linked lists contain nodes which have a data field as well as a next field, which points to the next node in the linked list.
A singly linked list whose nodes contain two fields: an integer value and a link to the next node
Doubly linked list
In a doubly linked list, each node contains, besides the next-node link, a second link field pointing to the previous node in the sequence. The two links may be called forward(s) and backwards, or next and prev(ious).
A doubly linked list whose nodes contain three fields: an integer value, the link forward to the next node, and the link backward to the previous node
A technique known as XOR-linking allows a doubly linked list to be implemented using a single link field in each node. However, this technique requires the ability to do bit operations on addresses, and therefore may not be available in some high-level languages.
Multiply linked list
In a multiply linked list, each node contains two or more link fields, each field being used to connect the same set of data records in a different order (e.g., by name, by department, by date of birth, etc.). While doubly linked lists can be seen as special cases of multiply linked list, the fact that the two orders are opposite to each other leads to simpler and more efficient algorithms, so they are usually treated as a separate case.
Circular list
In the last node of a list, the link field often contains a null reference, a special value used to indicate the lack of further nodes. A less common convention is to make it point to the first node of the list; in that case the list is said to be circular or circularly linked; otherwise it is said to be open or linear.
In the case of a circular doubly linked list, the only change that occurs is that the end, or "tail", of the said list is linked back to the front, or "head", of the list and vice versa.
Sentinel nodes
Main article: Sentinel node
In some implementations, an extra sentinel or dummy node may be added before the first data record and/or after the last one. This convention simplifies and accelerates some list-handling algorithms, by ensuring that all links can be safely dereferenced and that every list (even one that contains no data elements) always has a "first" and "last" node.
Empty lists
An empty list is a list that contains no data records. This is usually the same as saying that it has zero nodes. If sentinel nodes are being used, the list is usually said to be empty when it has only sentinel nodes.
Hash linking
The link fields need not be physically part of the nodes. If the data records are stored in an array and referenced by their indices, the link field may be stored in a separate array with the same indices as the data records.
List handles
Since a reference to the first node gives access to the whole list, that reference is often called the address, pointer, or handle of the list. Algorithms that manipulate linked lists usually get such handles to the input lists and return the handles to the resulting lists. In fact, in the context of such algorithms, the word "list" often means "list handle". In some situations, however, it may be convenient to refer to a list by a handle that consists of two links, pointing to its first and last nodes.
Combining alternatives
The alternatives listed above may be arbitrarily combined in almost every way, so one may have circular doubly linked lists without sentinels, circular singly linked lists with sentinels, etc.
Tradeoffs
As with most choices in computer programming and design, no method is well suited to all circumstances. A linked list data structure might work well in one case, but cause problems in another. This is a list of some of the common tradeoffs involving linked list structures.
Linked lists vs. dynamic arrays
array
Balanced
tree
Random access
list
Indexing Θ(n) Θ(1) Θ(1) Θ(log n) Θ(log n)
Insert/delete at beginning Θ(1) N/A Θ(n) Θ(log n) Θ(1)
Insert/delete at end Θ(n) N/A Θ(1) amortized Θ(log n) Θ(log n) updating
Insert/delete in middle search time +
Θ(1)[1][2][3]
N/A Θ(n) Θ(log n) Θ(log n) updating
Wasted space (average) Θ(n) 0 Θ(n)[4] Θ(n) Θ(n)
A dynamic array is a data structure that allocates all elements contiguously in memory, and keeps a count of the current number of elements. If the space reserved for the dynamic array is exceeded, it is reallocated and (possibly) copied, an expensive operation.
Linked lists have several advantages over dynamic arrays. Insertion or deletion of an element at a specific point of a list, assuming that we have a pointer to the node (before the one to be removed, or before the insertion point) already, is a constant-time operation (otherwise without this reference it is O(n)), whereas insertion in a dynamic array at random locations will require moving half of the elements on average, and all the elements in the worst case. While one can "delete" an element from an array in constant time by somehow marking its slot as "vacant", this causes fragmentation that impedes the performance of iteration.
Moreover, arbitrarily many elements may be inserted into a linked list, limited only by the total memory available; while a dynamic array will eventually fill up its underlying array data structure and will have to reallocate — an expensive operation, one that may not even be possible if memory is fragmented, although the cost of reallocation can be averaged over insertions, and the cost of an insertion due to reallocation would still be amortized O(1). This helps with appending elements at the array's end, but inserting into (or removing from) middle positions still carries prohibitive costs due to data moving to maintain contiguity. An array from which many elements are removed may also have to be resized in order to avoid wasting too much space.
On the other hand, dynamic arrays (as well as fixed-size array data structures) allow constant-time random access, while linked lists allow only sequential access to elements. Singly linked lists, in fact, can only be traversed in one direction. This makes linked lists unsuitable for applications where it's useful to look up an element by its index quickly, such as heapsort. Sequential access on arrays and dynamic arrays is also faster than on linked lists on many machines, because they have optimal locality of reference and thus make good use of data caching.
Another disadvantage of linked lists is the extra storage needed for references, which often makes them impractical for lists of small data items such as characters or boolean values, because the storage overhead for the links may exceed by a factor of two or more the size of the data. In contrast, a dynamic array requires only the space for the data itself (and a very small amount of control data).[note 1] It can also be slow, and with a naïve allocator, wasteful, to allocate memory separately for each new element, a problem generally solved using memory pools.
Some hybrid solutions try to combine the advantages of the two representations. Unrolled linked lists store several elements in each list node, increasing cache performance while decreasing memory overhead for references. CDR coding does both these as well, by replacing references with the actual data referenced, which extends off the end of the referencing record.
A good example that highlights the pros and cons of using dynamic arrays vs. linked lists is by implementing a program that resolves the Josephus problem. The Josephus problem is an election method that works by having a group of people stand in a circle. Starting at a predetermined person, you count around the circle n times. Once you reach the nth person, take them out of the circle and have the members close the circle. Then count around the circle the same n times and repeat the process, until only one person is left. That person wins the election. This shows the strengths and weaknesses of a linked list vs. a dynamic array, because if you view the people as connected nodes in a circular linked list then it shows how easily the linked list is able to delete nodes (as it only has to rearrange the links to the different nodes). However, the linked list will be poor at finding the next person to remove and will need to search through the list until it finds that person. A dynamic array, on the other hand, will be poor at deleting nodes (or elements) as it cannot remove one node without individually shifting all the elements up the list by one. However, it is exceptionally easy to find the nth person in the circle by directly referencing them by their position in the array.
The list ranking problem concerns the efficient conversion of a linked list representation into an array. Although trivial for a conventional computer, solving this problem by a parallel algorithm is complicated and has been the subject of much research.
A balanced tree has similar memory access patterns and space overhead to a linked list while permitting much more efficient indexing, taking O(log n) time instead of O(n) for a random access. However, insertion and deletion operations are more expensive due to the overhead of tree manipulations to maintain balance. Schemes exist for trees to automatically maintain themselves in a balanced state: AVL trees or red-black trees.
Singly linked linear lists vs. other lists
While doubly linked and/or circular lists have advantages over singly linked linear lists, linear lists offer some advantages that make them preferable in some situations.
For one thing, a singly linked linear list is a recursive data structure, because it contains a pointer to a smaller object of the same type. For that reason, many operations on singly linked linear lists (such as merging two lists, or enumerating the elements in reverse order) often have very simple recursive algorithms, much simpler than any solution using iterative commands. While one can adapt those recursive solutions for doubly linked and circularly linked lists, the procedures generally need extra arguments and more complicated base cases.
Linear singly linked lists also allow tail-sharing, the use of a common final portion of sub-list as the terminal portion of two different lists. In particular, if a new node is added at the beginning of a list, the former list remains available as the tail of the new one — a simple example of a persistent data structure. Again, this is not true with the other variants: a node may never belong to two different circular or doubly linked lists.
In particular, end-sentinel nodes can be shared among singly linked non-circular lists. One may even use the same end-sentinel node for every such list. In Lisp, for example, every proper list ends with a link to a special node, denoted by `nil` or `()`, whose `CAR` and `CDR` links point to itself. Thus a Lisp procedure can safely take the `CAR` or `CDR` of any list.
Indeed, the advantages of the fancy variants are often limited to the complexity of the algorithms, not in their efficiency. A circular list, in particular, can usually be emulated by a linear list together with two variables that point to the first and last nodes, at no extra cost.
Doubly linked vs. singly linked
Double-linked lists require more space per node (unless one uses XOR-linking), and their elementary operations are more expensive; but they are often easier to manipulate because they allow sequential access to the list in both directions. In a doubly linked list, one can insert or delete a node in a constant number of operations given only that node's address. To do the same in a singly linked list, one must have the address of the pointer to that node, which is either the handle for the whole list (in case of the first node) or the link field in the previous node. Some algorithms require access in both directions. On the other hand, doubly linked lists do not allow tail-sharing and cannot be used as persistent data structures.
Circularly linked vs. linearly linked
A circularly linked list may be a natural option to represent arrays that are naturally circular, e.g. the corners of a polygon, a pool of buffers that are used and released in FIFO order, or a set of processes that should be time-shared in round-robin order. In these applications, a pointer to any node serves as a handle to the whole list.
With a circular list, a pointer to the last node gives easy access also to the first node, by following one link. Thus, in applications that require access to both ends of the list (e.g., in the implementation of a queue), a circular structure allows one to handle the structure by a single pointer, instead of two.
A circular list can be split into two circular lists, in constant time, by giving the addresses of the last node of each piece. The operation consists in swapping the contents of the link fields of those two nodes. Applying the same operation to any two nodes in two distinct lists joins the two list into one. This property greatly simplifies some algorithms and data structures, such as the quad-edge and face-edge.
The simplest representation for an empty circular list (when such a thing makes sense) is a null pointer, indicating that the list has no nodes. Without this choice, many algorithms have to test for this special case, and handle it separately. By contrast, the use of null to denote an empty linear list is more natural and often creates fewer special cases.
Using sentinel nodes
Sentinel node may simplify certain list operations, by ensuring that the next and/or previous nodes exist for every element, and that even empty lists have at least one node. One may also use a sentinel node at the end of the list, with an appropriate data field, to eliminate some end-of-list tests. For example, when scanning the list looking for a node with a given value x, setting the sentinel's data field to x makes it unnecessary to test for end-of-list inside the loop. Another example is the merging two sorted lists: if their sentinels have data fields set to +∞, the choice of the next output node does not need special handling for empty lists.
However, sentinel nodes use up extra space (especially in applications that use many short lists), and they may complicate other operations (such as the creation of a new empty list).
However, if the circular list is used merely to simulate a linear list, one may avoid some of this complexity by adding a single sentinel node to every list, between the last and the first data nodes. With this convention, an empty list consists of the sentinel node alone, pointing to itself via the next-node link. The list handle should then be a pointer to the last data node, before the sentinel, if the list is not empty; or to the sentinel itself, if the list is empty.
The same trick can be used to simplify the handling of a doubly linked linear list, by turning it into a circular doubly linked list with a single sentinel node. However, in this case, the handle should be a single pointer to the dummy node itself.[5]
Linked list operations
When manipulating linked lists in-place, care must be taken to not use values that you have invalidated in previous assignments. This makes algorithms for inserting or deleting linked list nodes somewhat subtle. This section gives pseudocode for adding or removing nodes from singly, doubly, and circularly linked lists in-place. Throughout we will use null to refer to an end-of-list marker or sentinel, which may be implemented in a number of ways.
Linearly linked lists
Singly linked lists
Our node data structure will have two fields. We also keep a variable firstNode which always points to the first node in the list, or is null for an empty list.
``` record Node
{
data; // The data being stored in the node
Node next // A reference to the next node, null for last node
}
```
``` record List
{
Node firstNode // points to first node of list; null for empty list
}
```
Traversal of a singly linked list is simple, beginning at the first node and following each next link until we come to the end:
``` node := list.firstNode
while node not null
(do something with node.data)
node := node.next
```
The following code inserts a node after an existing node in a singly linked list. The diagram shows how it works. Inserting a node before an existing one cannot be done directly; instead, one must keep track of the previous node and insert a node after it.
``` function insertAfter(Node node, Node newNode) // insert newNode after node
newNode.next := node.next
node.next := newNode
```
Inserting at the beginning of the list requires a separate function. This requires updating firstNode.
``` function insertBeginning(List list, Node newNode) // insert node before current first node
newNode.next := list.firstNode
list.firstNode := newNode
```
Similarly, we have functions for removing the node after a given node, and for removing a node from the beginning of the list. The diagram demonstrates the former. To find and remove a particular node, one must again keep track of the previous element.
``` function removeAfter(Node node) // remove node past this one
obsoleteNode := node.next
node.next := node.next.next
destroy obsoleteNode
```
``` function removeBeginning(List list) // remove first node
obsoleteNode := list.firstNode
list.firstNode := list.firstNode.next // point past deleted node
destroy obsoleteNode
```
Notice that `removeBeginning()` sets `list.firstNode` to `null` when removing the last node in the list.
Since we can't iterate backwards, efficient `insertBefore` or `removeBefore` operations are not possible.
Appending one linked list to another can be inefficient unless a reference to the tail is kept as part of the List structure, because we must traverse the entire first list in order to find the tail, and then append the second list to this. Thus, if two linearly linked lists are each of length $n$, list appending has asymptotic time complexity of $O(n)$. In the Lisp family of languages, list appending is provided by the `append` procedure.
Many of the special cases of linked list operations can be eliminated by including a dummy element at the front of the list. This ensures that there are no special cases for the beginning of the list and renders both `insertBeginning()` and `removeBeginning()` unnecessary. In this case, the first useful data in the list will be found at `list.firstNode.next`.
Circularly linked list
In a circularly linked list, all nodes are linked in a continuous circle, without using null. For lists with a front and a back (such as a queue), one stores a reference to the last node in the list. The next node after the last node is the first node. Elements can be added to the back of the list and removed from the front in constant time.
Both types of circularly linked lists benefit from the ability to traverse the full list beginning at any given node. This often allows us to avoid storing firstNode and lastNode, although if the list may be empty we need a special representation for the empty list, such as a lastNode variable which points to some node in the list or is null if it's empty; we use such a lastNode here. This representation significantly simplifies adding and removing nodes with a non-empty list, but empty lists are then a special case.
Algorithms
Assuming that someNode is some node in a non-empty circular singly linked list, this code iterates through that list starting with someNode:
``` function iterate(someNode)
if someNode ≠ null
node := someNode
do
do something with node.value
node := node.next
while node ≠ someNode
```
Notice that the test "while node ≠ someNode" must be at the end of the loop. If the test was moved to the beginning of the loop, the procedure would fail whenever the list had only one node.
This function inserts a node "newNode" into a circular linked list after a given node "node". If "node" is null, it assumes that the list is empty.
``` function insertAfter(Node node, Node newNode)
if node = null
newNode.next := newNode
else
newNode.next := node.next
node.next := newNode
```
Suppose that "L" is a variable pointing to the last node of a circular linked list (or null if the list is empty). To append "newNode" to the end of the list, one may do
``` insertAfter(L, newNode)
L := newNode
```
To insert "newNode" at the beginning of the list, one may do
``` insertAfter(L, newNode)
if L = null
L := newNode
```
Linked lists using arrays of nodes
Languages that do not support any type of reference can still create links by replacing pointers with array indices. The approach is to keep an array of records, where each record has integer fields indicating the index of the next (and possibly previous) node in the array. Not all nodes in the array need be used. If records are also not supported, parallel arrays can often be used instead.
As an example, consider the following linked list record that uses arrays instead of pointers:
``` record Entry {
integer next; // index of next entry in array
integer prev; // previous entry (if double-linked)
string name;
real balance;
}
```
By creating an array of these structures, and an integer variable to store the index of the first element, a linked list can be built:
```integer listHead
Entry Records[1000]
```
Links between elements are formed by placing the array index of the next (or previous) cell into the Next or Prev field within a given element. For example:
Index Next Prev Name Balance
0 1 4 Jones, John 123.45
1 -1 0 Smith, Joseph 234.56
2 (listHead) 4 -1 Adams, Adam 0.00
3 Ignore, Ignatius 999.99
4 0 2 Another, Anita 876.54
5
6
7
In the above example, `ListHead` would be set to 2, the location of the first entry in the list. Notice that entry 3 and 5 through 7 are not part of the list. These cells are available for any additions to the list. By creating a `ListFree` integer variable, a free list could be created to keep track of what cells are available. If all entries are in use, the size of the array would have to be increased or some elements would have to be deleted before new entries could be stored in the list.
The following code would traverse the list and display names and account balance:
```i := listHead
while i ≥ 0 // loop through the list
print i, Records[i].name, Records[i].balance // print entry
i := Records[i].next
```
When faced with a choice, the advantages of this approach include:
• The linked list is relocatable, meaning it can be moved about in memory at will, and it can also be quickly and directly serialized for storage on disk or transfer over a network.
• Especially for a small list, array indexes can occupy significantly less space than a full pointer on many architectures.
• Locality of reference can be improved by keeping the nodes together in memory and by periodically rearranging them, although this can also be done in a general store.
• Naïve dynamic memory allocators can produce an excessive amount of overhead storage for each node allocated; almost no allocation overhead is incurred per node in this approach.
• Seizing an entry from a pre-allocated array is faster than using dynamic memory allocation for each node, since dynamic memory allocation typically requires a search for a free memory block of the desired size.
This approach has one main disadvantage, however: it creates and manages a private memory space for its nodes. This leads to the following issues:
• It increases complexity of the implementation.
• Growing a large array when it is full may be difficult or impossible, whereas finding space for a new linked list node in a large, general memory pool may be easier.
• Adding elements to a dynamic array will occasionally (when it is full) unexpectedly take linear (O(n)) instead of constant time (although it's still an amortized constant).
• Using a general memory pool leaves more memory for other data if the list is smaller than expected or if many nodes are freed.
For these reasons, this approach is mainly used for languages that do not support dynamic memory allocation. These disadvantages are also mitigated if the maximum size of the list is known at the time the array is created.
Language support
Many programming languages such as Lisp and Scheme have singly linked lists built in. In many functional languages, these lists are constructed from nodes, each called a cons or cons cell. The cons has two fields: the car, a reference to the data for that node, and the cdr, a reference to the next node. Although cons cells can be used to build other data structures, this is their primary purpose.
In languages that support abstract data types or templates, linked list ADTs or templates are available for building linked lists. In other languages, linked lists are typically built using references together with records.
Internal and external storage
When constructing a linked list, one is faced with the choice of whether to store the data of the list directly in the linked list nodes, called internal storage, or merely to store a reference to the data, called external storage. Internal storage has the advantage of making access to the data more efficient, requiring less storage overall, having better locality of reference, and simplifying memory management for the list (its data is allocated and deallocated at the same time as the list nodes).
External storage, on the other hand, has the advantage of being more generic, in that the same data structure and machine code can be used for a linked list no matter what the size of the data is. It also makes it easy to place the same data in multiple linked lists. Although with internal storage the same data can be placed in multiple lists by including multiple next references in the node data structure, it would then be necessary to create separate routines to add or delete cells based on each field. It is possible to create additional linked lists of elements that use internal storage by using external storage, and having the cells of the additional linked lists store references to the nodes of the linked list containing the data.
In general, if a set of data structures needs to be included in multiple linked lists, external storage is the best approach. If a set of data structures need to be included in only one linked list, then internal storage is slightly better, unless a generic linked list package using external storage is available. Likewise, if different sets of data that can be stored in the same data structure are to be included in a single linked list, then internal storage would be fine.
Another approach that can be used with some languages involves having different data structures, but all have the initial fields, including the next (and prev if double linked list) references in the same location. After defining separate structures for each type of data, a generic structure can be defined that contains the minimum amount of data shared by all the other structures and contained at the top (beginning) of the structures. Then generic routines can be created that use the minimal structure to perform linked list type operations, but separate routines can then handle the specific data. This approach is often used in message parsing routines, where several types of messages are received, but all start with the same set of fields, usually including a field for message type. The generic routines are used to add new messages to a queue when they are received, and remove them from the queue in order to process the message. The message type field is then used to call the correct routine to process the specific type of message.
Example of internal and external storage
Suppose you wanted to create a linked list of families and their members. Using internal storage, the structure might look like the following:
``` record member { // member of a family
member next;
string firstName;
integer age;
}
record family { // the family itself
family next;
string lastName;
string address;
member members // head of list of members of this family
}
```
To print a complete list of families and their members using internal storage, we could write:
``` aFamily := Families // start at head of families list
while aFamily ≠ null // loop through list of families
print information about family
aMember := aFamily.members // get head of list of this family's members
while aMember ≠ null // loop through list of members
print information about member
aMember := aMember.next
aFamily := aFamily.next
```
Using external storage, we would create the following structures:
``` record node { // generic link structure
node next;
pointer data // generic pointer for data at node
}
record member { // structure for family member
string firstName;
integer age
}
record family { // structure for family
string lastName;
string address;
node members // head of list of members of this family
}
```
To print a complete list of families and their members using external storage, we could write:
``` famNode := Families // start at head of families list
while famNode ≠ null // loop through list of families
aFamily := (family) famNode.data // extract family from node
print information about family
memNode := aFamily.members // get list of family members
while memNode ≠ null // loop through list of members
aMember := (member)memNode.data // extract member from node
print information about member
memNode := memNode.next
```
``` famNode := famNode.next
```
Notice that when using external storage, an extra step is needed to extract the record from the node and cast it into the proper data type. This is because both the list of families and the list of members within the family are stored in two linked lists using the same data structure (node), and this language does not have parametric types.
As long as the number of families that a member can belong to is known at compile time, internal storage works fine. If, however, a member needed to be included in an arbitrary number of families, with the specific number known only at run time, external storage would be necessary.
Speeding up search
Finding a specific element in a linked list, even if it is sorted, normally requires O(n) time (linear search). This is one of the primary disadvantages of linked lists over other data structures. In addition to the variants discussed above, below are two simple ways to improve search time.
In an unordered list, one simple heuristic for decreasing average search time is the move-to-front heuristic, which simply moves an element to the beginning of the list once it is found. This scheme, handy for creating simple caches, ensures that the most recently used items are also the quickest to find again.
Another common approach is to "index" a linked list using a more efficient external data structure. For example, one can build a red-black tree or hash table whose elements are references to the linked list nodes. Multiple such indexes can be built on a single list. The disadvantage is that these indexes may need to be updated each time a node is added or removed (or at least, before that index is used again).
Random access lists
A random access list is a list with support for fast random access to read or modify any element in the list.[6] One possible implementation is a skew binary random access list using the skew binary number system, which involves a list of trees with special properties; this allows worst-case constant time head/cons operations, and worst-case logarithmic time random access to an element by index).[6] Random access lists can be implemented as persistent data structures.[6]
Random access lists can be viewed as immutable linked lists in that they likewise support the same O(1) head and tail operations.[6]
A simple extension to random access lists is the min-list, which provides an additional operation that yields the minimum element in the entire list in constant time (without[clarification needed] mutation complexities).[6]
Related data structures
Both stacks and queues are often implemented using linked lists, and simply restrict the type of operations which are supported.
The skip list is a linked list augmented with layers of pointers for quickly jumping over large numbers of elements, and then descending to the next layer. This process continues down to the bottom layer, which is the actual list.
A binary tree can be seen as a type of linked list where the elements are themselves linked lists of the same nature. The result is that each node may include a reference to the first node of one or two other linked lists, which, together with their contents, form the subtrees below that node.
An unrolled linked list is a linked list in which each node contains an array of data values. This leads to improved cache performance, since more list elements are contiguous in memory, and reduced memory overhead, because less metadata needs to be stored for each element of the list.
A hash table may use linked lists to store the chains of items that hash to the same position in the hash table.
A heap shares some of the ordering properties of a linked list, but is almost always implemented using an array. Instead of references from node to node, the next and previous data indexes are calculated using the current data's index.
A self-organizing list rearranges its nodes based on some heuristic which reduces search times for data retrieval by keeping commonly accessed nodes at the head of the list.
Notes
1. The amount of control data required for a dynamic array is usually of the form $K+B*n$, where $K$ is a per-array constant, $B$ is a per-dimension constant, and $n$ is the number of dimensions. $K$ and $B$ are typically on the order of 10 bytes.
Footnotes
1. Gerald Kruse. CS 240 Lecture Notes: Linked Lists Plus: Complexity Trade-offs. Juniata College. Spring 2008.
2. Day 1 Keynote - Bjarne Stroustrup: C++11 Style at GoingNative 2012 on channel9.msdn.com from minute 45 or foil 44
3. Number crunching: Why you should never, ever, EVER use linked-list in your code again at kjellkod.wordpress.com
4. Brodnik, Andrej; Carlsson, Svante; Sedgewick, Robert; Munro, JI; Demaine, ED (Technical Report CS-99-09), Resizable Arrays in Optimal Time and Space, Department of Computer Science, University of Waterloo
5. Ford, William and Topp, William Data Structures with C++ using STL Second Edition (2002). Prentice-Hall. ISBN 0-13-085850-1, pp. 466-467
References
•
• "Definition of a linked list". National Institute of Standards and Technology. 2004-08-16. Retrieved 2004-12-14.
• Antonakos, James L.; Mansfield, Kenneth C., Jr. (1999). Practical Data Structures Using C/C++. Prentice-Hall. pp. 165–190. ISBN 0-13-280843-9.
• Collins, William J. (2005) [2002]. Data Structures and the Java Collections Framework. New York: McGraw Hill. pp. 239–303. ISBN 0-07-282379-8.
• Cormen, Thomas H.; Charles E. Leiserson; Ronald L. Rivest; Clifford Stein (2003). . MIT Press. pp. 205–213 & 501–505. ISBN 0-262-03293-7.
• Cormen, Thomas H.; Charles E. Leiserson; Ronald L. Rivest; Clifford Stein (2001). "10.2: Linked lists". (2md ed.). MIT Press. pp. 204–209. ISBN 0-262-03293-7.
• Green, Bert F. Jr. (1961). "Computer Languages for Symbol Manipulation". IRE Transactions on Human Factors in Electronics (2): 3–8. doi:10.1109/THFE2.1961.4503292.
•
• Knuth, Donald (1997). "2.2.3-2.2.5". Fundamental Algorithms (3rd ed.). Addison-Wesley. pp. 254–298. ISBN 0-201-89683-4.
• Newell, Allen; Shaw, F. C. (1957). "Programming the Logic Theory Machine". Proceedings of the Western Joint Computer Conference: 230–240.
• Parlante, Nick (2001). "Linked list basics". Stanford University. Retrieved 2009-09-21.
• Sedgewick, Robert (1998). Algorithms in C. Addison Wesley. pp. 90–109. ISBN 0-201-31452-5.
• Shaffer, Clifford A. (1998). A Practical Introduction to Data Structures and Algorithm Analysis. New Jersey: Prentice Hall. pp. 77–102. ISBN 0-13-660911-2.
• Wilkes, Maurice Vincent (1964). "An Experiment with a Self-compiling Compiler for a Simple List-Processing Language". Annual Review in Automatic Programming (Pergamon Press) 4 (1): 1. doi:10.1016/0066-4138(64)90013-8.
• Wilkes, Maurice Vincent (1964). "Lists and Why They are Useful". Proceeds of the ACM National Conference, Philadelphia 1964 (ACM) (P–64): F1–1.
• Shanmugasundaram, Kulesh (2005-04-04). "Linux Kernel Linked List Explained". Retrieved 2009-09-21.
News
Documents
Don't believe everything they write, until confirmed from SOLUTION NINE site.
What is SOLUTION NINE?
It's a social web research tool
that helps anyone exploring anything.
Updates:
Stay up-to-date. Socialize with us!
We strive to bring you the latest
from the entire web.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 8, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8994565606117249, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/209116/why-is-the-name-general-linear-group
|
# Why is the name general “linear” group?
Well, I just want to know if is there any significance of the term "linear" in the of name "General Linear Group" - for example, $\text{GL}_ n(\mathbb{R})$?
-
## 2 Answers
$GL(V)$ is the group of linear transformations over a vector space $V$. You can also, as you have, write it $GL_n(K)$ if $V$ is an $n$-dimensional vector space over a field $K$, and thus isomorphic to $K^n$.
So, the "linear" part refers to the linearity property of the transformations: given vectors $v,w\in V$, scalars $\alpha,\beta\in K$, and a transformation $T\in GL(V)$, $$T(\alpha v+\beta w)=\alpha T(v) + \beta T(w).$$
-
Thank you very much @Gruber – Taxi Driver Oct 8 '12 at 7:32
The term linear here refers to the fact that it is a group consisting of linear transformations of some vector space. In some sense, all groups are "linear" like this, but usually if one refers to something as a linear group, then a specific realization as a group of linear transformations is usually (at least implicitly) meant.
-
thank you very much @Tobias – Taxi Driver Oct 8 '12 at 7:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9245797991752625, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/5762/does-mond-make-good-predictions
|
# Does MOND make good predictions?
Well,it does according to this preprint: http://arxiv.org/PS_cache/arxiv/pdf/1102/1102.3913v1.pdf for certain scales
How would be a simple way to explain MOND to a layman?
Does it ignore mainstream physics?How much?
-
## 3 Answers
The basic version of MOND pretty much only attempts to explain the rotation curves by postulating a modification of Newtonian gravity at accelerations smaller than a_0, and does a good job at that. However, many other observational facts (cluster dynamics, stability of spiral galaxies) still require large quantities of dark matter, as acknowledged by proponents of MOND themselves. The Bullet Cluster is the most dramatic evidence that some form of dark matter must exist and does not necessarily trace the baryonic matter distribution.
In regards to its relationship with physics: MOND does not attempt to reconcile with General Relativity, so it cannot do calculations regarding cosmology, gravitational lensing, etc. There is a theory called TeVeS by Bekenstein, which purports to be a relativistic generalization of MOND. I won't pretend I understand it or looked at it in any detail (maybe the value of TeVeS could be a nice question for GR experts). In any case, it has been criticized for yielding unstable solutions for stars, and is still not compatible with the dynamics of the Bullet Cluster.
-
Good point about the bullet cluster +1 – user346 Feb 24 '11 at 6:22
Yes. The would appear to be the point of the paper you cite. It has been accepted for PRL apparently so that should lend it some credibility.
MOND - short for MOdified Newtonian Dynamics - is a phenomenological theory that was conceived of by Mordechai Milgrom in order to explain the huge discrepancy between the shapes of galaxy rotation curves as predicted by Newtonian theory and the actual shapes observed. A rotation curve is the plot of the orbital velocity of objects in the galaxy with respect to distance from the center:
The simplest statement of MOND is that it is a theory with a minimum acceleration scale $a_0 \sim 10^{-10} m/s^2$. When this scale is reached at some radius $r_0$ in a galaxy, objects appear to cease to respond to gravitational forces. Alternatively one could say that gravitational forces cannot generate an acceleration lower than $a_0$.
There are many critics of MOND. However, it has had remarkable success over the years. This paper is only the most recent evidence in its favor. MOND makes no claim to explaining the microscopic physics which leads to these effects. Compared to the LCDM model, MOND makes far fewer assumptions (only one, in fact) and thus has the benefit of being simpler.
The challenge for theories of quantum gravity is to either rule out MOND'ian behavior in weak fields or otherwise explain what gives rise to it. So far, none have even tackled the question in part due out of fear of being labeled "fringe" for associating with such rabble as MOND ;]
-
Any reason for the down vote, other than "MOND is rubbish"? – user346 Feb 24 '11 at 2:51
Image is not working for some reason ... – user346 Feb 24 '11 at 3:31
It is mandatory that MOND is good at predictions. Specially the before mentioned paper (measuring galaxy rotation) will be a success ;)
It is a phenomenological theory i.e. MOND does not have a model and is accurate because it does data fit, on purpose. As long as galaxies do have a common underlying mechanism MOND will be good.
It is not really physics, it is statistics, accounting, cheating, and it should be forbidden because it can prevent us from progressing. I can say to myself: I do not know the answer, and I will keep looking. Physics, as an institution, can not admit: We dont know. And then Dark matter, MOND, ..., are expressions of our ignorance.
At least the Ptolomaic epicycles had a model, the Geocentric one.
Things will change as soon as you and the comunity can find the correct answer. Try googling "Galáxias e Atractores" and maybe you can find a nice reading.
-
– Helder Velez Mar 3 '11 at 22:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9521949291229248, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/54355/the-gradient-as-a-row-vs-column-vector
|
# The Gradient as a Row vs. Column Vector
Kaplan's Advanced Calculus defines the gradient of a function $f:\mathbb{R^n} \rightarrow \mathbb{R}$ as the $1 \times n$ row vector whose entries respectively contain the $n$ partial derivatives of $f$. By this definition then, the gradient is just the Jacobian matrix of the transformation.
We also know that using the Riesz representation theorem, assuming $f$ is differentiable at the point $x$, we can define the gradient as the unique vector $\nabla f$ such that
$$df(x)(h) = \langle h, \nabla f(x) \rangle, \; h \in \mathbb{R}^n$$
Assuming we ignore the distinction between row vectors and column vectors, the former definition follows easily from the latter. But, row vectors and column vectors are not the same things. So, I have the following questions:
1. Is the distinction here between row/column vectors important?
2. If (1) is true, then how can we know from the second defintion that the vector in question is a row vector and not a column vector?
-
1
The gradient as a row vector seems pretty non-standard to me. I'd say vectors are column vectors by definition (or usual convention), so $df(x)$ is a row vector (as it is a functional) while $\nabla f(x)$ is a column vector (the scalar product is a product of two vectors. And yes, the distinction is important. – t.b. Jul 28 '11 at 21:46
Kaplan seems to really be describing $df$, not $\nabla f$. – Qiaochu Yuan Jul 28 '11 at 22:03
@Qiaochu Near the top of page 94 he writes "The Jacobian matrix of $f$ is the row vector $(\partial_x f, \partial_y f, \partial_z f)$. We call this vector the gradient vector of $f$ and write $\nabla f$". – ItsNotObvious Jul 28 '11 at 22:12
1
@3Sphere: yes, and...? I don't think that's the standard definition of the gradient in general. – Qiaochu Yuan Jul 28 '11 at 22:17
@Qiaochu No "and" really other than just to note that, according to what both you and Theo have indicated, Kaplan's definition there is incorrect. Which is unfortunate since I absorbed that definition some time ago and it has been living in my head ever since... – ItsNotObvious Jul 28 '11 at 22:24
show 1 more comment
## 1 Answer
Yes, the distinction between row vectors and column vectors is important. On an arbitrary smooth manifold $M$, the derivative of a function $f : M \to \mathbb{R}$ at a point $p$ is a linear transformation $df_p : T_p(M) \to \mathbb{R}$; in other words, it's a cotangent vector. In general the tangent space $T_p(M)$ does not come equipped with an inner product (this is an extra structure: see Riemannian manifold), so in general we cannot identify tangent vectors and cotangent vectors.
So on a general manifold one must distinguish between vector fields (families of tangent vectors) and differential $1$-forms (families of cotangent vectors). While $df$ is a differential form and exists for all $M$, $\nabla f$ can't be sensibly defined unless $M$ has a Riemannian metric, and then it's a vector field (and the identification between differential forms and vector fields now depends on the metric).
If one thinks of tangent vectors as column vectors, then $\nabla f$ ought to be a column vector, but the linear functional $\langle -, \nabla f \rangle$ ought to be a row vector. A major problem with working entirely in bases is that distinctions like these are frequently glossed over, and then when they become important students are very confused.
Some remarks about non-canonicity. The tangent space $T_p(V)$ to a vector space at any point can be canonically identified with $V$, so for vector spaces we don't run into quite the same problems. If $V$ is an inner product space, then in the same way it automatically inherits the structure of a Riemannian manifold by the above identification. Finally, when people write $V = \mathbb{R}^n$ they frequently intend $\mathbb{R}^n$ to have the standard inner product with respect to the standard basis, and this equips $V$ with the structure of a Riemannian manifold.
-
It is important to keep track of what things are and what extra structures they depend on, but distinctions like row vector versus column vector are essentially cosmetic until you have a whole family of objects, and then only because you want natural operations on your vectors to correspond to geometric operations and not nonsense (and you want the geometric operations to correspond to vector operations which are as simple as possible). If you are considering "row vector vs. column vector" you have already fixed a basis, so much of the intrinsic is already lost. – Aaron Jul 28 '11 at 22:06
@Aaron: well, I am just using "column vector" as a euphemism for "tangent vector" and "row vector" as a euphemism for "cotangent vector." I prefer that nobody use these terrible terms, but as long as the OP is... – Qiaochu Yuan Jul 28 '11 at 22:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9383186101913452, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/308505/finding-the-coordinates-of-the-point-where-each-line-crosses-the-y-axis
|
# Finding the coordinates of the point where each line crosses the y-axis
I have a problem like this:
````Give the coordinates of the point where each line crosses the y-axis.
````
Then it gives me an equation in slope-intercept form, here is an example:
$y=3x+4$
Would I just use the y-intercept (4) and write down the answer as (0,4)?
-
## 2 Answers
Yes, that's correct. The "b" value in the slope-intercept form: $$y = mx + b$$ denotes the y-coordinate at $x = 0$, hence, the y-intercept is given $b$, meaning the point of intersection of the line and the y-axis is the point $(0, b)$.
In your example, $$y = 3x + 4,$$ slope = $m = 3$, and $b = 4$ is the y-value at which the line "intercepts" the y-axis (y axis: $x = 0$). Hence $(0, 4)$ is the point of intersection.
-
Ahh thank you! So lets say if I have a problem like y=(1/2)x, would the answer just be (0,0) since the "b" value would just be interpreted as 0? – user60161 Feb 19 at 22:22
@user60161: that is correct, assuming by 1/2x you meant (1/2)x not 1/(2x). Please use parentheses when using slashes for division. – Ross Millikan Feb 19 at 22:23
Whoops, my mistake. Thank you! – user60161 Feb 19 at 22:25
user60161 yes, assuming you mean $y = (1/2)x$, then $b = 0$, hence the point of intersection of the line with the y-axis is the origin $(0,0)$ – amWhy Feb 19 at 22:25
## Did you find this question interesting? Try our newsletter
In general you could always just plug in $x=0$ and solve, no matter what form its in
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8935867547988892, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/116318/calculating-the-continued-fraction-of-sqrt47-using-a-different-result
|
# Calculating the continued fraction of $\sqrt{47}$ using a different result
I have calculated the continued fraction of $\alpha=\frac{6+\sqrt{47}}{11}$ which equals $\overline{[1,5,1,12]}$. Now I am asked to calculated the cont. fraction of $\sqrt{47}$ using this result. I am not sure whether there is a simple formula to calculate the continued fraction of $\sqrt{47}=11\alpha-6$.
I know the answer to be $\sqrt{47}=[6,\overline{1,5,1,12}]$ (checked by Mathematica) but it's not clear how to arrive at this result using our previous answer.
-
1
I think I found the answer: Those two cont'd fractions are nearly identical because $47=6^2+11$, and thus when calculating the continued fraction the second step in calculating $\sqrt{47}$ coincides with the first step of $\alpha$, thus those two expressions are the same from that point onwards. I.e., $(\sqrt{47}-6)^{-1}=\alpha$ – ClausW Mar 4 '12 at 13:52
Hint: $6=\sqrt{36}$ and $11=47-36$ (consider the conjugate!). (seeing your answer : yes you are right!) – Raymond Manzoni Mar 4 '12 at 13:54
## 2 Answers
$(\sqrt{47}-6)(\sqrt{47}+6)=47-36=11$, so $$(\sqrt{47}-6)\alpha=(\sqrt{47}-6)\left(\frac{\sqrt{47}+6}{11}\right)=1\;,$$ and $$\sqrt{47}-6=\frac1{\alpha}\;.$$
Clearly $\lfloor\sqrt{47}\rfloor=6$, so you know that $$\sqrt{47}=6+\frac1{\left(\frac1{\sqrt{47}-6}\right)}=6+\frac1\alpha=[6,\overline{1,5,1,12}]\;.$$
-
Just my comment of a minute ago, only you explained it better. Thanks. :) – ClausW Mar 4 '12 at 13:54
@Claus: I saw your comment just after I posted; good job. – Brian M. Scott Mar 4 '12 at 13:55
I know the answer to be $\sqrt{47}=[6,\overline{1,5,1,12}]$ (checked by Mathematica) but it's not clear how to arrive at this result using our previous answer.
No ingenuity is needed. The above observation makes the proof mechanical. The above is true
$$\iff\ \sqrt{47}\: =\: 6 + \dfrac{1}{\overline{1,5,1,12}}\: =\: 6 + \dfrac{1}\alpha\ \iff\ \alpha \:=\: \dfrac{1}{\sqrt{47}-6}\: =\: \dfrac{\sqrt{47}+6}{11}$$
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9184165596961975, "perplexity_flag": "middle"}
|
http://gowers.wordpress.com/2011/11/28/a-short-post-on-countability-and-uncountability/
|
# Gowers's Weblog
Mathematics related discussions
## A short post on countability and uncountability
There is plenty I could write about countability and uncountability, but much of what I have to say I have said already in written form, and I don’t see much reason to rewrite it. So here’s a link to two articles on the Tricki, which, if you don’t know, is a wiki for mathematical techniques. The Tricki hasn’t taken off, and probably never will, but it’s still got some useful material on it that you might enjoy looking at. The articles in question are one about how to tell almost instantly whether a set is countable and another about how to find neat proofs that sets are countable when they are.
The main additional point I’d like to make about this whole area is that you will do much better if you follow some of the general advice from earlier in this series of posts and work from the formal definitions and basic facts that you have been taught. Perhaps I can make that clearer by spelling out what you shouldn’t do, which is to pay too much attention to the words “countable” and “uncountable”. Let’s face it, you can’t count the natural numbers — you’ll be dead long before you’ve got to $10^{20}$. You can’t even put them in a list, since the number of atoms in the universe is only around $10^{79}$. And if you imagine some hypothetical world where you live for ever, you’ll never actually finish counting through the natural numbers if you try to do so (unless you find some way of speeding up without limit so that the sum of the times you take converges, but let’s not go there). So if you think of countable as meaning “can be counted”, then you risk confusing yourself — and I know for a fact that many people do end up confusing themselves.
Far better to stick to basic facts and definitions that are stated in precise mathematical language. Here’s a list of them — I may forget one or two important ones but I’ll try not to. You should have all these facts at your fingertips. (If you can prove the ones that aren’t definitions, then so much the better, but knowing the facts is even more important than knowing the proofs, since it is the facts themselves that you will use to go on to prove other things.)
Incidentally, some people use the convention that all finite sets are countable, whereas others use the word “countable” only for infinite sets. I’ll use the convention that finite sets are countable, so if you prefer the other convention then you’ll have to make some small modifications.
1. A set $X$ is finite if for some $n$ there is a bijection $\phi:X\to\{1,2,\dots,n\}$. Otherwise, it is infinite.
2. A set $X$ is countable if and only if it is finite or there is a bijection $\phi:X\to\mathbb{N}$. Otherwise, it is uncountable.
3. Two sets $X$ and $Y$ are said to have the same cardinality if there is a bijection $\phi:X\to Y$.
4. If $X$ and $Y$ are sets, then $X$ has cardinality at most that of $Y$ if there is an injection $\psi:X\to Y$.
5. If $X$ and $Y$ are sets and $X$ is non-empty, then the following two statements are equivalent.
(i) There is an injection from $X$ to $Y$.
(ii) There is a surjection from $Y$ to $X$.
6. Let $X$ be an infinite set. The following statements are equivalent.
(i) There is a bijection from $X$ to $\mathbb{N}$.
(ii) There is an injection from $X$ to $\mathbb{N}$.
(iii) There is a surjection from $\mathbb{N}$ to $X$.
[Note that this gives three potential ways of proving that $X$ is countable. Although I gave (i) as the definition of countability, it is usually much more convenient to prove (ii).]
7. $\mathbb{R}$ is uncountable.
8. If $X$ is any set, then the power set of $X$ has strictly larger cardinality than $X$. (Equivalently, there is no surjection from $X$ to the power set of $X$.) In particular, the power set of an infinite set is uncountable.
9. A union of countably many countable sets is countable. More formally, if $\Gamma$ is a countable set and for each $\gamma\in\Gamma$ the set $X_\gamma$ is countable, then the union $\bigcup_{\gamma\in\Gamma}X_\gamma$ is countable.
10. In particular, a union of countably many finite sets is countable. If you are told to prove that a set is countable, then using this very simple principle usually leads to the shortest proof.
11. If $n>m$ then there is no injection from the set $\{1,2,\dots,n\}$ to the set $\{1,2,\dots,m\}$ (and hence no surjection from the set $\{1,2,\dots,m\}$ to the set $\{1,2,\dots,n\}$).
[This may look obvious, but it needs a proof. One way to do it is to use the well-ordering principle. Pick a counterexample with $n$ minimal. Let $\phi$ be an injection from $\{1,2,\dots,n\}$ to $\{1,2,\dots,m\}$. If $\phi(n)=j$, then define $\psi:\{1,\dots,n-1\}\to\{1,\dots,m-1\}$ by taking $\psi(r)=\phi(r)$ if $\phi(r)<j$ and $\psi(r)=\phi(r)-1$ if $\phi(r)>j$. This is an injection, which contradicts the minimality of $n$.]
Here are a couple of examples of how to do exercises that involve countability.
1. Prove that if $Y$ is countable and $f:X\to Y$ is an injection, then $X$ is countable.
Solution. Since $Y$ is countable, there is an injection $g:Y\to\mathbb{N}$. A composition of injections is an injection, so $g\circ f$ is an injection from $X$ to $\mathbb{N}$. Therefore, $X$ is countable. $\square$
Note how short and clean the above proof is. Note also that what I did not do was say anything about “putting the elements of $Y$ in a list”.
2. Let $X$ be an uncountable set and let $f$ be an injection from $X$ to another set $Y$. Prove that $Y$ is uncountable.
Solution. Since uncountability is defined negatively, it will be no surprise that we prove this result by looking at the contrapositive. If $Y$ is countable, then by the previous exercise $X$ is countable, contradicting our hypothesis. So $Y$ is uncountable. $\square$
3. Prove that the set of all irrational numbers is uncountable.
Solution 1. There are various ways of doing this. The easiest argument starts from the thought that the reals form a huge set, and to get the irrationals we take away just the rationals, which form a small set. Therefore, the irrationals must form a huge set.
To turn that into a proper proof, we once again prove the contrapositive — for the same reason as we did when solving question 2. If the set of all irrationals is countable, then the reals are the union of two countable sets. Hence, by fact 9 above, the reals are countable. But that contradicts fact 7. $\square$
Solution 2. It is tempting to try to use the solution to 2 above. That is, we’d like to find a set that we know is uncountable, and define an injection from that set to the irrationals. The most obvious uncountable set is the set of reals. Can we inject those to the set of irrationals? Hmm, it seems hard, since nothing that’s even slightly continuous has any hope of working. Are there any “less continuous” uncountable sets that we could use? Yes: we could take the set of all 01 sequences. So now we’d like a way of associating an irrational with each 01 sequence. This is fun to do, so here’s a spoiler alert. I’ll leave some space, then I’ll give a solution in just one paragraph, then I’ll leave some more space, and then I’ll present a third solution.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
Bearing in mind that all rational numbers have repeating decimal expansions, we just need some way of associating a non-repeating sequence with each 01 sequence. This can be done in many natural ways, of which one is this. To each sequence associate a decimal between 0 and 1 that is 0 in the $n$th decimal place whenever $n$ is not a square, and is either 1 or 2 in the square places according to whether the 01 sequence is 0 or 1. For example, if the 01 sequence begins 00101, then the decimal will begin 0.10010000200000010000000020… $\square$
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
Solution 3. This is just for people who know a little about continued fractions. If you haven’t, then don’t worry about it — though if you read the beginning of the Wikipedia article on the subject then that will be more than enough to understand what follows.
Continued fractions give a very beautiful bijection between the set of all positive irrational numbers and the set of all infinite sequences of natural numbers. Given a positive irrational number, you just take the terms of its continued-fraction expansion, and given an infinite sequence, you just take the number that has those terms, which must be irrational since all rational numbers have terminating continued-fraction expansions. It is easy to prove that the set of all infinite sequences of natural numbers is uncountable, so we’re done. $\square$
Sometimes one is asked to prove that a set $X$ is uncountable when you’re not told what $X$ is, but just that it has certain properties. This is slightly harder to deal with. I’m not going to work through an example, because I don’t want to spoil what may be a nice examples sheet question from next term. However, here is a technique that can sometimes work very nicely. You define, using information about $X$, a function $\phi$ that takes finite sequences of 0s and 1s to points in $X$, and you do it in such a way that for any infinite sequence, the images of its initial segments form a sequence that you prove converges to something in $X$. (For instance, if the infinite sequence starts 110101… then the sequence of points in $X$ starts $\phi(1),\phi(11),\phi(110),\phi(1101),\phi(11010),\phi(110101),...$.) You also do the construction in a way that ensures that no two limits are the same.
### Like this:
This entry was posted on November 28, 2011 at 3:44 pm and is filed under Cambridge teaching, IA Numbers and Sets. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site.
### 11 Responses to “A short post on countability and uncountability”
1. Tom Leinster Says:
November 28, 2011 at 5:05 pm | Reply
Here’s an annoying little comment: fact 5 isn’t quite true as it stands, because of the possibility that X is empty and Y is not. Then there is an injection from X to Y, but no surjection from Y to X.
Thanks — corrected now.
It’s maybe also worth noting that (ii) => (i) is closely related to the axiom of choice. The axiom of choice states that every surjection has a section (right inverse). Any section of any map is injective, so the axiom of choice immediately tells you that (ii) => (i).
I think it’s better for students to meet the axiom of choice first in this simple setting. Often it’s first introduced in a more advanced context, with an unnecessary amount of mystery attached.
2. Greg Martin Says:
November 28, 2011 at 11:41 pm | Reply
A pedantic comment regarding your proof for Example 2: to me your proof reads like a proof by contradiction, rather than a proof of the contrapositive. The two are, of course, very closely related and especially hard to distinguish in such a short proof; here I’m swayed by the particular words you used, “contradicting our hypothesis”.
Let XU and YU be the statements “X is uncountable” and “Y is uncountable”, and let I be the statement “there exists an injection f from X to Y”. Then the statement as written is “if XU and I, then YU”. The literal contrapositive is certainly “if not YU, then (not XU) or (not I)”; but we experienced mathematicians would probably also call either “if XU and not YU, then not I” or “if I and not YU, then not XU” a contrapositive of the original statement.
It is this latter formulation that we want to use here; personally I prefer (with this level of audience) to formulate the contrapositive explicitly so that my proof is easily seen to be a proof of the new explicit statement. Even though your proof never uses the hypothesis XU, it reads (as currently worded) a little bit like the proof by contradiction “if XU and I and not YU, then False”. I like to discourage my students from these types of proofs by contradiction where one hypothesis is never used; I think it leads to greater clarity of thought to explicitly identify the form of the contrapositive that’s being used.
3. plm Says:
November 29, 2011 at 2:12 pm | Reply
I take the opportunity to point out a great piece of knowledge that many (mathematicians) would appreciate I think:
http://en.wikipedia.org/wiki/Observable_universe
(with a section on matter content)
So at the beginning of your article you probably meant “observable universe” rather than “universe”.
Current orthodoxy in cosmology seems to be that the universe is spatially infinite (has points at arbitrarily large spatial distances), with a positive homogeneous (on large-scale) matter density. Then there are as many atoms as natural numbers in the universe.
But please anyone correct me if I am mistaken, this is a tricky topic to learn.
4. Kevin O'Bryant Says:
November 30, 2011 at 5:00 pm | Reply
There are set theoretic issues buried here that aren’t widely appreciated. The definition of “$X$ is countable” not only uses the set $X$, but also the malleable notion of “function”. To come head-to-head with this, find the error in the statement: “there are countable models of ZFC, and (a model of) the reals sits inside that model, and so the reals are countable.”
Most mathematicians fix ZFC, and also fix a model of it, and don’t concern themselves with whether they are doing math or metamath or metametamath, and don’t even acknowledge issues like whether Grothendieck universes are consistent with ZFC. For those, “countable” is a property a set may or may not have. But there is a substantial that like to vary their set theory and/or their model of it, and for these “countable” expresses an interplay between the set theory, the model, and the specific set.
• Tom Leinster Says:
November 30, 2011 at 7:08 pm
Kevin, without disagreeing with your substantive point, I wanted to pick up on this:
Most mathematicians fix ZFC
Surely most mathematicians ignore ZFC (and all other axiomatizations of set theory).
There’s an implicitly agreed collection of “naive” rules about how sets can be manipulated, and this forms a kind of fixed framework for almost all non-set-theorists. But that collection of rules is definitely not ZFC. It’s probably more like what appears in the first year course that Tim is writing about.
• plm Says:
November 30, 2011 at 7:37 pm
I rather wonder about the “fix a model of it” part.
Textbooks usually develop their material using the axiom of choice (e.g. for the existence of basis of vector spaces, of algebraic closure of fields, etc.). In this sense mathematicians’ naive rules amount to assuming ZFC (and also first-order logic as opposed to say type theory, although there are interesting nuances to argue between both kinds of assumption).
But what particular model is necessary or used, explicitly or implicitly?
5. Pete L. Clark Says:
November 30, 2011 at 5:27 pm | Reply
I hope it is okay to post such a comment…
A few years ago I wrote some notes on elementary set theory. They come in three parts. The first part treats finite, countable and uncountable sets in a quite naive way (i.e., set-theoretically serious people will notice that some of the results presented require or are even equivalent to the Axiom of Choice, whereas AC is not even mentioned until the next set of notes. This is a deliberate expository choice):
http://math.uga.edu/~pete/settheorypart1.pdf
In comparison to Prof. Gowers’s notes, the first thing to say is that mine are much longer. Other than that they seem broadly similar (how dramatically different could two treatments of this topic be?). Perhaps my notes are a bit more “conceptual” — i.e., I spend some time trying to explain how one should think about infinite sets as mathematical objects — whereas his are more “practical”, i.e., he has more concrete advice on how to prove things about infinite sets.
Any feedback on these notes would be warmly received. They have not been used for any very official purpose, but I have referred students to them, for instance, so improvements would certainly be beneficial.
• gowers Says:
November 30, 2011 at 5:52 pm
It’s definitely OK to post such a comment …
6. dasuxullebt Says:
December 6, 2011 at 10:55 pm | Reply
It may not fit perfectly, but as you are discussing misconceptions on countability, a somewhat similar concept which is often confused with countability is enumerability, aka semi-decidability. Both concepts have similar meanings and some proof-techniques (like diagonalization) are common, but the class of countable sets is strictly greater than the class of enumerable sets.
7. animateholic Says:
December 15, 2011 at 12:39 am | Reply
There are various ways of doing this. The easiest argument starts from the thought that the reals form a huge set,
Huge set is infinte set ??
8. Pensierini base su matematiche avanzate (e web culturale) | agora-vox.bluhost.info Says:
January 7, 2012 at 9:42 am | Reply
[...] molti materiali anche introduttivi (per chi fa seriamente matematica, è ovvio; lo stesso Gowers ammette però che il sito non è [...]
%d bloggers like this:
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 91, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9436061978340149, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/210164/rotation-matrix/210202
|
# Rotation matrix
HI I am wondering if there is a unique matrix that maps $(x_1,y_1,z_1)$ into $(x_2,y_2,z_2)$. These two vectors have equal magnitude and are defined in orthogonal 3-D basis. If there is a unique solution how can I find it by considering rotation about all three orthogonal basis?
-
## 4 Answers
There are many such matrices. If you find a single matrix $M$ which maps $v_1=(x_1,y_1,z_1)$ to $v_2=(x_2,y_2,z_2)$, then we could multiply such an $M$ by a rotation matrix whose axis goes along $v_2$.
To get a matrix $M$ we can use $M=1/(v_1 * v_1)v_2^t v_1$. Here by $v_1*v_1$ I mean dot product of $v_1$ with itself, and by $v_2^t$ I mean transpose of $v_2$, which would make it into a column vector. That way $v_2^t v_1$ comes out to be a matrix. You have to use $v_1^t$ when computing the result of multiplying $M$ by $v_1$.
-
coffeemath explained why you cannot find an unique such matrix.
On another hand, since you are interested in rotations about the axes, there exists a pair of rotations $R_1, R_2$ about two of the axes of your choices, so that $v_2=R_2 R_1 v_1$. Moreover, if you fix the axes, and the order, I think there are exactly two such rotations.
To understand why, just think in spherical coordinates, where the angles are expressed with respect to the two axes you chose...
Then the first vector is $(R, \phi_1, \theta_1)$ and the second vector is $(R, \phi_2, \theta_2)$, so a rotation if angle $\phi_1-\phi_2$ and one of angle $\theta_1-\theta_2$ should do it.
The second possibility comes from the fact that you can also do a rotation of more than $180^\circ$ with respect to the second angle....
-
Suppose your choice is to rotate around x axis, then around y axis. And suppose v_1 is close to the unit vector for the x axis, while v_2 is close to the unit vector for the y axis. Then after a rotation around the x axis, we arrive at a vector w which is still near the unit vector of the x axis. When such a vector is rotated around the y axis it cannot end up near the goal vector v_2. The rotation about the y axis will preserve whatever angle w now makes with the y axis. – coffeemath Oct 10 '12 at 2:12
I think the spherical coordinate approach makes it clear one can use first a rotation about the z axis, and then one more rotation. However, once the rotation about the z axis has been done, in such a way as to change the longitude of v_1 to that of v_2 (arriving at say w) there is another rotation to go. In order to now rotate w into v_2 typically the origin, w, and v_2 make a plane, and the second rotation must be around the perpendicular to that plane, which is not typically one of the three axes (x or y or z). – coffeemath Oct 10 '12 at 2:29
No.
For example, let $p_1 = (x_1,y_1,z_1)^T$, and $d_1 = (x_2,y_2,z_2)^T$. Choose $p_2,p_3 \in \mathbb{R}^3$ so that $p_1, p_2, p_3$ are orthogonal. Let $d_2, d_3$ be arbitrary. Then define the matrix $$A = \frac{1}{\|p_1\|^2} d_1 p_1^T + d_2 p_2^T + d_3 p_3^T$$
It is easy to check that $A p_1 = d_1$. Since $d_2, d_3$ are arbitrary, it is clear that the transformation that maps $p_1$ into $d_1$ is not unique.
In fact, all transformations $A$ that satisfy $A p_1 = d_1$ can be expressed in this form with appropriate choice of $d_2,d_3$. If the $p_k$ and $d_k$ are chosen to be orthonormal, then the resulting $A$ will be a rotation (possibly improper).
-
In the following, c stands for cosine and s for sine (of any angle $\theta$) $$\pmatrix{c & s\\ -s &c} \pmatrix{x_1 \\ y_1}$$ So in particular, we have that $s^2 + c^2 = 1$. The matrix multiply represents the rotation about the z-axis. The magnitude of the vector is preserved as such: $$\left|\pmatrix{x_1 \\ y_1}\right|^2 = x_1^2 + y_1^2 \quad \text{this is the original magnitude}$$ $$\left|\pmatrix{cx_1 +sy_1\\ -sx_1 + cy_1}\right|^2 = (cx_1 +sy_1)^2 + (-sx_1 + cy_1)^2$$ $$=(c^2x_1^2+s^2y_1^2+2csx_1y_1) +(s^2x_1^2+c^2y_1^2-2csx_1y_1)$$ $$=c^2(x_1^2+y_1^2)+s^2(x_1^2+y_1^2)$$ $$=(c^2+s^2)(x_1^2+y_1^2)$$ $$=x_1^2+y_1^2 \quad \text{and the magnitude is unchanged}$$
You can apply this form of matrix a few times, starting with your first point in 3-D, and reach the desired value in one coordinate at a time. So for example, to rotate about the y axis: $$\pmatrix{c & 0& s\\0&1&0\\-s&0&c}\pmatrix{x \\ y \\ z}$$ You would only need to do this twice, as the final value would by necessity match as desired, since the magnitudes are the same for the two points.
To find c and s, use $$c=\frac{x_1}{\sqrt{x_1^2+y_1^2}} \quad\text{and}\quad s=\frac{y_1}{\sqrt{x_1^2+y_1^2}}$$
HINT: any two values instead of $x_1$ and $y_1$ in that formula gives a valid c and s for a rotation matrix.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 43, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9385102391242981, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/24108/can-select-many-disjoint-pairs-with-prescribed-differences-from-z-n/28254
|
Can select many disjoint pairs with prescribed differences from Z_n?
Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Suppose we have a sequence $d_i<2n$ for $i=1,\ldots,n$ and we want to select $n$ disjoint pairs from $Z_p$, $x_i,y_i$ such that $x_i-y_i=d_i \mod p$. Then how big $p$ has to be compared to $n$ to do this? I am primary interested on an upper bound on $p$. Is it true that there is always a $p\le (1+\epsilon)2n+O(1)$?
My comments. It is trivial that $p\ge 2n$ because all the numbers $x_i,y_i$ must be different and $d_1=1, d_2=2$ shows that this is not always enough. I also guess that it helps if $p$ is a prime, maybe the smallest prime bigger than $2n$ works which would answer the question.
-
3 Answers
Your last guess is correct. The smallest prime number $>2n$ works, see [Preissmann, Emmanuel; Mischler, Maurice Seating couples around the King's table and a new characterization of prime numbers. Amer. Math. Monthly 116 (2009), no. 3, 268--272.]
-
2
See also N. Alon, Additive Latin transversals, Isreal J. Math. 117 (2000) 125--130. – Douglas S. Stones May 10 2010 at 22:39
You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
For those who are not afraid of a little spanish, here is a short proof using Nullstellensatz pointed to me by Preissmann, Emmanuel: http://grupofundamental.wordpress.com/2010/03/05/il-faut-exiger-de-chacun-ce-que-chacun-peut-donner/
And yet another proof (or in fact two) which even generalizes a bit: http://arxiv.org/abs/1005.1177
-
And if you are afraid, just ask for a translation! I'm sure the authors would not mind doing it :) – Mariano Suárez-Alvarez May 31 2010 at 22:18
Here you can see the first proof that Domotorp pointed out http://arxiv.org/abs/1006.2571 (in english)
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9019923806190491, "perplexity_flag": "middle"}
|
http://www.physicsforums.com/showthread.php?s=444aac9f3e04a6b9954a48593c137b14&p=4226090
|
Physics Forums
## Graphing Covariant Spherical Coordinates
I am studying Riemannian Geometry and General Relativity and feel like I dont have enough practice with covariant vectors. I can convert vector components and basis vectors between contravariant and covariant but I cant do anything else with them in the covariant form. I thought converting the familar graph of spherical coordinates to its covariant equivalent and plotting some covariant vectors on it would be a good exercise. I spent but a lot of time on it and couldnt do it.
Does anyone know where I can find some good excerises like this or a drawing of the covariant equivalent of spherical coordinates?
PhysOrg.com science news on PhysOrg.com >> Front-row seats to climate change>> Attacking MRSA with metals from antibacterial clays>> New formula invented for microscope viewing, substitutes for federally controlled drug
I didn't think there was a difference between the contravariant and covariant spherical basis vectors because they're orthogonal even though they are positionally dependent. If your mission is to mess around with covariant basis vectors in 3 dimensions (in a visual geometric way), in solid state physics you can mess around with the reciprocal lattice vectors for various lattice types. The reciprocal lattice vectors are the dual basis vectors (covariant basis vectors/contravariant vector components) for the lattice vectors, so you can mess around and see what these reciprocal lattices actually look like.
I dont really understand your reply. The only coordinate system where the covariant and contravariant bases are the same is Rectangular Coodrinates. I have no idea what reciprocal lattice vectors are or how to find exercises using them.
## Graphing Covariant Spherical Coordinates
It's not spherical coords but maybe this video can help you out if you haven't watched it already (the drawing starts at about 4.00):
Quote by jstrunk The only coordinate system where the covariant and contravariant bases are the same is Rectangular Coodrinates.
You're right, woops, although the contra and covariant basis vectors are all in the same direction for orthogonal coordinate systems their magnitudes are inverse of each other.
Quote by jstrunk I have no idea what reciprocal lattice vectors are or how to find exercises using them.
I was just kind of being optimistic that you might have seen different crystal lattice types that you can then compare to their reciprocal lattice types analytically and geometrically. This is a good way to visualize the changes in the two representations 3-dimensionally with non-orthogonal basis vectors (where the differences between covariance and contravariance is most pronounced, though there are primitive lattice vectors that are orthogonal..) This topic is usually the beginning of any text on solid state physics when performing Fourier analysis of lattice structure (x-ray diffraction).
Basically it's just using the equations linking covariant and contravariant basis vectors:$$\vec{e}^{1} = \frac{\vec{e}_{2} \times \vec{e}_{3}}{\vec{e}_{1} \circ (\vec{e}_{2} \times \vec{e}_{3})}$$
$$\vec{e}^{2} = \frac{\vec{e}_{3} \times \vec{e}_{1}}{\vec{e}_{1} \circ (\vec{e}_{2} \times \vec{e}_{3})}$$
$$\vec{e}^{3} = \frac{\vec{e}_{1} \times \vec{e}_{2}}{\vec{e}_{1} \circ (\vec{e}_{2} \times \vec{e}_{3})}$$ to change between a set of non-orthogonal basis vectors that represent a crystal's structure to what the crystals look like in "reciprocal space".
It's just this kind of a thing:
I dunno, I'm thinking my suggestion is too far from helpful ._. It's too complicated for what looks to be not a lot of enlightenment :/ I remembered it being more enlightening than it's turning out to be.. Though I was trying to appeal from something in physics rather than just straight mathematics.
Thread Tools
| | | |
|---------------------------------------------------------------|----------------------------|---------|
| Similar Threads for: Graphing Covariant Spherical Coordinates | | |
| Thread | Forum | Replies |
| | Advanced Physics Homework | 26 |
| | Calculus & Beyond Homework | 1 |
| | General Physics | 0 |
| | Calculus & Beyond Homework | 2 |
| | General Physics | 1 |
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9335511922836304, "perplexity_flag": "middle"}
|
http://terrytao.wordpress.com/tag/free-probability/
|
What’s new
Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao
# Tag Archive
You are currently browsing the tag archive for the ‘free probability’ tag.
## 254A, Notes 5: Free probability
10 February, 2010 in 254A - random matrices, math.OA, math.PR, math.SP | Tags: free convolution, free probability, Gelfand-Naimark theorem, noncommutative probability, spectral theorem | by Terence Tao | 32 comments
In the foundations of modern probability, as laid out by Kolmogorov, the basic objects of study are constructed in the following order:
1. Firstly, one selects a sample space ${\Omega}$, whose elements ${\omega}$ represent all the possible states that one’s stochastic system could be in.
2. Then, one selects a ${\sigma}$-algebra ${{\mathcal B}}$ of events ${E}$ (modeled by subsets of ${\Omega}$), and assigns each of these events a probability ${{\bf P}(E) \in [0,1]}$ in a countably additive manner, so that the entire sample space has probability ${1}$.
3. Finally, one builds (commutative) algebras of random variables ${X}$ (such as complex-valued random variables, modeled by measurable functions from ${\Omega}$ to ${{\bf C}}$), and (assuming suitable integrability or moment conditions) one can assign expectations ${\mathop{\bf E} X}$ to each such random variable.
In measure theory, the underlying measure space ${\Omega}$ plays a prominent foundational role, with the measurable sets and measurable functions (the analogues of the events and the random variables) always being viewed as somehow being attached to that space. In probability theory, in contrast, it is the events and their probabilities that are viewed as being fundamental, with the sample space ${\Omega}$ being abstracted away as much as possible, and with the random variables and expectations being viewed as derived concepts. See Notes 0 for further discussion of this philosophy.
However, it is possible to take the abstraction process one step further, and view the algebra of random variables and their expectations as being the foundational concept, and ignoring both the presence of the original sample space, the algebra of events, or the probability measure.
There are two reasons for wanting to shed (or abstract away) these previously foundational structures. Firstly, it allows one to more easily take certain types of limits, such as the large ${n}$ limit ${n \rightarrow \infty}$ when considering ${n \times n}$ random matrices, because quantities built from the algebra of random variables and their expectations, such as the normalised moments of random matrices tend to be quite stable in the large ${n}$ limit (as we have seen in previous notes), even as the sample space and event space varies with ${n}$. (This theme of using abstraction to facilitate the taking of the large ${n}$ limit also shows up in the application of ergodic theory to combinatorics via the correspondence principle; see this previous blog post for further discussion.)
Secondly, this abstract formalism allows one to generalise the classical, commutative theory of probability to the more general theory of non-commutative probability theory, which does not have a classical underlying sample space or event space, but is instead built upon a (possibly) non-commutative algebra of random variables (or “observables”) and their expectations (or “traces”). This more general formalism not only encompasses classical probability, but also spectral theory (with matrices or operators taking the role of random variables, and the trace taking the role of expectation), random matrix theory (which can be viewed as a natural blend of classical probability and spectral theory), and quantum mechanics (with physical observables taking the role of random variables, and their expected value on a given quantum state being the expectation). It is also part of a more general “non-commutative way of thinking” (of which non-commutative geometry is the most prominent example), in which a space is understood primarily in terms of the ring or algebra of functions (or function-like objects, such as sections of bundles) placed on top of that space, and then the space itself is largely abstracted away in order to allow the algebraic structures to become less commutative. In short, the idea is to make algebra the foundation of the theory, as opposed to other possible choices of foundations such as sets, measures, categories, etc..
[Note that this foundational preference is to some extent a metamathematical one rather than a mathematical one; in many cases it is possible to rewrite the theory in a mathematically equivalent form so that some other mathematical structure becomes designated as the foundational one, much as probability theory can be equivalently formulated as the measure theory of probability measures. However, this does not negate the fact that a different choice of foundations can lead to a different way of thinking about the subject, and thus to ask a different set of questions and to discover a different set of proofs and solutions. Thus it is often of value to understand multiple foundational perspectives at once, to get a truly stereoscopic view of the subject.]
It turns out that non-commutative probability can be modeled using operator algebras such as ${C^*}$-algebras, von Neumann algebras, or algebras of bounded operators on a Hilbert space, with the latter being accomplished via the Gelfand-Naimark-Segal construction. We will discuss some of these models here, but just as probability theory seeks to abstract away its measure-theoretic models, the philosophy of non-commutative probability is also to downplay these operator algebraic models once some foundational issues are settled.
When one generalises the set of structures in one’s theory, for instance from the commutative setting to the non-commutative setting, the notion of what it means for a structure to be “universal”, “free”, or “independent” can change. The most familiar example of this comes from group theory. If one restricts attention to the category of abelian groups, then the “freest” object one can generate from two generators ${e,f}$ is the free abelian group of commutative words ${e^n f^m}$ with ${n,m \in {\bf Z}}$, which is isomorphic to the group ${{\bf Z}^2}$. If however one generalises to the non-commutative setting of arbitrary groups, then the “freest” object that can now be generated from two generators ${e,f}$ is the free group ${{\Bbb F}_2}$ of non-commutative words ${e^{n_1} f^{m_1} \ldots e^{n_k} f^{m_k}}$ with ${n_1,m_1,\ldots,n_k,m_k \in {\bf Z}}$, which is a significantly larger extension of the free abelian group ${{\bf Z}^2}$.
Similarly, when generalising classical probability theory to non-commutative probability theory, the notion of what it means for two or more random variables to be independent changes. In the classical (commutative) setting, two (bounded, real-valued) random variables ${X, Y}$ are independent if one has
$\displaystyle \mathop{\bf E} f(X) g(Y) = 0$
whenever ${f, g: {\bf R} \rightarrow {\bf R}}$ are well-behaved functions (such as polynomials) such that all of ${\mathop{\bf E} f(X)}$, ${\mathop{\bf E} g(Y)}$ vanishes. In the non-commutative setting, one can generalise the above definition to two commuting bounded self-adjoint variables; this concept is useful for instance in quantum probability, which is an abstraction of the theory of observables in quantum mechanics. But for two (bounded, self-adjoint) non-commutative random variables ${X, Y}$, the notion of classical independence no longer applies. As a substitute, one can instead consider the notion of being freely independent (or free for short), which means that
$\displaystyle \mathop{\bf E} f_1(X) g_1(Y) \ldots f_k(X) g_k(Y) = 0$
whenever ${f_1,g_1,\ldots,f_k,g_k: {\bf R} \rightarrow {\bf R}}$ are well-behaved functions such that all of ${\mathop{\bf E} f_1(X), \mathop{\bf E} g_1(Y), \ldots, \mathop{\bf E} f_k(X), \mathop{\bf E} g_k(Y)}$ vanish.
The concept of free independence was introduced by Voiculescu, and its study is now known as the subject of free probability. We will not attempt a systematic survey of this subject here; for this, we refer the reader to the surveys of Speicher and of Biane. Instead, we shall just discuss a small number of topics in this area to give the flavour of the subject only.
The significance of free probability to random matrix theory lies in the fundamental observation that random matrices which are independent in the classical sense, also tend to be independent in the free probability sense, in the large ${n}$ limit ${n \rightarrow \infty}$. (This is only possible because of the highly non-commutative nature of these matrices; as we shall see, it is not possible for non-trivial commuting independent random variables to be freely independent.) Because of this, many tedious computations in random matrix theory, particularly those of an algebraic or enumerative combinatorial nature, can be done more quickly and systematically by using the framework of free probability, which by design is optimised for algebraic tasks rather than analytical ones.
Much as free groups are in some sense “maximally non-commutative”, freely independent random variables are about as far from being commuting as possible. For instance, if ${X, Y}$ are freely independent and of expectation zero, then ${\mathop{\bf E} XYXY}$ vanishes, but ${\mathop{\bf E} XXYY}$ instead factors as ${(\mathop{\bf E} X^2) (\mathop{\bf E} Y^2)}$. As a consequence, the behaviour of freely independent random variables can be quite different from the behaviour of their classically independent commuting counterparts. Nevertheless there is a remarkably strong analogy between the two types of independence, in that results which are true in the classically independent case often have an interesting analogue in the freely independent setting. For instance, the central limit theorem (Notes 2) for averages of classically independent random variables, which roughly speaking asserts that such averages become gaussian in the large ${n}$ limit, has an analogue for averages of freely independent variables, the free central limit theorem, which roughly speaking asserts that such averages become semicircular in the large ${n}$ limit. One can then use this theorem to provide yet another proof of Wigner’s semicircle law (Notes 4).
Another important (and closely related) analogy is that while the distribution of sums of independent commutative random variables can be quickly computed via the characteristic function (i.e. the Fourier transform of the distribution), the distribution of sums of freely independent non-commutative random variables can be quickly computed using the Stieltjes transform instead (or with closely related objects, such as the ${R}$-transform of Voiculescu). This is strongly reminiscent of the appearance of the Stieltjes transform in random matrix theory, and indeed we will see many parallels between the use of the Stieltjes transform here and in Notes 4.
As mentioned earlier, free probability is an excellent tool for computing various expressions of interest in random matrix theory, such as asymptotic values of normalised moments in the large ${n}$ limit ${n \rightarrow \infty}$. Nevertheless, as it only covers the asymptotic regime in which ${n}$ is sent to infinity while holding all other parameters fixed, there are some aspects of random matrix theory to which the tools of free probability are not sufficient by themselves to resolve (although it can be possible to combine free probability theory with other tools to then answer these questions). For instance, questions regarding the rate of convergence of normalised moments as ${n \rightarrow \infty}$ are not directly answered by free probability, though if free probability is combined with tools such as concentration of measure (Notes 1) then such rate information can often be recovered. For similar reasons, free probability lets one understand the behaviour of ${k^{th}}$ moments as ${n \rightarrow \infty}$ for fixed ${k}$, but has more difficulty dealing with the situation in which ${k}$ is allowed to grow slowly in ${n}$ (e.g. ${k = O(\log n)}$). Because of this, free probability methods are effective at controlling the bulk of the spectrum of a random matrix, but have more difficulty with the edges of that spectrum (as well as with related concepts such as the operator norm, Notes 3) as well as with fine-scale structure of the spectrum. Finally, free probability methods are most effective when dealing with matrices that are Hermitian with bounded operator norm, largely because the spectral theory of bounded self-adjoint operators in the infinite-dimensional setting of the large ${n}$ limit is non-pathological. (This is ultimately due to the stable nature of eigenvalues in the self-adjoint setting; see this previous blog post for discussion.) For non-self-adjoint operators, free probability needs to be augmented with additional tools, most notably by bounds on least singular values, in order to recover the required stability for the various spectral data of random matrices to behave continuously with respect to the large ${n}$ limit. We will discuss this latter point in a later set of notes.
Read the rest of this entry »
### Recent Comments
Sandeep Murthy on An elementary non-commutative…
Luqing Ye on 245A, Notes 2: The Lebesgue…
Frank on Soft analysis, hard analysis,…
andrescaicedo on Soft analysis, hard analysis,…
Richard Palais on Pythagoras’ theorem
The Coffee Stains in… on Does one have to be a genius t…
Benoît Régent-Kloeck… on (Ingrid Daubechies) Planning f…
Luqing Ye on 245B, Notes 7: Well-ordered se…
Luqing Ye on 245B, Notes 7: Well-ordered se…
Arjun Jain on 245B, Notes 7: Well-ordered se…
%anchor_text% on Books
Luqing Ye on 245B, Notes 7: Well-ordered se…
Arjun Jain on 245B, Notes 7: Well-ordered se…
Luqing Ye on 245A, Notes 2: The Lebesgue…
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 60, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9330043792724609, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/185802/dimensions-of-symmetric-and-skew-symmetric-matrices?answertab=active
|
# Dimensions of symmetric and skew-symmetric matrices
Let $\textbf A$ denote the space of symmetric $(n\times n)$ matrices over the field $\mathbb K$, and $\textbf B$ the space of skew-symmetric $(n\times n)$ matrices over the field $\mathbb K$. Then $\dim (\textbf A)=n(n+1)/2$ and $\dim (\textbf B)=n(n-1)/2$.
Short question: is there any short explanation (maybe with combinatorics) why this statement is true?
EDIT: $\dim$ refers to linear spaces.
-
1
Do you mean symmetric (not normal) in the title? And do you mean the $\dim$ of linear spaces of such matrices, not the $\dim$ of the matrices, right? – enzotib Aug 23 '12 at 9:59
I did edit it - thanks for the reminder! – Christian Ivicevic Aug 23 '12 at 10:02
You did not edit it correctly; $\mathbf A$ still refers to just one matrix, not a subspace. I will edit it for you. – Marc van Leeuwen Aug 23 '12 at 12:38
And you should say that $\mathbb K$ is not of characteristic $2$, or otherwise symmetric and anti-symmetric matrices are the same thing and your equations cannot both be true. – Marc van Leeuwen Aug 23 '12 at 12:40
## 3 Answers
All square matrices of a given size $n$ constitute a linear space of dimension $n^2$, because every matrix element is a component in the canonical base, ie. the set of matrices having a single $1$ and all other elements $0$.
The skew-symmetric matrices have arbitrary elements on one side with respect of the diagonal, and those elements determine the other triangle of the matrix. So they are in number of $(n^2-n)/2=n(n-1)/2$, ($-n$ to remove the diagonal).
For the symmetric matrices the reasoning is the same, but we have to add the elements on the diagonal: $(n^2-n)/2+n=(n^2+n)/2=n(n+1)/2$.
-
Here is my two cents:
\begin{eqnarray} M_{n \times n}(\mathbb{R}) & \text{has form} & \begin{pmatrix} *&*&*&*&\cdots \\ *&*&*&*& \\ *&*&*&*& \\ *&*&*&*& \\ \vdots&&&&\ddots \end{pmatrix} \hspace{.5cm} \text{with $n^2$ elements}\\ \\ \\ Skew_{n \times n}(\mathbb{R}) & \text{has form} & \begin{pmatrix} 0&*'&*'&*'&\cdots \\ *&0&*'&*'& \\ *&*&0&*'& \\ *&*&*&0& \\ \vdots&&&&\ddots \end{pmatrix} \end{eqnarray} For this bottom formation the (*)s and (*')s are just operationally inverted forms of the same number, so the array here only takes $\frac{(n^2 - n)}{2}$ elements as an argument to describe it. This appears to be an array geometry question really... I suppose if what I'm saying is true, then I conclude that because $\dim(Skew_{n \times n}(\mathbb{R}) + Sym_{n \times n}(\mathbb{R})) = \dim(M_{n \times n}(\mathbb{R}))$ and $\dim(Skew_{n \times n}(\mathbb{R}))=\frac{n^2-n}{2}$ then we have that \begin{eqnarray} \frac{n^2-n}{2}+\dim(Sym_{n \times n}(\mathbb{R})))=n^2 \end{eqnarray} or \begin{eqnarray} \dim(Sym_{n \times n}(\mathbb{R})))=\frac{n^2+n}{2}. \end{eqnarray}
-
The dimension of symmetric matrices is $\frac{n(n+1)}2$ because they have one basis as the matrices $\{M_{ij}\}_{n \ge i \ge j \ge 1}$, having $1$ at the $(i,j)$ and $(j,i)$ positions and $0$ elsewhere. For skew symmetric matrices, the corresponding basis is $\{M_{ij}\}_{n \ge i > j \ge 1}$ with $1$ at the $(i,j)$ position, $-1$ at the $(j,i)$ position, and $0$ elsewhere.
Note that the diagonal elements of skew symmetric matrices are $0$, hence their dimension is $n$ less than the dimension of normal symmetric matrices.
-
But this is no explanation why the symmetric matrices have the specified $\dim$. – Christian Ivicevic Aug 23 '12 at 10:03
Do you mean $n^2$? – enzotib Aug 23 '12 at 10:04
Yeah, I didn't notice earlier that he asked for proofs of the dimensions too, and have edited. – Rijul Saini Aug 23 '12 at 10:07
@RijulSaini: Thanks, but enzotib's answer seems to be easier to understand! – Christian Ivicevic Aug 23 '12 at 10:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 41, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8214782476425171, "perplexity_flag": "middle"}
|
http://unapologetic.wordpress.com/2010/09/30/schurs-lemma/?like=1&_wpnonce=539d3ea130
|
# The Unapologetic Mathematician
## Schur’s Lemma
Now that we know that images and kernels of $G$-morphisms between $G$-modules are $G$-modules as well, we can bring in a very general result.
Remember that we call a $G$-module irreducible or “simple” if it has no nontrivial submodules. In general, an object in any category is simple if it has no nontrivial subobjects. If a morphism in a category has a kernel and an image — as we’ve seen all $G$-morphisms do — then these are subobjects of the source and target objects.
So now we have everything we need to state and prove Schur’s lemma. Working in a category where every morphism has both a kernel and an image, if $f:V\to W$ is a morphism between two simple objects, then either $f$ is an isomorphism or it’s the zero morphism from $V$ to $W$. Indeed, since $V$ is simple it has no nontrivial subobjects. The kernel of $f$ is a subobject of $V$, so it must either be $V$ itself, or the zero object. Similarly, the image of $f$ must either be $W$ itself or the zero object. If either $\mathrm{Ker}(f)=V$ or $\mathrm{Im}(f)=\mathbf{0}$ then $f$ is the zero morphism. On the other hand, if $\mathrm{Ker}(f)=\mathbf{0}$ and $\mathrm{Im}(f)=W$ we have an isomorphism.
To see how this works in the case of $G$-modules, every time I say “object” in the preceding paragraph replace it by “$G$-module”. Morphisms are $G$-morphisms, the zero morphism is the linear map sending every vector to $0$, and the zero object is the trivial vector space $\mathbf{0}$. If it feels more comfortable, walk through the preceding proof making the required substitutions to see how it works for $G$-modules.
In terms of matrix representations, let’s say $X$ and $Y$ are two irreducible matrix representations of $G$, and let $T$ be any matrix so that $TX(g)=Y(g)T$ for all $g\in G$. Then Schur’s lemma tells us that either $T$ is invertible — it’s the matrix of an isomorphism — or it’s the zero matrix.
## 6 Comments »
1. [...] is the identity matrix. The matrix commutes with for every complex scalar , and so Schur’s lemma will apply to all of [...]
Pingback by | October 1, 2010 | Reply
2. [...] we can apply Schur’s lemma to all of them. In the middle two equations, we see that both and must be either be invertible or [...]
Pingback by | October 4, 2010 | Reply
3. [...] time, Schur’s lemma tells us that each is an intertwinor between and itself. And so we conclude that each of the [...]
Pingback by | October 5, 2010 | Reply
4. [...] What I do want to get into right now, is calculating the center of the matrix algebra . The answer is reminiscent of Schur’s lemma: [...]
Pingback by | October 6, 2010 | Reply
5. [...] this point, Schur’s lemma kicks in to tell us that if then is the zero matrix, while if then is a scalar times the [...]
Pingback by | October 22, 2010 | Reply
6. [...] straightforward concept: it’s a module such that its only submodules are and . As usual, Schur’s lemma tells us that any morphism between two irreducible modules is either or an isomorphism. And, as [...]
Pingback by | September 15, 2012 | Reply
« Previous | Next »
## About this weblog
This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”).
I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 33, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8899145126342773, "perplexity_flag": "head"}
|
http://mathhelpforum.com/pre-calculus/100987-finding-polynomial-func-given-zeros.html
|
# Thread:
1. ## Finding polynomial func. with given zeros
I am completely lost with this one.
Find a polynomial function with integer coefficients that has the given zeros:
1+sqrt(3i) , 2 , 2 , -1-sqrt(2)
sqrt = square root of
I would appreciate all the help I can get. The 1+sqrt(3i) is throwing me off entirely.
2. I dont know if im correct or not, but these are the zeros I got from Conjugate pairs.
2, 2, 1+sqrt(3i), 1-sqrt(3i), -1-sqrt(2), and -1+sqrt(2)
After tedious calculations and multiplying, this is my end result:
x^6 - 4x^5 + 20x^3 - 33x^2 + 20x - 3ix^4 + 6ix^3 + 15ix^2 - 36ix + 12i - 4
If this is not correct, please explain on how i can solve this.
3. Please check the problem again. I doubt very much that one root was "1+ sqrt(3i)". While that is possible, I think it would be much too complicated for this level course. Are you sure it is not $1+ \sqrt{3}i= 1+ i\sqrt{3}$ (that is, the "i" is outside the square root). In that case, in order to have integer coefficients you must also have $1- i\sqrt{3}$ as a root and to have a integer coefficients you must also have -1+ \sqrt{2} as you have. What you get does NOT have integer coefficients since "15i", "36i", etc. are not integers.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9474371075630188, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/pre-calculus/29660-polar-equations-trig-intersection-points.html
|
Thread:
1. polar equations with trig intersection points
asd
2. Originally Posted by stargirldrummer187
I am trying to find the intersecting points of two equations in a polor coordinate grid, by setting them equal.
In the equations x is meant to be theta.
And we are in degrees.
Our first equation is
r=1+5cos(x)
and the second is
r=3sec(x-30)
I have many trig identities and have tired using them any way I can.
If you can solve this:
1+5cos(x)=3sec(x-30)
Please let me know, it would be much appreciated.
Thank you,
from many very lost pre calc students
It might be easier to switch to Cartesian coordinates. I'll explain soon - have to run and put a fire out right now.
Nope, false alarm. Same for the fire.
Any reason to think you have to get exact solutions to the equation?
Also, have you tried drawing the curves .... you realise that $r = 3 \sec (\theta - 30)$ is a line ...?
3. We have somewhat tired converting the equations to cartesian equations with conversion formulas, but it turned out..funny.
If you can do it, that would be wonderful.
We also know you can graph them as cartesian equations and get the intersections, but if we were to do that (according to our teacher) we have to have proof of why it works.
Thank you for your input, We'll look forward to more help.
4. Yeah, we need exact..our teacher is a tad insane.
Good..but very insane.
Yes we know it's a line, but how is that going to help us solve the situation?
5. Originally Posted by stargirldrummer187
Yeah, we need exact..our teacher is a tad insane.
Good..but very insane.
Yes we know it's a line, but how is that going to help us solve the situation?
My only other thought at this stage is to get an equation with r.
$r = 3 \sec (\theta - 30)$
$\Rightarrow \sqrt{3} r \cos \theta + r \sin \theta = 6$ .... (1).
$r = 1 + 5 \cos \theta$
$\Rightarrow \cos \theta = \frac{r - 1}{5}$ .... (2).
From (2), $\sin \theta = \pm \frac{\sqrt{25 - (r - 1)^2}}{5} = \pm \frac{\sqrt{(6 - r)(r + 4)}}{5}$ .... (3).
Substitute (2) and (3) into (1) and re-arrange into a quartic equation in r. You might get r from this (I'll cop that I haven't actually tried .....)
6. I'm a little confused as to how you got that last line, but I'll work on it and get back to you.
7. Originally Posted by stargirldrummer187
I'm a little confused as to how you got that last line, but I'll work on it and get back to you.
From the Pythagorean Identity: $\sin^2 \theta = 1 - \cos^2 \theta$
$1 - \cos^2 \theta = 1 - \left( \frac{r - 1}{5} \right)^2 = 1 - \frac{r^2 - 2r + 1}{25}$
$= \frac{25 - r^2 + 2r - 1}{25} = \frac{-r^2 + 2r + 24}{25} = \frac{-(r^2 - 2r - 24)}{25}$ etc.
But on reflection I don't think you're gonna get anything even a little bit easy popping out of the resulting quartic equation in r .....
I'll keep it in the back of my mind.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9623600244522095, "perplexity_flag": "middle"}
|
http://nrich.maths.org/2278
|
### Days and Dates
Investigate how you can work out what day of the week your birthday will be on next year, and the year after...
### Plum Tree
Label this plum tree graph to make it totally magic!
### Magic W
Find all the ways of placing the numbers 1 to 9 on a W shape, with 3 numbers on each leg, so that each set of 3 numbers has the same total.
# Pair Products
##### Stage: 4 Challenge Level:
Choose four consecutive whole numbers.
Multiply the first and last numbers together.
Multiply the middle pair together.
Choose several different sets of four consecutive whole numbers and do the same.
What do you notice?
Can you explain what you have noticed? Will it always happen?
Click below to see how Charlie and Alison explained what they noticed.
Charlie said:
I noticed that the product of the outer pair was always $2$ less than the product of the inner pair.
I can explain this by labelling the four consecutive numbers $n, n+1, n+2, n+3$.
Outer pair: $n(n+3) = n^2 + 3n$
Inner pair: $(n+1)(n+2) = n^2 + 3n + 2$
Alison said:
I drew a diagram, in which the product of each pair is represented by the area of a rectangle:
The outer pair is represented by the red rectangle.
The inner pair is represented by the blue rectangle.
The purple area is common to both.
The area of the red strip will always be two units less than the area of the blue strip.
Therefore, the product of the outer pair is always two less than the product of the inner pair.
Instead of doing lots of calculations, can you use these representations to compare the product of the first and last numbers with the product of the second and penultimate numbers, when you have:
• $5$ consecutive whole numbers
• $6, 7, 8, \ldots x$ consecutive whole numbers
• $4$ consecutive even numbers
• $4$ consecutive odd numbers
• $5, 6, 7, 8, \ldots x$ consecutive even or odd numbers
• $4$ consecutive multiples of $3, 4, 5 \ldots$
• $1.2, 2.2, 3.2, 4.2$
• $2, 5, 8, 11$
• $4, 4\frac{1}{2}, 5, 5\frac{1}{2}$
Make up a few similar questions of your own. Impress your friends by giving them a calculator and 'predicting' what will happen!
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9231801629066467, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/2966/scattering-of-phonons-and-electrons-within-solids/3046
|
# Scattering of phonons and electrons within solids
I got a question concerning the scattering of phonons and electrons. I read an introductory explanation to this process that is somehow not very satisfactory. It goes like this:
Let $\psi_{k}$ and $\psi_{k'}$ be Bloch-waves within a solid. We denote the probability of transmission between these two states by $P_{k,k'}$. This probability is according to quantum mechanical perturbation theory proportional to $|<k'|H'|k>|^{2}$ where H' denotes the perturbed Hamiltonian or the perturbed potential (caused by either phonons or impurities). Now we assume that our two wave function are of the standard Bloch-form. Hence we obtain: $|<k'|H'|k>| = \int{ d^{3}r u_{k'}(r)^{*} H'(r,t) u_{k}(r) e^{i(k-k')r}}$ (Eq. 1) where $u_{k}$ and $u_{k'}$ have lattice-periodicity.
Now in an inelastic collision with of an electron and a phonon we have by energy conservation: $E(k') - E(k) = \pm \hbar \omega(q)$ where $E(k)$ and $E(k')$ denote the electron energies and on the rights hand side there is the energy of a phonon with a wavevector $q$.
Now comes the tricky part of the analysis:
Now they say that the disturbed potential must include a dependency on $e^{iqr}$ Hence (Eq. 1) (which is the scattering probability) must include a matrix element of the form $<k'|e^{iqr} |k>=\int{d^{3}r u_{k'}(r)^{*} u_{k}(r) e^{i(k-k'+q)r}}$ . (Eq. 2)
Why is that? I don't see their point here. I want to remark here that I do have heard an introductory quantum mechanics class and class in linear algebra. However I neither understand this from a mathematical point of view nor from a physical point of view. Can anyone give me a better explanation of this? Please note that the course I'm taking is not a course in theoretical solid state physics.
Now the derivation goes on:
The say that since $u_{k}u_{k'}$ can be expanded into a Fourier series of reciprocal lattice vectors. Fine - I agree that's legitimate. So assume: $(u_{k}u_{k'})(r) = \sum_{G}f_{g} e^{-iGr}$. Plugging this into this equation they claim the matrix element above (Eq. 2) only does NOT vanish is $k'-k+q=G$ Well this doesn't seem plausible. I mean then $e^{i(k-k'+q-G)r}=1$. Integrating over all space leads to infinity. Well I may agree that the integral over this term vanishes if the equality $k'-k+q=G$ does not hold because of periodicity/symmetry. Maybe one has to restrict the integral to the solid itself to keep it finite.
Can anyone give me a more detailed explanation??
I'm looking forward to your responses. Thanks in advance.
-
1
Your second question is really one about the Dirac delta function and Fourier transforms on a lattice. You may want to look in the appendix of whatever book you're looking at (or if you're not looking at a book yet, you may want to pick up Ashcroft and Mermin's text and start reading). – j.c. Jan 15 '11 at 20:04
## 2 Answers
Sorry I can't be more specific, but you will probably find the answer to the question in Fundamentals of Carrier Transport By Mark Lundstrom. I also recommend that you listen to his lecture series which includes phonon scattering on iTunesU or from the NanoHub.org ECE 656 Lecture 23: Phonon Scattering 1. Lundstrom makes logical arguments about assuming forms of solutions to integrals; when you hear him explain it seems obvious! Good luck.
** Sorry I couldn't embed that link. The self importance of an administrator is currently limiting me to 1 link per answer! Furthermore, I actually intended this to be a comment to your question, but it seems they have crippled that too.
-
everything you write is perfectly OK, except for the statement that "it doesn't seem plausible". Yes, it is plausible and yes, the exponential is equal to one. That's the whole reason why the contribution for $G=k'-k+q$ to the interaction amplitude is nonzero. All the terms with different values of $G$ lead to oscillating functions in the complex plane that sum up to zero.
Yes, the integral of 1 over the whole space is infinite. And yes, you also know the way to make it finite: restrict the interval to the solid itself. It is not surprising - and it is physically correct - that the integral is infinite for an infinite solid. It's because the interaction Hamiltonian between the two plane wave states is proportional to the volume of the solid as well - the greater volume, the greater chance that the interaction will occur somewhere.
If you were calculating the things for a finite volume of the solid - e.g. a box - you would get manifestly finite numbers everywhere. However, some formulae would be unnecessary awkward and they would depend on the size and shape of the solid. That's why adult physicists learned from Paul Dirac how to use the distributions such as $\delta(G-G_0)$. For your problem, $G_0=k'-k+q$ is treated as a constant. The defining property of the $\delta$-function is that $$\int_{-\infty}^\infty dG\,f(G) \delta(G-G_0) = f(G_0)$$ for any function $f(G)$. So the integral of $f(G)$ over $G$, weighted by the delta-function, only picks the value at $G=G_0$. That's because $\delta(G-G_0)$ vanishes for all values for which $G\neq G_0$. But for $G=G_0$, it is infinite and so large that the integral of $\delta(G)$ over $G$ is equal to one, and if you insert $f(G)$, it simply picks $f(G_0)$.
The object $\delta(G-G_0)$ is not a function in the usual sense but it is extremely helpful and consistent in dealing with the integrals that appear in the Fourier transformations. The delta-function may be approximated by a function that is equal to $1/\epsilon$ for the argument being between $-\epsilon/2$ and $\epsilon/2$, and otherwise is equal to zero, in the limit where $\epsilon$ goes to zero. The delta-function can also be written as the Fourier transform of the function $1$ divided by $2\pi$: $$\delta(G-G_0) = \frac{1}{2\pi}\int_{-\infty}^\infty dR\,\exp[i(G-G_0)R]$$ You may always imagine that the delta-function exercises are translated to a calculation at a finite volume of the space or solid; then the momenta such as $G$ become discrete - number of zeros of a standing wave, for example, times $1/R_{solid}$ - and the integral over $G$ is replaced by a summation. The function $\delta(G-G_0)$ is then replaced by a simple Kronecker delta symbol $\delta_{G,G_0}$ which is only nonzero - equal to one - if $G=G_0$. But you will pay the price that most formulae will contain powers of the volume and other things, and you need to remember the spacing of the momenta etc.
All these extra things in the formulae will depend on the size and shape of the solid - or space(time) - that you picked. But at the end, you know very well that there exists a sensible physics in the infinite-volume limit that should be independent of the volume (because it was sent to infinity). For you, to learn how to deal with the Dirac distributions is important to do all these calculations effectively because the volume $V$ and all the problematic features about the spacing of the momentum evaporate from the formulae, and the fact that the formulae hold for any large piece of the solid (or space) becomes self-evident.
Best wishes Lubos
-
Thanks for your response. Let's use your notation. Fix $G_{0}=k-k'+q$. Then we obtain $<k'|e^{iqr}|k>=\sum_{G} f_{G} \int{d^{3}r e^{i(G_{0}-G)r}}=\sum_{G} f_{G} \delta(G-G_{0})$ so $<k'|e^{iqr}|k>$ is still infinite for G=$G_{0}$ as long as you don't integrate over it (which you obviously don't do). In addition to that I couldn't find a reason why a matrix element of the specific form <k'|e^{iqr}|k> even exists? (That was my first question) This would mean that the disturbed hamiltonian H' contains a (linear) term e^{iqr}. But I don't see why that is the case. – Solidz Jan 15 '11 at 23:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 52, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.947063684463501, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/13015/how-to-get-statics-out-of-a-dynamic-force-concept/13018
|
# How to get statics out of a dynamic force concept?
If one defines force as the time derivative of momentum, i.e. by
$$\vec{F}= \frac{d}{dt} \vec{p}$$
how can this include static forces? Is there a generally accepted way to argue in detail how to get from this to the static case? If not, what different solutions are discussed?
Edit: I shoud add, that by static forces I mean forces involved in problems where bodies don't move.
-
1
There is no such thing as static and dynamic force. Could you clarify what you mean? I suppose you're talking about multiple forces acting on the same system such that in total they produce zero net force (e.g. gravity vs. reaction from the ground acting on the chair you're sitting on)? – Marek Jul 31 '11 at 17:04
2
What you are missing here from Newton's laws is the sum of forces. – ja72 Aug 1 '11 at 12:44
## 2 Answers
I don't exactly know what you mean by static forces. But I am going to take a wild guess here and assume that by that you mean forces involved in problems where bodies don't move. I think you assumed that Newton's second law quantifies a force. This is actually wrong. First of all realize that a force is an interaction and it still acts whether the body on which it acts moves or not. Newton's second law quantifies the total effect of all such forces on a body of mass $m$ and not the force itself. For example the Newton's law of Gravitation tells you that the force between two masses is:
$$\vec{F} = G\frac{Mm}{r^2}$$
Now this is practically useless unless you specifies what a force does on a body. That's where Newton's second law comes in. So along with Newton's second law, you have a complete theory of (classical) gravitation.
Also the $\vec{F}$ in Newton's second law is the total force acting on a body having momentum $\vec{p}$. So when bodies don't move the net forces on them is zero. But that does not mean that you can not have forces acting on it.
-
Yes I want to take Newton's second law as a definition of Force. I came across this question when studying the Karlsruhe physics course, where force is defined in this way. If I am not mistaken this goes back to Ernst Mach and is used as definition of force in some textbooks as well, for example in the german books by Demtröder or Fließbach. However my problem is that I don't see how one can take this as a definition and also to be able to describe static situations with this concept. – martin Aug 1 '11 at 7:54
@martin Ernst Mach defined a force in an entirely different way. According to Mach when two bodies interact (assuming they are the only bodies which do so) the following equation holds: $m_1/m_2 = -a_2/a_1$. Thus it is meaningful to define a force as $m\vec{a}$. But note that this form of the law isn't as useful as Newton's second law as you consider only two particles. And clearly a system containing just two objects won't be static. So Mach's definition (as its normally given) can't be practically used to solve such problems. – Bernhard Heijstek Aug 1 '11 at 12:21
By "static forces" I think you mean forces within a physical system in static equilibrium.
You get static equilibrium out of the kinetic effects of force by noting the sum of the forces at every point is zero and therefore so is the sum of their kinetic effects.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9617095589637756, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/10667/euler-maclaurin-formula-and-riemann-roch/10691
|
## Euler-Maclaurin formula and Riemann-Roch
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Let $Df$ denote the derivative of a function $f(x)$ and $\bigtriangledown f=f(x)-f(x-1)$ be the discrete derivative. Using the Taylor series expansion for $f(x-1)$, we easily get $\bigtriangledown = 1- e^{-D}$ or, by taking the inverses, $$\frac{1}{\bigtriangledown} = \frac{1}{1-e^{-D}} = \frac{1}{D}\cdot \frac{D}{1-e^{-D}}= \frac{1}{D} + \frac12+ \sum_{k=1}^{\infty} B_{2k}\frac{D^{2k-1}}{(2k)!} ,$$ where $B_{2k}$ are Bernoulli numbers.
(Edit: I corrected the signs to adhere to the most common conventions.)
Here, $(1/D)g$ is the opposite to the derivative, i.e. the integral; adding the limits this becomes a definite integral $\int_0^n g(x)dx$. And $(1/\bigtriangledown)g$ is the opposite to the discrete derivative, i.e. the sum $\sum_{x=1}^n g(x)$. So the above formula, known as Euler-Maclaurin formula, allows one, sometimes, to compute the discrete sum by using the definite integral and some error terms.
Usually, there is a nontrivial remainder in this formula. For example, for $g(x)=1/x$, the remainder is Euler's constant $\gamma\simeq 0.57$. Estimating the remainder and analyzing the convergence of the power series is a long story, which is explained for example in the nice book "Concrete Mathematics" by Graham-Knuth-Patashnik. But the power series becomes finite with zero remainder if $g(x)$ is a polynomial. OK, so far I am just reminding elementary combinatorics.
Now, for my question. In the (Hirzebruch/Grothendieck)-Riemann-Roch formula one of the main ingredients is the Todd class which is defined as the product, going over Chern roots $\alpha$, of the expression $\frac{\alpha}{1-e^{-\alpha}}$. This looks so similar to the above, and so suggestive (especially because in the Hirzebruch's version $$\chi(X,F) = h^0(F)-h^1(F)+\dots = \int_X ch(F) Td(T_X)$$ there is also an "integral", at least in the notation) that it makes me wonder: is there a connection?
The obvious case to try (which I did) is the case when $X=\mathbb P^n$ and $F=\mathcal O(d)$. But the usual proof in that case is a residue computation which, to my eye, does not look anything like Euler-Maclaurin formula.
But is there really a connection?
An edit after many answers: Although the connection with Khovanskii-Pukhlikov's paper and the consequent work, pointed out by Dmitri and others, is undeniable, it is still not obvious to me how the usual Riemann-Roch for $X=\mathbb P^n$ and $F=\mathcal O(d)$ follows from them. It appears that one has to prove the following nontrivial
Identity: The coefficient of $x^n$ in $Td(x)^{n+1}e^{dx}$ equals $$\frac{1}{n!} Td(\partial /\partial h_0) \dots Td(\partial /\partial h_n) (d+h_0+\dots + h_n)^n |_{h_0=\dots h_n=0}$$
A complete answer to my question would include a proof of this identity or a reference to where this is shown. (I did not find it in the cited papers.) I removed the acceptance to encourage a more complete explanation.
-
A comment on the nice answers and the references contained in them. It appears that for Hirzebruch-Riemann-Roch one uses not the Euler-Maclaurin formula written up above but a version from Khovanskii-Pukhlikov's paper, spelled out in D. Speyer's answer. For $g(x)$ which is a quasipolynomial (combination of polynomials and exponential functions) this gives an exact formula. For the most important case of $X=P^n$ and $F=O(d)$, the polytope is the standard simplex dilated by $d$ and $g(x)=1$. (But Karshon-Sternberg-Weitzman also consider formulas with remainders for more general $g(x)$). – VA Jan 4 2010 at 23:24
I didn't answer this question. Do you mean an answer of mine elsewhere, or have you confused me with someone else? – David Speyer Jan 11 2010 at 19:10
@ David Speyer: indeed, I meant Steve Huntsman's answer. My apologies to both of you. – VA Jan 12 2010 at 2:06
I added the proof of this combinatorial identity below. I don't want to reedit the question, however, so as not to push the question in the community wiki abyss. – VA Jan 24 2010 at 16:04
## 6 Answers
As far as I understand this connection was observed (and generalised) by Khovanskii and Puhlikov in the article
A. G. Khovanskii and A. V. Pukhlikov, A Riemann-Roch theorem for integrals and sums of quasipolynomials over virtual polytopes, Algebra and Analysis 4 (1992), 188–216, translation in St. Petersburg Math. J. (1993), no. 4, 789–812.
This is related to toric geometry, for which some really well written introduction articles are contained on the page of David Cox http://www3.amherst.edu/~dacox/
Since 1992 many people wrote on this subject, for example
EXACT EULER MACLAURIN FORMULAS FOR SIMPLE LATTICE POLYTOPES
http://arxiv.org/PS_cache/math/pdf/0507/0507572v2.pdf
Or Riemann sums over polytopes http://arxiv.org/PS_cache/math/pdf/0608/0608171v1.pdf
-
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
Last year, Leonhard Euler posted a note, Finding the sum of any series from a given general term on the arXiv. In recent years, this idea has been extended to sums over lattice approximations of convex polytopes $\Delta \cap \mathbb{Z}^n$ as shown in the other responses.
-
4
Euler didn't post it, because he's been dead for quite some time. This is, if I'm not mistaken, a translation into English of the original paper in which Euler derived the Euler-Maclaurin formula. – Michael Lugo Jan 4 2010 at 16:31
9
Your answer brought a smile to my face. :-) – Dan Piponi Jan 4 2010 at 17:39
1
Thank you for this reference. I wouldn't have thought of searching for Euler's paper on arXiv. – VA Jan 6 2010 at 22:14
2
@VA - You can actually find all of Euler's papers online at the Euler Archive: math.dartmouth.edu/~euler – Ben Linowitz Jan 9 2010 at 17:42
Euler-Maclaurin's formula transforms the integral $I=\int_a^b f(x)dx$ into the finite sum $S=\sum_a^b f(x)$, for two integers $a,b$. As Dmitri pointed out, in 1993 Khovanskii and Pukhlikov gave a multi-dimensional generalization of Euler-Maclaurin which, in particular says the following:
Let $P$ be an $n$-dimensional polytope in $\mathbb R^n\supset\mathbb Z^n$ with integral vertices, and further assume that $P$ defines a nonsingular toric variety (i.e. $P$ is simplicial and at every vertex the integral generators of the edges give a basis in $\mathbb Z^n$). Let us say the facets of $P$ are defined by the inequalities $l_j(x)\le a_j$ for some primitive integral linear functions $l_j(x_1,\dots,x_n)$. Denote by $P(h)$ the polytope defined by the inequalities $l_j(x)\le a_j+h_j$. Finally, let $$I(f,h)= \int_{P(h)} f(x)dx, \quad S(f)= \int_{P\cap \mathbb Z^n} f(x).$$ Then for any quasipolynomial $f(x)$ (a sum of products of polynomial and exponential functions) one has $$S(f) = \prod_j Td(\partial / \partial h_j)\ I(f,h)\ |_{h_j=0}.$$
Here is how the Hirzebruch-Riemann-Roch for the sheaf $\mathcal F=\mathcal O(d)$ on $X=\mathbb P^n$ follows from the Khovanskii-Pukhlikov's version of Euler-Maclaurin's formula:
Taking $P$ to be a simplex of side $d$ and $f(x)=1$, the Khovanskii-Pukhlikov's formula gives $$h^0(\mathbb P^n, \mathcal O(d)) = \prod_{j=0}^n Td(\partial/\partial h_j) \frac{(d+h_0+\dots+h_n)^n}{n!} \ |_{h_j=0}$$ which by making a substitution $y=d+h_0+\dots+h_n$ transforms into $Td(\partial/\partial y)^{n+1} (y^n/n!)\ |_{y=d}.$
The usual Hirzebruch-Riemann-Roch, on the other hand, says that $h^0(\mathbb P^n,\mathcal O(d))$ is the coefficient of $x^n$ in the expression $Td(x)^{n+1} e^{dx}$. So why is this the same? Because $$Td(x)^{n+1} e^{dx} = Td(\partial/ \partial y)^{n+1} e^{yx}\ |_{y=d}$$ (here we used the fact that $(\partial/ \partial y)^k e^{yx} = x^k e^{yx}$) and the coefficient of $x^n$ in $e^{yx}$, expanded as a power series in $x$, is $(y^n/n!)$. QED
Now that wasn't so hard, but why isn't this written somewhere? Or am I missing a reference?
So what does this suggest conceptually about the meaning of Hirzebruch-Riemann-Roch? I think, clearly, it suggests that
1. The pushforward $$f_!:K(X)\to K(pt)=\mathbb Z, \qquad \mathcal F\mapsto \chi(\mathcal F) = h^0(F)-h^1(F)+\dots$$ between the K-groups should be considered to be the "discrete summation" of a "function" $f=f(\mathcal F)$. Indeed, for say a toric variety $X$ and an ample line bundle $\mathcal F$ we are just counting integral points in a polytope $P$. So that fits.
2. The pushforward $$f_*: A(X)_Q\to A(pt)_Q=\mathbb Q$$ between the Chow groups should be considered to be a "continuous" version, an integral. Indeed, for a cycle on $X$ its pushforward can be interpreted as, and computed by, an integral of a corresponding differential form. So this makes perfect sense as well.
So now the Riemann-Roch, = the Euler-Maclaurin for this situation, transforms the integral into the sum, by multiplying it by the differential operator given by the Todd class. This also explains why in HRR the Todd class of $T_X$ appears and not, say, of $\Omega^1_X$. The tangent bundle is the place where the derivations $\partial/\partial z$ live.
-
Yes, this is a big area of research. I'll add some references to the ones Dmitri provides.
Here are references from a question about Moment map for toric actions:
• Riemann-Roch for toric orbifolds by Victor Guillemin
• to learn about toric geometry, draft of a book Toric Varieties by Cox et al
More on the topic itself:
• Riemann sums over polytopes by Victor Guillemin and Shlomo Sternberg
• Exact Euler Maclaurin formulas for simple lattice polytopes by Shlomo Sternberg et al.
• the original paper by Pukhlikov and Khovanskii (page in English, full text in Russian)
A series of papers on arXiv by Michèle Vergne, especially:
Also papers by Brion and Vergne, which seem to be missing from arXiv (Google Scholar, thanks to Steve).
-
When I studied this area I personally found the joint work of Brion and Vergne and the book Combinatorial Convexity and Algebraic Geometry by Ewald to be very helpful. There is also a Springer UTM called Computing the Continuous Discretely that handles a lot of the polytope stuff without invoking algebraic geometry. – Steve Huntsman Jan 4 2010 at 14:34
@Steve: thanks, do you think you could fill in the links for them? We can make this post community wiki or you could also post a new post which I'll upvote :) – Ilya Nikokoshev Jan 4 2010 at 15:00
Posting below in my original post – Steve Huntsman Jan 4 2010 at 15:21
I thought I'd give a more explicit answer showing how the Todd class appears. Let $Td(x) := \frac{x}{1-e^{-x}} = -\sum_{j=0}^\infty B_j \frac{x^j}{j!}$. Now for $a,b \in \mathbb{Z}$, $z \in \mathbb{R}$, $|z| << 1$, we have that $Td(\partial_h)e^{hz} = -\sum_{j=0}^\infty B_j \frac{\partial_h^{(j)}}{j!}e^{hz} = -\sum_{j=0}^\infty B_j \frac{z^j}{j!}e^{hz} = Td(z)e^{hz}$. So
`$Td(\partial_g)|_{g=0} Td(\partial_h)|_{h=0} \int_{a-g}^{b+h} e^{xz} dx$`
`$= Td(\partial_g)|_{g=0} Td(\partial_h)|_{h=0} \frac{e^{(b+h)z} - e^{(a-g)z}}{z}$`
`$= \frac{Td(z)e^{bz} - Td(-z)e^{az}}{z} = \frac{e^{bz}}{1-e^{-z}} + \frac{e^{az}}{1-e^z}$`
`$= \sum_{k=a}^b e^{kz}$`.
It follows for suitable functions $f$ (as VA pointed out below) that `$\sum_{k=a}^b f(k) = Td(\partial_g)|_{g=0} Td(\partial_h)|_{h=0} \int_{a-g}^{b+h} f(x) dx$`.
As far as references:
Brion and Vergne give a good treatment of the problem. Their key paper is available at http://www.jstor.org/pss/2152855
Ewald's introduction to toric varieties takes place in the context of convex polytopes and is more concrete than others (e.g., Fulton): see http://books.google.com/books?id=bz8SfJId3BgC
[PPS--I used this work to complete a structure theory for the equilibrium hybridization thermodynamics of DNA about 7 or 8 years ago: see http://mathoverflow.net/questions/10493/the-matrix-tree-theorem-for-weighted-graphs/10500#10500]
-
David--How did you fix the TeX? – Steve Huntsman Jan 4 2010 at 16:49
2
You can put a backslash in front of the underscores to escape them. – S. Carnahan♦ Jan 4 2010 at 17:24
I think for a general smooth function $f$ convergence is a far more delicate question than you indicate. Indeed, I think the equality does not hold for some very simple functions. But this formula with two $Td$ operators is an identity if $f$ is a hyperpolynomial. – VA Jan 6 2010 at 22:58
The signs in your expansion of $Td$ are wrong; the question gives the right expansion. In particular, $Td(x)$ starts with $1=B_0$, not with $-1$. Further, in the 6th line it should be $e^{az}/(1-e^z)$. – VA Jan 9 2010 at 17:10
Fixed the 6th line, thanks. I probably used a different convention for the Bernoullis. And I think you are correct about the convergence issue. I was basically just pasting old notes and am not current on this. Sorry. – Steve Huntsman Jan 11 2010 at 19:07
I was about to post the same question and came across yours. I wasn't aware of the "toric" direction here that other people have referred to, but I know a pretty answer in the particular case when $X$ is the flag variety of a semi-simple algebraic group. In this case RR reduces to saying that $\chi(F)={\mathrm{const}} \int ch(F\otimes L^{-1})$ where $L$ is the square root of the canonical class and the constant is explicit. So in this case at least multiplication by Todd does amount to shift (by half-forms) as in Euler-Maclaurin formula. Furthermore, in this form the formula has a very short proof via characteristic $p$, deducing it from the fact that $Fr_*(L)=L^{p^d}$, $d=\dim(X)$.
[Well, in fact it also follows from Weyl dimension formula which of course has many other proofs, I just happen to like this char p proof.] It would be cool to have a proof of the general case along these lines. Something related has been done by Pink and Rossler, arXiv:0812.0254.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 86, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.923845112323761, "perplexity_flag": "head"}
|
http://en.wikipedia.org/wiki/Spin-statistics_theorem
|
# Spin–statistics theorem
(Redirected from Spin-statistics theorem)
Statistical mechanics
• NVE Microcanonical
• NVT Canonical
• µVT Grand canonical
• NPH Isoenthalpic–isobaric
• NPT Isothermal–isobaric
• µVT Open statistical
Models
Scientists
In quantum mechanics, the spin-statistics theorem relates the spin of a particle to the particle statistics it obeys. The spin of a particle is its intrinsic angular momentum (that is, the contribution to the total angular momentum which is not due to the orbital motion of the particle). All particles have either integer spin or half-integer spin (in units of the reduced Planck constant ħ).
The theorem states that:
• the wave function of a system of identical integer-spin particles has the same value when the positions of any two particles are swapped. Particles with wavefunctions symmetric under exchange are called bosons;
• the wave function of a system of identical half-integer spin particles changes sign when two particles are swapped. Particles with wavefunctions anti-symmetric under exchange are called fermions.
In other words, the spin-statistics theorem states that integer spin particles are bosons, while half-integer spin particles are fermions.
The spin-statistics relation was first formulated in 1939 by Markus Fierz,[1] and was rederived in a more systematic way by Wolfgang Pauli.[2] Fierz and Pauli argued by enumerating all free field theories, requiring that there should be quadratic forms for locally commuting[clarification needed] observables including a positive definite energy density. A more conceptual argument was provided by Julian Schwinger in 1950. Richard Feynman gave a demonstration by demanding unitarity for scattering as an external potential is varied,[3] which when translated to field language is a condition on the quadratic operator that couples to the potential.[4]
## General discussion
Two indistinguishable particles, occupying two separate points, have only one state, not two. This means that if we exchange the positions of the particles, we do not get a new state, but rather the same physical state. In fact, one cannot tell which particle is in which position.
A physical state is described by a wavefunction, or – more generally – by a vector, which is also called a "state"; if interactions with other particles are ignored, then two different wavefunctions are physically equivalent if their absolute value is equal. So, while the physical state does not change under the exchange of the particles' positions, the wavefunction may get a minus sign.
Bosons are particles whose wavefunction is symmetric under such an exchange, so if we swap the particles the wavefunction does not change. Fermions are particles whose wavefunction is antisymmetric, so under such a swap the wavefunction gets a minus sign, meaning that the amplitude for two identical fermions to occupy the same state must be zero. This is the Pauli exclusion principle: two identical fermions cannot occupy the same state. This rule does not hold for bosons.
In quantum field theory, a state or a wavefunction is described by field operators operating on some basic state called the vacuum. In order for the operators to project out the symmetric or antisymmetric component of the creating wavefunction, they must have the appropriate commutation law. The operator
$\int \psi(x,y) \phi(x)\phi(y)\,dx\,dy \,$
(with $\phi$ an operator and $\psi(x,y)$ a numerical function) creates a two-particle state with wavefunction $\psi(x,y)$, and depending on the commutation properties of the fields, either only the antisymmetric parts or the symmetric parts matter.
Let us assume that $x \ne y$ and the two operators take place at the same time; more generally, they may have spacelike separation, as is explained hereafter.
If the fields commute, meaning that the following holds
$\phi(x)\phi(y)=\phi(y)\phi(x)\,$,
then only the symmetric part of $\psi$ contributes, so that $\psi(x,y) = \psi(y,x)$ and the field will create bosonic particles.
On the other hand if the fields anti-commute, meaning that $\phi$ has the property that
$\phi(x)\phi(y)=-\phi(y)\phi(x)\,$
then only the antisymmetric part of $\psi$ contributes, so that $\psi(x,y) = -\psi(y,x)$, and the particles will be fermionic.
Naively, neither has anything to do with the spin, which determines the rotation properties of the particles, not the exchange properties.
## A suggestive bogus argument
Consider the two-field operator product
$R(\pi)\phi(x) \phi(-x) \,$
where R is the matrix which rotates the spin polarization of the field by 180 degrees when one does a 180 degree rotation around some particular axis. The components of $\phi$ are not shown in this notation, $\phi$ has many components, and the matrix R mixes them up with one another.
In a non-relativistic theory, this product can be interpreted as annihilating two particles at positions x and −x with polarizations which are rotated by π (180°) relative to each other. Now rotate this configuration by π around the origin. Under this rotation, the two points $x \$ and $-x \$ switch places, and the two field polarizations are additionally rotated by a $\pi \$. So you get
$R(2\pi)\phi(-x) R(\pi)\phi(x) \,$
which for integer spin is equal to
$\phi(-x) R(\pi)\phi(x) \$
and for half integer spin is equal to
$- \phi(-x) R(\pi)\phi(x) \,$
(proved here). Both the operators $\pm \phi(-x) R(\pi)\phi(x)$ still annihilate two particles at $x$ and $- x$. Hence we claim to have shown that, with respect to particle states: $R(\pi)\phi(x) \phi(-x) = \begin{cases}\phi(-x) R(\pi)\phi(x) & \text{ for integral spins}, \\ -\phi(-x) R(\pi)\phi(x) & \text{ for half-integral spins}.\end{cases}$ So exchanging the order of two appropriately polarized operator insertions into the vacuum can be done by a rotation, at the cost of a sign in the half integer case.
This argument by itself does not prove anything like the spin/statistics relation. To see why, consider a nonrelativistic spin 0 field described by a free Schrödinger equation. Such a field can be anticommuting or commuting. To see where it fails, consider that a nonrelativistic spin 0 field has no polarization, so that the product above is simply:
$\phi(-x) \phi(x)\,$
In the nonrelativistic theory, this product annihilates two particles at x and −x, and has zero expectation value in any state. In order to have a nonzero matrix element, this operator product must be between states with two more particles on the right than on the left:
$\langle 0| \phi(-x) \phi(x) |\psi\rangle \,$
Performing the rotation, all that you learn is that rotating the 2-particle state $|\psi\rangle$ gives the same sign as changing the operator order. This is no information at all, so this argument does not prove anything.
## Why the bogus argument fails
To prove spin/statistics, it is necessary to use relativity (though there are a few nice methods[5][6] which do not use field theoretic tools). In relativity, there are no local fields which are pure creation operators or annihilation operators. Every local field both creates particles and annihilates the corresponding antiparticle. This means that in relativity, the product of the free real spin-0 field has a nonzero vacuum expectation value, because in addition to creating particles and annihilating particles, it also includes a part which creates and then annihilates a particle:
$G(x)= \langle 0 | \phi(-x) \phi(x) | 0\rangle \,$
And now the heuristic argument can be used to see that G(x) is equal to G(−x), which tells you that the fields cannot be anti-commuting.
## Proof
The essential ingredient in proving the spin/statistics relation is relativity, that the physical laws do not change under Lorentz transformations. The field operators transform under Lorentz transformations according to the spin of the particle that they create, by definition.
Additionally, the assumption (known as microcausality) that spacelike separated fields either commute or anticommute can be made only for relativistic theories with a time direction. Otherwise, the notion of being spacelike is meaningless. However, the proof involves looking at a Euclidean version of spacetime, in which the time direction is treated as a spatial one, as will be now explained.
Lorentz transformations include 3-dimensional rotations as well as boosts. A boost transfers to a frame of reference with a different velocity, and is mathematically like a rotation into time. By analytic continuation of the correlation functions of a quantum field theory, the time coordinate may become imaginary, and then boosts become rotations. The new "spacetime" has only spatial directions, and is termed Euclidean.
A π rotation in the Euclidean x–t plane can be used to rotate vacuum expectation values of the field product of the previous section. The time rotation turns the argument of the previous section into the spin/statistics theorem.
The proof requires the following assumptions:
1. The theory has a Lorentz invariant Lagrangian.
2. The vacuum is Lorentz invariant.
3. The particle is a localized excitation. Microscopically, it is not attached to a string or domain wall.
4. The particle is propagating, meaning that it has a finite, not infinite, mass.
5. The particle is a real excitation, meaning that states containing this particle have a positive definite norm.
These assumptions are for the most part necessary, as the following examples show:
1. The spinless anticommuting field shows that spinless fermions are nonrelativistically consistent. Likewise, the theory of a spinor commuting field shows that spinning bosons are too.
2. This assumption may be weakened.
3. In 2+1 dimensions, sources for the Chern–Simons theory can have exotic spins, despite the fact that the three dimensional rotation group has only integer and half-integer spin representations.
4. An ultralocal field can have either statistics independently of its spin. This is related to Lorentz invariance, since an infinitely massive particle is always nonrelativistic, and the spin decouples from the dynamics. Although colored quarks are attached to a QCD string and have infinite mass, the spin-statistics relation for quarks can be proved in the short distance limit.
5. Gauge ghosts are spinless fermions, but they include states of negative norm.
Assumptions 1 and 2 imply that the theory is described by a path integral, and assumption 3 implies that there is a local field which creates the particle.
The rotation plane includes time, and a rotation in a plane involving time in the Euclidean theory defines a CPT transformation in the Minkowski theory. If the theory is described by a path integral, a CPT transformation takes states to their conjugates, so that the correlation function
$\langle 0 | R\phi(x) \phi(-x)|0\rangle$
must be positive definite at x=0 by assumption 5, the particle states have positive norm. The assumption of finite mass implies that this correlation function is nonzero for x spacelike. Lorentz invariance now allows the fields to be rotated inside the correlation function in the manner of the argument of the previous section:
$\langle 0 | RR\phi(x) R\phi(-x) |0\rangle = \pm \langle 0| \phi(-x) R\phi(x)|0\rangle$
Where the sign depends on the spin, as before. The CPT invariance, or Euclidean rotational invariance, of the correlation function guarantees that this is equal to G(x). So
$\langle 0 | ( R\phi(x)\phi(y) - \phi(y)R\phi(x) )|0\rangle = 0 \,$
for integer spin fields and
$\langle 0 | R\phi(x)\phi(y) + \phi(y)R\phi(x)|0\rangle = 0 \,$
for half-integer spin fields.
Since the operators are spacelike separated, a different order can only create states that differ by a phase. The argument fixes the phase to be −1 or 1 according to the spin. Since it is possible to rotate the space-like separated polarizations independently by local perturbations, the phase should not depend on the polarization in appropriately chosen field coordinates.
This argument is due to Julian Schwinger.[7]
## Consequences
Spin statistics theorem implies that half-integer spin particles are subject to the Pauli exclusion principle, while integer-spin particles are not. Only one fermion can occupy a given quantum state at any time, while the number of bosons that can occupy a quantum state is not restricted. The basic building blocks of matter such as protons, neutrons, and electrons are fermions. Particles such as the photon, which mediate forces between matter particles, are bosons.
There are a couple of interesting phenomena arising from the two types of statistics. The Bose–Einstein distribution which describes bosons leads to Bose–Einstein condensation. Below a certain temperature, most of the particles in a bosonic system will occupy the ground state (the state of lowest energy). Unusual properties such as superfluidity can result. The Fermi–Dirac distribution describing fermions also leads to interesting properties. Since only one fermion can occupy a given quantum state, the lowest single-particle energy level for spin-1/2 fermions contains at most two particles, with the spins of the particles oppositely aligned. Thus, even at absolute zero, the system still has a significant amount of energy. As a result, a fermionic system exerts an outward pressure. Even at non-zero temperatures, such a pressure can exist. This degeneracy pressure is responsible for keeping certain massive stars from collapsing due to gravity. See white dwarf, neutron star, and black hole.
Ghost fields do not obey the spin-statistics relation. See Klein transformation on how to patch up a loophole in the theorem.
## Relation to representation theory of the Lorentz group
Since the Lorentz group has no non-trivial unitary representation of finite dimension, it naively seems that one cannot construct a state with finite, non-zero spin and positive, Lorentz-invariant norm.
For a state of integer spin the negative norm states (known as "unphysical polarization") are set to zero, which makes the use of gauge symmetry necessary.
For a state of half-integer spin the argument can be circumvented by having fermionic statistics.[8]
## Literature
• Markus Fierz: Über die relativistische Theorie kräftefreier Teilchen mit beliebigem Spin. Helv. Phys. Acta 12, 3–17 (1939)
• Wolfgang Pauli: The connection between spin and statistics. Phys. Rev. 58, 716–722 (1940)
• Ray F. Streater and Arthur S. Wightman: PCT, Spin & Statistics, and All That. 5th edition: Princeton University Press, Princeton (2000)
• Ian Duck and Ennackel Chandy George Sudarshan: Pauli and the Spin-Statistics Theorem. World Scientific, Singapore (1997)
• Arthur S Wightman: Pauli and the Spin-Statistics Theorem (book review). Am. J. Phys. 67 (8), 742–746 (1999)
• Arthur Jabs: Connecting spin and statistics in quantum mechanics. http://arXiv.org/abs/0810.2399 (Found. Phys. 40, 776–792, 793–794 (2010))
## Notes
1. M. Fierz "Über die relativistische Theorie kräftefreier Teilchen mit beliebigem Spin" Helvetica Physica Acta 12:3–37, 1939
2. R.P. Feynman "Quantum Electrodynamics", Basic Books, 1961
3. W. Pauli "On the Connection Between Spin and Statistics" Progress of Theoretical Physics vol 5 no. 4, 1950
4. Jabs, Arthur (5). "Connecting Spin and Statistics in Quantum Mechanics". Foundations of Physics. Foundations of Physics 40 (7): 776–792. arXiv:0810.2399. Bibcode:2010FoPh...40..776J. doi:10.1007/s10701-009-9351-4. Retrieved May 29, 2011.
5.
6. The Quantum Theory of Fields I, Schwinger 1950. The only difference between the argument in this paper and the argument presented here is that the operator "R" in Schwinger's paper is a pure time reversal, instead of a CPT operation, but this is the same for CP invariant free field theories which were all that Schwinger considered.
## References
• Paul O'Hara, Rotational Invariance and the Spin-Statistics Theorem, Foun. Phys. 33, 1349–1368(2003).
• Ian Duck and E. C. G. Sudarshan, Toward an understanding of the spin-statistics theorem, Am. J. Phys. 66 (4), 284–303 April 1998. Archived from the original on 2009-01-02.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 33, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8862119317054749, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/213374/group-homomorphisms-between-two-abelian-groups-with-different-kernel/213376
|
# Group homomorphisms between two abelian groups with different kernel
Does there exist two abelian groups $A,B$ with an epimorphism $f: A\to B$, and two other abelian groups $A', B'$ along with an epimorphism $g: A'\to B'$ such that $A\cong A'$, $B\cong B'$ and $ker\,f \not\cong ker\,g$? It seems to me that the groups must be infinite, since we have $B\cong A/ker\,f$ and $B'\cong A'/ker\,g$.
Thanks!
-
## 1 Answer
Let me reformulate the question: you want an abelian group $G$ with two subgroups $H, H'$ which are not isomorphic but such that the quotients $G/H, G/H'$ are isomorphic.
The smallest example is $G = C_2 \times C_4, H = C_2 \times C_2, H' = 1 \times C_4$.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9599767327308655, "perplexity_flag": "head"}
|
http://mathhelpforum.com/advanced-algebra/130745-normal-extension.html
|
# Thread:
1. ## Normal extension
I've been asked to decide whether the following extensions are normal
(a) Q(cube root of 5,i):Q
(b) Q(5^(1/4),i):Q
(c) C(t):C(t^3)
(d) Q(t): Q(t^3)
This is my attempts
(a) Let a = 5^(1/3), then a has minimal polynomial m(x) = x^3 -5 which is irreducible over Q by Eisenstein's criterion with p = 5
i has min poly x^2 +1 over Q(5^(1/3)) and since i does not belong to Q(5^(1/3)) , then it is irreducible over Q(5^(1/3))
Then by tower law, [Q(cube root of 5,i):Q] = 3*2 = 6
Also, the polynomial f(x) = x^3 -5 = (x -a)(x-aw)(x-a(w^2)) where w is the cube root of unity.
Is it correct so far?
I've done up to here and dont know how to decide whether normal or not.
Part b is similar to part a
But bart c and d looks a bit hard. Can you give me some hints on these parts please?
Thank you very much
2. Originally Posted by dangkhoa
I've been asked to decide whether the following extensions are normal
(a) Q(cube root of 5,i):Q
(b) Q(5^(1/4),i):Q
(c) C(t):C(t^3)
(d) Q(t): Q(t^3)
This is my attempts
(a) Let a = 5^(1/3), then a has minimal polynomial m(x) = x^3 -5 which is irreducible over Q by Eisenstein's criterion with p = 5
i has min poly x^2 +1 over Q(5^(1/3)) and since i does not belong to Q(5^(1/3)) , then it is irreducible over Q(5^(1/3))
Then by tower law, [Q(cube root of 5,i):Q] = 3*2 = 6
Also, the polynomial f(x) = x^3 -5 = (x -a)(x-aw)(x-a(w^2)) where w is the cube root of unity.
Is it correct so far?
I've done up to here and dont know how to decide whether normal or not.
Part b is similar to part a
But bart c and d looks a bit hard. Can you give me some hints on these parts please?
Thank you very much
If your definition of normal extension is here, (a) is not normal extension. If your w is a non-real cube root of unity, then the splitting field of your $f(x)=x^3-5$ over $\mathbb{Q}$ is $\mathbb{Q}(\sqrt[3]{5},\sqrt{3}i)$ or $\mathbb{Q}(\sqrt[3]{5},\alpha)$ ( $\alpha$ is a root of $x^2+x+1$), which is not equal to $\mathbb{Q}(\sqrt[3]{5},i)$.
For (b), let $f(x)=x^4-5$. The splitting field of f(x) over Q is to adjoin 4th root of 5 to $\mathbb{Q}$. Since two primitive fourth roots of unity are {i, -i}, you can check that (b) is the normal extension.
(c) is the normal extension while (d) is not.
Let $F=\mathbb{C}(t^3)$ and $F'=\mathbb{Q}(t^3)$
Check if $x^3 - t^3 \in F[x]$ ( resp. $x^3 - t^3 \in F'[x]$) splits completely in $\mathbb{C}(t)[x]$ ( resp. $\mathbb{Q}(t)[x]$ ).
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.913030207157135, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/50125/what-is-the-operation-boxtimes
|
# What is the operation $\boxtimes$?
Reading papers about $p$-adic analysis and Galois representations, I have found objects like this $D \boxtimes \mathbb{Q}_p$. So my question is what is $\boxtimes$ and how do we read it ?
-
4
Can you give some more context? What is $D$? – Qiaochu Yuan Jul 7 '11 at 16:19
$D$ is a $(\phi,\Gamma)$-module. This notation is often used in Pierre Colmez's papers. He talks also about $D \boxtimes U$ where $U$ is an open subset of $Q_p$. – user10676 Jul 7 '11 at 16:36
Maybe you should ask Pierre Colmez, then. – Gerry Myerson Jul 8 '11 at 1:51
1
– Gerry Myerson Jul 8 '11 at 1:55
## 3 Answers
In the context of Colmez's papers, the notation has its own meaning, not related (by more than vague analogy) to other meanings it has in other contexts where it is used.
You will have to read Colmez's article in Asterisque 330 to learn the details.
Roughly: you should think of the $(\varphi,\Gamma)$-module as being an object (like a space of measures, or functions) living over $\mathbb Z_p$. Then $D\boxtimes \mathbb Q_p$ is what you get by using scaling by $p$ (which is rigorously defined using the operator $\psi$) to "stretch" the $(\varphi,\Gamma)$-module out over $\mathbb Q_p$.
Similarly $D\boxtimes \mathbb P^1$ is what you by taking two copies of $D$ and gluing them together, in accordance with the way that $\mathbb P^1(\mathbb Q_p)$ is obtained by gluing together two copies of $\mathbb Z_p$.
Non-mathematical remark: I should add that what you are asking about is very recent mathematics, and has a pretty high entry-level. Where/with who are you learning this material? You may be better off asking your advisor directly rather than trying to learn this on math.SE.
You may also want to look at some of Colmez's lectures, several of which should be available online. He lectured this past July at the Durham conference, and I believe those lectures were videotaped. In the past he has lectured at Luminy (several times, I think), at the Newton Institute (Summer of 09, if I remember correctly), and this past March he gave a lecture course at the IAS (although I wasn't there, so I don't know if it was filmed).
You may also find it easier to study the functor from $GL_2$-reps. to Galois reps. before trying to go backwards from Galois reps. to $GL_2$-reps. (which is the point of the $\boxtimes$ constructions). As well as Colmez's Asterisque 330 article, there is also my short preprint On a class of coherent rings ..., which you will be able to find with a google search.
-
There is a notion of external (also called exterior, or box) tensor product $\boxtimes$ ( e.g. http://books.google.com/books?id=6GUH8ARxhp8C&pg=PA24 ).
I think that the usage is not completely standardized, in that the definition is often adapted to other contexts (examples at http://mathoverflow.net/search?q=boxtimes ), but the adaptations are not always consistent with each other.
-
1
Dear zyx, Yes, this is rather standard notation, but the way Colmez uses $\boxtimes$, it is not really an exterior tensor product. The point is that the $(\varphi,\Gamma)$-module $D$ is a vector space --- which can be thought of as a space of sections of a sheaf over $\mathbb Z_p$ --- while the $\mathbb Q_p$ and $\mathbb P^1$ that appear in the expression $D\boxtimes ?$ are spaces extending the original domain $\mathbb Z_p$ of the sheaf. So in Colmez's use of the notation, it is not at all an example of two objects of the same kind being tensored together. Regards, – Matt E Sep 6 '11 at 18:11
1
@Matt: I should have made it clearer -- your answer was definitive as far as the OP question on Colmez' paper is concerned, and I am just offering the additional cultural observation that abuses (or nonstandard uses) of this notation are actually fairly common and sometimes more confusing than writing a box product with P^1 (when sheaves canonically extend there etc). My recollection of other papers by Colmez is that he is good about providing definitions, but other authors use their own versions of the $\boxtimes$ notation without comment, leaving an interesting puzzle for the reader. – zyx Sep 6 '11 at 18:58
This is probably not directly what you're asking about, but it might be related. In any event it can't hurt to add it.
Let $X$ be a topological space and let $E_1 \to X$, $E_2 \to X$ be vector bundles (or sheaves, probably). One often defines $S_1 \boxtimes S_2$ to be the bundle $\pi_1^\ast E_1 \otimes \pi_2^\ast E_2^\ast$ over $X \times X$, where $\pi_1, \pi_2: X \times X \to X$ are the usual projection maps. Thus a vector over the point $(p,q)$ is a linear map from $S_1(p)$ to $S_1(q)$. This bundle is useful in differential geometry because its sections are Schwarz kernels of linear operators. I have seen it arise in algebraic geometry (over $\mathbb{C}$) as well for a similar reason. Perhaps there is an analogy between the notation coming from geometry and the notation coming from representation theory and number theory?
-
Dear Paul, As I wrote in my comment to zyx's answer, in this particular context I the objects being "multiplied" via $\boxtimes$ are not two vector bundles (or any two objects of the same type), but rather (a space of global sections of) an equivariant sheaf and a certain topological space extending the domain of this sheaf. So it is not particularly analogous to an exterior product of vector bundles. Regards, – Matt E Sep 7 '11 at 2:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9582474231719971, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/120006/compute-lim-limits-x-to-infty-fracx-2x2x/120772
|
# Compute $\lim \limits_{x\to\infty} (\frac{x-2}{x+2})^x$
Compute
$$\lim \limits_{x\to\infty} (\frac{x-2}{x+2})^x$$
I did
$$\lim_{x\to\infty} (\frac{x-2}{x+2})^x = \lim_{x\to\infty} \exp(x\cdot \ln(\frac{x-2}{x+2})) = \exp( \lim_{x\to\infty} x\cdot \ln(\frac{x-2}{x+2}))$$
But how do I continue? The hint is to use L Hopital's Rule. I tried changing to
$$\exp(\lim_{x\to\infty} \frac{\ln(x-2)-\ln(x+2)}{1/x})$$
This is
$$(\infty - \infty )/0 = 0/0$$
But I find that I can keep differentiating?
-
## 6 Answers
A nitpick: $\infty-\infty$ is not 0! It's undefined. Your limit is of the form $0/0$ though.
You can apply L'H'ôpital from the start if you like: $\lim\limits_{x\rightarrow\infty}{x-2\over x+2} =1$, and $\ln 1=0$. So $$\lim_{x\rightarrow\infty} \Bigl(x \ln{x-2\over x+2} \Bigr) =\lim_{x\rightarrow\infty} {\ln{x-2\over x+2}\over1/x} =\lim_{x\rightarrow\infty} {{x+2\over x-2}\cdot{1(x+2)-1(x-2)\over (x+2)^2} \over- 1/x^2 } =\lim_{x\rightarrow\infty} {{-4x^2\over (x+2) (x-2)} }=-4.$$ (use L'Hopital again to evaluate the limit on the right hand side if you like).
So, $$\lim_{x\rightarrow\infty}\Bigl({x-2\over x+2}\Bigr)^x =e^{ \lim\limits_{x\rightarrow\infty}\bigl(x\ln{x-2\over x+2}\bigr)}=e^{-4}.$$
To answer more directly, L'Hôpital applied to $$\lim_{x\rightarrow\infty}{\ln(x-2)-\ln(x+2)\over 1/x}$$ gives you $$\lim_{x\rightarrow\infty}{{1\over x-2}-{1\over x+2}\over- 1/x^2}.$$ Now simplify: $${{1\over x-2}-{1\over x+2}\over- 1/x^2} =-x^2\Bigl({1\over x-2}-{1\over x+2}\Bigr) = {-4x^2\over (x+2)(x-2)}.$$ So, using L'Hôpital's rule again $$\lim_{x\rightarrow\infty}{{1\over x-2}-{1\over x+2}\over- 1/x^2} =\lim_{x\rightarrow\infty} {-4x^2\over (x+2)(x-2)} =\lim_{x\rightarrow\infty} {-8x\over (x+2)+(x-2)} =\lim_{x\rightarrow\infty} {-8x\over2x}=-4.$$
-
Hint :
Rewrite limit into form :
$$\lim_{x\to\infty} \left(1+\frac{1}{\left(\frac{x+2}{-4}\right)}\right)^{\left(\frac{x+2}{-4}\right) \cdot \left(\frac{-4x}{x+2}\right)}$$
-
This can be done using only the definition of $e$, $$e = \lim_{n\to\infty}(1+1/n)^n.$$ Notice that this implies immediately that $1/e = \lim_{n\to\infty}(1-1/n)^n$ and, more generally, $$\lim_{n\to\infty} (1+ a/n)^n = e^{a n}.$$ We find $$\lim_{x\to\infty} \left(\frac{x-2}{x+2}\right)^x = \lim_{x\to\infty} \left(\frac{1-2/x}{1+2/x}\right)^x = \frac{e^{-2}}{e^2}$$ and so $$\lim_{x\to\infty} \left(\frac{x-2}{x+2}\right)^x = \frac{1}{e^4}.$$
-
$$\lim_{x\to\infty} (\frac{x-2}{x+2})^x$$
$$\lim_{x\to\infty} (1-\frac{4}{x+2})^x = y$$
taking log on both sides we get
$$ln(y) = x ln (1- \frac{4}{x+2})$$
the expansion for $ln (1+r)$ is $r- \frac{r^2}{2} +\frac{r^3}{3}$ .... where r tends to zero
$$ln(y) = x ( \frac{-4}{x+2} - \frac{\frac{-4}{x+2}^2}{2} +\frac{\frac{-4}{x+2}^3}{3} ....)$$
$ln (y) = \frac{-4x}{x+2}$ {rest all terms will terminate to zero}
$$ln (y) =\lim_{x\to\infty} \frac{-4x}{x+2} = -4$$
$$y = \frac{1}{e^4}$$
-
you can use $$\left( \frac{x-2}{x+2}\right)^x = \left(1 - \frac{4}{x+2}\right)^x$$ and $(1 + \frac ax)^x \to \exp(a)$,
HTH, AB
-
If you want to use LHopital then $\lim_{u\to 0} \frac{\ln(1+u)}{u}=1$ by Lhopital's rule.
$l= \lim_{x\to \infty} (\frac{x-2}{x+2})^x=\lim_{x\to \infty} \exp((x+2)\ln(1-\frac{4}{x+2})-2\ln(1-\frac{4}{x+2}))$
For $u = -\frac{4}{x+2}:$ $l= \lim_{u\to 0}\exp(-4\times\frac{\ln(1+u)}{u}-2\ln(1+u))=\exp(-4)$
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 22, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.872261643409729, "perplexity_flag": "middle"}
|
http://en.wikipedia.org/wiki/Stress_(physics)
|
# Stress (mechanics)
(Redirected from Stress (physics))
Built-in stress inside a plastic protractor, revealed by its effect on polarized light.
In continuum mechanics, stress is a physical quantity that expresses the internal forces that neighboring particles of a continuous material exert on each other. For example, when a solid vertical bar is supporting a weight, each particle in the bar pulls on the particles immediately above and below it. When a liquid is under pressure, each particle gets pushed inwards by all the surrounding particles, and, in reaction, pushes them outwards. These forces are actually the average of a very large number of intermolecular forces and collisions between the molecules in those particles.
Stress inside a body may arise by various mechanisms, such as reaction to external forces applied to the bulk material (like gravity) or to its surface (like contact forces, external pressure, or friction). Any strain (deformation) of a solid material generates an internal elastic stress, analogous to the reaction force of a spring, that tends to restore the material to its original undeformed state. In liquids and gases, only deformations that change the volume generate persistent elastic stress. However, if the deformation is gradually changing with time, even in fluids there will usually be some viscous stress, opposing that change. Elastic and viscous stresses are usually combined under the name mechanical stress.
Significant stress may exist even when deformation is negligible (a common assumption when modeling the flow of water) or non-existent. Stress may exist in the absence of external forces; such built-in stress is important, for example, in prestressed concrete and tempered glass. Stress may also be imposed on a material without the application of net forces, for example by changes in temperature or chemical composition, or by external electromagnetic fields (as in piezoelectric and magnetostrictive materials).
The stress across a surface element (yellow disk) is the force that the material on one side (top ball) exerts on the material on the other side (bottom ball), divided by the area of the surface.
Quantitatively, the stress is expressed by the Cauchy traction vector T defined as the traction force F between adjacent parts of the material across an imaginary separating surface S, divided by the area of S.[1]:p.41–50 In a fluid at rest the force is perpendicular to the surface, and is the familiar pressure. In a solid, or in a flow of viscous liquid, the force F may not be perpendicular to S; hence the stress across a surface must be regarded a vector quantity, not a scalar. Moreover, the direction and magnitude generally depend on the orientation of S. Thus the stress state of the material must be described by tensor, called the (Cauchy) stress tensor; which is a linear function that relates the normal vector n of a surface S to the stress T across S. With respect to any chosen coordinate system, the Cauchy stress tensor can be represented as a symmetric matrix of 3x3 real numbers. Even within a homogeneous body, the stress tensor may vary from place to place, and may change over time; therefore, the stress within a material is, in general, a time-varying tensor field.
The relation between mechanical stress, deformation, and the rate of change of deformation can be quite complicated, although a linear approximation may be adequate in practice if the quantities are small enough. Stress that exceeds certain strength limits of the material will result in permanent deformation (such as plastic flow, fracture, cavitation) or even change its crystal structure and chemical composition.
In some branches of engineering, the term stress is occasionally used in a looser sense as a synonym of "internal force". For example, in the analysis of trusses, it may refer to the total traction or compression force acting on a beam, rather than the force divided by the area of its cross-section.
## History
A Roman-era bridge in Switzerland.
Since ancient times humans have been consciously aware of stress inside materials. Until the 17th century the understanding of stress was largely intuitive and empirical; and yet it resulted in some surprisingly sophisticated technology, like the composite bow and glass blowing.
Over several millennia, architects and builders, in particular, learned how to put together carefully shaped wood beams and stone blocks to withstand, transmit, and distribute stress in the most effective manner, with ingenious devices such as the capitals, arches, cupolas, trusses and the flying buttresses of Gothic cathedrals.
Ancient and medieval architects did develop some geometrical methods and simple formulas to compute the proper sizes of pillars and beams, but the scientific understanding of stress became possible only after the necessary tools were invented in the 17th and 18th centuries: Galileo's rigorous experimental method, Descartes's coordinates and analytic geometry, and Newton's laws of motion and equilibrium and calculus of infinitesimals. With those tools, Cauchy was able to give the first rigorous and general mathematical model for stress in a homogeneous medium. Cauchy observed that the force across an imaginary surface was a linear function of its normal vector; and, moreover, that it must be a symmetric function (with zero total momentum).
The understanding of stress in liquids started with Newton himself, who provided a differential formula for friction forces (shear stress) in laminar parallel flow.
## Overview
### Definition
Stress is defined as the average force per unit area that some particle of a body exerts on an adjacent particle, across an imaginary surface that separates them.[2]:p.46–71
Being derived from a fundamental physical quantity (force) and a purely geometrical quantity (area), stress is also a fundamental quantity, like velocity, torque or energy, that can be quantified and analyzed without explicit consideration of the nature of the material or of its physical causes.
Following the basic premises of continuum mechanics, stress is a macroscopic concept. Namely, the particles considered in its definition and analysis should be just small enough to be treated as homogeneous in composition and state, but still large enough to ignore quantum effects and the detailed motions of molecules. Thus, the force between two particles is actually the average of a very large number of atomic forces between their molecules; and physical quantities like mass, velocity, and forces that act through the bulk of three-dimensional bodies, like gravity, are assumed to be smoothly distributed over them.[3]:p.90–106 Depending on the context, one may also assume that the particles are large enough to allow the averaging out of other microscopic features, like the grains of a metal rod or the fibers of a piece of wood.
### Normal and shear stress
Further information: compression (physical) and Shear stress
In general, the stress T that a particle P applies on another particle Q across a surface S can have any direction relative to S. The vector T may be regarded as the sum of two components: the normal stress (Compression or Tension) perpendicular to the surface, and the shear stress that is parallel to it.
If the normal unit vector n of the surface (pointing from Q towards P) is assumed fixed, the normal component can be expressed by a single number, the dot product T·n. This number will be positive if P is "pulling" on Q (tensile stress), and negative if P is "pushing" against Q (compressive stress) The shear component is then the vector T - (T·n)n.
### Units
The dimension of stress is that of pressure, and therefore its coordinates are commonly measured in the same units as pressure: namely, pascals (Pa, that is, newtons per square metre) in the International System, or pounds per square inch (psi) in the Imperial system.
### Causes and effects
Glass vase with the craquelé effect. The cracks are the result of brief but intense stress created when the semi-molten piece is briefly dipped in water.[4]
Stress in a material body may be due to multiple physical causes, including external influences and internal physical processes. Some of these agents (like gravity, changes in temperature and phase, and electromagnetic fields) act on the bulk of the material, varying continuously with position and time. Other agents (like external loads and friction, ambient pressure, and contact forces) may create stresses and forces that are concentrated on certain surfaces, lines, or points; and possibly also on very short time intervals (as in the impulses due to collisions). In general, the stress distribution in the body is expressed as a piecewise continuous function of space and time.
Conversely, stress is usually correlated with various effects on the material, possibly including changes in physical properties like birefringence, polarization, and permeability. The imposition of stress by an external agent usually creates some strain (deformation) in the material, even if it is too small to be detected. In a solid material, such strain will in turn generate an internal elastic stress, analogous to the reaction force of a stretched spring, tending to restore the material to its original undeformed state. Fluid materials (liquids, gases and plasmas) by definition can only oppose deformations that would change their volume. However, if the deformation is changing with time, even in fluids there will usually be some viscous stress, opposing that change.
The relation between stress and its effects and causes, including deformation and rate of change of deformation, can be quite complicated (although a linear approximation may be adequate in practice if the quantities are small enough). Stress that exceeds certain strength limits of the material will result in permanent deformation (such as plastic flow, fracture, cavitation) or even change its crystal structure and chemical composition.
## Simple stresses
In some situations, the stress within a body may adequately be described by a single number, or by a single vector (a number and a direction). Three such simple stress situations, that are often encountered in engineering design, are the uniaxial normal stress, the simple shear stress, and the isotropic normal stress.[5]
### Uniaxial normal stress
Idealized stress in a straight bar with uniform cross-section.
A common situation with a simple stress pattern is when a straight rod, with uniform material and cross section, is subjected to tension by opposite forces of magnitude $F$ along its axis. If the system is in equilibrium and not changing with time, and the weight of the bar can be neglected, then through each transversal section of the bar the top part must pull on the bottom part with the same force F Therefore the stress throughout the bar, across any horizontal surface, can be described by the number $\sigma$ = F/A, where A is the area of the cross-section.
On the other hand, if one imagines the bar being cut along its length, parallel to the axis, there will be no force (hence no stress) between the two halves across the cut.
This type of stress may be called (simple) normal stress or uniaxial stress; specifically, (uniaxial, simple, etc.) tensile stress.[5] If the load is compression on the bar, rather than stretching it, the analysis is the same except that the force F and the stress $\sigma$ change sign, and the stress is called compressive stress.
The ratio $\sigma = F/A$ may be only an average stress. The stress may be unevenly distributed over the cross section (m–m), especially near the attachment points (n–n).
This analysis assumes the stress is evenly distributed over the entire cross-section. In practice, depending on how the bar is attached at the ends and how it was manufactured, this assumption may not be valid. In that case, the value $\sigma$ = F/A will be only the average stress, called engineering stress or nominal stress. However, if the bar's length L is many times its diameter D, and it has no gross defects or built-in stress, then the stress can be assumed to be uniformly distributed over any cross-section that is more than a few times D from both ends. (This observation is known as the Saint-Venant's principle).
Normal stress occurs in many other situations besides axial tension and compression. If an elastic bar with uniform and symmetric cross-section is bent in one of its planes of symmetry, the resulting bending stress will still be normal (perpendicular to the cross-section), but will vary over the cross section: the outer part will be under tensile stress, while the inner part will be compressed. Another variant of normal stress is the hoop stress that occurs on the walls of a cylindrical pipe or vessel filled with pressurized fluid.
### Simple shear stress
Shear stress in a horizontal bar loaded by two offset blocks.
Another simple type of stress occurs when a uniformly thick layer of elastic material like glue or rubber is firmly attached to two stiff bodies that are pulled in opposite directions by forces parallel to the layer; or a section of a soft metal bar that is being cut by the jaws of a scissors-like tool. Let F be the magnitude of those forces, and M be the midplane of that layer. Just as in the normal stress case, the part of the layer on one side of M must pull the other part with the same force F. Assuming that the direction of the forces is known, the stress across M can be expressed by the single number $\tau$ = F/A, where F is the magnitude of those forces and A is the area of the layer.
However, unlike normal stress, this simple shear stress is directed parallel to the cross-section considered, rather than perpendicular to it.[5] For any plane S that is perpendicular to the layer, the net internal force across S, and hence the stress, will be zero.
As in the case of an axially loaded bar, in practice the shear stress may not be uniformly distributed over the layer; so, as before, the ratio F/A will only be an average ("nominal", "engineering") stress. However, that average is often sufficient for practical purposes.[6]:p.292 Shear stress is observed also when a cyindrical bar such as a shaft is subjected to opposite torques at its ends. In that case, the shear stress on each cross-section is parallel to the cross-section, but oriented tangentially relative to the axis, and increases with distance from the axis. Significant shear stress occurs in the middle plate (the "web") of I-beams under bending loads, due to the web constraining the end plates ("flanges").
### Isotropic stress
Isotropic tensile stress. Top left: Each face of a cube of homogeneous material is pulled by a force with magnitude F, applied evenly over the entire face whose area is A. The force across any section S of the cube must balance the forces applied below the section. In the three sections shown, the forces are F (top right), F$\sqrt{2}$ (bottom left), and F$\sqrt{3}/2$ (bottom right); and the area of S is A, A$\sqrt{2}$ and A$\sqrt{3}/2$, respectively. So the stress across S is F/A in all three cases.
Another simple type of stress occurs when the material body is under equal compression or tension in all directions. This is the case, for example, in a portion of liquid or gas at rest, whether enclosed in some container or as part of a larger mass of fluid; or inside a cube of elastic material that is being pressed or pulled on all six faces by equal perpendicular forces — provided, in both cases, that the material is homogeneous, without built-in stress, and that the effect of gravity and other external forces can be neglected.
In these situations, the stress across any imaginary internal surface turns out to be equal in magnitude and always directed perpendicularly to the surface independently of the surface's orientation. This type of stress may be called isotropic normal or just isotropic; if it is compressive, it is called hydrostatic pressure or just pressure. Gases by definition cannot withstand tensile stresses, but liquids may withstand very small amounts of isotropic tensile stress.
### Cylinder stresses
Parts with rotational symmetry, such as wheels, axles, pipes, and pillars, are very common in engineering. Often the stress patterns that occur in such parts have rotational or even cylindrical symmetry. The analysis of such cylinder stresses can take advantage of the symmetry to reduce the dimension of the domain and/or of the stress tensor.
## General stress
Often, mechanical bodies experience more than one type of stress at the same time; this is called combined stress. In normal and shear stress, the magnitude of the stress is maximum for surfaces that are perpendicular to a certain direction $d$, and zero across any surfaces that are parallel to $d$. When the stress is zero only across surfaces that are perpendicular to one particular direction, the stress is called biaxial, and can be viewed as the sum of two normal or shear stresses. In the most general case, called triaxial stress, the stress is nonzero across every surface element.
### The Cauchy stress tensor
Illustration of typical stresses (arrows) across various surface elements on the boundary of a particle (sphere), in a homogeneous material under uniform (but not isotropic) triaxial stress. The normal stresses on the principal axes are +5, +2, and −3 units.
Combined stresses cannot be described by a single vector. Even if the material is stressed in the same way throughout the volume of the body, the stress across any imaginary surface will depend on the orientation of that surface, in a non-trivial way.
However, Cauchy observed that the stress vector $T$ across a surface will always be a linear function of the surface's normal vector $n$, the unit-length vector that is perpendicular to it. That is, $T = \boldsymbol{\sigma}(n)$, where the function $\boldsymbol{\sigma}$ satisfies
$\boldsymbol{\sigma}(\alpha u + \beta v) = \alpha\boldsymbol{\sigma}(u) + \beta\boldsymbol{\sigma}(v)$
for any vectors $u,v$ and any real numbers $\alpha,\beta$. The function $\boldsymbol{\sigma}$, now called the (Cauchy) stress tensor, completely describes the stress state of a uniformly stressed body. (Today, any linear connection between two physical vector quantities is called a tensor, reflecting Cauchy's original use to describe the "tensions" (stresses) in a material.) In tensor calculus, $\boldsymbol{\sigma}$ is classified as second-order tensor of type (0,2).
Like any linear map between vectors, the stress tensor can be represented in any chosen Cartesian coordinate system by a 3×3 matrix of real numbers. Depending on whether the coordinates are numbered $x_1,x_2,x_3$ or named $x,y,z$, the matrix may be written as
$\begin{bmatrix} \sigma _{11} & \sigma _{12} & \sigma _{13} \\ \sigma _{21} & \sigma _{22} & \sigma _{23} \\ \sigma _{31} & \sigma _{32} & \sigma _{33} \end{bmatrix} \quad\quad\quad$ or $\quad\quad\quad \begin{bmatrix} \sigma _{xx} & \sigma _{xy} & \sigma _{xz} \\ \sigma _{yx} & \sigma _{yy} & \sigma _{yz} \\ \sigma _{zx} & \sigma _{zy} & \sigma _{zz} \\ \end{bmatrix}$
The stress vector $T = \boldsymbol{\sigma}(n)$ across a surface with normal vector $n$ with coordinates $n_1,n_2,n_3$ is then a matrix product $T = \boldsymbol{\sigma} n$, that is
$\begin{bmatrix} T_1\\T_2 \\ T_3 \end{bmatrix} = \begin{bmatrix} \sigma_{11} & \sigma_{21} & \sigma_{31} \\ \sigma_{12} & \sigma_{22} & \sigma_{32} \\ \sigma_{13} & \sigma_{23} & \sigma_{33} \end{bmatrix} \begin{bmatrix} n_1\\n_2 \\ n_3 \end{bmatrix}$
The linear relation between $T$ and $n$ follows from the fundamental laws of conservation of linear momentum and static equilibrium of forces, and is therefore mathematically exact, for any material and any stress situation. The components of the Cauchy stress tensor at every point in a material satisfy the equilibrium equations (Cauchy’s equations of motion for zero acceleration). Moreover, the principle of conservation of angular momentum implies that the stress tensor is symmetric, that is $\sigma_{12} = \sigma_{21}$, $\sigma_{13} = \sigma_{31}$, and $\sigma_{23} = \sigma_{32}$. Therefore, the stress state of the medium at any point and instant can be specified by only six independent parameters, rather than nine. These may be written
$\begin{bmatrix} \sigma_x & \tau_{xy} & \tau_{xz} \\ \tau_{xy} & \sigma_y & \tau_{yz} \\ \tau_{xz} & \tau_{yz} & \sigma_z \end{bmatrix}$
where the elements $\sigma_x,\sigma_y,\sigma_z$ are called the orthogonal normal stresses (relative to the chosen coordinate system), and $\tau_{xy}, \tau_{xz},\tau_{yz}$ the orthogonal shear stresses.
### Change of coordinates
The Cauchy stress tensor obeys the tensor transformation law under a change in the system of coordinates. A graphical representation of this transformation law is the Mohr's circle of stress distribution.
As a symmetric 3×3 real matrix, the stress tensor $\boldsymbol{\sigma}$ has three mutually orthogonal unit-length eigenvectors $e_1,e_2,e_3$ and three real eigenvalues $\lambda_1,\lambda_2,\lambda_3$, such that $\boldsymbol{\sigma} e_i = \lambda_i e_i$. Therefore, in a coordinate system with axes $e_1,e_2,e_3$, the stress tensor is a diagonal matrix, and has only the three normal components $\lambda_1,\lambda_2,\lambda_3$ the principal stresses. If the three eigenvalues are equal, the stress is an isotropic compression or tension, always perpendicular to any surface; there is no shear stress, and the tensor is a diagonal matrix in any coordinate frame.
### Stress as a tensor field
In general, stress is not uniformly distributed over a material body, and may vary with time. Therefore the stress tensor must be defined for each point and each moment, by considering an infinitesimal particle of the medium surrounding that point, and taking the average stresses in that particle as being the stresses at the point.
### Stress in thin plates
A tank car made from bent and welded steel plates.
Man-made objects are often made from stock plates of various materials by operations that do not change their essentially two-dimensional character, like cutting, drilling, gentle bending and welding along the edges. The description of stress in such bodies can be simplified by modeling those parts as two-dimensional surfaces rather than three-dimensional bodies.
In that view, one redefines a "particle" as being an infinitesimal patch of the plate's surface, so that the boundary between adjacent particles becomes an infinitesimal line element; both are implicitly extended in the third dimension, straight through the plate. "Stress" is then redefined as being a measure of the internal forces between two adjacent "particles" across their common line element, divided by the length of that line. Some components of the stress tensor can be ignored, but since particles are not infinitesimal in the third dimension one can no longer ignore the torque that a particle applies on its neighbors. That torque is modeled as a bending stress that tends to change the curvature of the plate. However, these simplifications may not hold at welds, at sharp bends and creases (where the radius of curvature is comparable to the thickness of the plate).
### Stress in thin beams
For stress modeling, a fishing pole may be considered one-dimensional.
The analysis of stress can be considerably simplified also for thin bars, beams or wires of uniform (or smoothly varying) composition and cross-section that are subjected to moderate bending and twisting. For those bodies may consider only cross-sections that are perpendicular to the bar's axis, and redefine a "particle" as being a piece of wire with infinitesimal length between two such cross sections. The ordinary stress is then reduced to a scalar (tension or compression of the bar), but one must take into account also a bending stress (that tries to change the bar's curvature, in some direction perpendicular to the axis) and a torsional stress (that tries to twist or un-twist it about its axis).
### Other descriptions of stress
The Cauchy stress tensor is used for stress analysis of material bodies experiencing small deformations where the differences in stress distribution in most cases can be neglected. For large deformations, also called finite deformations, other measures of stress, such as the first and second Piola–Kirchhoff stress tensors, the Biot stress tensor, and the Kirchhoff stress tensor, are required.
Solids, liquids, and gases have stress fields. Static fluids support normal stress but will flow under shear stress. Moving viscous fluids can support shear stress (dynamic pressure). Solids can support both shear and normal stress, with ductile materials failing under shear and brittle materials failing under normal stress. All materials have temperature dependent variations in stress-related properties, and non-Newtonian materials have rate-dependent variations.
## Stress analysis
Stress analysis is a branch of applied physics that covers the determination of the internal distribution of stresses in solid objects. It is an essential tool in engineering for the study and design of structures such as tunnels, dams, mechanical parts, and structural frames, under prescribed or expected loads. It is also important in many other disciplines; for example, in geology, to study phenomena like plate tectonics, vulcanism and avalanches; and in biology, to understand the anatomy of living beings.
### Goals and assumptions
Stress analysis is generally concerned with objects and structures that can be assumed to be in macroscopic static equilibrium. By Newton's laws of motion, any external forces are being applied to such a system must be balanced by internal reaction forces,[7]:p.97 which are almost always surface contact forces between adjacent particles — that is, as stress.[1] Since every particle needs to be in equilibrium, this reaction stress will generally propagate from particle, creating a stress distribution throughout the body.
The typical problem in stress analysis is to determine these internal stresses, given the external forces that are acting on the system. The latter may be body forces (such as gravity or magnetic attraction), that act throughout the volume of a material;[8]:p.42–81 or concentrated loads (such as friction between an axle and a bearing, or the weight of a train wheel on a rail), that are imagined to act over a two-dimensional area, or along a line, or at single point.
In stress analysis one normally disregards the physical causes of the forces or the precise nature of the materials. Instead, one assumes that the stresses are related to deformation (and, in non-static problems, to the rate of deformation) of the material by known constitutive equations.[9]
### Methods
Stress analysis may be carried out experimentally, by applying loads to the actual artifact or to scale model, and measuring the resulting stresses, by any of several available methods. This approach is often used for safety certification and monitoring. However, most stress analysis is done by mathematical methods, especially during design.
The basic stress analysis problem can be formulated by Euler's equations of motion for continuous bodies (which are consequences of Newton's laws for conservation of linear momentum and angular momentum) and the Euler-Cauchy stress principle, together with the appropriate constitutive equations. Thus one obtains a system of partial differential equations involving the stress tensor field and the strain tensor field, as unknown functions to be determined. The external body forces appear as the independent ("right-hand side") term in the differential equations, while the concentrated forces appear as boundary conditions. The basic stress analysis problem is therefore a boundary-value problem.
Stress analysis for elastic structures is based on the theory of elasticity and infinitesimal strain theory. When the applied loads cause permanent deformation, one must use more complicated constitutive equations, that can account for the physical processes involved (plastic flow, fracture, phase change, etc.).
However, engineered structures are usually designed so that the maximum expected stresses are well within the range of linear elasticity (the generalization of Hooke’s law for continuous media); that is, the deformations caused by internal stresses are linearly related to them. In this case the differential equations that define the stress tensor are linear, and the problem becomes much easier. For one thing, the stress at any point will be a linear function of the loads, too. For small enough stresses, even non-linear systems can usually be assumed to be linear.
Simplified model of a truss for stress analysis, assuming unidimensional elements under uniform axial tension or compression.
Stress analysis is simplified when the physical dimensions and the distribution of loads allow the structure to be treated as one- or two-dimensional. In the analysis of trusses, for example, the stress field may be assumed to be uniform and uniaxial over each member. Then the differential equations reduce to a finite set of equations (usually linear) with finitely many unknowns. In other contexts one may be able to reduce the three-dimensional problem to a two-dimensional one, and/or replace the general stress and strain tensors by simpler models like uniaxial tension/compression, simple shear, etc.
Still, for two- or three-dimensional cases one must solve a partial differential equation problem. Anlytical or closed-form solutions to the differential equations can be obtained when the geometry, constitutive relations, and boundary conditions are simple enough. Otherwise one must generally resort to numerical approximations such as the finite element method, the finite difference method, and the boundary element method.
## Theoretical background
The mathematical description of stress is founded on Euler's laws for the motion of continuous bodies. They can be derived from Newton's laws, but may also be taken as axioms describing the motions of such bodies.[10]
## Alternative measures of stress
Main article: Stress measures
Other useful stress measures include the first and second Piola–Kirchhoff stress tensors, the Biot stress tensor, and the Kirchhoff stress tensor.
### Piola–Kirchhoff stress tensor
In the case of finite deformations, the Piola–Kirchhoff stress tensors express the stress relative to the reference configuration. This is in contrast to the Cauchy stress tensor which expresses the stress relative to the present configuration. For infinitesimal deformations or rotations, the Cauchy and Piola–Kirchhoff tensors are identical.
Whereas the Cauchy stress tensor, $\boldsymbol{\sigma}$ relates stresses in the current configuration, the deformation gradient and strain tensors are described by relating the motion to the reference configuration; thus not all tensors describing the state of the material are in either the reference or current configuration. Describing the stress, strain and deformation either in the reference or current configuration would make it easier to define constitutive models (for example, the Cauchy Stress tensor is variant to a pure rotation, while the deformation strain tensor is invariant; thus creating problems in defining a constitutive model that relates a varying tensor, in terms of an invariant one during pure rotation; as by definition constitutive models have to be invariant to pure rotations). The 1st Piola–Kirchhoff stress tensor, $\boldsymbol{P}$ is one possible solution to this problem. It defines a family of tensors, which describe the configuration of the body in either the current or the reference state.
The 1st Piola–Kirchhoff stress tensor, $\boldsymbol{P}$ relates forces in the present configuration with areas in the reference ("material") configuration.
$\boldsymbol{P} = J~\boldsymbol{\sigma}~\boldsymbol{F}^{-T} ~$
where $\boldsymbol{F}$ is the deformation gradient and $J= \det\boldsymbol{F}$ is the Jacobian determinant.
In terms of components with respect to an orthonormal basis, the first Piola–Kirchhoff stress is given by
$P_{iL} = J~\sigma_{ik}~F^{-1}_{Lk} = J~\sigma_{ik}~\cfrac{\partial X_L}{\partial x_k}~\,\!$
Because it relates different coordinate systems, the 1st Piola–Kirchhoff stress is a two-point tensor. In general, it is not symmetric. The 1st Piola–Kirchhoff stress is the 3D generalization of the 1D concept of engineering stress.
If the material rotates without a change in stress state (rigid rotation), the components of the 1st Piola–Kirchhoff stress tensor will vary with material orientation.
The 1st Piola–Kirchhoff stress is energy conjugate to the deformation gradient.
#### 2nd Piola–Kirchhoff stress tensor
Whereas the 1st Piola–Kirchhoff stress relates forces in the current configuration to areas in the reference configuration, the 2nd Piola–Kirchhoff stress tensor $\boldsymbol{S}$ relates forces in the reference configuration to areas in the current configuration. The force in the reference configuration is obtained via a mapping that preserves the relative relationship between the force direction and the area normal in the current configuration.
$\boldsymbol{S} = J~\boldsymbol{F}^{-1}\cdot\boldsymbol{\sigma}\cdot\boldsymbol{F}^{-T} ~.$
In index notation with respect to an orthonormal basis,
$S_{IL}=J~F^{-1}_{Ik}~F^{-1}_{Lm}~\sigma_{km} = J~\cfrac{\partial X_I}{\partial x_k}~\cfrac{\partial X_L}{\partial x_m}~\sigma_{km} \!\,\!$
This tensor is symmetric.
If the material rotates without a change in stress state (rigid rotation), the components of the 2nd Piola–Kirchhoff stress tensor remain constant, irrespective of material orientation.
The 2nd Piola–Kirchhoff stress tensor is energy conjugate to the Green–Lagrange finite strain tensor.
## See also
Continuum mechanics
Laws
Scientists
## Further reading
• Chakrabarty, J. (2006). Theory of plasticity (3 ed.). Butterworth-Heinemann. pp. 17–32. ISBN 0-7506-6638-2.
• Beer, Ferdinand Pierre; Elwood Russell Johnston, John T. DeWolf (1992). Mechanics of Materials. McGraw-Hill Professional. ISBN 0-07-112939-1.
• Brady, B.H.G.; E.T. Brown (1993). Rock Mechanics For Underground Mining (Third ed.). Kluwer Academic Publisher. pp. 17–29. ISBN 0-412-47550-2.
• Chen, Wai-Fah; Baladi, G.Y. (1985). Soil Plasticity, Theory and Implementation. ISBN 0-444-42455-5, 0444416625 Check `|isbn=` value (help).
• Chou, Pei Chi; Pagano, N.J. (1992). Elasticity: tensor, dyadic, and engineering approaches. Dover books on engineering. Dover Publications. pp. 1–33. ISBN 0-486-66958-0.
• Davis, R. O.; Selvadurai. A. P. S. (1996). Elasticity and geomechanics. Cambridge University Press. pp. 16–26. ISBN 0-521-49827-9.
• Dieter, G. E. (3 ed.). (1989). Mechanical Metallurgy. New York: McGraw-Hill. ISBN 0-07-100406-8.
• Holtz, Robert D.; Kovacs, William D. (1981). An introduction to geotechnical engineering. Prentice-Hall civil engineering and engineering mechanics series. Prentice-Hall. ISBN 0-13-484394-0.
• Jones, Robert Millard (2008). Deformation Theory of Plasticity. Bull Ridge Corporation. pp. 95–112. ISBN 0-9787223-1-0.
• Jumikis, Alfreds R. (1969). Theoretical soil mechanics: with practical applications to soil mechanics and foundation engineering. Van Nostrand Reinhold Co. ISBN 0-442-04199-3.
• Landau, L.D. and E.M.Lifshitz. (1959). Theory of Elasticity.
• Love, A. E. H. (4 ed.). (1944). Treatise on the Mathematical Theory of Elasticity. New York: Dover Publications. ISBN 0-486-60174-9.
• Marsden, J. E.; Hughes, T. J. R. (1994). Mathematical Foundations of Elasticity. Dover Publications. pp. 132–142. ISBN 0-486-67865-2.
• Parry, Richard Hawley Grey (2004). Mohr circles, stress paths and geotechnics (2 ed.). Taylor & Francis. pp. 1–30. ISBN 0-415-27297-1.
• Rees, David (2006). Basic Engineering Plasticity – An Introduction with Engineering and Manufacturing Applications. Butterworth-Heinemann. pp. 1–32. ISBN 0-7506-8025-3.
• Timoshenko, Stephen P.; James Norman Goodier (1970). Theory of Elasticity (Third ed.). McGraw-Hill International Editions. ISBN 0-07-085805-5.
• Timoshenko, Stephen P. (1983). History of strength of materials: with a brief account of the history of theory of elasticity and theory of structures. Dover Books on Physics. Dover Publications. ISBN 0-486-61187-6.
## References
1. ^ a b I-Shih Liu (2002), "Continuum Mechanics". Springer ISBN 3-540-43019-9
2. Wai-Fah Chen and Da-Jian Han (2007), "Plasticity for Structural Engineers". J. Ross Publishing ISBN 1-932159-75-4
3. Peter Chadwick (1999), "Continuum Mechanics: Concise Theory and Problems". Dover Publications, series "Books on Physics". ISBN 0-486-40180-4. pages
4. ^ a b c
5. Fridtjov Irgens (2008), "Continuum Mechanics". Springer. ISBN 3-540-74297-2
6. Slaughter
7. Jacob Lubliner (2008). "Plasticity Theory" (revised edition). Dover Publications. ISBN 0-486-46290-0
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 54, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9046130776405334, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/algebra/64904-simplify-expression-without-using-zero-negative-indices.html
|
# Thread:
1. ## Simplify expression without using zero or negative indices
Hi! I'm new to the forum, I'm learning maths from some textbooks over the aussie summer holidays and have no teacher or tutor, so I'm in here bothering you guys
I have to simplify some expressions without using 0 or a negative exponent, and got to this one:
(2a^-1/3b^-2)^-2
When I worked through it I got 4a^2/9b^4 as the answer, but the textbook says the answer is 9a^2/4b^4. My working went as follows.
(2a^-1/3b^-2)^-2
=
(3b^2/2a)^2
=
(3b^2)^-2/(2a)^-2
=
9b^2*2/4a^-2
=
9b^-4/4a^-2
=
4a^2/9b^-4
I've worked through it a few times and can't figure it out. Can someone tell me where I went wrong? I want to sort it out before I go further.
p.s. I'd use the [tex] codes but they're not showing negative exponents properly when I preview. sorry
2. Hello,
There are many typos or mistakes. I'll correct them in red.
And that will be assuming that what you have to calculate is :
$\left( \frac{2a^{-1}}{3b^{-2}} \right)^{-2}$
(see the code by clicking on it)
Originally Posted by spishak
Hi! I'm new to the forum, I'm learning maths from some textbooks over the aussie summer holidays and have no teacher or tutor, so I'm in here bothering you guys
I have to simplify some expressions without using 0 or a negative exponent, and got to this one:
(2a^-1/3b^-2)^-2
When I worked through it I got 4a^2/9b^4 as the answer, but the textbook says the answer is 9a^2/4b^4. My working went as follows.
(2a^-1/3b^-2)^-2
=
(2b^2/3a)^-2 << since it's 3b^(-2) and not (3b)^(-2), there is no negative exponent over 2 and 3.
=
(2b^2)^-2/(3a)^-2
=
9b^2*(-2)/4a^-2
=
9b^-4/4a^-2
=
9a^2/4b^+4
I've worked through it a few times and can't figure it out. Can someone tell me where I went wrong? I want to sort it out before I go further.
p.s. I'd use the [tex] codes but they're not showing negative exponents properly when I preview. sorry
Okay, I'll try to write down something that is as near as possible from what you have written.
$\begin{aligned}<br /> \left( \frac{2 {\color{blue}a^{-1}}}{3 {\color{blue}b^{-2}}} \right)^{-2}<br /> &= \left( \frac{2 b^2}{3a} \right)^{-2} \\<br /> &= \left( \frac{3a}{2 b^2} \right)^{2} \\<br /> &= \frac{3^2 a^2}{2^2 b^{2*2}} \\<br /> &= \frac{9a^2}{4b^4} \end{aligned}$
Does it look clear ? Do tell me if there's something you don't understand !
Yes, it's clear! Sorry about the poor syntax, I'm very new to serious maths.
Your interpretation was correct, and I still have to use my way of writing, but if I need to make any more posts after this I will learn and use the markup language the forum uses for my expressions.
As for your correction, i think i got it.
the nature of the term "2a^-1" is such that it could also be written correctly as "2^1*a^-1" and i was interpreting it as "(2a)^-1". i think i get it. thanks a bunch!
4. Originally Posted by spishak
Your interpretation was correct, and I still have to use my way of writing, but if I need to make any more posts after this I will learn and use the markup language the forum uses for my expressions.
It comes with the habit
the nature of the term "2a^-1" is such that it could also be written correctly as "2^1*a^-1" and i was interpreting it as "(2a)^-1". i think i get it. thanks a bunch!
Exactly !
Good luck with your studies !
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9680147767066956, "perplexity_flag": "middle"}
|
http://readtiger.com/wkp/en/Gas
|
# Gas
Gas phase particles (atoms, molecules, or ions) move around freely in the absence of an applied electric field.
Continuum mechanics
Laws
Scientists
Gas is one of the four fundamental states of matter (the others being solid, liquid, and plasma). A pure gas may be made up of individual atoms (e.g. a noble gas or atomic gas like neon), elemental molecules made from one type of atom (e.g. oxygen), or compound molecules made from a variety of atoms (e.g. carbon dioxide). A gas mixture would contain a variety of pure gases much like the air. What distinguishes a gas from liquids and solids is the vast separation of the individual gas particles. This separation usually makes a colorless gas invisible to the human observer. The interaction of gas particles in the presence of electric and gravitational fields are considered negligible as indicated by the constant velocity vectors in the image.
The gaseous state of matter is found between the liquid and plasma states,[1] the latter of which provides the upper temperature boundary for gases. Bounding the lower end of the temperature scale lie degenerative quantum gases[2] which are gaining increasing attention.[3] High-density atomic gases super cooled to incredibly low temperatures are classified by their statistical behavior as either a Bose gas or a Fermi gas. For a comprehensive listing of these exotic states of matter see list of states of matter.
## Etymology
The word gas is a neologism first used by the early 17th century Flemish chemist J.B. Van Helmont.[4] Van Helmont's word appears to have been simply a phonetic transcription of the Greek word χάος Chaos – the g in Dutch being pronounced like the English ch – in which case Van Helmont was simply following the established alchemical usage first attested in the works of Paracelsus. According to Paracelsus's terminology, chaos meant something like "ultra-rarefied water".[5]
## Physical characteristics
Drifting smoke particles provide clues to the movement of the surrounding gas.
As most gases are difficult to observe directly, they are described through the use of four physical properties or macroscopic characteristics: pressure, volume, number of particles (chemists group them by moles) and temperature. These four characteristics were repeatedly observed by scientists such as Robert Boyle, Jacques Charles, John Dalton, Joseph Gay-Lussac and Amedeo Avogadro for a variety of gases in various settings. Their detailed studies ultimately led to a mathematical relationship among these properties expressed by the ideal gas law (see simplified models section below).
Gas particles are widely separated from one another, and consequently have weaker intermolecular bonds than liquids or solids. These intermolecular forces result from electrostatic interactions between gas particles. Like-charged areas of different gas particles repel, while oppositely charged regions of different gas particles attract one another; gases that contain permanently charged ions are known as plasmas. Gaseous compounds with polar covalent bonds contain permanent charge imbalances and so experience relatively strong intermolecular forces, although the molecule while the compound's net charge remains neutral. Transient, randomly-induced charges exist across non-polar covalent bonds of molecules and electrostatic interactions caused by them are referred to as Van der Waals forces. The interaction of these intermolecular forces varies within a substance which determines many of the physical properties unique to each gas.[6][7] A comparison of boiling points for compounds formed by ionic and covalent bonds leads us to this conclusion.[8] The drifting smoke particles in the image provides some insight into low pressure gas behavior.
Compared to the other states of matter, gases have low density and viscosity. Pressure and temperature influence the particles within a certain volume. This variation in particle separation and speed is referred to as compressibility. This particle separation and size influences optical properties of gases as can be found in the following list of refractive indices. Finally, gas particles spread apart or diffuse in order to homogeneously distribute themselves throughout any container.
## Macroscopic
Shuttle imagery of re-entry phase.
When observing a gas, it is typical to specify a frame of reference or length scale. A larger length scale corresponds to a macroscopic or global point of view of the gas. This region (referred to as a volume) must be sufficient in size to contain a large sampling of gas particles. The resulting statistical analysis of this sample size produces the "average" behavior (i.e. velocity, temperature or pressure) of all the gas particles within the region. In contrast, a smaller length scale corresponds to a microscopic or particle point of view.
Macroscopically, the gas characteristics measured are either in terms of the gas particles themselves (velocity, pressure, or temperature) or their surroundings (volume). For example, Robert Boyle studied pneumatic chemistry for a small portion of his career. One of his experiments related the macroscopic properties of pressure and volume of a gas. His experiment used a J-tube manometer which looks like a test tube in the shape of the letter J. Boyle trapped an inert gas in the closed end of the test tube with a column of mercury, thereby making the number of particles and the temperature constant. He observed that when the pressure was increased in the gas, by adding more mercury to the column, the trapped gas' volume decreased (this is known as an inverse relationship). Furthermore, when Boyle multiplied the pressure and volume of each observation, the product was constant. This relationship held for every gas that Boyle observed leading to the law, (PV=k), named to honor his work in this field.
There are many mathematical tools available for analyzing gas properties. As gases are subjected to extreme conditions, these tools become a bit more complex, from the Euler equations for inviscid flow to the Navier-Stokes equations[9] that fully account for viscous effects. These equations are adapted to the conditions of the gas system in question. Boyle's lab equipment allowed the use of algebra to obtain his analytical results. His results were possible because he was studying gases in relatively low pressure situations where they behaved in an "ideal" manner. These ideal relationships apply to safety calculations for a variety of flight conditions on the materials in use. The high technology equipment in use today was designed to help us safely explore the more exotic operating environments where the gases no longer behave in an "ideal" manner. This advanced math, including statistics and multivariable calculus, makes possible the solution to such complex dynamic situations as space vehicle reentry. An example is the analysis of the space shuttle reentry pictured to ensure the material properties under this loading condition are appropriate. In this flight regime, the gas is no longer behaving ideally.
### Pressure
Main article: Pressure
The symbol used to represent pressure in equations is "p" or "P" with SI units of pascals.
When describing a container of gas, the term pressure (or absolute pressure) refers to the average force per unit area that the gas exerts on the surface of the container. Within this volume, it is sometimes easier to visualize the gas particles moving in straight lines until they collide with the container (see diagram at top of the article). The force imparted by a gas particle into the container during this collision is the change in momentum of the particle.[10] During a collision only the normal (geometry) component of velocity changes. A particle traveling parallel to the wall does not change its momentum. Therefore the average force on a surface must be the average change in linear momentum from all of these gas particle collisions.
Pressure is the sum of all the normal components of force exerted by the particles impacting the walls of the container divided by the surface area of the wall.
### Temperature
Air balloon shrinks after submersion in liquid nitrogen
Main article: Thermodynamic temperature
The symbol used to represent temperature in equations is T with SI units of kelvins.
The speed of a gas particle is proportional to its absolute temperature. The volume of the balloon in the video shrinks when the trapped gas particles slow down with the addition of extremely cold nitrogen. The temperature of any physical system is related to the motions of the particles (molecules and atoms) which make up the [gas] system.[11] In statistical mechanics, temperature is the measure of the average kinetic energy stored in a particle. The methods of storing this energy are dictated by the degrees of freedom of the particle itself (energy modes). Kinetic energy added (endothermic process) to gas particles by way of collisions produces linear, rotational, and vibrational motion. In contrast, a molecule in a solid can only increase its vibrational modes with the addition of heat as the lattice crystal structure prevents both linear and rotational motions. These heated gas molecules have a greater speed range which constantly varies due to constant collisions with other particles. The speed range can be described by the Maxwell-Boltzmann distribution. Use of this distribution implies ideal gases near thermodynamic equilibrium for the system of particles being considered.
### Specific volume
Main article: Specific volume
The symbol used to represent specific volume in equations is "v" with SI units of cubic meters per kilogram.
See also: Gas volume
The symbol used to represent volume in equations is "V" with SI units of cubic meters.
When performing a thermodynamic analysis, it is typical to speak of intensive and extensive properties. Properties which depend on the amount of gas (either by mass or volume) are called extensive properties, while properties that do not depend on the amount of gas are called intensive properties. Specific volume is an example of an intensive property because it is the ratio of volume occupied by a unit of mass of a gas that is identical throughout a system at equilibrium.[12] 1000 atoms a gas occupy the same space as any other 1000 atoms for any given temperature and pressure. This concept is easier to visualize for solids such as iron which are incompressible compared to gases. Since a gas fills any container in which it is placed, volume is an extensive property.
### Density
Main article: Density
The symbol used to represent density in equations is ρ (rho) with SI units of kilograms per cubic meter. This term is the reciprocal of specific volume.
Since gas molecules can move freely within a container, their mass is normally characterized by density. Density is the amount of mass per unit volume of a substance, or the inverse of specific volume. For gases, the density can vary over a wide range because the particles are free to move closer together when constrained by pressure or volume. This variation of density is referred to as compressibility. Like pressure and temperature, density is a state variable of a gas and the change in density during any process is governed by the laws of thermodynamics. For a static gas, the density is the same throughout the entire container. Density is therefore a scalar quantity. It can be shown by kinetic theory that the density is inversely proportional to the size of the container in which a fixed mass of gas is confined. In this case of a fixed mass, the density decreases as the volume increases.
## Microscopic
If one could observe a gas under a powerful microscope, one would see a collection of particles (molecules, atoms, ions, electrons, etc.) without any definite shape or volume that are in more or less random motion. These neutral gas particles only change direction when they collide with another particle or with the sides of the container. In an ideal gas, these collisions are perfectly elastic. This particle or microscopic view of a gas is described by the Kinetic-molecular theory. The assumptions behind this theory can be found in the postulates section of Kinetic Theory.
### Kinetic theory
Main article: Kinetic theory
Kinetic theory provides insight into the macroscopic properties of gases by considering their molecular composition and motion. Starting with the definitions of momentum and kinetic energy,[13] one can use the conservation of momentum and geometric relationships of a cube to relate macroscopic system properties of temperature and pressure to the microscopic property of kinetic energy per molecule. The theory provides averaged values for these two properties.
The theory also explains how the gas system responds to change. For example, as a gas is heated from absolute zero, when it is (in theory) perfectly still, its internal energy (temperature) is increased. As a gas is heated, the particles speed up and its temperature rises. This results in greater numbers of collisions with the container per unit time due to the higher particle speeds associated with elevated temperatures. The pressure increases in proportion to the number of collisions per unit time.
### Brownian motion
Random motion of gas particles results in diffusion.
Main article: Brownian motion
Brownian motion is the mathematical model used to describe the random movement of particles suspended in a fluid. The gas particle animation, using pink and green particles, illustrates how this behavior results in the spreading out of gases (entropy). These events are also described by particle theory.
Since it is at the limit of (or beyond) current technology to observe individual gas particles (atoms or molecules), only theoretical calculations give suggestions about how they move, but their motion is different from Brownian motion because Brownian motion involves a smooth drag due to the frictional force of many gas molecules, punctuated by violent collisions of an individual (or several) gas molecule(s) with the particle. The particle (generally consisting of millions or billions of atoms) thus moves in a jagged course, yet not so jagged as would be expected if an individual gas molecule were examined.
### Intermolecular forces
When gases are compressed, intermolecular forces like those shown here start to play a more active role.
Main articles: van der Waals force and Intermolecular force
As discussed earlier, momentary attractions (or repulsions) between particles have an effect on gas dynamics. In physical chemistry, the name given to these intermolecular forces is van der Waals force. These forces play a key role in determining physical properties of a gas such as viscosity and flow rate (see physical characteristics section). Ignoring these forces in certain conditions (see Kinetic-molecular theory) allows a real gas to be treated like an ideal gas. This assumption allows the use of ideal gas laws which greatly simplifies calculations.
Proper use of these gas relationships requires the Kinetic-molecular theory (KMT). When gas particles possess a magnetic charge or Intermolecular force they gradually influence one another as the spacing between them is reduced (the hydrogen bond model illustrates one example). In the absence of any charge, at some point when the spacing between gas particles is greatly reduced they can no longer avoid collisions between themselves at normal gas temperatures. Another case for increased collisions among gas particles would include a fixed volume of gas, which upon heating would contain very fast particles. This means that these ideal equations provide reasonable results except for extremely high pressure (compressible) or high temperature (ionized) conditions. Notice that all of these excepted conditions allow energy transfer to take place within the gas system. The absence of these internal transfers is what is referred to as ideal conditions in which the energy exchange occurs only at the boundaries of the system. Real gases experience some of these collisions and intermolecular forces. When these collisions are statistically negligible (incompressible), results from these ideal equations are still meaningful. If the gas particles are compressed into close proximity they behave more like a liquid (see fluid dynamics).
## Simplified models
Main article: Equation of state
An equation of state (for gases) is a mathematical model used to roughly describe or predict the state properties of a gas. At present, there is no single equation of state that accurately predicts the properties of all gases under all conditions. Therefore, a number of much more accurate equations of state have been developed for gases in specific temperature and pressure ranges. The "gas models" that are most widely discussed are "perfect gas", "ideal gas" and "real gas". Each of these models has its own set of assumptions to facilitate the analysis of a given thermodynamic system.[14] Each successive model expands the temperature range of coverage to which it applies.
### Ideal and perfect gas models
Main article: Perfect gas
The equation of state for an ideal or perfect gas is the ideal gas law and reads
$PV=nRT,$
where P is the pressure, V is the volume, n is amount of gas (in mol units), R is the universal gas constant, 8.314 J/(mol K), and T is the temperature. Written this way, it is sometimes called the "chemist's version", since it emphasizes the number of molecules n. It can also be written as
$P=\rho R_s T,$
where $R_s$ is the specific gas constant for a particular gas, in units J/(kg K), and ρ = m/V is density. This notation is the "gas dynamicist's" version, which is more practical in modeling of gas flows involving acceleration without chemical reactions.
The ideal gas law does not make an assumption about the specific heat of a gas. In the most general case, the specific heat is a function of both temperature and pressure. If the pressure-dependence is neglected (and possibly the temperature-dependence as well) in a particular application, sometimes the gas is said to be a perfect gas, although the exact assumptions may vary depending on the author and/or field of science.
For an ideal gas, the ideal gas law applies without restrictions on the specific heat. An ideal gas is a simplified "real gas" with the assumption that the compressibility factor Z is set to 1 meaning that this pneumatic ratio remains constant. A compressibility factor of one also requires the four state variables to follow the ideal gas law.
This approximation is more suitable for applications in engineering although simpler models can be used to produce a "ball-park" range as to where the real solution should lie. An example where the "ideal gas approximation" would be suitable would be inside a combustion chamber of a jet engine.[15] It may also be useful to keep the elementary reactions and chemical dissociations for calculating emissions.
### Real gas
21 April 1990 eruption of Mount Redoubt, Alaska, illustrating real gases not in thermodynamic equilibrium.
Main article: Real gas
Each one of the assumptions listed below adds to the complexity of the problem's solution. As the density of a gas increases with pressure rises, the intermolecular forces play a more substantial role in gas behavior which results in the ideal gas law no longer providing "reasonable" results. At the upper end of the engine temperature ranges (e.g. combustor sections – 1300 K), the complex fuel particles absorb internal energy by means of rotations and vibrations that cause their specific heats to vary from those of diatomic molecules and noble gases. At more than double that temperature, electronic excitation and dissociation of the gas particles begins to occur causing the pressure to adjust to a greater number of particles (transition from gas to plasma).[16] Finally, all of the thermodynamic processes were presumed to describe uniform gases whose velocities varied according to a fixed distribution. Using a non-equilibrium situation implies the flow field must be characterized in some manner to enable a solution. One of the first attempts to expand the boundaries of the ideal gas law was to include coverage for different thermodynamic processes by adjusting the equation to read pVn = constant and then varying the n through different values such as the specific heat ratio, γ.
Real gas effects include those adjustments made to account for a greater range of gas behavior:
• Compressibility effects (Z allowed to vary from 1.0)
• Variable heat capacity (specific heats vary with temperature)
• Van der Waals forces (related to compressibility, can substitute other equations of state)
• Non-equilibrium thermodynamic effects
• Issues with molecular dissociation and elementary reactions with variable composition.
For most applications, such a detailed analysis is excessive. Examples where "Real Gas effects" would have a significant impact would be on the Space Shuttle re-entry where extremely high temperatures and pressures are present or the gases produced during geological events as in the image of the 1990 eruption of Mount Redoubt.
## Historical synthesis
See also: Gas laws
### Boyle's law
Boyle's equipment.
Main article: Boyle's law
Boyle's Law was perhaps the first expression of an equation of state. In 1662 Robert Boyle performed a series of experiments employing a J-shaped glass tube, which was sealed on one end. Mercury was added to the tube, trapping a fixed quantity of air in the short, sealed end of the tube. Then the volume of gas was carefully measured as additional mercury was added to the tube. The pressure of the gas could be determined by the difference between the mercury level in the short end of the tube and that in the long, open end. The image of Boyle's Equipment shows some of the exotic tools used by Boyle during his study of gases.
Through these experiments, Boyle noted that the pressure exerted by a gas held at a constant temperature varies inversely with the volume of the gas.[17] For example, if the volume is halved, the pressure is doubled; and if the volume is doubled, the pressure is halved. Given the inverse relationship between pressure and volume, the product of pressure (P) and volume (V) is a constant (k) for a given mass of confined gas as long as the temperature is constant. Stated as a formula, thus is:
$PV = k$
Because the before and after volumes and pressures of the fixed amount of gas, where the before and after temperatures are the same both equal the constant k, they can be related by the equation:
$\qquad P_1 V_1 = P_2 V_2.$
### Charles's Law
Main article: Charles's law
In 1787, the French physicist and balloon pioneer, Jacques Charles, found that oxygen, nitrogen, hydrogen, carbon dioxide, and air expand to the same extent over the same 80 kelvin interval. He noted that, for an ideal gas at constant pressure, the volume is directly proportional to its temperature:
$\frac{V_1}{T_1} = \frac{V_2}{T_2}$
### Gay-Lussac's Law
Main article: Gay-Lussac's Law
In 1802, Joseph Louis Gay-Lussac published results of similar, though more extensive experiments.[18] Gay-Lussac credited Charle's earlier work by naming the law in his honor. Gay-Lussac himself is credited with the law describing pressure, which he found in 1809. It states that the pressure exerted on a container's sides by an ideal gas is proportional to its temperature.
$\frac{P_1}{T_1}=\frac{P_2}{T_2} \,$
### Avogadro's law
Main article: Avogadro's law
In 1811, Amedeo Avogadro verified that equal volumes of pure gases contain the same number of particles. His theory was not generally accepted until 1858 when another Italian chemist Stanislao Cannizzaro was able to explain non-ideal exceptions. For his work with gases a century prior, the number that bears his name Avogadro's constant represents the number of atoms found in 12 grams of elemental carbon-12 (6.022×1023 mol−1). This specific number of gas particles, at standard temperature and pressure (ideal gas law) occupies 22.40 liters, which is referred to as the molar volume.
Avogadro's law states that the volume occupied by an ideal gas is proportional to the number of moles (or molecules) present in the container. This gives rise to the molar volume of a gas, which at STP is 22.4 dm3 (or litres). The relation is given by
$\frac{V_1}{n_1}=\frac{V_2}{n_2} \,$
where n is equal to the number of moles of gas (the number of molecules divided by Avogadro's Number).
### Dalton's law
Dalton's notation.
Main article: Dalton's law
In 1801, John Dalton published the Law of Partial Pressures from his work with ideal gas law relationship: The pressure of a mixture of gases is equal to the sum of the pressures of all of the constituent gases alone. Mathematically, this can be represented for n species as:
Pressuretotal = Pressure1 + Pressure2 + ... + Pressuren
The image of Dalton's journal depicts symbology he used as shorthand to record the path he followed. Among his key journal observations upon mixing unreactive "elastic fluids" (gases) were the following:[19]
• Unlike liquids, heavier gases did not drift to the bottom upon mixing.
• Gas particle identity played no role in determining final pressure (they behaved as if their size was negligible).
## Special topics
### Compressibility
Compressibility factors for air.
Main article: Compressibility factor
Thermodynamicists use this factor (Z) to alter the ideal gas equation to account for compressibility effects of real gases. This factor represents the ratio of actual to ideal specific volumes. It is sometimes referred to as a "fudge-factor" or correction to expand the useful range of the ideal gas law for design purposes. Usually this Z value is very close to unity. The compressibility factor image illustrates how Z varies over a range of very cold temperatures.
### Reynolds number
Main article: Reynolds number
In fluid mechanics, the Reynolds number is the ratio of inertial forces (vsρ) to viscous forces (μ/L). It is one of the most important dimensionless numbers in fluid dynamics and is used, usually along with other dimensionless numbers, to provide a criterion for determining dynamic similitude. As such, the Reynolds number provides the link between modeling results (design) and the full-scale actual conditions. It can also be used to characterize the flow.
### Viscosity
Satellite view of weather pattern in vicinity of Robinson Crusoe Islands on 15 September 1999, shows a unique turbulent cloud pattern called a Kármán vortex street
Main article: Viscosity
Viscosity, a physical property, is a measure of how well adjacent molecules stick to one another. A solid can withstand a shearing force due to the strength of these sticky intermolecular forces. A fluid will continuously deform when subjected to a similar load. While a gas has a lower value of viscosity than a liquid, it is still an observable property. If gases had no viscosity, then they would not stick to the surface of a wing and form a boundary layer. A study of the delta wing in the Schlieren image reveals that the gas particles stick to one another (see Boundary layer section).
### Turbulence
Delta wing in wind tunnel. The shadows form as the indices of refraction change within the gas as it compresses on the leading edge of this wing.
Main article: Turbulence
In fluid dynamics, turbulence or turbulent flow is a flow regime characterized by chaotic, stochastic property changes. This includes low momentum diffusion, high momentum convection, and rapid variation of pressure and velocity in space and time. The Satellite view of weather around Robinson Crusoe Islands illustrates just one example.
### Boundary layer
Main article: Boundary layer
Particles will, in effect, "stick" to the surface of an object moving through it. This layer of particles is called the boundary layer. At the surface of the object, it is essentially static due to the friction of the surface. The object, with its boundary layer is effectively the new shape of the object that the rest of the molecules "see" as the object approaches. This boundary layer can separate from the surface, essentially creating a new surface and completely changing the flow path. The classical example of this is a stalling airfoil. The delta wing image clearly shows the boundary layer thickening as the gas flows from right to left along the leading edge.
### Maximum entropy principle
Main article: Principle of maximum entropy
As the total number of degrees of freedom approaches infinity, the system will be found in the macrostate that corresponds to the highest multiplicity. In order to illustrate this principle, observe the skin temperature of a frozen metal bar. Using a thermal image of the skin temperature, note the temperature distribution on the surface. This initial observation of temperature represents a "microstate." At some future time, a second observation of the skin temperature produces a second microstate. By continuing this observation process, it is possible to produce a series of microstates that illustrate the thermal history of the bar's surface. Characterization of this historical series of microstates is possible by choosing the macrostate that successfully classifies them all into a single grouping.
### Thermodynamic equilibrium
Main article: Thermodynamic equilibrium
When energy transfer ceases from a system, this condition is referred to as thermodynamic equilibrium. Usually this condition implies the system and surroundings are at the same temperature so that heat no longer transfers between them. It also implies that external forces are balanced (volume does not change), and all chemical reactions within the system are complete. The timeline varies for these events depending on the system in question. A container of ice allowed to melt at room temperature takes hours, while in semiconductors the heat transfer that occurs in the device transition from an on to off state could be on the order of a few nanoseconds.
## Notes
1. This early 20th century discussion infers what is regarded as the plasma state. See page 137 of American Chemical Society, Faraday Society, Chemical Society (Great Britain) The Journal of physical chemistry, Volume 11 Cornell (1907).
2. The work by T. Zelevinski provides another link to latest research about Strontium in this new field of study. See Tanya Zelevinsky (2009). "84Sr—just right for forming a Bose-Einstein condensate". Physics 2: 94.
3. for links material on the Bose-Einstein condensate see Quantum Gas Microscope Offers Glimpse Of Quirky Ultracold Atoms. ScienceDaily. 4 November 2009.
4. J. B. van Helmont, Ortus medicinae. … (Amsterdam, (Netherlands): Louis Elzevir, 1652 (first edition: 1648)). The word "gas" first appears on page 58, where he mentions: "… Gas (meum scil. inventum) …" (… gas (namely, my discovery) …). On page 59, he states: "… in nominis egestate, halitum illum, Gas vocavi, non longe a Chao …" (… in need of a name, I called this vapor "gas", not far from "chaos" …)
5.
6. The authors make the connection between molecular forces of metals and their corresponding physical properties. By extension, this concept would apply to gases as well, though not universally. Cornell (1907) pp. 164–5.
7. One noticeable exception to this physical property connection is conductivity which varies depending on the state of matter (ionic compounds in water) as described by Michael Faraday in the 1833 when he noted that ice does not conduct a current. See page 45 of John Tyndall's Faraday as a Discoverer (1868).
8.
9. Anderson, p.501
10. J. Clerk Maxwell (1904). Theory of Heat. Mineola: Dover Publications. pp. 319–20. ISBN 0-486-41735-2.
11. See pages 137–8 of Society, Cornell (1907).
12. Kenneth Wark (1977). Thermodynamics (3 ed.). McGraw-Hill. p. 12. ISBN 0-07-068280-1.
13. For assumptions of Kinetic Theory see McPherson, pp.60–61
14. Anderson, pp. 289–291
15. John, p.205
16. John, pp. 247–56
17. McPherson, pp.52–55
18. McPherson, pp.55–60
19. John P. Millington (1906). John Dalton. pp. 72, 77–78.
## References
• Anderson, John D. (1984). Fundamentals of Aerodynamics. McGraw-Hill Higher Education. ISBN 0-07-001656-9.
• John, James (1984). Gas Dynamics. Allyn and Bacon. ISBN 0-205-08014-6.
• McPherson, William and Henderson, William (1917). An Elementary study of chemistry.
## Further reading
• Philip Hill and Carl Peterson. Mechanics and Thermodynamics of Propulsion: Second Edition Addison-Wesley, 1992. ISBN 0-201-14659-2
• National Aeronautics and Space Administration (NASA). Animated Gas Lab. Accessed February 2008.
• Georgia State University. HyperPhysics. Accessed February 2008.
• Antony Lewis WordWeb. Accessed February 2008.
• Northwestern Michigan College The Gaseous State. Accessed February 2008.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 8, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9151299595832825, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/70140?sort=votes
|
## Applications of full integral weight modular forms in elementary number theory
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Except for Eisenstein series having the divisor functions as their Fourier coefficients, is there any other full integral weight modular form (of some level, preferably full) having arithmetic functions as their Fourier coefficients.
More to the point, my question is, apart from the relations you obtain between $\sigma_3, \sigma_5$ and $\sigma_7$ are there any applications of full integral weight modular forms (preferably cusp forms) to elementary number theory.
-
2
Formulas for sums of squares. Groasswald's book has a chapter dedicated to the connection. – Dror Speiser Jul 12 2011 at 16:39
1
I don't recall all the details, or know a good reference off the top of my head, so I will leave a comment rather than an answer: Suppose $a_1 x_1^2 + \cdots + a_k x_k^2$ is a quadratic form with all $a_i > 0$. Let $r(n)$ be the number of representations of $n$ by this quadratic form. Then $\sum_n r(n)$ q^n is a modular form of weight $k/2$, although not usually of full level. You can use this to conclude facts about the $r(n)$. – Frank Thorne Jul 12 2011 at 16:42
2
You can see Ono, The Web of Modularity for all sorts of interesting examples where a variety of arithmetic functions show up as coefficients of modular forms. – Frank Thorne Jul 12 2011 at 16:43
5
Counting sums of squares is a good example, though the forms that arise aren't quite for the full modular group. Probably the closest that this direction comes to "elementary" would be representations by the quadratic form associated to the even unimodular lattice $D_n^+$ for $n \equiv 0 \bmod 8$ with $n \geq 24$. If you try to extend the $\sigma$ convolutions past 3,5,7 you run into discrepancies coming from coefficients of cuspforms. But there are linear combinations that still work, starting with $441 E_4^3 + 250 E_6^2 = 691 E_{12}$. – Noam D. Elkies Jul 12 2011 at 16:55
Hi Mehmet! You might like the chapters concerning quadratic forms in Iwaniec's "classical topics" book. – David Hansen Jul 13 2011 at 2:22
## 4 Answers
Perhaps this is "out of bounds" given the phrasing of the question, but those Eisenstein series you mention don't just have divisor sums as coefficients - the constant term is a special value of the Riemann zeta function.
This implies all sorts of neat stuff. The various relations between divisor sums that you mention come with relations between zeta values. These give very nice congruences, in particular.
You can take this to reasoning pretty far to deduce things like $p$-adic interpolation of zeta values a la Kubota-Leopoldt from the much simpler interpolation properties of the divisor sum functions. This was done by Serre in the 1973 paper ("Formes modulaires et fontiones zeta $p$-adiques") that gave birth to the theory of $p$-adic modular forms.
-
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
Personally, I regard the Dirichlet coefficients of any automorphic $L$-function arithmetic. A nice application of the Fourier coefficients of full level Maass cusp forms is Motohashi's improvement for the error term in the binary additive divisor problem and the asymptotic formula for the fourth moment of the Riemann zeta function. See in particular his papers here and Theorem 5.2 in his book.
-
Another little thing: the simplest Siegel-Weil formulas, equating holomorphic Eisenstein series and linear combinations of theta series attached to positive-definite quadratic forms, can be arranged to be about level-one or small-level things.
Edit: and add Klingen's proof of rationality properties of special values of zeta functions of totally real number fields.
-
Nobody seems to have mentioned "the master" of this subject, and his use of (classical) Eisenstein series to prove things like $p(5n+4) \equiv 0 \ ({\rm mod} \ 5)$ (here $p(m)$ is of course the usual partition function).
Here's his proof (prepared by Hardy), published in Math.Z (1921). B.Berndt published another a bit shorter proof , which employs famous Ramanujan's differential equations. BTW, make sure you are familiar with the Ramanujan "J-series" before you jump to (say) formula (2.2) in Berndt's paper :-)
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9266767501831055, "perplexity_flag": "middle"}
|
http://micromath.wordpress.com/2008/04/04/kolmogorovs-53-law/?like=1&_wpnonce=e3a0b2f395
|
# Mathematics under the Microscope
Atomic objects, structures and concepts of mathematics
Posted by: Alexandre Borovik | April 4, 2008
## Kolmogorov’s “5/3″ Law
This is a follow-up to my earlier post on “named” numbers; the text is mostly cannibalised from my book; I refer the reader to the book (available for free download) for bibliography, etc.
The famous mathematician Andrei Kolmogorov was the author of what remains the most striking and beautiful example of a dimensional analysis argument in mathematics. The deduction of his seminal “$5/3$” law for the energy distribution in the turbulent fluid [A. N. Kolmogorov, Local structure of turbulence in an incompressible fluid for very large Reynolds numbers, Doklady Acad Sci. USSR 31 (1941) 301-305] is so simple that it can be done in a few lines.
I was lucky to study at a good secondary school where my physics teacher (Anatoly Mikhailovich Trubachov, to whom I express my eternal gratitude) derived the “$5/3$” law in one of his improvised lectures. In my exposition, I borrow some details from Arnold and Ball (where I have also picked the idea of using a woodcut by Katsushika Hokusai as an illustration).
Multiple scales in the motion of a fluid, from a woodcut by Katsushika Hokusai, The Great Wave off Kanagawa (from the series Thirty-six View sof Mount Fuji, 1823–1829). This image is much beloved by chaos scientists. Source: Wikipedia Commons. Public domain.
The turbulent flow of a liquid consists of vortices; the flow in every vortex is made of smaller vortices, all the way down the scale to the point when the viscosity of the fluid turns the kinetic energy of motion into heat. If there is no influx of energy (like the wind whipping up a storm in Hokusai’s woodcut), the energy of the motion will eventually dissipate and the water will stand still.
So, assume that we have a balanced energy flow, the storm is already at full strength and stays that way. The motion of a liquid is made of waves of different lengths; Kolmogorov asked the question, what is the share of energy carried by waves of a particular length?
Here it is a somewhat simplified description of his analysis. We start by making a list of the quantities involved and their dimensions. First, we have the energy flow (let me recall, in our setup it is the same as the dissipation of energy). The dimension of energy is
$\frac{{\rm mass} \cdot {\rm length}^2}{{\rm time}^2}$
(remember the formula $K = mv^2/2$ for the kinetic energy of a moving material point?). It will be convenient to make all calculations per unit of mass. Then the energy flow $\epsilon$ has dimension
$\frac{{\rm energy}}{{\rm mass}\cdot {\rm time}} =\frac{{length}^2}{{time}^3}$
For counting waves, it is convenient to use the wave number, that is, the number of waves fitting into the unit of length. Therefore the wave number $k$ has dimension
$\frac{1}{{\rm length}}.$
Finally, the energy spectrum $E(k)$ is the quantity such that, given the interval
$\Delta k= k_1-k_2$
between the two wave numbers, the energy (per unit of mass) carried by waves in this interval should be approximately equal to $E(k_1)\Delta k$. Hence the dimension of $E$ is
$\frac{{\rm energy}}{{\rm mass}\cdot {\rm wavenumber}} = \frac{{\rm length}^3}{{\rm time}^2}.$
To make the next crucial calculations, Kolmogorov made the major assumption that amounted to saying that
The way bigger vortices are made from smaller ones is the same throughout the range of wave numbers, from the biggest vortices (say, like a cyclone covering the whole continent) to a smaller one (like a whirl of dust on a street corner).
(This formulation is a bit cruder than most experts would accept; I borrow it from Arnold~\cite{chto).
Then we can assume that the energy spectrum $E$, the energy flow $\epsilon$ and the wave number $k$ are linked by an equation which does not involve anything else. Since the three quantities involved have completely different dimensions, we can combine them only by means of an equation of the form
$E(k) \approx C \epsilon^x \cdot k^y.$
Here $C$ is a constant; since the equation should remain the same for small scale and for global scale events, the shape of the equation should not depend on the choice of units of measurements, hence $C$ should be dimensionless.
Let us now check how the equation looks in terms of dimensions:
$\frac{{\rm length}^3}{{\rm time}^2} =\left(\frac{{\rm length}^2}{{\rm time}^3} \right)^x \cdot\left(\frac{1}{{\rm length}} \right)^y.$
After equating lengths with lengths and times with times, we have
${\rm length} ^3 = {\rm length}^{2x} \cdot {\rm length}^{-y}$
${\rm time}^2 = {\rm time}^{3x}$
which leads to a system of two simultaneous linear equations in $x$ and $y$,
$3 = 2x -y$
$2 = 3x$
This can be solved with ease and gives us
$x = \frac{2}{3} \;\; {\rm and } \;\; y = -\frac{5}{3}.$
Therefore we come to Kolmogorov’s “$5/3$” Law:
$E(k) \approx C \epsilon^{2/3}k^{-5/3}.$
The dimensionless constant $C$ can be determined from experiments and happens to be pretty close to $1$.
The status of this celebrated result is quite remarkable. In the words of an expert on turbulence, Alexander Chorin,
Nothing illustrates better the way in which turbulence is suspended between ignorance and light than the Kolmogorov theory of turbulence, which is both the cornerstone of what we know and a mystery that has not been fathomed.
The same spectrum [...] appears in the sun, in the oceans, and in manmade machinery. The $5/3$ law is well verified experimentally and, by suggesting that not all scales must be computed anew in each problem, opens the door to practical modelling.
Arnold reminds us that the main premises of Kolmogorov’s argument remain unproven — after more than 60 years! Even worse, Chorin points to the rather disturbing fact that
Kolmogorov’s spectrum often appears in problems where his assumptions clearly fail. [...] The $5/3$ law can now be derived in many ways, often under assumptions that are antithetical to Kolmogorov’s. Turbulence theory finds itself in the odd situation of having to build on its main result while still struggling to understand it.
### Exercises for the reader
The history of dimensional analysis can be traced back at least to Froude’s Law of Steamship Comparisons used to great effect in D’Arcy Thompson’s book On Growth and Form for the analysis of speeds of animals:
The maximal speed of similarly designed steamships is proportional to the square root of their length.
William Froude (1810–1879) was the first to formulate reliable laws for the resistance that water offers to ships and for predicting their stability.
Exercise 1, moderate. Prove Froude’s Law.
Exercise 2, easy. Why does a mouse have (relatively) slimmer body build than an elephant?
Exercise 3, easy. Prove the following corollary of Froude’s Law:
The relative speed of a fish (that is, speed measured in numbers of its lengths covered by the fish per unit of time) is inverse proportional to the square root of its length.
This explains a well-known phenomenon: little fish in a stream appear to be very quick.
Exercise 4, even easier. Estimate, who is relatively faster: an ant or a racehorse?
A research project. Building on ideas from Exercise 2, develop a method for estimation of a maximal possible height of a tree of given species.
The problem is of serious practical value. To explain it to an Englishman, you have to mention just one word: Leylandii.
X Cupressocyparis leylandii : 35 meters and still growing.
For a foreigner, Wikipedia provides more detail:
The Leyland Cypress, X Cupressocyparis leylandii, is often referred to as just Leylandii. It is a fast-growing evergreen tree much used in horticulture, primarily for hedges and screens.
The Leyland Cypress is a hybrid between the Monterey Cypress, Cupressus macrocarpa, and the Nootka Cypress, Cupressus nootkatensis. The hybrid has arisen on nearly 20 separate occasions, always by open pollination. [...]
Leyland Cypresses are commonly planted in gardens to provide a quick boundary or shelter hedge. However, their rapid growth (up to a metre per year), heavy shade and great potential height (often over 20 m tall in garden conditions, they can reach at least 35 m) make them a problem. In Britain they have been the source of a number of high profile disputes between neighbours, even leading to violence (and in one recent case, murder), because of their capacity to cut out light.
The problem is that no-one knows the maximal height of some of the latest hybrids — all known specimens continue to grow…
### Like this:
Posted in Uncategorized
## Responses
1. Something about turbulence has always bothered me. It looks like for a highly tubulent flow the stretching and squeezing will almost instantly transform any piece of fluid to a subatomic size, at least in some direction. So it looks like the the Navier-Stokes equations become unapplicable, and therefore an attempt to describe turbulence by these equations looks like pushing a model beyond its validity, since the whole description of a fluid as a continuous media breaks down at subatomic distances. Any comments?
By: misha on April 11, 2008
at 9:03 pm
2. I have found a relevant article on Roger Temam’s home page
By: misha on April 12, 2008
at 11:09 pm
By: misha on April 12, 2008
at 11:13 pm
On another front, I don’t know whether it’s snowing on just your blog or on all of WordPress, but the snow is proving a variant of Parkinson’s Law: Javascript expands (or slows down in my case) to use 100% of the CPU available.
By: Steve Witham on December 19, 2009
at 7:47 am
5. [...] This is a follow-up to my earlier post on “named” numbers; the text is mostly cannibalised from my book; I refer the reader to the book (available for free download) for bibliography, etc. The famous mathematician Andrei Kolmogorov was the author of what remains the most striking and beautiful example of a dimensional analysis argument in mathematics. The deduction of his seminal “'' law for the energy distribution in the turbulent fl … Read More [...]
By: Kolmogorov’s “5/3″ Law (via Mathematics under the Microscope) | Teness's Blog on November 1, 2010
at 2:47 pm
Cancel
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 33, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9338232278823853, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/57386?sort=oldest
|
## Is the derivative of a Lipschitz function better than L^\infty
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
How smooth is the first derivative (in the distribution sense) of a Lipschitz function? Taking difference quotients and testing against an $L^1$ function, we see that $Df$ is in $L^\infty$. In ${\mathbb R}^1$ the converse is true, thanks to the persistence of the formula
$f(x+h) - f(x) = \int_0^1 f'(x+th) dt~ h$
(Proof: convolve with a mollifier)
However, if $f : {\mathbb R}^n \to {\mathbb R}$ is Lipschitz, then by the same argument, its derivative has a restriction to any line which is in $L^\infty$ of that line (more precisely, the tangential component of the derivative restricts). Ordinarily, one cannot restrict a distribution sensibly to lower dimensional subsets (straight lines requiring even more regularity than curves), or at least if you can because its primitive restricts, I don't know of any reason to expect the restriction to have any semblance of regularity.
For $n > 1$, is there a nice Banach space in which the derivative of a Lipschitz function belongs whose elements are smoother than just $L^\infty$?
-
## 2 Answers
Lipschitz functions are exactly $W^{1,\infty}$ (See 'Sobolev space' on wikipedia - under other examples and perhaps the bit about absolute continuity on lines). This means the short answer to your question is no.
-
Can you point me to a proof? I was under this impression that there were non-Lipschitz $W^{1,\infty}$ functions in dimensions greater than 1. – Phil Isett Mar 5 2011 at 1:33
Never mind, I have found a proof in Evans. – Phil Isett Mar 5 2011 at 1:47
1
Of course, in higher dimensions the gradient can not be just any vector-valued $L^\infty$-function $f = (f_i)$, since it satisfies the distributional identity $\dfrac{\partial f_i}{\partial x_j} = \dfrac{\partial f_j}{\partial x_i}.$ So you don't get the whole of $L^\infty$, only those functions that satisfy this identity. – Mark Peletier Jul 22 2011 at 12:47
Mark -- I think your remark here was the (fairly obvious but nonetheless fundamental) thing I was failing to realize. It's just that this particular system of PDE does not exactly bestow upon its solutions any additional regularity. – Phil Isett Aug 25 2011 at 20:26
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
Every Lipschitz function is absolutely continuous. Consequently, its derivative exists and is uniformly bounded almost everywhere. The Lipschitz constant is just the $L^\infty$ norm of the derivative.
If you want a Banach space of smoother functions, then just define it. For example, let $X$ be the space of Lipschitz functions on $\mathbb R^n$ with integrable derivatives: `$$X = \{ f :~ \nabla f \in L^1 \cap L^\infty \}.$$`
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9396450519561768, "perplexity_flag": "head"}
|
http://ldtopology.wordpress.com/2010/01/26/3-manifold-groups-are-known-right/
|
# Low Dimensional Topology
## January 26, 2010
### 3-manifold groups are known, right?
Filed under: 3-manifolds,Geometric Group Theory — Henry Wilton @ 8:55 pm
During the nice talk that Ian Biringer gave on the structure of hyperbolic 3-manifolds at Caltech on Friday, a 4-manifold theorist in the back was heard to ask ’3-manifold groups are known, right?’
I know what he meant. Any finitely presented group can arise as the fundamental group of a 4-manifold (this is a nice exercise that you can do for yourself, involving surgery on a connect sum of copies of S3 x S1). With a little care, you can deduce that classifying topological 4-manifolds is at least as impossible as classifying finitely presented groups (which is impossible).
In contrast, there are constraints on the fundamental groups of closed 3-manifolds. One of the first is an easy consequence of the existence of Heegaard splittings: any closed 3-manifold group admits a balanced presentation, meaning that there are no more relations than generators.
What does it mean to ‘know’ a class of finitely presentable groups? Do we really ‘know’ the class of (orientable) 3-manifold groups? The point of view that I will adopt in addressing these questions is algorithmic, and comes from combinatorial group theory.
Dehn formulated the three classical decision problems that are often the most basic questions that one asks about a class of groups.
1. The word problem. Is there an algorithm to determine whether or not a given a word in the generators represents the identity?
2. The conjugacy problem. Is there an algorithm to determine whether or not a pair of words in the generators are conjugate?
3. The isomorphism problem. Is there an algorithm to determine whether or not a pair of presentations of groups in the class present isomorphic groups?
The first two of these questions are rather local in nature, as they deal with individual elements of groups. I won’t say much further about them, except to comment that geometrisation implies that both are solvable (the word problem is essentially due to Waldhausen; the solution to the conjugacy problem doesn’t seem to be very well known, and is due to Préaux), and that neither is known in the absence of geometrisation.
Instead, I want to situate the isomorphism problem in the context of some other basic algorithmic properties of classes of groups.
4. A class of finitely presentable groups is recursively enumerable if there is a Turing machine that outputs a list of presentations such that every presentation represents a group in the class and every group is represented by some presentation on the list.
This means that we can eventually confirm that a given group G is in our class: using Tietze transformations, we can modify our list so that eventually every presentation of every group in our class appears; just wait for the given presentation of G to appear. Note that if G isn’t in our class then this procedure will run forever.
5. A class of finitely presentable groups is recursive if both it and its complement are recursively enumerable.
If this is true, then we can eventually determine whether or not a given group G is in our class of groups.
Thinking about it like this, we can reformulate the isomorphism problem in the following way.
3′. A class of groups is recursively enumerable and has solvable isomorphism problem if and only if there is a Turing machine that outputs a list of presentations for the groups in the class such that each group appears exactly once on the list.
In the remainder of this post, I’ll try to explain what I know about these problems for 3-manifold groups.
RECURSIVE ENUMERABILITY
This is an easy consequence of Moise’s famous theorem that 3-manifolds can be triangulated. Just build a Turing machine that lists all 3-dimensional simplicial complexes. It is easy to check whether the link of every vertex is a combinatorial 2-sphere. A complex with this property represents a 3-manifold, and conversely Moise’s Theorem implies that every 3-manifold can be represented in this way. You can easily read off a presentation for the fundamental group from such a triangulation.
We could also do this using the existence of Heegaard splittings, which is a consequence of Moise’s Theorem.
More interestingly, the class of hyperbolic 3-manifold groups is recursively enumerable. This is an immediate consequence of a beautiful theorem of Jason Fox Manning, which provides an algorithm that finds a faithful discrete representation of the fundamental group to PSL2(ℂ) if one exists [9].
RECURSIVENESS
Slightly counter-intuitively, the class of 3-manifold groups is too well behaved to be recursive.
It is a fundamental (though not easy!) fact about finitely presented groups that there is no algorithm to determine whether such a group is trivial.
Suppose now that the class of 3-manifold groups is recursive, so there is an algorithm to determine whether a given presentation represents the fundamental group of some 3-manifold. Applying this algorithm to an arbitrary presentation P, we can determine whether P presents a 3-manifold group or not. But the trivial group is a 3-manifold group, and we can easily use the solution to the word problem for 3-manifold groups to determine which 3-manifold groups are trivial. Putting these two observations together would give an algorithm to determine whether a given group is trivial, which is known to be impossible.
This argument is very general. In fact, a property of groups P is called Markov if some finitely presentable group has P and some other finitely presentable group cannot be embedded in any group with P. If P is Markov then the class of groups with P is not recursive.
Thus, there is a curious sense in which we understand 4-manifold groups better than 3-manifold groups! The class of 4-manifold groups is recursive, for vacuous reasons.
THE ISOMORPHISM PROBLEM
Having seen that recursiveness was far too much to hope for, let’s turn our attention to our final decidability property: the isomorphism problem.
I will sketch an outline of the solution to the isomorphism problem for 3-manifold groups. This is all from the point of view of a geometric group theorist, so I’ll mostly restrict my tools to geometrisation, a few other basic facts about 3-manifolds, and some quite powerful recent group-theoretic results.
After starting to write this post, I discovered that William H. Jaco has a nice series of lecture notes available on the Homeomorphism Problem for 3-manifolds. You should look at Jaco’s notes if you want to understand the ‘right’ way of doing this.
The first step is to compute the Kneser–Milnor decomposition. Every orientable 3-manifold decomposes (more or less uniquely) as a connect sum
$M= M_1\#\ldots \# M_n\# (S^1 \times S^2)\#\ldots\# (S^1\times S^2)$
where each Mi is irreducible, ie contains no essential 2-sphere. Correspondingly, the fundamental group of M has a unique Grushko decomposition as
$\Gamma = \Gamma_1 *\ldots*\Gamma_n*F_r$
where Fr is a free group of rank r. Nowadays, one can compute this decomposition from very general considerations. Nicholas Touikan showed how to compute the Grushko decomposition of any group Γ, given nothing more than a solution to the word problem in Γ [12].
This reduces the problem to the case of irreducible 3-manifolds. Geometrisation tells us that any such 3-manifold M satisfies (at least) one of the following.
1. M is hyperbolic.
2. M is elliptic, ie finitely covered by the 3-sphere.
3. M is Seifert-fibred. (This actually includes case 2, but it might be better to think of the case with infinite fundamental group separately.)
4. M contains an essential torus. (Again, there is some overlap between 3 and 4.)
Let’s deal with these cases one by one.
1. M is hyperbolic
If our 3-manifold is hyperbolic then Manning’s algorithm will eventually find a hyperbolic structure. Another, less subtle, approach to the problem of certifying that M is hyperbolic would be to use an algorithm of Panos Papazoglou [10] to find a word-hyperbolic structure on the fundamental group. Geometrisation implies that the fundamental group is word-hyperbolic if and only if M is hyperbolic or elliptic, and given a word-hyperbolic structure one can determine which case we fall into.
The isomorphism problem for the fundamental groups of closed hyperbolic 3-manifolds was solved by Zlil Sela [11]. I won’t discuss his beautiful solution here, except to comment that it relies on the fact that one can algorithmically solve systems of equations and inequations over word-hyperbolic groups.
We also need to deal with the finite-volume case, which will be important later. I’m not sure whether Manning’s algorithm can be generalised to this case, but Dahmani extended Papazoglou’s algorithm to the ‘finite-volume’ group-theoretic setting, namely relatively hyperbolic groups [2]. Again, geometrisation implies that the fundamental group admits a hyperbolic structure relative to its maximal virtually abelian subgroups if and only if M is hyperbolic of finite volume or elliptic, and we can determine which case we’re in.
Finally, Sela’s solution to the isomorphism problem was generalised to the relatively hyperbolic case by Francois Dahmani and Daniel Groves [3]. So, in summary, the fundamental groups of finite-volume hyperbolic manifolds are recursively enumerable with solvable isomorphism problem.
2. M is elliptic if and only if the fundamental group is finite. In this case, a naive computation of the multiplication table will eventually prove that the fundamental group is finite. (Note that this procedure will not terminate if the group is not finite, but we don’t have to worry about this!) For similarly naive reasons, the isomorphism problem is solvable for finite groups.
There’s a point here that needs to be made clear. All of this is done using a solution to the word problem, which is only known in the presence of geometrisation. On the other hand, algorithms are known that recognise the 3-sphere in the absence of geometrisation. You could say that the difficulty of the Poincare Conjecture lies in recognising that your manifold is simply connected, rather than in recognising that it’s the 3-sphere (though perhaps that’s too glib).
3. M is Seifert-fibred.
A 3-manifold M is Seifert-fibred if and only if it can be foliated by circles [4], which occurs if and only if the fundamental group Γ has a cyclic normal subgroup K [1,5], generated by a generic fibre. Given the fundamental group of a Seifert-fibred 3-manifold, we will always eventually find K by a naive search.
The quotient Q=Γ/K is the fundamental group of a 2-dimensional orbifold O. To check whether another Seifert-fibred group is isomorphic to Γ, we need to compute the Euler number of the bundle (which can be read off a suitable presentation) and to solve the isomorphism problem for the base orbifold O.
This reduces the problem to the isomorphism problem for 2-orbifold groups, which is certainly known (although I don’t know a reference). For instance, one can find a torsion-free normal subgroup, use the index to bound the torsion, and then check finitely many possibilities.
4. M contains an essential torus.
Closed, orientable, irreducible 3-manifolds have a canonical torus decomposition, due to Jaco–Shalen [6] and, independently, Johannsson [8]. This consists of a minimal collection of essential embedded tori, such that the complementary pieces are either Seifert-fibred or atoroidal, (meaning that every essential embedded immersed torus is boundary parallel).
Geometrisation asserts that the atoroidal pieces are either elliptic or hyperbolic.
The final remaining piece of the puzzle is therefore to compute the torus decomposition of M. An algorithm to compute the torus decomposition of a 3-manifold was described by Jaco and Tollefson [7]. However, it turns out that we already have enough machinery at our disposal to compute the torus decomposition in a simple-minded fashion.
Essential tori correspond to splittings of our group Γ as an amalgamated product or HNN extension over ℤ2. A naive enumeration of presentations for Γ will eventually find one that exhibits such a splitting. So ‘cut’ along this torus and repeat. We are done when every piece is either Seifert-fibred or (finite-volume) hyperbolic. As we have seen above, we have procedures that will eventually confirm if our manifold is Seifert-fibred or hyperbolic! Thus, running all these tests in parallel, we will eventually find a finite number of splittings and discover that all the remaining pieces are Seifert-fibred or hyperbolic, as required.
It’s possible that in this process we will ‘overshoot’, and discover a set of tori that is not minimal. In other words, we might accidentally cut along an essential torus in the interior of a Seifert-fibred piece. Therefore, at the end we need to check whether the Seifert-fibred structures of any pieces on either side of each torus are compatible – if they are, we should remove this torus and glue the pieces on either side together.
Once this is done, to check isomorphism is mostly just a matter of checking that the decomposition is the same and that the individual pieces are isomorphic, each of which (appealing to geometrisation and Dahmani–Groves) we already know how to do.
As occurs very often in 3-manifold theory, manifolds finitely covered by torus bundles over the circle form an important exceptional case. Some of these bundles do not admit a Seifert-fibred structure and so have non-trivial torus decomposition, but they are better thought of as lattices in the Lie group Sol. It turns out that, with a few explicit exceptions, such an M actually is a torus bundle, and so is determined by the conjugacy class of its monodromy. This is enough to solve the isomorphism problem in this case.
This completes my take on the isomorphism problem for 3-manifold groups. I want to reiterate that there are many other ways to do this, that I’ve skated over various details, and that this solution is sub-optimal in various senses. Comments, corrections and clarifications welcome!
REFERENCES
[1] Casson, Andrew and Jungreis, Douglas. Convergence groups and Seifert fibered 3-manifolds. Invent. Math. 118 (1994), no. 3, 441–456.
[2] Dahmani, François. Finding relative hyperbolic structures. Bull. Lond. Math. Soc. 40 (2008), no. 3, 395–404.
[3] Dahmani, François and Groves, Daniel. The isomorphism problem for toral relatively hyperbolic groups. Publ. Math. Inst. Hautes Études Sci. No. 107 (2008), 211–290.
[4] Epstein, D. B. A. Periodic flows on three-manifolds. Ann. of Math. (2) 95 1972 66–82.
[5] Gabai, David. Convergence groups are Fuchsian groups. Ann. of Math. (2) 136 (1992), no. 3, 447–510.
[6] Jaco, William and Shalen, Peter B. A new decomposition theorem for irreducible sufficiently-large 3-manifolds. Algebraic and geometric topology (Proc. Sympos. Pure Math., Stanford Univ., Stanford, Calif., 1976), Part 2, pp. 71–84, Proc. Sympos. Pure Math., XXXII, Amer. Math. Soc., Providence, R.I., 1978.
[7] Jaco, William and Tollefson, Jeffrey L. Algorithms for the complete decomposition of a closed 3-manifold. Illinois J. Math. 39 (1995), no. 3, 358–406.
[8] Johannson, Klaus. Homotopy equivalences of 3-manifolds with boundaries.
Lecture Notes in Mathematics, 761. Springer, Berlin, 1979. ii+303 pp. ISBN: 3-540-09714-7
[9] Manning, Jason. Algorithmic detection and description of hyperbolic structures on closed 3-manifolds with solvable word problem. Geom. Topol. 6 (2002), 1–25
[10] Papasoglu, P. An algorithm detecting hyperbolicity. Geometric and computational perspectives on infinite groups (Minneapolis, MN and New Brunswick, NJ, 1994), 193–200, DIMACS Ser. Discrete Math. Theoret. Comput. Sci., 25, Amer. Math. Soc., Providence, RI, 1996.
[11] Sela, Z. The isomorphism problem for hyperbolic groups. I. Ann. of Math. (2) 141 (1995), no. 2, 217–283.
[12] Touikan, Nicholas W. M. Effective Grushko Decompositions, arXiv:0906.3902.
## 12 Comments »
1. Initially, the reference to Manning’s algorithm was missing. It has now been added.
Comment by — January 27, 2010 @ 12:39 pm
2. In Nathan Dunfields August 31, 2009 post on the Virtual Haken Conjecture he mentioned Conjecture 7, that all closed hyperbolic manifold fundamental groups are LERF. So are LERF groups recursively enumerable with solvable isomorphism problem, or do they suffer from the same problem as three manifold groups in general, that is that they are too well behaved to be recursive?
Comment by Mayer A. Landau — January 27, 2010 @ 1:49 pm
• LERF groups have solvable word problem. In fact all residually finite groups do, for a slightly silly reason: list all possible maps to finite groups; if your element is non-trivial, eventually you’ll find a map under which its image is non-trivial. So the same argument shows that LERF groups are not recursive.
This leaves open the (very faint) possibility that they could be recursively enumerable and/or have solvable isomorphism problem. Off the top of my head, I don’t know any classes of groups that are known NOT to be recursively enumerable. But if LERF groups were recursively enumerable that would suggest that they had some sort of structure theory, and it’s very hard to imagine what that would look like.
I would bet my house (if I had one) that the isomorphism problem is not solvable for LERF groups. However, I don’t think we have the technology to prove it, at this point. Constructions of LERF groups are still quite thin on the ground.
Comment by — January 27, 2010 @ 2:27 pm
3. In your section on RECURSIVE ENUMERABILITY you state that all 3-manifold fundamental groups are recursively enumerable due to Moise’s triangulation theorem. But, then you state that the recursive enumerability of hyperbolic 3-manifold fundamental groups is due to Manning. What did Manning have to prove if it already follows from Moise?
Comment by Mayer A. Landau — January 27, 2010 @ 5:51 pm
• It doesn’t follow from Moise. I think you’re assuming that any subclass of a recursively enumerable class is recursively enumerable – but this isn’t true. For instance, the class of all finitely presented groups is recursively enumerable, for silly reasons!
But there are plenty of classes of finitely presentable groups that are not recursively enumerable. In a comment above, I said that I couldn’t think of any, but I was overlooking the fact that the argument in the RECURSIVENESS section shows that the class of all ‘not 3-manifold’ groups is not recursively enumerable! Similarly, if P is any recursively enumerable Markov property then the class of groups with not P is not recursively enumerable.
Comment by — January 27, 2010 @ 6:01 pm
4. That’s kind of weird that a subclass of a recursively enumerable class does not need to be recursively enumerable. So from your definition, in the general case, the Turing machine does not know how to restrict the presentations it puts out in order to conform to some arbitrary subclass.
Comment by Mayer A. Landau — January 28, 2010 @ 12:53 am
• Exactly!
Comment by — January 28, 2010 @ 12:57 am
5. OK, but how about the isomorphism recognition problem. From your discussion it seems that that does respect subclasses. So if 3-manifold groups have an isomorphism recognition algorithm, as you show, then any subclass will as well. So LERF groups that are also 3-manifold groups have an isomorphism problem that is solvable. Is that a correct statement?
Comment by Mayer A. Landau — January 28, 2010 @ 2:22 am
• That is correct. But there are many LERF groups that are not 3-manifold groups.
Comment by — January 28, 2010 @ 11:39 am
6. When you say “A class of finitely presentable groups is recursive if both it and its complement are recursively enumerable”, I assume that you are taking complements in the set (or is it class?) of all finitely presentable groups. Is that correct?
Comment by Mayer A. Landau — January 28, 2010 @ 3:20 pm
• Yes.
Comment by — January 28, 2010 @ 3:32 pm
7. [...] by this post in blog Low-Dimensional Topology about low-dimensions and logic. Categories: Research Musings [...]
Pingback by — April 26, 2010 @ 9:33 am
RSS feed for comments on this post. TrackBack URI
Theme: Rubric. Blog at WordPress.com.
%d bloggers like this:
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 2, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9227153658866882, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/18119/logical-connection-of-newtons-third-law-to-the-first-two
|
# Logical connection of Newton's Third Law to the first two
The first law and second laws of motion are obviously connected. But it seems to me that the third law is not related to the first two, at least logically.
(In Kleppner's Mechanics the author states that the third law is a necessity to make sense of the second law. It didn't make sense to me, though. I'll post the excerpt if anyone would like to see it.)
Thanks, Ron
EDIT: Excerpt from Introduction to Mechanics by Kleppner & Kolenkow (1973), p. 60:
Suppose that an isolated body starts to accelerate in defiance of Newton's second law. What prevents us from explaining away the difficulty by attributing the acceleration to carelessness in isolating the system? If this option is open to us, Newton's second law becomes meaningless. We need an independent way of telling whether or not there is a physical interaction on a system. Newton's third law provides such a test. If the acceleration of a body is the result of an outside force, then somewhere in the universe there must be an equal and opposite force acting on another body. If we find such a force, the dilemma is resolved; the body was not completely isolated. [...]
Thus Newton's third law is not only a vitally important dynamical tool, but it is also an important logical element in making sense of the first two laws.
-
1
Hi Ron, and welcome to Physics Stack Exchange! I think it would help if you can quote the piece from Kleppner that you're talking about, as long as it's not too long. – David Zaslavsky♦ Dec 11 '11 at 3:21
Thanks! I had already edited the original post and added the excerpt I hope it isn't too long. – Ron Dec 11 '11 at 6:06
It is not clear to me, what's exactly your question. – student Dec 11 '11 at 9:52
There is a historical context which hasn't been address by other answers to date, and may play an important part in the seemingly redundant statement of the first and second laws. Previous to Newton, much though on the nature of motion in Europe could be traced back to Aristotle, who claimed that bodies in motion naturally tended to return to rest (nor is that stupid: try sliding a book on a table). The first law is a bald contradiction of this doctrine. – dmckee♦ Dec 11 '11 at 17:45
## 3 Answers
Newton's Fist and Second Law relate forces acting on a single system to conservation or changes of that system's momentum. It doesn't say anything about the nature of these forces, their origin; they could come "out of nowhere" and Laws 1 and 2 still hold.
The Third Law, however, indicates that all forces, or "actions", are just one side of an interaction. This view, that systems act on each other, by either attracting or repelling each other, still holds even in circumstances where other aspects of newtonian mechanics don't (i.e. relativistic or quantum mechanics). All forces observed since Newton's time until today are still modelized as being the results of the fundamental interactions.
One could say that the First Law describes the nature of momentum, the Thid Law the nature of forces, and the Second Law the link between the two. The First and Third Law thus provide the setting where the Second Law is stated.
-
Wow! Thanks! This post made a lot of sense. I agree. Newton, as you have posted, describes the nature of momentum and also, I think, in hindsight he gives a definition of a force, i.e. that which changes the momentum of the object. – Ron Dec 12 '11 at 11:08
Newton's third law of motion gives meaning to the first two laws by restricting what type of fundamental forces act between particles. This restriction gives meaning to the "force" described in the first two laws.
Taken by themselves (by which I mean without any reference to any explicit form for the fundamental forces acting between particles), the third law is what gives the first two laws any predictive power. This is what Mr. Kleppner is referring to when he says that Newton's third law is "an important logical element in making sense of the first two laws."
For example, suppose you are watching two balls float in outer space; ball A and ball B. You see ball A accelerate towards ball B. Using Newton's first law you know there is a force acting on ball A from ball B. Using Newton's second law you know that the force is along a vector connecting the two balls. Finally, using Newton's third law you can predict that ball B should also be accelerating towards ball A. And you can test that prediction. So, the third law is what gives the first two laws any predictive power.
I would argue that instead of Newton's third law it is the explicit form of the fundamental forces which "really" give the first two laws their meaning. In this case, Newton's third law of motion is just a restriction on what form those fundamental forces can take.
To illustrate this idea, suppose we want to use Newton's laws to do some science.
Newton's first law states:
A body in motion will stay in motion unless acted upon by an external force.
Since we don't know what a force is yet, it could be anything and this statement can be rephrased as:
A body in motion will stay in motion unless it doesn't.
You can see why this is not useful.
Newton's second law of motion states:
The acceleration of a body is parallel and proportional to the force exerted on the body and inversely proportional to the mass of the body.
Again, without a definition of force, this statement is useless.
Now, suppose we have a definition for a force, such as gravity.
$F_{gravity} = G \frac{m_{1} m_{2}}{r^2}$
Now, the first two laws have meaning. We can use our definition of force to predict the motion of a particle due to the gravitational attraction of some other body and then go out and test it!
Newton's third law of motion states:
Any force exerted by body A on body B implies an equal and opposite force on body B by body A.
Given a description of all the fundamental forces acting between particles, we don't need Newton's third law. Instead, Newton's third law is telling us how these forces act, namely symmetric with respect to both particles. You can see this reflected in the mathematical formula for the force of gravity; switching $m_1$ and $m_2$ you get the same force.
-
I think you make here two interesting points: Newton's second law is an incomplete law (it contains a definitional element, you still have to work out the explicit form the force takes in a particular circumstance) and Newton's third law poses some restrictions on the mathematical form of the force, since interactions between two particles are symmetric with respect to both. My question is: does Newton's third law incorporate somehow the classical principle of relativity? – quark1245 Dec 11 '11 at 9:32
Today, Newton's first law is often interpreted (or extended) to the statement that inertial reference systems exist. For example let me cite from Jose,Saletan:"Classical Dynamics A contemporary Approach"
There exist certain frames, called inertial, with the following two properties.
Property A) Every isolated particle moves in a straight line in such a frame
Property B) If the notion of time is quantified by defining the unit of time so that one >particular isolated particle moves at constant velocity in this frame, then every other >isolated particle moves at constant velocity in this frame. "
Otherwise the first law would be a trivial consequence of the second one...
Concerning the connection between the second and the third law, note that one has to define the word "mass". This is sometimes done via the third law. If you do so, you need the third law just to understand the variables that occur in the second law.
Note that there where many many critiques in the history of physics concerning the locigal status of Newton's laws and there were many attempts to make it logically clearer.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.953708291053772, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/1877/uniform-semi-continuity
|
# Uniform semi-continuity
## Background
It is a standard and important fact in basic calculus/real analysis that a continuous function on a compact metric space is in fact uniformly continuous. That is, suppose $(X,d)$ is a compact metric space and $f\colon X \to\mathbb R$ is such that for every $x\in X$ and $\varepsilon>0$ there exists $\delta>0$ such that $d(x,y)<\delta$ implies $|f(x)-f(y)|<\varepsilon$. Then in fact, such a $\delta$ can be chosen independently of \$x^.
## Question
Does a similar statement hold regarding semi-continuous functions? For concreteness, let's consider upper semi-continuous functions, so suppose $(X,d)$ is compact and $f\colon X \to\mathbb R$ has the property that for every $x\in X$ and $\varepsilon >0$ there exists $\delta >0$ such that $d(x,y)<\delta$ implies $f(y) < f(x)+\varepsilon$. (Note the asymmetry of $x$ and $y$ in this definition.) Then is it true that $\delta=\delta(\varepsilon)$ can be chosen independently of $x$?
## Reformulation
Given $\delta, \epsilon > 0$, consider the set $$X_\delta^\epsilon := \lbrace x\in X \mid f(y) < f(x) + \epsilon \text{ for every } y\in B(x,\delta) \rbrace.$$ Then $f$ is upper semi-continuous if and only if $\displaystyle\bigcup_{\delta>0} X_\delta^\epsilon = X$ for every $\epsilon > 0$, and $f$ is uniformly upper semi-continuous if and only if this union stabilises -- that is, if for every $\epsilon > 0$ there exists $\delta>0$ such that $X_\delta^\epsilon = X$.
-
## 2 Answers
$f(x)=0 \ (x\le0)$, $f(x)=-1/x \ (x\gt0)$ is upper semi-continuous on $[-1;1]$ — but not uniformly.
-
Of course... thanks for the example. Do you know what happens if we require the function f to be bounded below? (In the context that motivated the question, I'm considering such functions.) – Vaughn Climenhaga Aug 9 '10 at 14:09
Turns out boundedness doesn't help (see my answer). In retrospect this all seems quite obvious... maybe this is why I shouldn't post questions in the wee hours of the morning. – Vaughn Climenhaga Aug 9 '10 at 15:10
A little further thought reveals the following: uniform semi-continuity implies uniform continuity. Thus the answer to my question is a resounding "no", since any function that is upper semi-continuous but not continuous cannot be uniformly upper semi-continuous.
Proof. Let $f$ be uniformly upper semi-continuous. Then for every $\epsilon>0$ there exists $\delta>0$ such that for every $x\in X$, we have $f(y) < f(x) + \epsilon$ whenever $y\in B(x,\delta)$. However, since this statement holds for every $x$, it also holds with $x$ and $y$ reversed; in the language of the original post, both $x$ and $y$ are contained in the set $X_\delta^\epsilon = X$. Since $y$ is in this set and $x\in B(y,\delta)$, we also have $f(x) < f(y) + \epsilon$, and thus $|f(x) - f(y)| < \epsilon$. But this is just the definition of uniform continuity.
-
ah! (I should have thought about this instead of constructing artificial counter-examples) – Grigory M Aug 9 '10 at 15:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9432177543640137, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/128765/number-of-vectors-in-the-decomposition-of-vector?answertab=votes
|
# number of vectors in the decomposition of vector
Let $x \in R^n$ be vector with entries $x_1\geq\cdots \geq x_n$.
I would like to decompose this vector into vectors $x^1=(x_1,\ldots,x_m,0,\ldots0), x^2=(0,\ldots,0,x_{m+1},\ldots,x_1,0,\ldots,0)$..., where $2x_m>{x_1}$; $2x_{m+1}<{x_1}$; $x_{m+1}<2x_l;x_{l+1}>2x_{l+1}$ and so on.
How many of those vectors $x^i$? (My filing it should be $log$ terms, but I would like to see the proof of it. Any good sours will be also helpful).
Thank you.
-
## 1 Answer
It will depend on $x$; you may end up using anywhere from $1$ to $n$ vectors. For example, if $x = (4,4,4,3,3,3,3)$ then you will only have one vector in the decomposition. But if $x = (10000,4000,1000,400,100,40,10)$ then you will need $n = 7$ vectors.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9292862415313721, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/117615/sampling-distributions-probability
|
# sampling distributions / Probability
Suppose life expectancy is normally distributed with mean 60 and variance 9. (Actually this must be an approximation but assume it is exact, just for simplicity.)
(a) For a randomly selected person, what is the probability of a life span greater than 62 years?
(b) For a group of 4 randomly selected people, what is the probability of an average life span greater than 62 years?
(c) For a group of 16 randomly selected people, what is the probability of an average life span greater than 62 years?
-
What have you tried? – Henry Mar 7 '12 at 18:45
## 1 Answer
Hint: If $X$ has normal distribution with mean $\mu$ and variance $\sigma^2$, and $\bar X_n$ is the mean of a random sample of size $n$ from $X$, then $\bar X_n$ has normal distribution with mean $\mu$ and variance $\sigma^2/n$. So, for example, in part b), if $\bar X_4$ is the average lifespan of 4 randomly chosen people, then $\bar X_4$ is normally distributed with mean $60$ and variance $9/4$.
To calculate probabilities for a normal variable, convert to the standard normal: If $X$ has normal distribution with mean $\mu$ and variance $\sigma^2$, then $$P[X\ge a]= P\Bigl[ Z\ge {a-\mu\over \sigma}\Bigr],$$ where $Z$ is the standard normal variable. Values of $P[Z\ge a]=1-P[Z\le a]$ can be found from tables, such as those found here.
-
should I use σ^2/n or σ/n^(1/2)? for b) if i use σ^2/n, its 9/4, if I use σ/n^(1/2), the answer is 3/2 – Forest Mar 7 '12 at 19:29
@Forest In the "conversion to the standard normal" formula, you divide by the standard deviation $\sigma$; so by $\sqrt{9/4}=3/2$ for b). – David Mitra Mar 7 '12 at 19:49
thanks for the help. Able to solve the problem now :) – Forest Mar 7 '12 at 19:59
@Forest You're welcome. Glad to help :) – David Mitra Mar 7 '12 at 20:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8906921148300171, "perplexity_flag": "head"}
|
http://unapologetic.wordpress.com/2010/10/15/
|
# The Unapologetic Mathematician
## Class Functions
Our first observation about characters takes our work from last time and spins it in a new direction.
Let’s say $g$ and $h$ are conjugate elements of the group $G$. That is, there is some $k\in G$ so that $h=kgk^{-1}$. I say that for any $G$-module $V$ with character $\chi$, the character takes the same value on both $g$ and $h$. Indeed, we find that
$\displaystyle\begin{aligned}\chi(h)&=\chi\left(kgk^{-1}\right)\\&=\mathrm{Tr}\left(\rho\left(kgk^{-1}\right)\right)\\&=\mathrm{Tr}\left(\rho(k)\rho(g)\rho(k)^{-1}\right)\\&=\mathrm{Tr}\left(\rho(g)\right)\\&=\chi(g)\end{aligned}$
We see that $\chi$ is not so much a function on the group $G$ as it is a function on the set of conjugacy classes $K\subseteq G$, since it takes the same value for any two elements in the same conjugacy class. We call such a complex-valued function on a group a “class function”. Clearly they form a vector space, and this vector space comes with a very nice basis: given a conjugacy class $K$ we define $f_K:G\to\mathbb{C}$ to be the function that takes the value $1$ for every element of $K$ and the value $0$ otherwise. Any class function is a linear combination of these $f_K$, and so we conclude that the dimension of the space of class functions in $G$ is equal to the number of conjugacy classes in $G$.
The space of class functions also has a nice inner product. Of course, we could just declare the basis $\{f_K\}$ to be orthonormal, but that’s not quite what we’re going to do. Instead, we’ll define
$\displaystyle\langle\chi,\psi\rangle=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}\overline{\chi(g)}\psi(g)$
The basis $\{f_K\}$ isn’t orthonormal, but it is orthogonal. However, we can compute:
$\displaystyle\begin{aligned}\langle f_K,f_K\rangle&=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}\overline{f_K(g)}f_K(g)\\&=\frac{1}{\lvert G\rvert}\sum\limits_{k\in K}\overline{f_K(k)}f_K(k)\\&=\frac{1}{\lvert G\rvert}\sum\limits_{k\in K}1\\&=\frac{\lvert K\rvert}{\lvert G\rvert}\end{aligned}$
Incidentally, this is the reciprocal of the size of the centralizer $Z_k$ of any $k\in K$. Thus if we pick a $k$ in each $K$ we can write down the orthonormal basis $\{\sqrt{\lvert Z_k\rvert}f_K\}$.
## About this weblog
This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”).
I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 31, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8980759382247925, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/117539/finding-all-prime-numbers-x-for-which-24x1-is-a-perfect-square
|
# Finding all prime numbers $x$ for which $24x+1$ is a perfect square
Clearly, a prime fits the criteria if the result of $\sqrt{24x+1}$ is an integer. By trial and error, I have found that seemingly the only primes to fit this criteria are 2, 5 and 7. How would I go about proving that they are the only ones (or, alternatively, that $x$ must be below a certain value and the only primes below this value that fit the criteria are 2, 5 and 7)?
I've gotten as far as stating that for some integer $a$:
$24x + 1 = a^2$
Then, I rearranged this to give:
$24x = a^2 - 1\\ 24x = (a+1)(a-1)$
I'm not quite sure where to go from here in order to complete the proof that $x$ cannot be above a certain value. Any help would be much appreciated! I'd prefer hints on where to go next rather than full solutions since I'd much rather reach the full solution myself.
-
$x$ is a prime and divides $(a+1)(a-1)$ by your last equality. Therefore ... HTH, AB, – martini Mar 7 '12 at 14:38
1
@martin HTH, AB? – Graphth Mar 7 '12 at 14:39
@Graphth: For HTH see en.wikipedia.org/wiki/HTH, AB is short for "Allzeit bereit" which is a German variant of "Be prepared" (en.wikipedia.org/wiki/Scout_Motto), AB, martini. – martini Mar 7 '12 at 14:44
You are very much on the right track. You might want to suppose that $x >7$ and try to derive a contradiction. You might also use the fact that $x$ can't divide both $a+1$ and $a-1$ if $x >7.$ – Geoff Robinson Mar 7 '12 at 14:45
For an approach with lots of writing and little thinking, try just writing out all the possible factorizations of $24x$ into two integers (remember $x$ is prime, so there aren't many) and see what happens if you let one factor be $a+1$ and the other be $a-1$. – Chris Eagle Mar 7 '12 at 16:25
## 1 Answer
Your approach leads to a solution. As a continuing reminder that $x$ is supposed to be prime, let's call it $p$. We have $$24p=(a-1)(a+1).$$ Since $p$ divides the left-hand side, $p$ divides the right-hand side. It follows that (i) $p$ divides $a-1$ or (ii) $p$ divides $a+1$. (We are not excluding the possibility that $p$ divides both.)
Case (i): Suppose that $p$ divides $a-1$. Then $a-1=pk$ for some integer $k$. It follows that $a+1=pk+2$, and therefore $$24p=(pk)(pk+2).$$ By cancellation, we conclude that $$24=k(pk+2).$$ Now case (i) is in principle finished. We must have $pk+2 \le 24$, so $pk \le 22$. In particular, $p \le 19$, so it is a question of checking a small number of possibilities. The checking can be done efficiently, or not so efficiently.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9373483657836914, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/222460/cantor-set-proofs
|
# Cantor Set proofs
Let $S_0=[0,1]$ and let $S_k$ be defined in the following manner for $k\geq 1$: \begin{align*} S_1&=S_0-\left(\frac{1}{3},\frac{2}{3}\right)=\left[0,\frac{1}{3}\right]\cup\left[\frac{2}{3}, 1\right],\\ S_2&=S_1-\left\{\left(\frac{1}{9}, \frac{2}{9}\right)\cup \left(\frac{7}{9}, \frac{8}{9}\right)\right\}=\left[0,\frac{1}{9}\right]\cup\left[\frac{2}{9}, \frac{3}{9}\right]\cup\left[\frac{6}{9}, \frac{7}{9}\right]\cup\left[\frac{8}{9},1\right],\\ S_3&=S_2-\left\{ \left(\frac{1}{27}, \frac{2}{27}\right)\cup \left(\frac{7}{27}, \frac{8}{27}\right)\cup \left(\frac{19}{27}, \frac{20}{27}\right)\cup \left(\frac{25}{27}, \frac{26}{27}\right) \right\}\\ &=\left[0, \frac{1}{27}\right]\cup\left[ \frac{2}{27}, \frac{3}{27}\right]\cup\left[ \frac{6}{27}, \frac{7}{27}\right]\cup\left[ \frac{8}{27}, \frac{9}{27}\right]\cup\left[ \frac{18}{27}, \frac{19}{27}\right]\cup\left[ \frac{20}{27}, \frac{21}{27}\right]\cup\left[ \frac{24}{25}, \frac{25}{27}\right]\cup\left[ \frac{26}{27}, 1\right]\\ \vdots \end{align*} Then put $C=\bigcap_{k=0}^\infty S_k$. This set is known as the Cantor set. a.) Prove that $C$ is nonempty. b.) Prove that $C$ is compact. c.) Prove that $C$ is not an open set.
My knowledge of a.) is that Cantor's intersection theorem is closely related to the Heine-Borel theorem and Bolzano-Weierstrass theorem, each of which can be easily derived from either of the other two. Hence, some how we these can be used to show that the Cantor set is nonempty. However, I can't figure out how to set this proof up.
For b.) I need to show that $S$ is totally bounded, then I could use Heine–Borel theorem to say it is compact. Once again, I need a little help setting it up.
Does this logic work for c.)? Now suppose that there is an open set $U$ contained in $S$. Then there must be an open interval $(a, b)$ contained in $S$. Now pick an integer $N$ such that $1 / 3 N < b - a$. Then the interval $(a, b)$ can not be contained in the set $AN$, because that set is comprised of intervals of length $1 / 3N$. But if that interval is not contained in $AN$ it can not be contained in $S$. Hence, no open set can be contained in the Cantor set $S$.
-
Self work, ideas, insights...??? – DonAntonio Oct 28 '12 at 4:32
When I wrote the above I meant actual self work, not to write down the definition of the Cantor set... – DonAntonio Oct 28 '12 at 4:43
What properties of compact sets do you know? – Arthur Fischer Oct 28 '12 at 4:46
In your idea for solving (c) I expect you mean $1 / 3^N$ and not $1 / 3N$. – Arthur Fischer Oct 28 '12 at 5:16
## 2 Answers
a) There are two approaches. First you can explicitly show that for example $\frac{1}{3}$ is an element of every $S_k$ and hence is contained in the Cantor set.
If you want a more theoretical argument, then we would apply the Cantor intersection theorem as you said. Note that $$S_0 \supseteq S_1 \supseteq S_2 \supseteq \cdots$$ Each $S_k$ is a compact set (why?) and use these facts to apply the intersection theorem.
b) Note that $C \subseteq S_0$ which is bounded. You also need to prove that the set is closed.
c) $\mathbb{R}$ is a connected space and so the only sets which are clopen are $\mathbb{R}$ itself and $\emptyset$. From a we know the set is non-empty. From b we know the set is compact (and hence closed). What can you conclude?
-
a) You can check that $0$ is in each $S_n$, which means $0 \in S$, so $S$ is nonempty.
b) Since $S_1$ is closed, then by induction we can show that each $S_n$ is closed. Also, we know it is bounded since it's in $[0,1]$. Hence, you have it is compact in $\mathbb{R}$.
c) Check that $1/3$ is in $S$, but every open interval around $1/3$ contains something in the interval $(1/3, 2/3)$ which is outside of $S$. Hence, $1/3$ is in $S$ but is not an interior point of $S$, so $S$ is not open. Indeed, $S$ has empty interior.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 47, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8902859091758728, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/24978/nasty-examples-for-different-classes-of-functions/25017
|
# Nasty examples for different classes of functions
Let $f: \mathbb{R} \to \mathbb{R}$ be a function. Usually when proving a theorem where $f$ is assumed to be continuous, differentiable, $C^1$ or smooth, it is enough to draw intuition by assuming that $f$ is piecewise smooth (something that one could perhaps draw on a paper without lifting your pencil). What I'm saying is that in all these cases my mental picture is about the same. This works most of the time, but sometimes it of course doesn't.
Hence I would like to ask for examples of continuous, differentiable and $C^1$ functions, which would highlight the differences between the different classes. I'm especially interested in how nasty differentiable functions can be compared to continuously differentiable ones. Also if it is the case that the one dimensional case happens to be uninteresting, feel free to expand your answer to functions $\mathbb{R}^n \to \mathbb{R}^m$. The optimal answer would also list some general minimal 'sanity-checks' for different classes of functions, which a proof of a theorem concerning a particular class would have to take into account.
-
what about Dirac delta function? – Dilawar Mar 4 '11 at 10:48
@Dilawar: $\delta$ is neither continuous, differentiable, $C^1$, nor smooth. In fact, it isn't even a function. – Willie Wong♦ Mar 4 '11 at 10:54
@Willie: It's not a Real function, it is still a function from the real numbers into the extended real line. – Asaf Karagila Mar 4 '11 at 14:51
2
@Asaf: No, the Dirac delta can't be properly understood in that way. If you think of $\delta$ as a function $\delta : \mathbb{R} \to [-\infty, +\infty]$ with $\delta(0) = +\infty$, $\delta(x) = 0$ otherwise, how can you account for the fact that $\delta \ne 2\delta$? One really does need distribution theory to obtain all the desired properties of this object. – Nate Eldredge Mar 4 '11 at 15:15
It really depends on which theorem you want; for example, the absolute value is sufficient to show the necessity of differentiability in the Mean Value Theorem (or Rolle's Theorem), and it's not particularly nasty... – Arturo Magidin Mar 4 '11 at 16:30
show 1 more comment
## 3 Answers
The Wikipedia article http://en.wikipedia.org/wiki/Pompeiu_derivative gives one example of how bad a non-continuous derivative can be.
One can show that any set whose complement is a dense intersection of countably many open sets is the point of discontinuities for some derivative. In particular a derivative can be discontinuous almost everywhere and on a dense set.
See the book "Differentiation of Real Functions" by Andrew Bruckner for this and much more.
-
Thank you! I think this answer is sort of what I was looking for and since the question was somewhat vague (I was just interested in seeing what can be said about the subject), I have decided to accept this one as an answer. – J. J. Mar 7 '11 at 17:45
Although this is maybe not a good example of very "nasty" functions, you could look at $f_i = x^i \sin(1/x)$ for $i=0,1,2,3$ in order to see the distinction between those classes of functions. If you set $f_i(0)=0$ for all $i$, then $f_0$ is not continuous in $0$, $f_1$ is continuous but not differentiable in $0$, $f_2$ is differentiable but not $C^1$ and $f_3$ is $C^1$.
-
This is indeed a valid example, but not particularly nasty. :) (I agree that the term 'nasty' is not well-defined.) I'm wondering for example if those points where the function's derivative fails to be continuous can have an accumulation point. EDIT: Actually that's probably true. Can those points be dense? – J. J. Mar 4 '11 at 9:55
derivatives have the intermediate value property (even if they are discontinuous). – yoyo Mar 4 '11 at 16:38
Another example of what can go wrong is Volterra's function.
• It is differentiable everywhere.
• Its derivative is bounded everywhere.
• Its derivative is not Riemann-integrable.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9538196921348572, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/74069/polynomials-in-graphs
|
## Polynomials in graphs
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
I have part of a physical simulation which I've realised can be modelled using a directed graph where each node is a polynomial. I then calculate this graph by functional composition and summing to calculate a "function flow" as shown in the example below.
e.g. three polynomials $f(x)$, $g(x)$, $h(x)$ are connected as follows: $f$ -> $g$ -> $h$
We start at $f(x)$ because it only has an outgoing connection. Because $f(x)$ has an outgoing edge to $g(x)$ we combine them to make $g(f(x))$. Because $g(x)$ has an outgoing edge to $h(x)$ we combine them to make $h(g(f(x)))$. Because $h(x)$ has no outgoing edges we stop.
If there were multiple incoming edges to a node, the incoming polynomials would be summed before the composition. Also loops are allowed as the compositions will converge in my case.
The question is, is this something that has already been studied somewhere? I imagine there are a lot of physical processes like this. It also looks very Category-like.
-
Interesting process, mind stating what you're simulating? – jc Aug 30 2011 at 16:00
1
Yea, sure. It's a electricity supply chain. So we have a transformer powering a distribution system powering some heavy equipment, for example. The functions are related to losses, which happen to be close enough to polynomial for the types of devices we're looking at. So we want to say things like if I draw x at my equipment end, what should I expect to be drawn at the power socket. – Kevin Aug 30 2011 at 17:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9578868746757507, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/288492/splitting-field-of-xpe-1-over-mathbb-z-p
|
# Splitting field of $x^{{p}^e}-1$ over $\mathbb Z_p$
I'd like a hint for determining the splitting field of $x^{{p}^e}-1$ over the integers mod $p$, $\mathbb Z_p$, where $e$ is an arbitrary natural number. Thanks.
-
1
– achille hui Jan 28 at 0:47
@achillehui that's right, I think the answer is Zmodp, I mean the splitting field in this case is $Z_p$ itself – User112358 Jan 28 at 1:43
## 1 Answer
Approach 1: Well, there is one obvious root. What is it? What is its multiplicity? Work out the easy stuff, then go from there.
Approach 2: You're asking to solve the equation $x^{p^e} - 1 = 0$....
-
Is it $Z_p$ the answer? – User112358 Jan 28 at 1:44
Yep. If you don't guess the factorization, you can easily count the multiplicity of the root $1$ by seeing how many times $x-1$ divides the derivative. Or you can recognize that $1$ is the only $p$-th root of unity in a field of characteristic $p$. – Hurkyl Jan 28 at 3:11
Thank you, I found the factorization indeed, it follows directly from the "freshman's dream". – User112358 Jan 28 at 3:53
1
@user59898 $x^{p^e} -1 = (x -1)^{p^e}$ and so the splitting field is just $\Bbb{Z}/p\Bbb{Z}$ itself. – BenjaLim Jan 28 at 10:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9086429476737976, "perplexity_flag": "head"}
|
http://mathematica.stackexchange.com/questions/20707/formatting-equation-output-neatly
|
# Formatting Equation Output Neatly
I looked around and couldn't find the answer to this anywhere, so I'm sorry if this is a bad question - I'm pretty new to mathematica. I wrote a program to help me compute some annoying series expansions, and the output is pretty ugly, for example:
$\epsilon ^2 \left(\left(a_{5,1} a_{6,1}+a_{1,1} \left(a_{2,1}+a_{6,1}\right)\right) X_{1,2}+\left(a_{5,1} \left(a_{9,1}+a_{10,1}\right)+a_{1,1} \left(a_{3,1}+a_{9,1}+a_{10,1}\right)\right) X_{1,3}+\left(a_{5,1} \left(a_{7,1}+a_{8,1}\right)+a_{1,1} \left(a_{4,1}+a_{7,1}+a_{8,1}\right)\right) X_{1,4}+\left(a_{6,1} \left(a_{9,1}+a_{10,1}\right)+a_{2,1} \left(a_{3,1}+a_{9,1}+a_{10,1}\right)\right) X_{2,3}+\left(a_{6,1} \left(a_{7,1}+a_{8,1}\right)+a_{2,1} \left(a_{4,1}+a_{7,1}+a_{8,1}\right)\right) X_{2,4}+a_{3,1} \left(a_{4,1}+a_{7,1}+a_{8,1}\right) X_{3,4}\right)+\epsilon \left(X_3 \epsilon a_{3,2}+X_4 \epsilon a_{4,2}+X_1 \left(\epsilon a_{1,2}+\epsilon a_{5,2}+a_{1,1}+a_{5,1}\right)+X_2 \left(\epsilon a_{2,2}+\epsilon a_{6,2}+a_{2,1}+a_{6,1}\right)+X_4 \epsilon a_{7,2}+X_4 \epsilon a_{8,2}+X_3 \epsilon a_{9,2}+X_3 \epsilon a_{10,2}+X_3 a_{3,1}+X_4 a_{4,1}+X_4 a_{7,1}+X_4 a_{8,1}+X_3 a_{9,1}+X_3 a_{10,1}\right)$
I'm wondering if there's a way to have it output this in a more readable way, for example, something like:
$X_{1}*(coefficients)\\ X_{2}*(coefficients)\\ ...\\ X_{1,2}*(coefficents)\\ etc...$
Is there a way to do so? Thanks!
-
1
Please include the Mathematica code that produced your output for the convenience of those who wish to help. – Mr.Wizard♦ Mar 5 at 22:17
## 1 Answer
One simple way would be something like this:
```` Column@Apply[List,
Collect[Sum[Expand[(a - b)^i] X[i, j], {i, 3}, {j, 3}], _X] /.
y_*z_X :> Row[{Style[z, Bold, Red, 22] , "*(", y, ")"}]]
````
-
So in this case, X[1,1] is my $X_{1}$? And I'm slightly confused how to modify this code to match my case - I'm not sure what would go in for "List", for example, as I have no list. – laplacian13 Mar 5 at 22:38
So what do you have? Subscript? Please enter a Mathematica expression into the question. Please also read a bit about Mathematica, FullForm, Apply, Plus, List, etc. Or follow one of the online tutorials or go to a training or some such. – Rolf Mertig Mar 5 at 22:54
Yes, I'm just using subscripts. The subscripts do not correspond to a matrix or anything (I see now where that confusion may have come from), but rather each $X_{i}$ is a vector field, and each $X_{i,j}$ is a Lie bracket of two vector fields. I've done some tutorials and read about Mathematica, I just never found what I was looking for. – laplacian13 Mar 5 at 22:58
Just post your program to do the series expansions. Tthen people will help you. – Rolf Mertig Mar 6 at 8:35
lang-mma
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.923197865486145, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/279919/function-for-unique-hash-code?answertab=votes
|
# Function for unique hash code
I am interested in finding $F(x,y)$, such that $x$ and $y$ $\in \mathbb Z^+$ and $F(x,y)$ is one to one function i.e., $F(x,y)$ is unique for any unique unordered pairs of $x$ and $y$.
Regards,
Apologies for noobish language I am new to branch of number theory.
-
By writing "unordered pairs," you are asking that $F(x,y)=F(y,x)$, is that right? – Gerry Myerson Jan 16 at 8:55
Yes, precisely the order is immaterial . – user58460 Jan 16 at 8:58
## 1 Answer
$F(x,y)=(1+\max(x,y))^2-|x-y|$ will do.
-
Thanks, seems to be working well. How did you derive at this? Any specific property ? – user58460 Jan 16 at 9:09
I figured the easiest way was to get some function such that $F(x,x)$ increased pretty quickly with $x$ --- quadratically would do --- and then subtract something off it as you went away from $(x,x)$, subtracting more the farther down from or to the left of $(x,x)$ you went. That's what this function does. Along the $(x,x)$ line, it goes $1,4,9,16,\dots$, and it drops by $1$ for each step you take away from $(x,x)$. – Gerry Myerson Jan 16 at 10:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.947220504283905, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/280617/unintelligible-statement-in-a-text-of-logic?answertab=active
|
# Unintelligible statement in a text of Logic
re-reading some of the Shoenfield's book, "Mathematical logic", I noticed a statement incomprehensible. The offending statement is on page 89, where it says:
With the result mentioned in (iv), this give a new proof of completness of ACF.
If, kindly, someone in possession of the text could take a look (pages 86-89), should detect the strangeness that I mentioned. For this I am curious to hear independent opinions.
(Note: just look at the pages I mentioned taking the definitions and results presented, i.e. there is no need to go into details of the demonstrations.)
-
1
In both the statement on p. 89 and (iv) on p. 88 you should interpret $ACF$ as each $ACF(n)$. – Brian M. Scott Jan 17 at 18:35
thank you very much for confirming my doubt and clarification of the way in which the expression of the text to be read (and should have been written - I think it is a mistake of writing the text, ACF is one thing, ACF(n) is another one). That could not be ACF was clear, and I had thought of ACF(n), but the fact that there was ACF in (iv) and after the theorem, gave me serious doubts (a double misprint seemed unlikely). So thank you again. Mr Blass has obviously assimilated ACF and ACF(n), and therefore did not detect the underlying problem to my question: it is impossible completeness of ACF. – Bento Jan 17 at 19:30
## 1 Answer
I see nothing strange here. The result [about ACF] mentioned in (iv) [on page 88] is that ACF is categorical in all uncountable powers but not in $\aleph_0$. Then on page 89, the Los-Vaught theorem says that categoricity in even a single infinite power implies completeness provided the theory is consistent and has only infinite models. Since ACF is certainly consistent (the complex numbers form a model) and has only infinite models (well known from algebra), this gives a proof of completeness of ACF. It's a new proof in the sense that it's different from the proof by quantifier elimination already given on page 86.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9588415622711182, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/calculus/83798-differentiation-problem.html
|
# Thread:
1. ## Differentiation Problem
Hi
A rectangle PQRS is placed inside the scalene triangle ABC as shown. If the area of triangle ABC is constant, prove that the maximum area of the rectangle is one-half the area of triangle ABC.
My friends and I have been trying to solve this for quite a while but nothing we do seems to get us near answering the question.
2. Originally Posted by xwrathbringerx
Hi
A rectangle PQRS is placed inside the scalene triangle ABC as shown. If the area of triangle ABC is constant, prove that the maximum area of the rectangle is one-half the area of triangle ABC.
My friends and I have been trying to solve this for quite a while but nothing we do seems to get us near answering the question.
I used no calculus in my answer!
Assume that the rectangle inscribed has the largest area. Now, let us try to show that $A=\tfrac{1}{2}A_0$ (where I let $A_0$ be the area of the scalene triangle)
If you drop down a vertical line from the top of the triangle to its base, wrt the rectangle, you will create similar triangles all around it.
Consider the two triangles on the right side of the vertical line. Each of the smaller triangles have a height of $y$ (which is what I called the height of the rectangle) Since there are two of these triangles, let us assume that the vertical line has a length of $h=y+y=2y$.
Now, let us consider the whole scalene triangle and the similar scalene produced above the rectangle. The base of the smaller triangle is $x$ (what I called the length of the rectangle). By similar triangles $\frac{y}{2y}=\frac{x}{b}\implies b=2x$ (where b is the base of the original scalene triangle.)
Thus, the area of the original scalene triangle is $A=\tfrac{1}{2}bh=\tfrac{1}{2}(2x)(2y)=2xy$. Since we're told that the area is constant, we can then say that $A_0=2xy$
Now, since I let $x$ be the length of the rectangle and $y$ be the width of the rectangle. Thus, its area is $A=lw=xy$. But we saw that $A_0=2xy\implies xy=\tfrac{1}{2}A_0$. Therefore, the area of the rectangle is $A=\tfrac{1}{2}A_0$
-------------------------------------------------------------------------
After I finished typing this, I realized how to use calculus...but be prepared, for it will get messy... ><
Now, using calculus:
Like we did in the geometry approach, drop down a vertical line from the apex to the base. Let the base and the vertical line be perpendicular, and let the intersection of the base and vertical line represent the origin of the Cartesian system. See the figure below.
Now, let the vertical line have a height of $h$. Since we have created two triangles, let the triangle to the right of the vertical line have a hypotenuse of length $L_1$, and let the triangle to the left of the vertical line have a hypotenuse of length $L_2$. Let the base of the rectangle to the right of the vertical line have a length of $x_1$ and the base of the rectangle to the left of the vertical line have a length of $x_2$. Now, to find the bases of these triangles, apply the Pythagorean theorem. Thus, the base of the triangle to the right of the vertical line will have a length of $a^2+h^2=L_1^2\implies a=\sqrt{L_1^2-h^2}$, and the base of the triangle to the left of the vertical line will have a base of length $b^2+h^2=L_2^2\implies b=\sqrt{L_2^2-h^2}$.
Now, let us create equations of line segments to represent the height of the rectangle ( $y$) as it varies along $L_1$ and $L_2$.
Focusing on the segment with length $L_1$:
Let us assume that the y-intercept of the line is $(0,h)$. Note that when $x_1=\sqrt{L_1^2-h^2}$, the segment intersects the base ( $y=0$). So we can conclude that the slope of the line segment is $m_1=\frac{0-h}{\sqrt{L_1^2-h^2}-0}=-\frac{h}{\sqrt{L_1^2-h^2}}$. Thus the equation of the line that represents $y$ as $x_1$ varies with respect to $L_1$ is $y=-\frac{h}{\sqrt{L_1^2-h^2}}x_1+h$.
Focusing on the segment with length $L_2$:
Let us assume that the y-intercept of the line is $(0,h)$. Note that when $x_2=\sqrt{L_2^2-h^2}$, the segment intersects the base ( $y=0$). [Also keep in mind that $x_2$ is in the region of $x<0$ in the cartesian system. Since I want to make $x_2>0$, when I find the equation of the line, although the slope is positive, $-x_2$ maintains the negativity.] So we can conclude that the slope of the line segment is $m_2=\frac{0-h}{-\sqrt{L_2^2-h^2}-0}=\frac{h}{\sqrt{L_2^2-h^2}}$. Thus the equation of the line that represents $y$ as $x_2$ varies with respect to $L_2$ is $y=-\frac{h}{\sqrt{L_2^2-h^2}}x_2+h$.
Now that we have all this information, we can set up the area equations.
Let the area of the rectangle to the right of the vertical line be represented by $A_1$. Thus, $A_1(x_1)= x_1y=-\frac{h}{\sqrt{L_1^2-h^2}}x_1^2+hx_1$. Now, we maximize the Area. We get $A_1^{\prime}\!\left(x_1\right)=-\frac{2h}{\sqrt{L_1^2-h^2}}x_1+h$. To maximize, we must have $-\frac{2h}{\sqrt{L_1^2-h^2}}x_1+h=0\implies \frac{2h}{\sqrt{L_1^2-h^2}}x_1=h\implies x_1=\tfrac{1}{2}\sqrt{L_1^2-h^2}$
Since this is the maximum value of the base, then the maximum height must be $y=-\frac{h}{\sqrt{L_1^2-h^2}}\left(\tfrac{1}{2}\sqrt{L_1^2-h^2}\right)+h=-\tfrac{1}{2}h+h=\tfrac{1}{2}h$
Similarly, let the area of the rectangle to the left of the vertical line be represented by $A_2$. Thus, $A_2(x_2)= x_2y=-\frac{h}{\sqrt{L_2^2-h^2}}x_2^2+hx_2$. Now, we maximize the Area. We get $A_2^{\prime}\!\left(x_2\right)=-\frac{2h}{\sqrt{L_2^2-h^2}}x_2+h$. To maximize, we must have $-\frac{2h}{\sqrt{L_2^2-h^2}}x_2+h=0\implies \frac{2h}{\sqrt{L_2^2-h^2}}x_2=h\implies x_2=\tfrac{1}{2}\sqrt{L_2^2-h^2}$.
Since this is the maximum value of the base, then the maximum height must be $y=-\frac{h}{\sqrt{L_2^2-h^2}}\left(\tfrac{1}{2}\sqrt{L_2^2-h^2}\right)+h=-\tfrac{1}{2}h+h=\tfrac{1}{2}h$
Therefore, the total area of the rectangle is $A=A_1+A_2=\tfrac{1}{2}\sqrt{L_1^2-h^2}\cdot\tfrac{1}{2}h+\tfrac{1}{2}\sqrt{L_2^2-h^2}\cdot\tfrac{1}{2}h=\tfrac{1}{4}h\left(\sqrt{L_ 1^2-h^2}+\sqrt{L_2^2-h^2}\right)$.
But, the height of the triangle is $h$ and the length of the base is $\sqrt{L_1^2-h^2}+\sqrt{L_2^2+h^2}$. Since the area is constant, we see that $A_0=\tfrac{1}{2}bh=\tfrac{1}{2}h\left(\sqrt{L_1^2-h^2}+\sqrt{L_2^2-h^2}\right)$.
Therefore, the maximum area of our rectangle $A=\tfrac{1}{4}h\left(\sqrt{L_1^2-h^2}+\sqrt{L_2^2-h^2}\right)=\tfrac{1}{2}\left[\tfrac{1}{2}h\left(\sqrt{L_1^2-h^2}+\sqrt{L_2^2-h^2}\right)\right]=\tfrac{1}{2}A_0$
I hope you can follow this. If you have any additional questions regarding this, please let us know.
Again, I have given you two different approaches. Hopefully, you can make sense of this.
3. Consider the two triangles on the right side of the vertical line. Each of the smaller triangles have a height of (which is what I called the height of the rectangle) Since there are two of these triangles, let us assume that the vertical line has a length of .
Sorry but I don't seem to get this part...how did you find out that each of the smaller triangles has a height of y?
4. Originally Posted by xwrathbringerx
Sorry but I don't seem to get this part...how did you find out that each of the smaller triangles has a height of y?
Because the height of one of those triangles is the height of the rectangle! xD
If we assume the two triangles to have the same height, then they will each be $y$ units.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 61, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9395869374275208, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/184165/proving-there-are-infinitely-many-primes-of-the-form-a2k1?answertab=active
|
# Proving there are infinitely many primes of the form $a2^k+1.$
Fix $k \in \mathbb{Z}_+$. Prove that we can find infinitely many primes of the form $a2^k +1,$ where $a$ is a positive integer.
We can use the result that: If $p \ne 2$ is a prime, and if $p$ divides $s^{2^t}+1$, for $s > 1$ and $t \ge 1$, then $p \equiv 1 \pmod {2^{t+1}}$.
Ive trying to get something inductively:
For $k = 1$, there are infinitely many primes of the form $2a + 1$.
Suppose there are infinitely many primes of the form $a2^k + 1$, and then show that there are infinitely many primes of the form $a2^{k+1} + 1$.
If there are infinitely primes of the form $a2^k + 1$, where $a$ is even, then we have that, $a2^k + 1 = (2q)2^k + 1 = q2^{k+1} + 1$. Hence we are done.
Therefore, suppose there are only infinitely many primes of form $a2^k + 1$, where $a$ is odd. - but I can't get a contradiction out of this.
-
1
– joriki Aug 19 '12 at 2:43
## 1 Answer
Suppose a prime $p$ divides both $s^{2^t}+1$ and $s^{2^u}+1$, $t\gt u$. Then it divides their difference, $s^{2^t}-s^{2^u}=s^{2^u}(s^v-1)$, where $v=2^t-2^u=2^u(2^{t-u}-1)$. Now $p$ can't divide $s^{2^u}$, since it divides $s^{2^u}+1$, so it must divide $s^v-1$. But $s^{2^u}\equiv-1\pmod p$, and $2^{t-u}-1$ is odd, so $s^v\equiv-1\pmod p$, contradiction (unless $p=2$).
Thus, the numbers $s^{2^t}+1$, $t=k-1,k,k+1,\dots$, are pairwise coprime (aside from factors of 2), so they have distinct prime factors. But each of those prime factors $p$ satisfies $p\equiv1\pmod{2^{t+1}}$, hence, $p\equiv1\pmod{2^k}$, hence, $p=a2^k+1$ for some $a$.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 40, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9537258148193359, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/24913/quick-proofs-of-hard-theorems/24968
|
## Quick proofs of hard theorems
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Mathematics is rife with the fruit of abstraction. Many problems which first are solved via "direct" methods (long and difficult calculations, tricky estimates, and gritty technical theorems) later turn out to follow beautifully from basic properties of simple devices, though it often takes some work to set up the new machinery. I would like to hear about some examples of problems which were originally solved using arduous direct techniques, but were later found to be corollaries of more sophisticated results.
I am not as interested in problems which motivated the development of complex machinery that eventually solved them, such as the Poincare conjecture in dimension five or higher (which motivated the development of surgery theory) or the Weil conjectures (which motivated the development of l-adic and other cohomology theories). I would also prefer results which really did have difficult solutions before the quick proofs were found. Finally, I insist that the proofs really be quick (it should be possible to explain it in a few sentences granting the machinery on which it depends) but certainly not necessarily easy (i.e. it is fine if the machinery is extremely difficult to construct).
In summary, I'm looking for results that everyone thought was really hard but which turned out to be almost trivial (or at least natural) when looked at in the right way. I'll post an answer which gives what I would consider to be an example.
I decided to make this a community wiki, and I think the usual "one example per answer" guideline makes sense here.
-
## 30 Answers
Here is my example. In the 1930's (I think), Wiener gave a proof that if $f$ is a continuous nonvanishing function on the circle with absolutely convergent Fourier series, then so is $1/f$. The proof was a long piece of hard analysis, involving detailed local calculations and complicated estimates. Later (in the 1940's?), Gelfand found that the statement follows from the basic theory of Banach algebras as follows. The functions on the circle with absolutely convergent Fourier series can be characterized as the image of the Gelfand transform $\Gamma: l^1(\mathbb{Z}) \to C(S^1)$. In general if $\Gamma: B \to C(M)$ is the Gelfand transform from a commutative Banach algebra to the ring of continuous functions on its maximal ideal space, then $x$ is invertible in $B$ if and only if $\Gamma(x)$ is invertible in $C(M)$. So the hypotheses on $f$ imply that $f = \Gamma(x)$ for some invertible $x$ in $l^1(\mathbb{Z})$, and a simple calculation shows that $1/f = \Gamma(x^{-1})$.
-
1
Gelfand's result was first published in 1939 ("On normed rings" announces the basics of the new theory, "To the theory of normed rings II" has this application and more), although a 1941 paper ("Normierte Ringe") which provides more details and proofs seems to be more often cited. These are in the Collected papers, volume 1. – Jonas Meyer May 17 2010 at 1:01
For Wiener's proof, see Lemma IIe in "Tauberian theorems", 1932: jstor.org/stable/1968102. – Jonas Meyer May 17 2010 at 1:12
3
It's a great example, of course. The Banach algebra in question is $l^1$ with the algebra multiplication being convolution. I haven't looked at this in a while, but when I did look at Weiner's original work, it struck me that he was using convolution in a very modern way -- the whole argument was based on properties of convolution, and the algebraic properties in particular were key. I've occasionally wondered if Gelfand noticed this aspect and if it was in any way an inspiration for what he did. – Carl Offner May 30 2010 at 14:59
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
There is a theorem in finite group theory, that if $a$, $b$, and $c$ are integers all greater than $1$, there exists a finite group $G$ with elements $x$ and $y$ such that: $x$ has order $a$, $y$ has order $b$, and $xy$ has order $c$. I think the first person to prove this was G.A. Miller, whose proof looked at lots of separate cases, and had tons of long, tedious calculations in symmetric groups (I will try and find the paper and post the reference later). I don't know who discovered the more modern proofs, but Derek Holt posted a proof on the group-pub that is one of the most elegant things I've ever seen. Unfortunately, it doesn't seem to be available on the archive of the list, so I will just post it here verbatim:
Let q be a prime power such that q-1 is divisible by 2a, 2b, and 2c. We will construct elements x,y of SL(2,q) such that x, y, and xy have orders 2a, 2b, and 2c, and then the images of x,y,xy in PSL(2,q) will have orders a, b, and c as required.
An element of SL(2,q) with distinct eigenvalues is diagonalizable in GL(2,q), and so its order is determined by its characteristic polynomial which is determined by its trace. In particular, since 2a,2b,2c > 2, this applies to elements with these orders.
Let u and v be elements of the field F_q with multiplicative orders 2a and 2b, and let x = [ [u, 1], [0, u^-1] ] and y = [ [v, 0], [t, v^-1] ] be in SL(2,q), where t remains to be chosen. Then x and y have orders 2a and 2b.
The trace of xy is uv + t + u^-1v^-1, and so by suitable choice of t, we can make this equal to any value we like. So we can make it equal to the trace of an element of SL(2,q) with order 2c, and then xy will have order 2c.
-
4
Nice! I can't help but feel like there should be a geometric proof based on looking at triangle groups: en.wikipedia.org/wiki/… – Qiaochu Yuan May 16 2010 at 22:27
1
Yes, you can do it with triangle groups; the hyperbolic case follows since those triangle groups are residually finite (basically my Malcev's theorem), and then you can do the spherical (already finite) and euclidean cases by hand. – Steve D May 16 2010 at 22:41
3
(Or observe that the spherical and Euclidean cases are also residually finite. All these groups are linear!) – HW May 16 2010 at 23:07
1
Do you think that the theorem is hard or is it just the case of a messy initial proof? After all, both Steve's approach rely on methods from the 19th century. – Victor Protsak May 17 2010 at 23:20
Cantor's proof of the existence of transcendental numbers. With a (now) obvious one-line argument he showed that there are uncountably many of them --- when Liouville, Hermite and others had to take (putative) transcendental numbers one at a time ...
Newman's argument (especially Korevaar's and Zagier's version of it) turned the Prime Number Theorem, which took a century to be proved, into something that can be explained in a few minutes to any graduate student.
-
In 1917 Hardy and Ramanujan proved that all but $o(x)$ integers $n \leq x$ have $\log\log n + O((\log\log n)^{1/2 + \epsilon})$ distinct prime factors. The proof was long and relied on establishing (by induction!) an precise bound for the number of integers with exactly $k$ distinct prime factors (with $k$ arbitrary, and possibly tending to infinity with $x$). A short "two-line" proof was found by Turan in 1934.
Hardy disliked Turan's proof, because as he claimed, it did not give proper insight. However as it turned it was Turan's method that was prone to generalization. Twenty years later his inequality became the more general Turan-Kubilius inequality. Curiously enough it was later realized by Elliott that taking the "dual" of Turan-Kubilius's inequality yields immediately the arithmetic large sieve inequality! :-)
-
15
As Persi Diaconis puts it (when discussing Hardy-Ramanujan's proof): "Impressive as the argument is, to a probabilist, the project seems out of focus; they are proving the weak law of large numbers by using the local central limit theorem. If all that is wanted is their theorem, there are much easier arguments. With all their work, one could reach much stronger conclusions". See www-stat.stanford.edu/~cgates/PERSI/papers/Hardy.pdf for the rest of Persi's nice article. – maks May 16 2010 at 23:42
The fundamental theorem of algebra is a very easy consequence of Liouville's theorem in complex analysis.
There is also the (even simpler) proof due to Schep.
-
Also, there is the Rouche's Theorem method in a similar vein. – Matt May 18 2010 at 6:10
Of course there are many proofs of the fundamental theorem of algebra. Recently I have come to like those that hinge on openness of holomorphic mappings: if f is polynomial, there cannot be a nonzero minimum value |f(z_0)|, because f takes an open neighborhood of z_0 to an open neighborhood of f(z_0). For a very elementary proof based on a similar idea, see ncatlab.org/nlab/show/… – Todd Trimble Jan 25 2011 at 0:43
The Nielsen-Schreier subgroup theorem: subgroups of free groups are free. This has a very quick proof using the fact that a group is free precisely when it acts freely and without inversions on a tree.
-
1
A fact which is itself easy to prove. – Pete L. Clark May 17 2010 at 15:23
The Brouwer fixed point theorem might be such an example. With homotopy theory it's easy to prove, but the original proof was "hands on".
-
Lomonosov's 1973 proof that every compact operator $T$ has a hyperinvariant subspace (i.e., a subspace that is invariant for every operator that commutes with $T$) was much simpler than proofs existing then that every compact operator has an invariant subspace. See http://en.wikipedia.org/wiki/Invariant_subspace_problem. However, Wikipedia fails to mention that Lomonosov's proof was further simplified to replace the Schauder fixed point theorem by the spectral radius formula $\lim \|T^n\|^{1/n}$ (see e.g. Rudin's Functional Analysis), so that Lomonosov's theorem is taught (or assigned as an exercise) in classes in which the spectral radius formula is introduced.
-
I actually remember doing that exercise - neat example! – Paul Siegel May 16 2010 at 19:30
The Van der Waerden conjecture for the permanents was stated in 1926 and remained open for over 50 years. It was considered "one of the famous open problems in combinatorial theory" (in van Lint's words). It turned out to be an easy consequence of the Alexandrov-Fenchel inequality from late 1930s. See this article for the history and basically the whole proof.
-
3
This article homepages.cwi.nl/~lex/files/perma5.pdf from a recent issue of the American Mathematical Monthly describes the recent - and completely elementary - proof of Van der Waerden's theorem due to Leonid Gurvits. – Jon Yard May 17 2010 at 17:15
The Cayley-Hamilton theorem. Apparently, Cayley only proved it for $2\times2$ and - in a horrendous calculation - for $3\times3$ matrices and then wrote something outrageous in the spirit of "and similarly, we can prove it for any $n$". Hamilton then proved another special case in a paper on linear operators on the space of quaternions. Nowadays, it is proven in full generality in just a couple of lines, using the fact that the set of diagonalisable matrices of a given dimension, for which the theorem is trivially true, is dense in the set of all matrices of the same dimension.
-
1
Is there an easy and elementary proof that the diagonal matrices are Zariski dense? – Lennart Meier Oct 5 2010 at 12:17
3
The map that assigns to a matrix the discriminant of its characteristic polynomial is a polynomial function in the entries, hence continuous. Over an algebraically closed field, the set of non-diagonalisable matrices is the pre-image of 0 under this map, since a matrix is diagonalisable if and only if it has distinct eigenvalues. The assertion now easily follows. You have two options: either working over the complex numbers (a pretty cute little argument, also only 3 lines, reduces to this case) and then you can even use the Euclidean topology on C^{n^2}, or working with the Zariski topology. – Alex Bartel Oct 5 2010 at 13:08
See also the discussion here: mathoverflow.net/questions/12657/… – Alex Bartel Oct 5 2010 at 13:09
@Alex: I think you mean the discriminant of its minimal polynomial. – Guillermo Mantilla Oct 11 2010 at 7:41
1
I think I like the argument using the Zariski topology better. It is very direct. Let $c(t)$ be the characteristic polynomial function, let $f_{ij}$ be the polynomial map $a \mapsto c(a)_{ij}$ and let $\Delta$ be the discriminant of $c(t)$. So $\Delta$ and $f_{ij}$ are polynomial functions on the space of $n \times n$ matrices. Now $f_{ij}$ vanishes on any matrix $a$ with distinct eigenvalues: $f_{ij}(a) = 0$ whenever $\Delta(a) \neq 0$. So $\Delta f_{ij}$ is identically zero for all $i,j$. Hence $f_{ij} = 0$ for all $i,j$ as required. – Konstantin Ardakov Sep 24 2011 at 11:38
show 1 more comment
The associativity of the group law on an elliptic curve can be proved by a tedious and unenlightening calculation, but it can be derived pretty quickly once you have developed some curve theory (Riemann-Roch, etc.).
-
4
The theory of the Weierstrass p-function isn't entirely unenlightening. I assume you're referring to the chord-and-tangent proof, which is awful, but it's not so bad over C when elliptic functions are available. – Qiaochu Yuan May 17 2010 at 3:59
1
@Timothy: Riemann-Roch seems like a lot just to prove associativity. You only need to note the $P+(Q+R)=(P+Q)+R$ holds separately for the obvious cases $P=O$ and $Q=O$, then apply the Rigidity Theorem. This has a simple proof (Theorem 7.13 of J.S. Milne's notes on Alg. Geom. ver 5.10. jmilne.org/math/CourseNotes/ag.html). – George Lowther Jan 25 2011 at 2:20
Quadratic Reciprocity. Gauss' original proof is entirely elementary, but far from easy (I think I recall reading that Gauss himself said he spent more than a year tormented trying to find a proof).
There are many modern proofs. Using algebraic number theory and in particular the Kronecker-Weber theorem, we now have a conceptual proof, literally 4-5 lines in length.
-
4
Gauss himself later found another 5 proofs; in particular, his proof based on Gauss sums (reproduced in Ireland and Rosen) is very short and uses all the cyclotomy that you can possibly need. – Victor Protsak May 18 2010 at 2:26
Though the idea behind it all is childishly simple, yet the method of analytic geometry is so powerful that the very ordinary boys of seventeen can use it to prove results which would have baffled the greatest of Greek geometers--Euclid, Archimedes, and Apollonius.
---E.T. Bell, in Men of Mathematics
-
adding to the answer by negative refraction – Gerald Edgar Jul 11 2010 at 12:23
The theorem that the left-hand trefoil knot is not isotopic to the right-hand trefoil knot was originally proved (by Max Dehn in 1914), by a rather grueling analysis of the automorphisms of the trefoil knot group. The theorem became much easier with the advent of the Jones polynomial in the 1980s.
-
3
Surely it's much easier to just note that the signature is not zero and that it is alternating. The Jones polynomial looks to me like massive overkill. – Daniel Moskovich May 17 2010 at 3:23
1
Daniel, thanks for this information, which I have since found in Lickorish's An Introduction to Knot Theory, p.86. I still think it is debatable, however, whether the signature approach is easier than the Jones polynomial. – John Stillwell May 17 2010 at 7:08
1
The proof became much easier even before the Jones polynomoal. Seifert's classification of Seifert fibred 3-manifolds does the job. See for example Hatcher's 3-manifolds notes (on-line). – Ryan Budney Jun 18 2010 at 6:12
I've lately have found myself admiring the proof of the fundamental theorem of algebra using linear algebra, due to H. Derksen, American Mathematical Monthly, 110 (7) (2003), 620–623.
He proves directly that linear operators on finite dimensional complex vector spaces admit eigenvectors, and deduces the fundamental theorem from this. I like the argument because it is completely elementary: All it uses is that odd dimensional polynomials over the reals have a real root, and that complex numbers have complex square roots (in particular, it avoids the machinery of complex or real analysis, and can even be presented without any reference to determinants). Moreover, the proof gives the result that $R(\sqrt{-1})$ is algebraically closed whenever $R$ is a real closed field, which before I had only seen proved using Galois theory or analogous, relatively sophisticated techniques.
Derksen's proof is a nice induction where first odd dimensions are taken care of, then dimensions of the form 4k+2, then of the form 8k+4, etc.
-
1
Well, this is not really a quick proof. The proofs by complex variables are much faster (granting enough build-up in complex variables). To be fair, a few years ago I made Derksen's proof the goal of my undergraduate linear algebra course for math majors, but it took me two days to go through the argument carefully and I decided it might have been hard for them to appreciate when the argument goes on that long. – KConrad May 16 2010 at 23:48
2
" Well, this is not really a quick proof. The proofs by complex variables are much faster (granting enough build-up in complex variables) " Oh, sure, "granting enough build-up" is the key here. This proof is perhaps the fastest I know from the ground up. But we rarely start at the ground anymore. – Andres Caicedo May 17 2010 at 0:09
2
What about the topological proof? Building up to the fundamental group of the circle only takes about a page! – Steven Gubkin May 17 2010 at 1:08
2
What I meant by "this (Derksen's proof) is not really a quick proof" is that even if you try to explain it to a mathematician I still think it will take a bit of lead-in to get into the argument and you don't walk away thinking "oh, that was very natural", which was the attitude which the original question was about. By the way, are you saying Derksen's proof is the fastest you know which starts from scratch? There are proofs by multivariable calculus which take less time. A proof with double integrals is at math.uconn.edu/~kconrad/blurbs/fundthmalg/… – KConrad May 17 2010 at 4:54
1
Ha! @KConrad: I've been looking at the notes at your site. Very nice! – Andres Caicedo May 17 2010 at 15:12
show 1 more comment
Bezout's theorem - You can prove it with grueling arguments about resultants, or you can use cohomological machinery to do it in one line.
-
2
I think that this one is only a half victory. When I went through the cohomological machinery use to prove Bezout's theorem, it wasn't as quick as it first seemed. At the same time, I think that there is a resultant-ish proof based on the Hilbert-Poincare series of the projective varieties involved, which is comparable in difficulty to the homological proof. Note that if you want Bezout's theorem in positive characteristic, you have to either develop etale cohomology or do something more direct. – Greg Kuperberg May 18 2010 at 3:26
I think I've only ever seen the nasty resultant version. How does the cohomological proof go? If the cohomological machinery is hard to develop from scratch but Bezout's theorem falls out easily once you have it, this would be a great example. – Paul Siegel May 20 2010 at 13:42
Isn't the resultant proof in CP^n trivial? If you have n hypersurfaces of deg d_1,...,d_n and look at their resultant with an extra linear form u, saying Bezout's thm is the same as saying that the degree of this resultant wrt u is the product d_1...d_n. Moreover this polynomial in u is in the Brill locus of completely factorizable homogeneous polynomials and the linear factors are the intersection points of the n hypersurfaces. – Abdelmalek Abdesselam Jul 17 2010 at 21:19
It was about 20 years ago that I have learned it, so I migth not remember things correctly. However, I think Stoke's theorem was considered non-trivial originally. But using differential forms it can be proved by a one line argument.
-
5
We are told by Maxwell in his A Treatise on Electricity and Magnetism (1873, p. 27), "This theorem was given by Professor Stokes, Smith’s Prize Examination, 1854, question 8." However, this does not mean that Stokes's theorem was considered easy -- those old Cambridge exams were possibly the most difficult of all time. – John Stillwell May 18 2010 at 5:07
I have two entries, although there is a wealth of elementary geometric examples similar to #1 and several alternative proofs of #2.
(1) Pascal's theorem: If H is hexagon whose vertices lie on a conic section Q then the points $A,B,C$ where the pairs of the opposite sides intersect are collinear.
I think that the first proof used Menelaus's criterion of collinearity and required a figure, as well as keeping track of various points and lines in order to use Menelaus's theorem. A beautiful short proof based on Bezout's theorem is in vol 1 of Shafarevich's "Algebraic geometry":
If the sides of H are given by the vanishing of linear forms $l_1,l_2,l_3$ and $m_1,m_2,m_3$ in homogeneous projective coordinates, where $l_i$ is the opposite of $m_i$, then $l_1 l_2 l_3 - \lambda m_1 m_2 m_3$ vanishes at the vertices of $H$ and one more arbitrarily chosen point on Q, for a suitable $\lambda$; since $6+1>2\cdot 3$, by Bezout, the cubic is reducible, so it consists of Q and another component, which is a line passing through $A,B,C$.
(2) Isoperimetric inequality: If a simple closed curve in the plane has length $L$ and bounds the region of area $A$ then $L^2-4\pi A\geq 0$ (with equality only in the case of a circle).
The first proof of the isoperimetric property of the circle was attempted by Jacob Steiner using the "four rod" method (related to "Steiner's symmetrization"), but it proceeded under the assumption that the minimum is attained and so was incomplete. Weierstrass gave the first rigorous proof based on variational calculus and it was painstaking. Adolf Hurwitz found an essentially one-line proof (after all the notation has been set up) that is reproduced in "Einfuhrung in die Differentialgeometrie" by Wilhelm Blaschke (p.33 of 1950 edition):
$$L^2-4\pi A = 2\pi^2 \sum_2^{\infty} \frac{a_k^2+{a_k}^{\prime 2}}{k^2-1}\geq 0.$$
Here $a_k$ and $a_k^{\prime}$ are the Fourier coefficients of the position vector of the curve w.r.t. unit tangent vector.
-
The Menelaos proof isn't particularly complicated, if done right. Geometry books like to obscure it by giving the points arbitrary names, but if you follow the path of "rewriting the problem in terms of triangle geometry, and then applying Menelaos" (a very good general tactics, since triangles are probably the mathematical object we know most about), it becomes straightforward and quick. Pascal, rewritten in terms of triangles: Let ABC be a triangle, A' and A'' two points on the line BC, B' and B'' two points on the line CA, and C' and C'' two points on the line AB. [...] – darij grinberg May 18 2010 at 10:18
[...] Then, if A', A'', B', B'', C', C'' lie on one circle, then the points of intersection of B''C' with BC, of C''A' with CA, and of A''B' with AB are collinear. Proof: Apply Menelaos to this points. In order to obtain their respective ratios, you have to apply Menelaos 3 more times, but each time it is immediately clear what points you are applying it to. At the end you must prove that the product of 6 fractions is 1, which follows from the intersecting chords theorem. Of course, the cubic curves proof is fascinating, but I would say the Menelaos proof is not harder. – darij grinberg May 18 2010 at 10:20
Sure, the strategy is clear but I'd call it long and nontransparent: for example, I cannot do it in my head. Before I learned the Bezout argument, my favorite way had been through proving Briancon's theorem by "going into space" that I had read in Prasolov, if I'm not mistaken. Brianchon's theorem concerns circumscribed hexagons, so it's projective dual to Pascal's theorem. – Victor Protsak May 18 2010 at 23:37
Excellent! I should have thought of the isoperimetric inequality example; I went through both the calculus of variations approach and the Fourier series proof as an undergrad, and the Fourier series argument is a remarkable improvement. Fourier analysis is probably responsible for a lot of examples suitable for this question. – Paul Siegel May 20 2010 at 13:49
The proof of Pascals's theorem presented in jstor.org/stable/2324214 is also rather nice. It is perhaps not especially conceptual, but it uses essentially only the inscribed angle theorem. – Lennart Meier Oct 5 2010 at 12:10
Don't know for sure if this example qualifies, but it certainly is a hard problem which becomes trivial from the right point of view. (I learned this from Martin Gardner, proper credits might be researched if necessary).
Problem: three circles in the plane, no two with the same radius, pairwise disjoint. For each pair of circles, there are four straight lines tangent to both; take the two which leave both circles on the same side; they intersect at a point. Repeat the construction for each pair of circles. We get three points: prove that they are collinear.
You may want to think a little about the problem; can be solved both by plane or analytic geometry, with some effort. Not too difficult, but not a one-liner.
Now consider the following solution: add a dimension. You have three spheres, and if you section them through their centers with a plane you get the original three circles. Consider the cones determined by each couple of spheres; the section is the couple of tangent lines seen above, and the tips of the cones are the three points in the problem. Now take two planes touching the three spheres from above and from below....
-
1
There is another one-line solution to this. Each of the three points is the center of a homotopy which maps one of the three circles to another one. The composition of the three homotopies (in the right order) is id, since it fixes a circle and is not a 180° rotation. So the centers of the homotheties are collinear. – darij grinberg Jan 24 2011 at 23:25
this uses more 'advanced' math, much beyond simple visualization, but it's very nice indeed – Piero D'Ancona Jan 25 2011 at 15:12
1
I think it was from one of Peter Winkler's books that I first became aware that the 3D proof doesn't quite work as nicely as it seems to at first glance. In particular, it doesn't apply in all cases. Perhaps the easiest example to see is the case of two giant spheres and a tiny sphere. Then there is no plane touching all three spheres from above or from below. – Timothy Chow Aug 11 2011 at 14:39
I believe Schur's Lemma was originally considered difficult (after all it did get named). However it is now a one-line proof in an undergraduate course.
I suspect Schur was interested in finite dimensional representations of finite dimensional algebras over the complex numbers. Then the lemma is that the endomorphism ring of an irreducible representation is the complex numbers. Don't ask me why this was considered difficult. The definition of an abstract algebra was not published until after Molien and Wedderburn's results so I can see the statement would have been convoluted.
-
1
As far as I know, Schur's proof was the one that we still use. "After all it did get named" is not a good argument: it's an extremely important result, even though the proof is very easy, hence nomenclature "lemma". A better example within the same realm would be Hurwitz's proof of complete reducibility of representations of GL_n using averaging over the maximal compact subgroup: that had previously been known only in special cases, via the (complicated) Cayley $\Omega$ process. – Victor Protsak May 18 2010 at 2:30
Hmm. I still consider the $\Omega$ processes more interesting than the theorems they are used to prove... – darij grinberg May 18 2010 at 10:23
I think Victor is right. In fact, according to Karin Erdman, Schur himself used to be quite embarassed by the fact that such a trivial statement would get his name attached to it, considering that he proved much bigger and more difficult (although possibly not more important) results. – Alex Bartel Oct 5 2010 at 9:05
2
I once simultaneously audited a physics course taught by J. Van Vleck (who later won a Nobel prize) and took a math course from George Mackey. Both of them proved Schur's Lemma, and only a few days apart. Van Vleck's proof was done entirely with complicated matrix manipulations and took about fifteen minutes. Mackey gave the easy one sentence proof without even writing anything on the board ! – Dick Palais Jan 25 2011 at 6:58
Here is another example from functional analysis. There are several basic results, such as the principle of uniform boundedness and the open mapping theorem, that follow easily from the Baire category theorem. However, I recall from reading Halmos's autobiography I Want to Be a Mathematician that the original proofs of these results were rather complicated and the theorems were considered to be significant achievements.
-
1
I think the Baire category theorem is responsible for quick proofs of lots of hard theorems, such as the existence of continuous nowhere differentiable functions. I recently worked out a way to use it to prove the existence of bump functions, too. I think the Hahn-Banach theorem was also considered difficult in its time, and lots of its easy consequences started life as nontrivial theorems. I guess there are a wealth of examples in functional analysis, perhaps since analysis has been around for so long. For that reason I was expecting more examples in number theory and Algebraic geometry. – Paul Siegel May 20 2010 at 14:00
1
@Paul: can you explain (or link to) your argument about Baire and bump functions? It would be nice to see. – Andrea Ferretti May 21 2010 at 12:42
Gelfand–Mazur theorem. "A complex Banach algebra, with unit 1, in which every nonzero element is invertible, is isometrically isomorphic to the complex numbers."
The proof is the one everybody knows.
-
How about de Branges' proof of the Bieberbach conjecture? My understanding is that his original proof ran to 100+ pages, but others soon found a way of bringing it down to considerably less than that - maybe not a quick proof, but a relatively quick proof.
-
1
How does that fit the requirements? What is the sophisticated result with short proof from which it follows? de Branges' original "proof" was notorious for its complexity and gaps, so yes, anything else would be an advance... – Victor Protsak May 17 2010 at 23:06
I was told by my (graduate school) teacher of functional analysis that originally the complex case of the Hahn-Banach theorem was considered a major open problem. It was eventually shown to be such a simple consequence of the real case, that now, no one knows who came up with the trick.
-
I've never actually looked this up, but I've seen in several places (including the notes to Chapter 3 of Rudin's Functional Analysis) that the complex version was proved by H.V. Bohnenblust and A. Sobczyk, Bull. Amer. Math. Soc., vol. 44, pp. 91-93, 1938, and by G.A. Soukhomlinoff, Mat. Sbornik, vol. 3, pp. 353-358, 1938. (And yes, the proof was both simple and unexpected.) – Carl Offner May 30 2010 at 16:05
Yes, Yosida also quotes Bohnenblust and Sobczyk – Pietro Majer Jul 19 2010 at 13:59
The original article containing proof of the Radon-Nikodým theorem has about 50 pages. John von Neumann proved it by little trick and Riesz representation theorem (that one about Hilbert space functionals) in three lines.
-
The incompressibility method based on Kolmogorov complexity is desribed in "Kolmogorov Incompressibility Method in Formal Proofs A Critical Survey", V Megalooikonomou - 1997, as often being more elegant, intuitive, simpler and shorter than counting arguments, or the probabilistic method, in areas such as lower bounds, average case complexity, random graphs or pumping lemmas in formal language theory.
-
The fundamental theorem of calculus; all the long and difficult proofs of Eudoxus and Archimedes became clear and simple. Similarly with co-ordinate geometry.
-
Power series. Both conceptually and computationally, in the 17th century they replaced a multitude of ad-hoc methods that had been used for millennia.
-
1
Methods such as? – Andres Caicedo Jun 14 2010 at 6:46
Most of the problems tackled in introductory calculus courses (tangent lines of and areas under basic curves, volumes and areas of solids of revolution, etc) had to be solved on a case-by-case basis, with some pretty complicated and ingenious proofs; now any undergraduate can solve them in a few lines by rote methodology.
-
People say that Hilbert's basis theorem was once proven using pages of explicit computation with polynomials, but now everyone learns Hilbert's beautiful, if non-constructive, proof instead. Regrettably I have no idea what the "old proof" looks like.
-
1
What? I think there is by definition no constructive proof of Hilbert's basis theorem. Maybe you mean the Gröbner basis for the invariant ring of a group action, or the projective resolution? These are both still best proven constructively, as the constructive proofs yield tons of additional results. – darij grinberg May 16 2010 at 20:16
2
Okay, that's always the question with the word "constructive". If we have an ideal given by some equations, or even by generators, how much do we know about the set of leading coefficients of elements of this ideal? Not enough to find its generators. But then again, Hilbert's basis theorem is not 100% constructive itself, for this very reason: we have no idea how the ideal is given. – darij grinberg May 16 2010 at 20:37
3
@Qiaochu: I read an article on the history of the HBT, and the way it actually "went down" was that the theorem was proven individually for explicit rings. Hilbert proved the HBT and in doing so proved the general theorem. – Harry Gindi May 16 2010 at 22:50
1
The quote is Gordan's. – Mariano Suárez-Alvarez May 16 2010 at 23:52
7
Do read McLarty's article people.math.jussieu.fr/~harris/theology.pdf on Gordan's attitude towards Hilbert's work. Historical reality is much more interesting than the myth. – David Corfield May 17 2010 at 13:39
show 4 more comments
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 77, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9555788636207581, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/243862/backward-euler-method-error
|
# backward euler method error
Let
$x'(t)=f(t,x(t)), t\in(0,T)$ with $x(0)=x_0$
$f$ satifies the Lipschitz-condition $f(t,x)-f(t,y)\le L|x-y|$
$h\in (0,\frac{1}{L})$ is the step size and the approximation $x_k$ for $x(t_k)=hk$ is given by $x_k=x_{k-1}+hf(t_k,x_k)$.
Now I would be very interested how to derive the error
$$|x_k-x(t_k)|\le\frac{1}{1-Lh}\left(|x_{k-1}-x(t_{k-1})|+\frac{h^2}{2} \max_{s\in [0,T]}|x''(s)|\right)$$
I tried to look up it up in some numerical analysis books but it is always different
-
## 1 Answer
First, we get local truncation error.
$x(t_{k+1}) = x(t_k) + hf(t_k,x(t_k)) + \tau_k$
$\tau_k = x(t_{k+1}) - x(t_k) - hf(t_k,x(t_k)) = \frac{h^2}{2}x''(\eta)$. Where $\eta \in (t_k,t_{k+1})$.
Then we get the bound,
$$|x(t_{k+1}) - x_{k+1}| \le (1+hL)|x(t_{k}) - x_{k}| + |\tau_k|$$ $$\le (1+hL)|x(t_{k}) - x_{k}| + \frac{h^2}{2}\max_{s \in (0,T)}|x''(s)|$$ $$\le \frac{1}{1-hL}|x(t_{k}) - x_{k}| + \frac{h^2}{2}\max_{s \in (0,T)}|x''(s)|$$
Where the last step is from geometric series. Maybe it would be helpful if you listed the other results you are talking about, and then we can show that they're equivalent.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.925955593585968, "perplexity_flag": "head"}
|
http://mathoverflow.net/revisions/118419/list
|
## Return to Answer
1 [made Community Wiki]
My favorite reference for point-free topology is the very new book.
Frames and Locales: Topology Without Points by Picado and Pultr.
This book is an excellent book for those who want to learn about point-free topology for the first time and as a reference for those who are already familiar with point-free topology.
As for recent results in point-free topology, I have recently been researching a duality in point-free topology. My new duality represents all zero-dimensional frames as Boolean algebras along with specified least upper bounds.
We therefore define a Boolean admissibility system to be a pair $(B,\mathcal{A})$ such that $\mathcal{A}$ is a subset of the powerset $P(B)$ that satisfies the following properties.
1. If $R\in\mathcal{A}$, then $R$ has a least upper bound.
2. $\mathcal{A}$ contains each finite subset of $P(B)$
3. If $R\in\mathcal{A},S\subseteq B,S\subseteq\downarrow\bigvee R=\{a\in B|a\leq\bigvee R\}$ and $R$ refines $S$(i.e. for each $r\in R$ there is an $s\in S$ with $r\leq s$), then $S\in\mathcal{A}$ as well.
4. If $R\in\mathcal{A}$ and $R_{r}\in\mathcal{A},\bigvee R_{r}=r$ for $r\in R$, then $\bigcup_{r\in R}R_{r}\in\mathcal{A}$
5. If $R\in\mathcal{A}$, then $\{r\wedge a|r\in R\}\in\mathcal{A}$ for each $a\in B$.
Property $1$ states that $\mathcal{A}$ is a collection of least upper bounds and properties $2-5$ state that $\mathcal{A}$ contains all sets with least upper bounds that you would want to include. For instance, in a Boolean algebra you would always want to include the least upper bound of a finite set. Axioms $2-5$ get rid of all the trivial differences between Boolean admissibility systems. A Boolean admissibility system $(B,\mathcal{A})$ is called subcomplete if whenever $R\cup S\in\mathcal{A}$ and $r\wedge s=0$ whenever $r\in R,s\in S$, then $\bigvee R$ exists.
I recently proved that the category of Boolean admissibility systems is equivalent to the category of all pairs $(L,A)$ such that $L$ is a frame and $A$ is a Boolean sublattice of $L$ which is a "basis" for $L$(i.e. $A$ is a sublattice of $L$ consisting of complemented elements where each element in $L$ is the join of elements in $A$). This equivalence of categories restricts to an equivalence between the category of all zero-dimensional frames and subcomplete Boolean admissibility systems.
With this duality, I was able to characterize point-free topological properties in terms of the corresponding Boolean admissibility systems. These properties include ultraparacompactness, ultranormality, $\kappa$-compact zero-dimensional frames(where $\kappa$ is a cardinal), extremally disconnected frames(as Boolean admissibility systems which are complete Boolean algebras), Lindelof $P$-frames(as $\sigma$-complete Boolean algebras), and other properties.
This result does not have as much of a pointed analogue since very rarely does a Boolean admissibility system correspond to zero-dimensional space (i.e. a spatial zero dimensional frame). The Boolean admissibility systems that correspond to topologies are precisely the subcomplete Boolean admissibility systems $(B,\mathcal{A})$ where each ideal closed under taking least upper bounds in $\mathcal{A}$ can be extended to a maximal ideal closed under taking least upper bounds in $\mathcal{A}$. This property can be characterized by a very strong distributivity property and very few Boolean admissibility systems satisfy this property.
I should also note that one can represent any pair $(L,A)$ where $L$ is a frame and $A$ is a "basis" for $L$ as the poset $A$ along with specified least upper bounds. Unfortunately, even though this setting is more general, I have not yet found a way to represent any separation axioms in terms of posets with specified least upper bounds.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 52, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9285575747489929, "perplexity_flag": "head"}
|
http://www.physicsforums.com/showthread.php?p=3944305
|
Physics Forums
## row operations performed on two matrices
if you perform row operations on a matrix A to convert it to the identity matrix and then use the same row operations and apply it to another matrix B, why is it that the end result of B^r does not depends on B's actual sequence
PhysOrg.com science news on PhysOrg.com >> Front-row seats to climate change>> Attacking MRSA with metals from antibacterial clays>> New formula invented for microscope viewing, substitutes for federally controlled drug
What do you mean by B's actual sequence?
Recognitions: Gold Member Science Advisor Staff Emeritus And, what do you mean by "B^r"? Every row reduction is equivalent to an "elementary matrix"- the result of applying that row reduction to the identity matrix. Applying a given row operation to a matrix is the same as multiplying the corresponding elementary matrix. And applying row operations to A to reduce it the identity matrix means that the product of the corresponding elementary matrices is $A^{-1}$. Applying those row operations to B gives $A^{-1}B$. That means, in particular, that if you have the matrix equation Ax= B, and apply the the row operations that reduce A to the identity matrix to B, you get $x= A^{-1}B$, the solution to the equation.
## row operations performed on two matrices
When I say Bs actual sequence, I mean the numbers that compose that matrix such as a 3x3 matrix with the numbers 654,896,327 and when I say Br I mean performing the exact same row operations that you did on A and applying them to B in the same order and I want to know why it doesn't matter what the actual sequence of B is as long as you're performing the same row operations on it as you did with another matrix, A
I guess the short answer is that the result you get does depend on the entries of B in exactly the way that HallsofIvy explained. What doesn't matter I guess is the exact sequence of steps you took to row reduce A. As long as you do row operations that eventually reduce A to the identity, the result of all those row operations combines to be the same operation. When you apply that operation on B, you'll always get the matrix A^(-1)B.
Thread Tools
| | | |
|---------------------------------------------------------------|----------------------------|---------|
| Similar Threads for: row operations performed on two matrices | | |
| Thread | Forum | Replies |
| | Calculus & Beyond Homework | 12 |
| | Calculus & Beyond Homework | 1 |
| | General Math | 8 |
| | Linear & Abstract Algebra | 3 |
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.938105583190918, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/178180/graded-ring-finite-sum
|
# Graded Ring - Finite Sum
I've just read that if $R=R_0\oplus R_1 \oplus \dots$ is a graded ring and $f\in R$ then there's a unique decomposition of $f$ as $f=f_0+\dots+f_n$ with $f_i\in R_i$. I can't see immediately why in general this would have to be a finite sum! Could someone possibly enlighten me? I've got a feeling I'm being stupid, so apologies if it's completely obvious!
-
2
Basically, the direct sum is defined to be the subset of the direct product with all but finitely many of the terms equal to zero. Without a topology there isn't any way to define what it would mean for an infinite sum to converge. – Mike B Aug 2 '12 at 21:35
## 1 Answer
It's because the graded ring is a direct sum of the $R_i$'s, not a direct product. Every element of a direct sum has only finitely many non-zero summands by definition.
PS: I always try to keep examples in mind when learning a definition. In the case of graded rings, the first I think of is $k[x_1,\ldots, x_n]$ graded by degree, where $k$ is a field. Here the idea that only finite sums are allowed is exactly what we should expect from experience.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9725666046142578, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/284889/does-every-noninvertible-element-of-a-commutative-ring-lie-in-a-proper-maximal-i/284891
|
# Does every noninvertible element of a commutative ring lie in a proper maximal ideal?
More formally stated:
Prove that if $R$ is a commutative ring with $1$, then every element of $R$ that is not invertible is contained in a proper maximal ideal.
I know I have to assume Zorn's Lemma, but I don't see how non-invertible elements must lie in a proper maximal Ideal. Any hints?
-
This follows from showing that noninvertible elements lie in some proper ideal and that all proper ideals are contained in maximal ideals. In this vein, what is an ideal containing noninvertible $x$? It is proper? Now, consider the set of all ideals containing this ideal. How might you express the supremum of a chain $I_0\subset I_1\subset\dots I_n\subset\dots$? – peoplepower Jan 23 at 8:50
I don't see why this should be tagged as [axiom-of-choice]. There is no question about the necessity of the use of Zorn's in the proof, there is a question about the proof itself. @BDub: ping. – Asaf Karagila Jan 23 at 9:38
@AsafKaragila I agree, to me, ring-theory and ideals seem like the most pertinent tags. I didn't retag the question with AC. If the edit log shows that I did, I think it's because I submitted a formatting edit on a version of the question before your retags took effect? Perhaps the tags reappeared when it was approved. – Ben Jan 23 at 9:44
@BDub: You need 300 reputation for retagging, so I suppose that you just retagged without loading my revision, which caused an override. – Asaf Karagila Jan 23 at 9:47
## 1 Answer
It is well known that every proper ideal is contained in a maximal ideal. If $a$ is a noninvertible element, then the generated ideal $(a)$ is not the whole ring. If it were, then $1\in (a)$, implying $ab=1$ for some $b$, a contradiction. As a proper ideal, $(a)$, and hence $a$, must then be contained in a maximal ideal.
-
it should be noted that this result uses the axiom of choice. – Ittay Weiss Jan 23 at 9:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9349133372306824, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/36421/does-a-material-exist-that-reduces-a-magnetic-field-without-being-affected-by-th?answertab=active
|
# Does a material exist that reduces a magnetic field without being affected by the magnetic field itself?
Consider a common bar magnet, `magnet 1` resting on a surface with its `North` pole facing up. Suspended distance $y$ above it (supported side-to-side by a plastic tube) is a second, smaller bar magnet, `magnet 2`, with its `North` pole facing down. The magnetic forces between them exceed the force of gravity, and keep `magnet 2` suspended. Consider some material, `material-X` that is moving towards the gap between the two magnets at an initial velocity $v$.
Does a material, `material-X` exist that would reduce the distance $y$ between the two magnets, and pass through the gap without changing velocity $v$?
-
such a strange question – Physiks lover Sep 15 '12 at 12:47
## 4 Answers
The material you are looking for could be a super conductor. These material have zero resistance for current and thus can compensate penetrating field lines within the first layers of the material. This phenomenon is called the Meissner effect and is the very definition of the supra-conductive state.
In your case of a plate between two magnets, this would definitively reduce $y$.
For the velocity:
Here, normally the eddy currents induced by a magnetic field lead to a loss of power, given by:
$$P = \frac{\pi^2 B_\text{p}^{\,2} d^2 f^2 }{6k \rho D},$$
since, however, a super conductor has zero resistance and thus, de facto,
$$\rho = \infty$$
no kinetic energy should be lost, and thus the velocity will remain unchanged.
There is only one problem:
Superconductor can only exist under very low temperature, so this might not be realizable in the case of your machine...you would at least need a cooling system working with liquid nitrogen to cool it.
Other than superconductors, I do not see any possible material because, either the material is a conductor, then you always have losses due to the eddy currents (thus reducing $v$) or the material is not a conductor (then $y$ will not decrease).
-
Is this phenomenon observable in a machine or experiment somewhere? – adamdport Sep 27 '12 at 16:10
– dedoco Sep 28 '12 at 10:53
1
The point is, however, when the superconductor enters into the magnetic field, the field lines are deviated, which will be related to work... so actually, entering the region between the two magnets will cost some energy. If the plate leaves the area after, the energy will be won back. – dedoco Sep 28 '12 at 11:01
– dedoco Sep 28 '12 at 11:02
Any material with a permeabilty mu different from the one of air would modify the equilibrium. metglas's permeability is 10^6 time greater than the permeability of air and any super conductor has a zero mu.
Any material which is not conductor would not be affected by its motion through B. In a conductor, a varying B induces an electric field than runs a current. The induced current exerts a JxB force on the conductor.
-
So does this mean such a `material-x` cannot exist? – adamdport Sep 20 '12 at 18:55
There exist materials with very large magnetic permeability, like the so-called µ-metal. They are used to fabricate shields which attenuate the magnetic field of the Earth at the electron-beam path in sensitive electron-optical instruments.
Since your question combines two distinct parts, I will split it in order to address each of them them separately.
1. Static case: Do the magnetic poles come closer to each another when a magnetically shielding plate is placed between them?
Mu-materials do not "kill" the magnetic field between your magnetic poles, but only divert its direction by channeling part of it into the metallic shield. This would strongly alter the field strength ${\bf B}$ at the shield surface, by almost suppressing its parallel components. This results in a reduced magnetic pressure $p=\frac{\bf B^2}{8 \pi \mu}$ in the immediate vicinity of the shield surface. If this reduction of the magnetic field at the shield would significantly alter the magnetic pressure at the site of the magnets causing them to move? A more detailed calculation would be here necessary, I am afraid.
2. Movement of the plate: Is it possible that the velocity of the shielding plate will not be altered?
Consider the following very simple and intuitive experiment: take a copper pipe and hold it vertically. Take a small magnet and let it fall inside the pipe. The magnet falls: i) slowly and ii) with uniform speed.
Your geometry can be made similar to that of the falling pipe: consider a column of magnets levitating upon each other, i.e. with paired poles, N-N and S-S. Now take a "multi-plate" shield made of parallel sheets, firmly kept in place at equal distance from each other (like a 2D comb). This wold mimic a multiple falling pipes in parallel.
If you now hold the column of magnets in vertical direction and you pull the multi-plate with constant force (analogue of gravity) through them, then you will reach a regime of constant velocity - by analogy with the falling pipe experiment.
This suggest that the column of magnets or better put, their magnetic field, acts on the copper plates a viscous medium: $$m_{plate}\dot{v}=-\gamma_{\bf B} \ v+F_{pull}$$ where $\gamma_{\bf B}$ would be an effective friction coefficient due to the magnetic field, perturbed by the presence of the plates. After some time you will eventually reach a regime where the force of friction would compensate your pull and the velocity will remain constant: $v= \frac{F_{pull}}{\gamma_{\bf B}}$.
If this speed is equal to the speed you had before pulling the plate(s) into the magnetic field, it is a matter of how you manage your force of pull. Note: if no pull, then the plate will be simply stopped by the magnetic brake effect. So you have to pull accordingly, if you want to have constant velocity.
BONUS: A Magnetic Toy.
-
NO. actio = re-action: if A affects B, then B affects A. So, any material that reduces the magnetic field, and if only by deflecting it, is affected by it in some way.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.930199921131134, "perplexity_flag": "middle"}
|
http://electronics.stackexchange.com/questions/11218/how-to-properly-bias-a-opto-coupler/11221
|
# How to properly Bias a Opto Coupler
I had some problems setting up a Opto Coupler today. The data sheet is here. I set it up putting a 470 ohm resistor to ground from the cathode (pin 2). I am understand the selection of this resistor as it is used to set the current through the LED to about 10mA when he ANODE(pin 1) is given 5 Volts. What I don't understand is why a 270 Ohm resistor(as shown in the test circuit) is needed between pin 4 and 6. From testing I know that with the 270 resistor I get a range of about .2 to 5 volts on the output, and without the outputs range is 0.08 to 1.4 volts. I know its something to do with the biasing the circuit and the gains from the internal transistor... Wondering if someone could point me towards a source that actually explains how to bias a opto coupler... its been a few years since I played with transistors.
-
## 1 Answer
Page 1 of the datasheet clearly says 'open collector output'. Without a pull-up, the voltage is undefined.
The purpose of 270 $\Omega$ on the test circuit is simply to provide a defined amount of current for the output logic to sink during the test. Whatever pull-up value you use is up to you, just don't exceed the sinking capability of the output stage.
When it comes to biasing an opto, the best practice is to choose a resistor that will allow the minimum amount of current needed to do the job over the expected current transfer ratio (CTR) distribution of the batch. Generally, the lower the current, the longer the expected life of the part.
-
1
I was halfway through an identical answer. – Kevin Vermeer Mar 9 '11 at 18:33
2
Looks like I win! :) – Madmanguruman Mar 9 '11 at 22:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9399433732032776, "perplexity_flag": "middle"}
|
http://crypto.stackexchange.com/questions/5193/padless-one-time-pad-encryption/5202
|
# “Padless” One-time-Pad encryption
I have just read about the perfect security of an OTP encryption and what came to my mind was that what if the Pad used for encryption/decryption did not have to be transported separately from the message, but instead together with it?
And what if it was computed using an algorithm from the message itself (from all the characters in an 1:1 way so that it had the same size) providing a random-looking pad for encrypting the message, and after encrypting the message would be stored in an interwoven way in the message itself, expanding the file size by a factor of 2. The encrypted file itself would have double size of the original one but no-one would be able to find out what is the message and what is the key since it would be interwoven using a secret algorithm, perhaps the same one as was used for generating the pad.
Would this be a good encryption comparable to real OTP with the pad transferred and generated separately? Did I just invent the wheel, or is this a breakthrough thing (I mean storing the pad inside the file in a secret way preventing others to extract it)?
I know calculating the pad from the message itself would perhaps break the real randomness so this could be avoided and the pad simply taken from a real random key number generator.
-
What "secret algorithm" would you use to interleave the pad with the ciphertext? What entropy source would it use, where would you store the key? Would it still be unconditionally secure? Your scheme is far too vague at the moment, and to be fair, I think it's safe to say you invented a broken wheel for now. Could you detail some of the aspects of the algorithm, such as how would you mix the pad into the ciphertext, and how would the encryption/decryption work? Then we'll be able to say something. – Thomas Oct 29 '12 at 8:08
Any combination of mathematical functions would work, like sinus, log, etc. The user would just enter some formula for every message/file to be encrypted using an OTP which would be buried inside the file using this one-time function. The final file would be double size of the original since it would include the OTP, too (with the same size as the encrypted content). Encryption would be standard XOR, as used in standard OTP. – Lubo Oct 30 '12 at 6:01
Those functions are defined on reals. How do you plan to restrict them to integers (e.g. bit, or byte, whatever positions to interleave)? Floor function? And can you estimate the entropy of any one formula? (hint: it is quite hard and the overall entropy is quite low because, precisely, the functions are continuous and smooth) – Thomas Oct 30 '12 at 6:34
Hint: you can roughly estimate the entropy of the function by considering every possible "interleaved pad" configuration, and seeing how many of these are reachable by the function over its parameters. You can immediately see that for a simple sine wave, almost every configuration will be unreachable, thus the entropy is extremely low and thus the pad is easily recovered. Also note the difficulty of such an analysis in the general case, compared with the relatively straightforward proof of "standard" OTP. – Thomas Oct 30 '12 at 6:40
– Stephen Touset Oct 30 '12 at 6:53
## 4 Answers
Your "secret algorithm" is now effectively the key.
If the algorithm stays fixed for all messages, you now no longer have a one-time pad. So let's make your algorithm be tweakable by, say, accepting as input a string that describes its exact operation. Now we have the benefit of being able to publicize the "secret algorithm", since the actual secret details are contained in this input string. Except now that input string is the key, and now you have to find a way to transmit it... which is the problem you were trying to solve in the first place.
-
2
And if the "secret algorithm" is shorter than the message, then you no longer have a One Time Pad. If the "secret algorithm" is longer than the message, then why not use a plain OTP? – rossum Oct 29 '12 at 21:06
The perfectly random OTP would be buried in the message, the algorithm to bury it in there would change with every message (the user would input it) and as such would be much shorter that the OTP (which will be exactly same size as the message) so much easier to transfer securely (one could just tell someone else and he would just remember the function in his/her memory) – Lubo Oct 30 '12 at 5:58
<code></code> does not work, I would insert a code sample. – Lubo Oct 30 '12 at 6:11
1
As I said in my answer, now that input is actually the key. And since it's shorter (by your own words), it is by definition no longer a one-time pad, nor does it any longer provide perfect secrecy. – Stephen Touset Oct 30 '12 at 6:49
Security by obscurity ( your security algorithm to compute the pad from the message itself) will not be secure for longer time .
Through the ages what worked in cryptography is , Making the algorithm (i.e the crypto system ) public and just keeping the key secret Kerckhoff's principle .
-
The "secret algorithm" to interleave the pad with the ciphertext would be entered by the user (e.g. 25*$i*sin($i)/124) and would serve as the only password in this case. Let's forget about the computing of the pad from the message itself, and suppose it would come from a real RNG. Would this be secure enough (i.e. interleaving the pad using a secret function withing the source previously encrypted with the same function)? How could someone guess what is the encrypted content and what is the pad then? – Lubo Oct 29 '12 at 11:04
That would not be tough using cryptanalytic techniques, its dangerous assumption that no body might understand your secure function – sashank Oct 29 '12 at 12:05
Is it really possible to separate the truly random pad from the encrypted content by some cryptanalytic technique? I can not imagine how could this work in case the interleaving function is fairly complicated, or even if it is simple since the text is already encrypted by truly random pad and the pad is random, too. Interleaving them together further increases the randomness of the whole data. – Lubo Oct 29 '12 at 14:53
I did it in Perl, the basic idea is this:`i=i=filesize-1;while(i>=0) {i>=0) { iPos=int(i∗sin(eval(i * sin(eval(sEncFunc))); sKeyFragment=substrsKeyFragment = substr sDataFromFileTemp, iPos,1,′′;iPos, 1, ''; sEncryptionKey .= sKeyFragment;#pridaj do kluca# print OUTFILETEMP "sKeyFragment;#pridaj do kluca# print OUTFILETEMP "sDataFromFileTemp\tsKeyFragment\tsKeyFragment\tiPos\n"; $i--;}` – Lubo Oct 30 '12 at 6:03
OMG, this got all corrupted, how do I paste a code snippet properly? – Lubo Oct 30 '12 at 6:09
show 5 more comments
There have been a few designs along this idea (transmitting the pad together with the message), see for instance Diffie's cipher and the likes. The problem with these designs is that the secret key can only be used once as (for instance) a simple known plaintext attack would reveal it, so that it does not fulfill the highest requirements in terms of security of an encryption scheme.
As other have pointed out, the alternative you suggest, that is keeping the details of the extraction from the information sent, is something cryptographers avoid. Additionally, it likely won't meet the requirements in terms of security of an encryption scheme (I foresee that chosen paintext attacks would allow the reconstruction of the underlying allegedly secret scheme).
-
What you're proposing is an algorithm, which is very much not a one-time pad. This is frequently a source of confusion. People have even sold commercial "one-time-pad encryption" software based on this flawed idea.
In cryptography, a one-time pad has a very specific meaning. Each byte/character/whatever of plaintext is individually modified by a corresponding byte of key material such that each byte has an equal possibility of having any possible value in its range. Each byte of the key material must be completely random, meaning the value of one key byte is not dependent on any of the previous or following key bytes or message bytes. (Interpreting this last requirement means the key material must never be reused, even for a different message.) That's the entirety of a one-time pad.
Historically the modification function was a simple alphabetic substitution cipher. Each character of the plaintext was added to its key letter, and deciphering subtracted each key letter from the corresponding letter of the ciphertext. Digital implementations of OTPs use XOR for both operations because it's invertible - the same operation works for both encryption and decryption. But in both cases, the algorithm is extremely simple. All security resides solely in the key.
A OTP is secure only because the key material bytes don't relate to each other. If the bytes are related in any way, the attacker might be able to figure out the relationship. For example, if you use a code phrase instead of random letters for the key, an attacker can try different phrases, and when he stumbles upon "WHENINTHECOURSEOF" he might expect the next letters to be "HUMANEVENTS". See how the relationship makes them guessable?
The randomness has to be truly random. The output of the C rand() function is not random - if you run it three times, each time starting with the same seed, you get the same numbers out three times. Computers are deterministic state machines, and as such are horrible sources of randomness. Attackers and cryptanalysts already know this from several well publicized flaws. That's why we argue endlessly about what constitutes a "cryptographically secure random number generator", "true entropy sources", and the like.
Similarly, if you come up with your own "random" algorithm like "add one to the first letter, subtract two from the next, add three to the third, etc.," the attacker might spot the pattern and try subtracting four from the fourth.
You might get slightly more clever, and say "I'll keep these numbers +1, -2, +3, -4 secret, and only tell them to my friend who is decrypting them." Your solution is now exactly a secret key algorithm (albeit an unproven one).
These are all deliberately simple examples. I'm sure you can think of a "pattern" that you're sure we could not guess, and I'd equally promise we probably wouldn't try too hard. The reason is that it's not a OTP, it's an algorithm, and it's likely not worth the trouble. These have been proposed hundreds of times before, and all are built on the same misconceptions.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9474385976791382, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/40920/what-if-current-foundations-of-mathematics-are-inconsistent/41030
|
## What if Current Foundations of Mathematics are Inconsistent? [closed]
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
The title of the question is also the title of a talk by Vladimir Voevodsky, available here.
Had this kind of opinion been expressed before?
EDIT. Thanks to all answerers, commentators, voters, and viewers! --- Here are three more links:
Question arising from Voevodsky's talk on inconsistency by John Stillwell,
Nelson's program to show inconsistency of ZF, by Andreas Thom,
Pierre Colmez, La logique c’est pas logique !
EDIT. Here the link to the FOM list discussing these themes.
-
65
Then the unreasonable effectiveness of mathematics would become slightly more unreasonable – Piero D'Ancona Oct 3 2010 at 11:20
17
I'm confused. Is the question, "What if the current foundations of Mathematics are inconsistent?" Or is the question, "Has this kind of opinion been expressed before?" – Gerry Myerson Oct 3 2010 at 12:15
9
+1 just for pointing out this talk. – Michael Oct 3 2010 at 12:33
7
I don't understand his objection to Gentzen's proof at 29:00. Why would someone be skeptical about well foundedness of $\epsilon_0$? – muad Oct 3 2010 at 13:36
11
Bourbaki dropped the axiom(-scheme) of replacement in their development of mathemetics, so they don't, I think, have enough mathematics to build the ordinals. However their work seems to indicate that they had enough to do an awful lot of mathematics (probably all of the mathematics I've ever done and will do won't need replacement). My guess is that if ZFC is inconsistent then replacement will be the first axiom for the chop. – Kevin Buzzard Oct 3 2010 at 20:54
show 19 more comments
## 16 Answers
The talk in question was given as part of a celebration (this past weekend) of the 80th anniversary of the founding of the Institute for Advanced Study in Princeton. As you might guess there were quite a few very well-known mathematicians and physicists in the audience. (To name just a few, Jack Milnor, Jean Bourgain, Robert Langlands, Frank Wilczek, and Freeman Dyson, all of whom also spoke during the weekend.) The talk was a gem, and what did come as a surprise, at least to me, was that towards the end of his talk Voevodsky let on that he hoped that someone did find an inconsistency---and that by that time there was no audible gasp from the audience. There was of course a very lively discussion after the talk, and nobody seemed willing to say they felt that the "Current Foundations" (whatever they are) are definitely consistent. Of course Voevodsky was NOT saying that he felt that the body of theorems making up the "classic mathematics" that we normally deal with might be inconsistent, that is quite a different matter. What we should keep in mind is that a hundred years ago an earlier generation of mathematicians were quite surprised by not one but several "antinomies", like Russell's Paradox, The Burali-Forti Paradox, etc., (and that was followed by the greatest century in the history of Mathematics). As to the question "Had this kind of opinion been expressed before?", yes of course it has, but perhaps not so forcefully or in such a high-level forum. One person who has been expressing such ideas in recent years is my old friend Ed Nelson, who was also in the audience. (You can see his ideas in a recent paper: http://www.math.princeton.edu/~nelson/papers/warn.pdf). I spoke with him after the talk and he seemed pleased that it was now becoming acceptable to discuss the matter seriously.
-
6
+1 for the link to Nelson's paper -- it's an interesting read. Thanks! – José Figueroa-O'Farrill Oct 3 2010 at 15:26
12
I'm waiting for Andy Putman to comment on the proper spelling of "antinomy". – Gerry Myerson Oct 3 2010 at 22:18
17
@Gerry : Have I really acquired that strong a reputation for pedantry? Not that I'm claiming it's false -- I just asked my wife if I was an obnoxious pedant, and she nodded her head vigorously -- but the only spelling errors I recall correcting on MO involve my last name, which mathematicians seem unable to spell correctly. When I was a grad student, I recall visiting another university to give a talk, and on the drive from the airport a rather prominent mathematician who couldn't spell my last name angrily chewed me out for not posting my papers to the arXiv... – Andy Putman Oct 4 2010 at 1:43
21
@Andy, sorry, no implication of pedantry intended - I just noticed that the answer above suffered from the same m and n transposition that affects so many who trip over your last name, and so I thought you'd be the logical person to point out that "antimonies", in this context, should be "antinomies". – Gerry Myerson Oct 4 2010 at 4:43
13
Ah, that's actually a pretty clever joke! So clever, of course, that I totally missed it when I first saw it. Well, then, I must insist on the correct spelling! In fact, I'll correct it myself! – Andy Putman Oct 4 2010 at 4:48
show 2 more comments
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
Contrary to popular opinion, there is no single foundation for mathematics. Probably you're referring to ZF or ZFC, but most mathematics can be developed on the basis of axioms that are logically much weaker than that. If an inconsistency in ZF were discovered, we would analyze the inconsistency and then scale back to some weaker system that would avoid the inconsistency yet still suffice for 99%+ of mathematics. Much of the work of finding other candidates for foundations, and figuring out how much mathematics can be developed from them, has already been done by those working in the field known as "reverse mathematics." The basic text in this field is Simpson's Subsystems of Second-Order Arithmetic, but there is a growing literature.
We've already seen a dry run of this kind of instantaneous damage control. When Kunen's inconsistency theorem showed that Reinhardt cardinals were inconsistent, his work was hailed as a major achievement, but all we did was toss out Reinhardt cardinals and restrict ourselves to large cardinals below that bound.
For most mathematicians, "ZFC" is just an arbitrary trigraph that is cited when the need arises to specify a particular foundation for mathematics. I daresay many people who toss the trigraph around couldn't even state all the axioms of ZFC precisely. If we scale back to some other system that goes by some other trigraph, it won't take much retraining to learn the new trigraph. For most researchers, that will be the only impact on their day-to-day work.
-
10
Your last paragraph almost made me ruin my (new!) keyboard with coffee. +1, as MO does not do that often enough! – Mariano Suárez-Alvarez Oct 4 2010 at 17:39
4
Thus Paul Feyerabend's "Great scientists are methodological opportunists who use any moves that come to hand, even if they thereby violate canons of empiricist methodology" translates into "Great mathematicans are foundational opportunists who use any moves that come to hand, even if they thereby violate canons of logical methodology"? – Bruce Arnold Oct 4 2010 at 20:59
7
@Bruce: No, the point is that ZFC is way overkill for most of mathematics, so we have tons of slack to scale it down if necessary. See this MO question: mathoverflow.net/questions/36580/… or this one: mathoverflow.net/questions/39452/… – Timothy Chow Oct 4 2010 at 22:00
5
Reinhardt cardinals havnt been completely tossed out yet: they are inconsistent with ZFC, but it is unknown if they are consistent with ZF. – Richard Borcherds Oct 4 2010 at 22:41
4
But, if I understand that correctly, it appears to me that Voevodsky is concerned about the consistency of first order Peano arithmetic, which is more serious than the non-existence of an "overly large" cardinal. In particular, Voevodsky has stated in his talk that the ordinal $\epsilon_0$ might not exist. – Greg Graviton Oct 6 2010 at 16:54
show 2 more comments
If I found an inconsistency in mathematics, I would write up solutions to the six remaining Clay problems, collect my six million, retire and let you guys sort out the mess.
-
26
If you could write up proofs of those problems where it wasn't obvious you had found a contradiction in ZFC, you'd deserve all the money. – Ryan Budney Oct 3 2010 at 23:14
14
Uhm, the Yang-Mills mass-gap problem includes the rigorous mathematical formulation of the problem itself, so you can't get the prize just by using the inconsistency of ZFC... – Yuji Tachikawa Oct 4 2010 at 1:44
I agree with Ryan. This is a great idea ;-). – Martin Brandenburg Oct 4 2010 at 8:21
24
You could collect seven or eight million, not six or seven (depending on Yang Mills): the P=NP prize is the only one formulated in such a way that either a positive or negative answer is guaranteed 1 million (in contrast, if one found a zero of zeta off the line, a committee would have to decide whether this was really a significant achievement worthy of that million). So if you prove inconsistency, you (or your lawyer) can claim P=NP and P<>NP... – Paul-Olivier Dehaye Oct 4 2010 at 13:13
18
Randall Munroe, creator of the comic strip xkcd, points out a flaw in your scheme: xkcd.com/816 (make sure to mouse over the strip) – Timothy Chow Nov 9 2010 at 20:01
show 1 more comment
I'm annoyed by the careless use of the word "proof" in Voevodsky's lecture. Of course, in the context of everyday mathematical discussions, it is normally sufficiently clear what one means by "proof" (it usually means something like "argument that is formalizable in ZFC"; even though I agree with Timothy Chow that most mathematicians wouldn't be able to explain exactly what ZFC is, they are nevertheless trained to recognize certain things as being "proofs" and I believe that those things that mathematicians normally recognize as proofs correspond to "proofs in ZFC"). But in the context of a discussion about foundations, it is far from clear what "proof" means and it is good practice to be more precise (proof in PRA? proof in PA? proof in ZFC? what?). There is no absolute notion of proof that, once presented, eliminates any possibility of doubt forever.
There doesn't seem to be anything new/interesting about Voevodsky's lecture. Anyone that is mildly educated about foundations has already entertained the question "what if ZFC is inconsistent?" or "what if PA is inconsistent?"; questions like that come around, from time to time, in any forum that discusses foundations of mathematics.
As Voevodsky mentioned, it is possible to present a constructive proof that an inconsistency in PA leads to an ever decreasing sequence in epsilon_0 (he mentioned a proof by Gentzen; there is also one by Gödel himself). Such proof convinces me that PA is consistent, as I find the idea of constructing an ever decreasing sequence in epsilon_0 rather crazy. But, of course, one can say "so what? I'm skeptical" (of course, one could also say that about any proof).
Sadly, Voevodsky's proposal about what to do if PA turns out to be inconsistent seems to me somewhat silly. If I understand him correctly, what he proposes is that we should have a system which is inconsistent, but we should also have some algorithm which separates "unreliable proofs" from "reliable proofs" (in such a way, I suppose, that there shouldn't be a "reliable proof" of both P and not(P); otherwise, I cannot understand what "reliable" could possibly mean). This "two step" scheme doesn't help at all. Instead of having "proofs" that can prove both P and not(P) and "reliable proofs" that do not prove both P and not(P), we could just restrict the term "proof" to the "reliable proofs". But, if one assumes the existence of an algorithm that decides whether something is or isn't a proof, and if the system is sufficiently complex to allow for interesting mathematics to be done within it, then Gödel's arguments would again present the usual obstruction for the existence of finitary proofs of consistency.
-
3
I think Voevodsky intends to get around your last argument by not actually assuming the existence of an algorithm that separates reliable and unreliable proofs. Rather, it seems that he describes a probabilistic algorithm that given a reliable proof will generically produce a certificate of its reliability in finite time, but given an unreliable proof it will simply not halt. So there would be no way of proving unreliability of a proof using this hypothetical algorithm, and he leaves open the possibility that there will exist reliable proofs whose reliability cannot be proven either. – Dan Petersen Oct 7 2010 at 7:26
2
Ok, so normally we have an algorithm that checks whether something is a proof (i.e., the set of all proofs is recursive), which implies that the set of all theorems is $\Sigma_1$ (i.e., recursively enumerable). The new proposal would be: let's have an algorithm that takes a proposed proof as input, sometimes it halts and answers "yes, that is a (reliable) proof" and sometimes it doesn't halt. This makes the set of all (reliable) proofs $\Sigma_1$ (instead of recursive), but this again implies that the set of all theorems is $\Sigma_1$, so I guess Gödel-like arguments would apply just as well. – Daniel Tausk Oct 7 2010 at 12:32
1
Probably I misused the word probabilistic algorithm -- I'm no computer scientist. What I meant is that he seems to leave open the possibility that there are reliable proofs for which this hypothetical algorithm would not halt, but that one should get a certificate for "most" reliable proofs. – Dan Petersen Oct 7 2010 at 13:20
1
Ok, so maybe we would have "proofs", "reliable proofs" and "certifiably reliable proofs", i.e., those "reliable proofs" for which the algorithm halts in finite time and answers "yes, this is reliable" (actually, it would be semantically less messy to restrict the term "proof" just for the "certifiably reliable proofs"). Since a Gödel-like theorem would block finitary proofs of consistency of the theory in which only the "certifiably reliable proofs" are considered, it would, a fortiori, block finitary proofs of consistency of the theory in which all "reliable proofs" are considered. – Daniel Tausk Oct 7 2010 at 14:05
7
Unfortunately, there seems to be no trivial way of getting around Gödel's theorem (otherwise, people like Gödel and Hilbert would have already found a way to do it). Either we work with systems for which no strictly finitary proof of consistency (say, a proof in PRA or less) is possible or we work with systems that cannot handle the vast majority of what we today call "mathematics". – Daniel Tausk Oct 7 2010 at 14:10
I once heard Mike Freedman (the Fields medalist) say he thinks ZFC is probably inconsistent but that the minimal length paradox is so long no-one has found it yet. Once a paradox is found, he said, we'll just patch it up with a new axiom, and continue. His reasoning seemed to be that it was unlikely that we happened to find a consistent theory.
-
13
Couldn't it also be the case that the minimal-length paradox is so long that human mathematics will always be able to avoid it? – Qiaochu Yuan Oct 3 2010 at 16:24
5
To add to that, if ZFC was found to be inconsistent I doubt the inconsistency would be as interesting as something like Russell's paradox. If it were, I suppose that would be quite informative. – Ryan Budney Oct 3 2010 at 16:30
13
Compare Pierre Cartier, as quoted by David Ruelle in Chance and Chaos: "The axioms of set theory are inconsistent, but the proof of inconsistency is too long for our physical universe." – Todd Trimble Oct 3 2010 at 20:09
5
I'm curious: Is there a rigorous sense in which it can be said that "most" theories are inconsistent? (I'd imagine the answer here to be yes.) But it might be worth asking if there's some sort of phase transition, such that almost all inconsistent theories have a relatively short contradiction... – Harrison Brown Oct 4 2010 at 4:23
4
I'm also intrigued: what does Freedman intend by "patch [the inconsistent theory] with a new axiom"? Strengthening a too-weak theory is easy, and can indeed be patched — at the crudest level, you can just add an axiom doing whatever you want. But weakening an inconsistent theory is harder: all the existing axioms work together in complicated ways, and taking any one axiom out usually makes it break down (it certainly does with ZFC), so you have to rewrite at least parts of the theory from scratch. Hence things like constructive set theories, dependent type theory, etc.. – Peter LeFanu Lumsdaine Oct 4 2010 at 17:17
show 6 more comments
Suppose today's news is actually, that in some form "current foundations of mathematics are inconsistent". Would any mathematician stop his-her research work for this? I don't think so. Even an antinomy has a mathematical content; after changing suitably the formal system in which it is formulated, the antinomy would become a positive statement, and the show would go on.
-
16
After all, the show must go on. – babubba Oct 3 2010 at 19:27
2
that's possibly the point of the whole matter ;-) – Pietro Majer Oct 4 2010 at 6:26
This didn't work when I tried to post it the first time, hope this won't wind up as a double post.
Thorsten Altenkirch, a constructive logician and computer scientist, made a memorable quote on the TYPES Forum mailing list in June 2008 which is very much in the spirit of Voevodsky's talk:
It seems to me that Type:Type is an honest form of impredicativity, because at least you know that the system is inconsistent as a logic (as opposed to System F where so far nobody has been able to show this :-). Type:Type includes System F and the calculus of constructions and I think all reasonable programs can be reformed into Type(i):Type(i+1) possibly parametric in i. However, sometimes you don't want to think about the levels initially and sort this out later - i.e. use Type:Type. A similar attitude makes sense in Mathematics, in particular Category Theory, where it is convenient to worry about size conditions later...
The system he is tongue-in-cheek questioning the consistency of is System F, which would correspond to second-order, not first-order, arithmetic. Type:type is an axiom that makes constructive type theory inconsistent (Girard's Paradox), so the "honest impredicativity" he refers to is therefore similar to what Voevodsky was talking about: we're admitting that everything is inconsistent and then doing our work anyway.
-
Voevodsky is not the only one who hopes for a proof of inconsistency (as mentioned in Dick Palais's answer): see Conway and Doyle's Division by Three, bottom of page 34, where they express the same kind of skepticism as Nelson.
-
2
I think it's slightly misleading to represent anything in this paper as Conway's opinion. A footnote on the first page clearly says that he didn't write the paper, nor particularly liked "the exposition", which presumably includes the final remarks on inconsistency. – Pietro KC Oct 4 2010 at 7:04
7
I would fully agree, except that I've heard Conway say similar things in person. – Todd Trimble Oct 4 2010 at 10:48
(I took Doyle's depiction of Conway's objection to the form of the paper -- all the "fluff" -- as referring more to the very discursive expository style.) – Todd Trimble Oct 4 2010 at 11:05
Oh, OK, that's another matter entirely then. :) – Pietro KC Oct 5 2010 at 3:01
Using the following table to you convert between propositional logic and arithmetic of multivariate polynomials over $\mathbb{F}_2$: $$\mbox{TRUE} \leftrightarrow 1$$ $$\mbox{ FALSE} \leftrightarrow 0$$ $$X \mbox{ or } Y \leftrightarrow xy+x+y$$ $$X \mbox{ and } Y \leftrightarrow xy$$ $$!X \leftrightarrow x+1$$ So a proposition $P(X_1,X_2,\ldots, X_n)$ can be satisfied if and only if the corresponding polynomial equation $p(x_1,x_2,\ldots,x_n)=1$ has a solution. For example, the proposition $$X \mbox{ and } !X$$ is not satisfiable. This corresponds to the fact the polynomial $x(x+1)=1$ or $x^2 + x +1=0$ has no solutions over $\mathbb F_2$.
We now should do in logic as we do in algebra. Since this proposition isn't satisfiable over our standard logic we create an algebraic extension of logic where truth values now live in
$$\mathbb F_2[x]/(x^2 + x + 1)!$$
I don't know how to extend these ideas to first order logic.
-
I was TOTALLY late to the party. – Taylor Dupuy Sep 2 2011 at 3:38
This is cute, and I'm voting it up. – Ryan Reich Dec 25 2011 at 0:52
Mathematics is too big to fail.
-
Although I can see where this may be off topic, I actually think that this has some truth to it. As some other answers have said, if the current foundations are found to be inconsistent, we as mathematicians will step in and fix it by modifying the foundations such that they are still sufficiently rich to deal with math as a whole. At least I hope that this can be done in some manner. Perhaps some paraconsistent logic system will be the needed device that will fix up the foundations. – Spice the Bird Dec 25 2011 at 3:44
Comment:
Putman says "Of course Voevodsky was NOT saying that he felt that the body of theorems making up the "classic mathematics" that we normally deal with might be inconsistent, that is quite a different matter."
But wasn't he? His conjecture is "I suggest that the corret interpretation of Goedel's second incompleteness theorem is that it provides a step towards the proof of inconsistency of many formal theories and in particular of the "first order arithmetic"."
What I don't understand is this. If classical arithmetic is inconsistent anywhere, then it is inconsistent everywhere (an inconsistency proves everything). So why haven't we found any inconsistencies yet?
What is cool is that the notion of reliability he talked about seems to be a move toward a "local" notion of consistency.
Humm, does this make sense? Let A and B be closed formulas of some formal system. Define the "logical distance" between A and B to be the shortest proof of B assuming A (inculding the data of the number of applications of the rules of inference, etc.) Say that B is "locally consistent" with A if the logical distance between A and B is strictly less than the logical distance between A and not-B. A theory is locally consistent if for every pair (A, B) the logical distance from A to B is not equal to the logical distance from A to not-B. Etc. Etc.
-
Paul, one simple possibility for why we haven't found any inconsistencies yet is that the shortest proof of an inconsistency is too long to write down physically. – Timothy Chow Sep 1 2011 at 22:51
The inconsistency of mathematics is a quite common option when you consider seriously some non-classical logics.
For an introduction, read the following page from the Stanford Encyclopedia of Philosophy : http://plato.stanford.edu/entries/mathematics-inconsistent/
-
What if the current foundations of Mathematics are inconsistent?
Had this kind of opinion been expressed before?
The opinion that the Peano Arithmetic is likely to be inconsistent is not uncommon, along with ideas on how to deal with this (targeting the "what if" question). Wikipedia has an article about that, and MathOverflow has a question. These have links to works by Nelson, and to a paper by Sazonov, which among others refer to Parikh (1971) and Yessenin-Volpin (1959). These things have been discussed also in a paper by Rashevski (1973) and a few years ago also (quite extensively, with a number of additional references) at the FOM mailing list.
An implicit question is "What do you think of Vladimir Voevodsky's talk?"
His message is obviously: "Guys, your Peano Arithemetic is something not to be taken too seriously. Which is a good reason to be a bit more serious about Voevodsky's univalent foundations!" I hear this message, in particular, when he speaks of "reliable proofs", and in fact it does resonate with me. His subsequent talk about the univalent foundations is much more substantial; having a separate copy of the "slides" helps to follow the video.
-
From http://www.scottaaronson.com/papers/pnp.pdf p. 3 since I had the link handy from another thread:
Have you ever lay awake at night, terrified of what would happen were the Zermelo-Fraenkel axioms found to be inconsistent the next morning? Would bridges collapse? Would every paper ever published in STOC and FOCS have to be withdrawn? ("The theorems are still true, but so are their negations.")
-
This paper looks very nice, particularly for a non-logician! – Spice the Bird Dec 25 2011 at 3:49
In the discussion of Gentzen's proof, Voevodsky expresses total bafflement at why someone would presume the ordinals are well-ordered. He does not say that he rejects any particular argument but rather seems to suggest there are no arguments. Why? Either he was simply not aware of any kind of reason, or somehow thought the audience didn't need to know about that. Neither option is good.
-
In either case this will not affect any practical applications of mathematics, because practical mathematics deals only with finite quantities, and finite arithmetics has been shown to be consistent. The paradoxes arise only when using abstract axioms, such as axiom of infinity, axiom of choice etc. That is the major body of analysis will survive in a form of constructivist analysis or a stricter approach (depending on where the inconsistency is discovered).
-
2
PA has only been shown to be consistent using infinite ordinals (whose existence is an extra assumption). In fact there are a (small) number of people who think that PA might be inconsistent. – David Roberts Feb 20 2011 at 10:30
Peano arithmetic is not finite arithmetic. It includes axiom of potential infinity. If an arithmetic is built over a finite set of numbers, it is consistent. – Anixx Feb 20 2011 at 12:43
And the group about you said does not accept the proof not because it requires extra assumptions, but because they reject the existence of infinite set of natural numbers (i.e. they just DISAGREE with usefulness of one of the axioms). The proof itself finitistic. – Anixx Feb 20 2011 at 12:49
3
To sum it up: 1) the proof that PA is consistent exists, and is finitist. 2) People who disagree are ultrafinitists 3) all mathematical paradoxes so far were discovered outside of finitist realm. – Anixx Feb 20 2011 at 12:54
What of you mean by "the proof that PA is consistent exists, and is finitist?". To what proof are you referring to? – Joël Nov 22 2011 at 17:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.96117103099823, "perplexity_flag": "middle"}
|
http://sbseminar.wordpress.com/2007/08/08/quantum-geometric-satake/
|
## Quantum geometric Satake August 8, 2007
Posted by Joel Kamnitzer in geometric Langlands, quantum groups.
trackback
I want to tell you today about Dennis Gaitsgory’s recent work on quantum geometric Satake. If we had this blog six months ago, perhaps I would have written something about this then and it would have been “brand-new”. Anyway, six months old is still pretty new by math standards.
First, I should tell you a few works about regular geometric Satake (I had better resist writing classical geometric Satake) — one of my favorite subjects. I’ll approach it from the perspective of someone who studies representation theory of complex groups.
Let G be a complex reductive group. Then we have the semisimple category of finite dimensional representations of G. Geometric Satake is about constructing this category in a topological way. We take the Langlands dual group of $G^\vee$ of G (here is a definition) and then take its affine Grassmannian $Gr = G^\vee((t))/G^\vee[[t]]$. The geometric Satake correspondence states that the category of $G^\vee[[t]]$ equivariant perverse sheaves on Gr is equivalent to the representation category.
You might wonder why it is useful to realize the representation category in such a complicated way. Well, it does have some surprisingly useful consequences, such as the theory of MV cycles (see my thesis) or my recent work with Sabin Cautis. Actually, people in geometric Langlands usually view it the other way around — ie as a description of this category of perverse sheaves, but for the purposes of this post I think that my “backwards view” is best.
Quantum geometric Satake is about an attempt to do something similar for quantum groups. Namely to construct some category of perverse sheaves (or equivalently D-modules) using the affine Grassmannian which will realize the category of representations of a quantum group.
The affine Grassmannian has a natural line bundle coming from the central extension of $G^\vee((t))$ and this leads to sheaves of twisted differential operators on Gr (much like one has twisted differential operators on the flag variety). The first attempts to construct a category involve looking at categories of twisted D-module on Gr which are $G^\vee[[t]]$ equivariant — the amount of twisting is supposed to be the quantum paramenter. However, this does not work.
Gaitsgory’s idea (or maybe Jacob Lurie’s idea) is to start with the “Whittaker category” which is another version of geometric Satake. In this version, one considers perverse sheaves on Gr which are equivariant for the group $N((t))$, with respect to some fixed non-degenerate additive character $\chi$. This theory was developed by Frenkel, Gaitsgory, and Vilonen and is motivated by the theory of Whittaker functions for p-adic groups. Surprisingly the above “quantum deformation” does work in this setting. Namely, Gaitsgory considers the category of twisted D-modules on Gr which are equivariant for $N((t))$ with character $\chi$ . He then proves that this category is equivalent to the corresponding category of representations of the quantum group (again the amount of twisting equal the quantum parameter).
Now, there are two big caveats to make, both interesting. First, the orbits of $N((t))$ are infinite dimensional, so this category perverse sheaves (or D-modules) is not a priori well-defined. There is a complicated trick for constructing this category of perverse sheaves which involves a compactification of the stack of N-bundles on a curve (I won’t go into that here).
Secondly, Gaitsgory doesn’t prove an equivalence with the category of representations of the quantum group, but rather with the category of factorizable sheaves which itself is equivalent to the category of representations of the quantum group, by the work of Bezrukavnikov, Finkelberg, Schechtman. This is itself an interesting topic, but unfortunately one for which I have neither energy or expertise to discuss right now!
## Comments»
1. Allen Knutson - August 8, 2007
Does this require/forbid you to be at a root of unity?
If it doesn’t forbid them, are we likely to see a Jared Anderson-like rule for computing fusion coefficients come from this?
2. Joel Kamnitzer - August 8, 2007
So far the result is proven only in the non-root of unity case, although it is conjectured to hold at a root of unity as well.
I’m not sure exactly how fusion would work though. The problem is that this category of perverse sheaves is not obviously a tensor category — the analog in terms of functions is considering functions on G((t)) which are left invariant by G[[t]] and right equivariant by N((t)) and you can’t convolve such functions. However, Dennis has a replacement for this convolution structure, namely a fusion structure (that’s fusion in terms of working with the diagonal X in X x X (X a curve), not to be confused the fusion of representations). Recall that in the usual Satake case there is both fusion and convolution. However the “Jared Anderson rule” uses the convolution structure in the usual case, so I’m not sure what the analog will be here. I’ll keep an eye out for it.
3. David Ben-Zvi - August 9, 2007
I’m curious in what way the “Jared Anderson rule” uses
convolution, rather than fusion — don’t they define the same operation on sheaves? is it the explicit form of convolution that’s used?
(the fact that there are both of them
is supposed to explain, from a topological field theory point of view, why the Satake category is symmetric and not just braided, or E_3 rather than E_2.. I guess when we pass to quantum groups we need to drop back down to braided, so need to lose convolution)
I have tried to confuse the fusion of representations with that
on the Grassmannian — after all they’re both aspects of the same
conformal field theory operator product expansion or
topological field theory pair-of-pants product..
however I don’t understand
how to describe the integrable loop group representations in
a setting where there’s also this geometric fusion that you mention
(they don’t live on the right affine Grassmannian I think) – though
I was told Teleman has a picture for all of this.. anyone?
4. Allen Knutson - August 9, 2007
Recall that in the usual Satake case there is both fusion and convolution.
Huh?
Maybe I’m confused about the word “usual”. Are we losing geometric, but keeping quantum? Because if we’re losing quantum but keeping geometric, then I don’t understand what fusion is.
If convolution corresponds to tensor product (as in geometric Satake), then something’s going to be weird since tensor product of quantum group reps adds levels. And level corresponds to which root of unity, right? So convolution will be some correspondence between three different categories.
I hope I’m making some sense here.
5. Joel Kamnitzer - August 9, 2007
To answer Allen’s question (and David’s first one at the same time), in geometric Satake (which is what I meant above by usual) there are two ways of defining the tensor structure on the category of G[[t]] equivariant perverse sheaves.
The first, called convolution, is to look at convolution of the affine Grassmannian with itself, namely $G((t)) \times_{G[[t]]} Gr$. It is a Gr bundle over Gr which also maps to Gr. Under this description, tensor product multiplicities become components of fibres of the map to Gr (once you restrict to the convolution of one $Gr_\lambda$ over another $Gr_\mu$). This is where Jared’s formula comes from.
The second way to define the tensor structure is called fusion. As David mentions above, the two definitions are equivalent. The fusion structure (which is unrelated (at least not directly related – see the last paragraph of David’s email) to the fusion of quantum group reps) involves working with the Beilinson-Drinfeld Grassmannian over a product of curves. In this formulation, I don’t think that it is possible to see what varieties give you tensor product multiplicities.
6. David Ben-Zvi - August 9, 2007
Thanks Joel!
Regarding Allen’s question: unless I’m confusing notions,
the tensor product of quantum group representations doesn’t add levels, or change q (eg root of unity) — we have a family
of tensor categories (or Hopf algebras)
labeled by q, ie tensor product is q by q. I think what Allen is referring to is that under the equivalence between quantum group representations and loop group representations, tensor product goes
not to tensor product (which adds levels) but to fusion of representations. When we can realize these representations geometrically (as D-modules) on the affine Grassmannian (eg at critical or negative level, but not at positive level where the integrable reps live) then this fusion is realized by the B-D fusion or factorization picture that Joel explains, if I understand correctly. (I think this might be close
to the idea of factorizable sheaves, which maybe we can get Joel to explain — please???)
Joel – is it correct to think that in the fusion picture the multiplicities would be given in terms of the fibers of the (nonalgebraic) collapsing map from a nearby fiber in the fusion family to the Grassmannian? ie it would be given topologically, but not naturally algebraically?
7. Allen Knutson - August 10, 2007
Thanks David — of course I was mixing up quantum group tensoring (“=” affine group fusion) with affine group tensoring. (Though now I’m very vaguely curious how a quantum group person can view affine group tensoring, mixing roots of unity.)
8. Joel Kamnitzer - August 11, 2007
David – I believe that you are correct that in the fusion picture the multiplicities can be given by the fibres of a collapsing map from a nearby fibre. I’ve never thought about trying to understand those fibres systematically — it may be harder than understanding the fibres of the convolution morphism since as you say, this is just a topological picture. So to answer Allen’s original question, perhaps you can get varieties which record tensor product coefficients for quantum groups at a root of unity.
9. Allen Knutson - August 11, 2007
Collapsing maps aren’t algebraic — they’re only well-defined up to homotopy (or perhaps stratified symplectomorphism). In particular, the fibers aren’t going to be varieties.
Example: {xy=1} has a collapsing to {xy=0}, shrinking the {|x|=1} neck to the singular point, and away from there a symplectomorphism. (Ask for it to be U(1)-equivariant and to preserve the real structure, and it becomes unique.) The fiber is S^1.
10. Pre-Talbot seminar « Secret Blogging Seminar - February 8, 2008
[...] and quantum Langlands, which should play a significant role in the workshop. As it happens, Joel posted about this very result last [...]
11. Two Workshops on Representations and Categories – Part I « Theoretical Atlas - November 14, 2011
[...] geometrically. This was a bit outside my grasp, since it involves the Langlands program and the geometric Satake correspondence, neither of which I know much of anything about, but which give geometric/topological ways of [...]
%d bloggers like this:
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 13, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9246904850006104, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/users/3711/ghshtalt?tab=activity&sort=all&page=5
|
# ghshtalt
reputation
2619
bio
website
location
age
member for 2 years, 6 months
seen Mar 10 at 19:08
profile views 132
check it out
| | | bio | visits | | |
|-------------|----------------|---------|----------|-----------------|-------------------|
| | 842 reputation | website | | member for | 2 years, 6 months |
| 2619 badges | location | | seen | Mar 10 at 19:08 | |
# 239 Actions
| | | |
|-------|----------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Apr17 | comment | How to show certain things related to scalar products@Theo: yeah that was the 'workaround' I was trying out, but now I can go ahead and replace the alphas thanks to what you just taught me. |
| Apr17 | revised | How to show certain things related to scalar productsadded 2 characters in body |
| Apr17 | comment | How to show certain things related to scalar products@Theo: thank you so much for that! I was trying to figure out why it was such a mess... definitely valuable remarks! |
| Apr17 | revised | How to show certain things related to scalar productsadded 6 characters in body; added 114 characters in body; added 1 characters in body |
| Apr17 | asked | How to show certain things related to scalar products |
| Apr17 | comment | Subtraction and division with integers modulo 3@Gerry: thanks for the explanation. I had copied that notation from Paul R. Halmos -Linear Algebra Problem Book... But, yeah I will definitely try to avoid using those sorts of fractions in the future. |
| Apr17 | accepted | How do you show this property of a differentiable function given information about the derivative? |
| Apr17 | comment | Subtraction and division with integers modulo 3Thank you for this answer |
| Apr17 | comment | Subtraction and division with integers modulo 3@Fabian: thanks for the helpful tips |
| Apr17 | accepted | Subtraction and division with integers modulo 3 |
| Apr17 | comment | Subtraction and division with integers modulo 3@quanta: I'm sorry if I'm just completely missing something here, but are you saying there is a problem with the way I am trying to write down/express an idea, or with the idea itself that I am trying to carry out division on the integers modulo 5? |
| Apr17 | comment | Subtraction and division with integers modulo 3@quanta: ok, yeah I meant by that 3 divided by 4 in integers modulo 5 (and I realize the way I used $x$ in my above comment was nonsense). But isn't that how you would carry out division of 3 by 4 in integers modulo 5? Since $4$ is the inverse of $4$, $3*4^{-1}=2$ in integers modulo 5, no? |
| Apr17 | comment | Subtraction and division with integers modulo 3@quanta: thank you for the answer. I'm still digesting it, but I wanted to check if I am understanding part of the idea. If I wanted to divide on say integers modulo 5 (where there are a couple more examples) then for say $\frac{3}{4}=x$ I would first always calculate $4^{-1}$ and then multiply? And because of the gcd stuff you showed, I know that $1\equiv 4x \mod 5 \Rightarrow x = 4 \Rightarrow \frac{3}{4} = 2$? So in these cases it is about finding the inverses first? |
| Apr17 | asked | Subtraction and division with integers modulo 3 |
| Apr16 | comment | How do you show this property of a differentiable function given information about the derivative?@Qiaochu: thanks for that comment, I guess the derivative of a polynomial satisfies $f'(-x)=-f'(x) \Rightarrow f'$ is odd with degree $n \Rightarrow \int f'(x)dx$ is even with degree $n+1$..? Unfortunately I can't say much about a power series or a differentiable function in general... |
| Apr16 | comment | How do you show this property of a differentiable function given information about the derivative?$g(0) = f(0)-f(-0)=0$ ? |
| Apr16 | comment | How do you show this property of a differentiable function given information about the derivative?thank you for this answer. Unfortunately I can't manage to finish the problem with it yet. $g(x) = f(x)-f(-x) \Rightarrow g'(x)=f'(x)+f'(-x) \Rightarrow g'(x) =0 \Rightarrow g(x)=C \Rightarrow ?$ is the rest pretty much as Chris shows, or did you have something else in mind? (The only thing is that I don't know how I would have defined all these other functions and so on...) |
| Apr16 | comment | How do you show this property of a differentiable function given information about the derivative?@Fabian: yeah, careless mistake on my part, thank you for clearing that up |
| Apr16 | comment | How do you show this property of a differentiable function given information about the derivative?@Fabian: Thank you, I noticed that as well, but wasn't sure if it would turn out to be a 'trick.' As for deducing something: what more than $f(-x) = -f(x)$ or maybe even $f(-x)+f(x)=0$ should I see? Also, why is it ok to evaluate the definite integral here? (sorry if i'm missing very obvious stuff!) |
| Apr16 | comment | How do you show this property of a differentiable function given information about the derivative?@Thomas: It's not homework, but thanks for the hint anyway. If I integrate both sides of the above equation I get $f(-x) + C = -\int f(x)dx$ right? I am not sure what to do with that and in I general get thrown off by the constant whenever I try to use integration... |
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.928633451461792, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/36787?sort=newest
|
A question about disconnecting a Euclidean space or a Hilbert space
Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
By a "totally disconnected" point set I mean one whose only connected subsets are singletons. Can a finite dimensional Euclidean space whose dimension is at least two, be separated by any subset that is "totally disconnected"? Such a subset could not be closed in the space, for then it would be locally compact and therefore zero-dimensional. If we move beyond locally compact spaces, can a separable and infinite dimensional Hilbert space be separated by any subset that is "totally disconnected"?
-
1 Answer
Assume the complement of $S$ in $\mathbb{R}^n$ is not connected, say $A$ and $B$ are relatively closed and disjoint in $\mathbb{R}^n\setminus S$ (and nonempty of course); let $O$ be the complement of the closure of $B$ and $U$ the complement of the closure of $A$, then $O$ and $U$ are disjoint nonempty open subsets of $\mathbb{R}^n$ and the complement of their union, $F$, is closed in $\mathbb{R}^n$, a subset of $S$ and it separates $\mathbb{R}^n$. In short: $S$ contains a closed set that also separates; as you noted that set is zero-dimensional and hence the answer is `no' for Euclidean spaces. I don't know (yet) about Hilbert space.
-
Many thanks for your response and its very nice proof. It looks as if the same line of argument could be used for Hilbert space up to the point where you have to show whether or not a closed and totally disconnected subset of Hilbert space can diconnect it. – Garabed Gulbenkian Aug 27 2010 at 20:16
(To be pedantic one should mention that under the given assumptions $S$ has empty interior). – Wlodzimierz Holsztynski May 10 at 23:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9414757490158081, "perplexity_flag": "head"}
|
http://en.wikiversity.org/wiki/Introduction_to_Elasticity/Antiplane_shear_example_1
|
From Wikiversity
# Example 1
Given:
The body $-\alpha < \theta < \alpha$, $0 \le r < a$ is supported at $r = a$ and loaded only by a uniform antiplane shear traction $\sigma_{\theta z} = S$ on the surface $\theta = \alpha$, the other surface being traction-free.
A body loaded in antiplane shear
Find:
Find the complete stress field in the body, using strong boundary conditions on $\theta = \pm\alpha$ and weak conditions on $r = a$.
[Hint: Since the traction $\sigma_{\theta z}$ is uniform on the surface $\theta = \alpha$, from the expression for antiplane stress we can see that the displacement varies with $r^1 = r$. The most general solution for the equilibrium equation for this behavior is $u(r,\theta) = Ar\cos\theta + Br\sin\theta$]
## Solution
Step 1: Identify boundary conditions
$\begin{align} \text{at}~ r & = 0 ~;~~ u_r = 0, u_{\theta} = 0 \\ \text{at}~ r & = a ~;~~ u_r = 0, u_{\theta} = 0, u_{z} = 0 \\ \text{at}~ \theta & = -\alpha ~;~~ t_{\theta} = 0, t_{r} = 0, t_{z} = 0 \\ \text{at}~ \theta & = \alpha ~;~~ t_{\theta} = 0, t_{r} = 0, t_{z} = S \end{align}$
The traction boundary conditions in terms of components of the stress tensor are
$\begin{align} \text{at}~ \theta & = -\alpha ~;~~ \sigma_{\theta r} = 0, \sigma_{\theta\theta} = 0, \sigma_{\theta z} = 0 \\ \text{at}~ \theta & = \alpha ~;~~ \sigma_{\theta r} = 0, \sigma_{\theta\theta} = 0, \sigma_{\theta z} = S \end{align}$
Step 2: Assume solution
Assume that the problem satisfies the conditions required for antiplane shear. If $\sigma_{\theta z}$ is to be uniform along $\theta=\alpha$, then
$\sigma_{\theta z} = \frac{\mu}{r} \frac{\partial u_z}{\partial \theta} = C$
or,
$\frac{\partial u_z}{\partial \theta} = \frac{Cr}{\mu}$
The general form of $u_z$ that satisfies the above requirement is
$u_z(r,\theta) = Ar\cos\theta + Br\sin\theta + C$
where $A$, $B$, $C$ are constants.
Step 3: Compute stresses
The stresses are
$\begin{align} \sigma_{\theta z} & = \frac{\mu}{r} \frac{\partial u_z}{\partial \theta} = \mu \left(-A\sin\theta + B\cos\theta\right) \\ \sigma_{rz} & = \mu \frac{\partial u_z}{\partial r} = \mu \left(A\cos\theta + B\sin\theta\right) \end{align}$
Step 4: Check if traction BCs are satisfied
The antiplane strain assumption leads to the $\sigma_{\theta\theta}$ and $\sigma_{r\theta}$ BCs being satisfied. From the boundary conditions on $\sigma_{\theta z}$, we have
$\begin{align} 0 & = \mu \left(A\sin\alpha + B\cos\alpha\right) \\ S & = \mu \left(-A\sin\alpha + B\cos\alpha\right) \end{align}$
Solving,
$A = -\frac{S}{2\mu\sin\alpha} ~;~~ B = \frac{S}{2\mu\cos\alpha}$
This gives us the stress field
$\sigma_{\theta z} = \frac{S}{2} \left(\frac{\sin\theta}{\sin\alpha} + \frac{\cos\theta}{\cos\alpha}\right) ~;~~ \sigma_{rz} = \frac{S}{2} \left(-\frac{\cos\theta}{\sin\alpha} + \frac{\sin\theta}{\cos\alpha}\right)$
Step 5: Compute displacements
The displacement field is
$u_z(r,\theta) = \frac{Sr}{2\mu}\left(-\frac{\cos\theta}{\sin\alpha} + \frac{\sin\theta}{\cos\alpha}\right) + C$
where the constant $C$ corresponds to a superposed rigid body displacement.
Step 6: Check if displacement BCs are satisfied
The displacement BCs on $u_r$ and $u_{\theta}$ are automatically satisfied by the antiplane strain assumption. We will try to satisfy the boundary conditions on $u_z$ in a weak sense, i.e, at $r = a$,
$\int_{-\alpha}^{\alpha} u_z(a, \theta) d\theta = 0~.$
This weak condition does not affect the stress field. Plugging in $u_z$,
$\begin{align} 0 & = \int_{-\alpha}^{\alpha} u_z(a, \theta) d\theta \\ & = \frac{Sa}{2\mu}\int_{-\alpha}^{\alpha} \left(-\frac{\cos\theta}{\sin\alpha} + \frac{\sin\theta}{\cos\alpha} + C\frac{2\mu}{Sa}\right) d\theta \\ & = \frac{Sa}{2\mu}\int_{-\alpha}^{\alpha} \left(-\frac{\cos\theta}{\sin\alpha} + \frac{\sin\theta}{\cos\alpha} + C\frac{2\mu}{Sa}\right) d\theta \\ & = \frac{Sa}{2\mu}\left[ \left(-\frac{\sin\theta}{\sin\alpha} - \frac{\cos\theta}{\cos\alpha} + C\theta\frac{2\mu}{Sa}\right) \right]_{-\alpha}^{\alpha} \\ & = \frac{Sa}{2\mu} \left(-2\frac{\sin\alpha}{\sin\alpha} + 2C\alpha\frac{2\mu}{Sa}\right) \\ & = -\frac{Sa}{\mu} + C\alpha \end{align}$
Therefore,
$C = \frac{Sa}{2\mu\alpha}$
The approximate displacement field is
$u_z(r,\theta) = \frac{S}{2\mu}\left(-r\frac{\cos\theta}{\sin\alpha} + r\frac{\sin\theta}{\cos\alpha} + a\frac{1}{\alpha}\right)$
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 40, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.824809730052948, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/number-theory/174884-modular-arithmetic.html
|
# Thread:
1. ## Modular arithmetic
If 2 numbers a=b mod(n), does that imply they are congruent modulo as well?
Anyway on to the matter at hand. Calculate 2^258 (mod 259)I did it 2 different ways and unfortunatly got 2 different answers.
2^(64)=86 mod 259,
2^(128)=144,
2^(256)=16 so 2^256.2^2=64 (mod 259). I know my method isn't very clear but you probably get the idea.
Method 2
259 = 7 x 37
2^6=1 mod(7)
2^258=(2^6)^43=1 mod (7)
2^36=1 mod(37)
2^258=(2^36)^7.2^6=27 mod(37)
but 1 x 27=27 not 64.
I have a strong feeling my first method is correct. Which one is correct and why is the other wrong?
2. The first is the correct one. Unfortunately, I don't understand your second method.
My way:
$(2^8)^{32}\cdot 2^3=2^{258}$
$2^8(mod259)\equiv -2$
$2^{258}=(2^8)^{32}\cdot 2^3\equiv(-2)^{32}\cdot 2^3=2^{34}=(2^8)^4\cdot 2^2\equiv (-2)^4\cdot 2^2=2^6=64(mod259)$
3. Originally Posted by Also sprach Zarathustra
The first is the correct one. Unfortunately, I don't understand your second method.
My way:
$(2^8)^32\cdot 2^3=2^258$
$2^8(mod259)\equiv -2$
$2^258=(2^8)^32\cdot 2^3\equiv(-2)^32\cdot 2^3=2^34=(2^8)^4\cdot 2^2\equiv (-2)^4\cdot 2^2=2^6=64(mod259)$
Explaining my second answer. I've seen it that you can express the modulus as a factor of primes and then find your a^b under each modulus you should get the same answer both times,(Your not meant to multiply them as i did). Here is a link with an example using this method.
http://uk.answers.yahoo.com/question...3030517AA4R5UI
4. Ok! Now I am understand what you tried to do!
You thought using Fermat's little theorem.
There is a lemma which states:
If $p$ and $q$ are different primes and $a^p\equiv a(mod q)$ and $a^q\equiv a(mod p)$, then $a^{pq}\equiv a(modpq)$
Try to work with that in your question, and see what you get...
5. Originally Posted by Also sprach Zarathustra
Ok! Now I am understand what you tried to do!
You thought using Fermat's little theorem.
There is a lemma which states:
If $p$ and $q$ are different primes and $a^p\equiv a(mod q)$ and $a^q\equiv a(mod p)$, then $a^{pq}\equiv a(modpq)$
Try to work with that in your question, and see what you get...
The only thing I don't understand is that the answer they obtained was 4, not 2 as the lemma demands.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 16, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9292500615119934, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/249351/nabla2u-0-implies-every-critical-point-is-a-saddle-point
|
# $\nabla^2u=0$ implies every critical point is a saddle point
Hi everybody I need help with this problem: let $u:R^n \rightarrow R$ be a function so that $\nabla^2u=0$ prove that every critical point of the function is a saddle point.
-
but if you use $u(x,y)=x^2+y^2$ then $\nabla^2u=2+2=4$ – user1080987 Dec 2 '12 at 18:36
I deleted that comment - I read the question wrong (thought it said $\nabla u=0$ instead of $\nabla^2u=0$) – icurays1 Dec 2 '12 at 18:41
1
– Julian Kuelshammer Dec 2 '12 at 19:34
## 1 Answer
Hints:
1. The condition $\nabla^2u=0$ is equivalent to the trace of the Hessian being zero
2. The trace of a square matrix is the sum of its eigenvalues
3. A point $x$ is a saddle point if the Hessian matrix of $u$ at $x$ has both positive and negative eigenvalues.
What if all the eigenvalues are zero, you ask? Well, the Hessian is symmetric and hence diagonalizable; if all the eigenvalues are zero, then the Hessian is similar to the zero matrix! (What does this say about the Hessian?)
Edit:
Indeed, as pointed out in a comment below, this argument is not sufficient for the case when the Hessian matrix is zero at a point, for this only tells us that the function $u$ is locally constant, and nothing about the behavior in various directions. The rigorous proof of this saddle-point property is essentially the maximum/minimum principle for harmonic functions. The "strong" version goes like this:
If $u$ is harmonic on an open, bounded, connected set $\Omega\subset\Bbb{R}^n$, and there exists $x_0\in\Omega$ such that $u(x_0)=\sup\{u(x):x\in\bar{\Omega}\},$ then $u(x)$ is constant on $\Omega$. (Similarly if $u(x_0)=\inf\{u(x):x\in\bar{\Omega}\}$, then $u$ is constant on $\Omega$).
So, suppose $\nabla^2u=0$ on $\Bbb{R}^n$ and $x$ is a critical point of $u$. Then, consider $B(x,M)$ for any arbitrary $M>0$. For each $M$ this is a bounded, open, connected set on which $\nabla^2u=0$. Thus if $x$ were a minimum, $u$ would be constant throughout $B(x,M)$. Since $M$ is arbitrary, it must be that $u$ is constant throughout $\Bbb{R}^n$. Similarly if $x$ were a maximum, $u$ must be constant throughout $\Bbb{R}^n$.
Thus, if $u$ is non constant and $x$ is a critical point, it can be neither a maximum nor minimum, and is hence by definition a saddle point.
So, it seems that we need to assume that $u$ is non constant, since constant functions are harmonic and have no saddle points (every point is a local max and local min). Otherwise the proof goes.
-
oh thank you! yes I've noticed that the condicion $\nabla^2U=0$ implies $Tr(HessU)=0$ but I wasn't sure if the trace of the matrix was allways the sums of the eigenvalues or if that only aplies when you diagolize. – user1080987 Dec 2 '12 at 18:57
– icurays1 Dec 2 '12 at 19:00
1
Can you actually conclude that the critical point is a saddle point when the Hessian is $0$? We might need a more general definition than the standard one ... – Matt Dec 3 '12 at 2:32
Thanks @Matt - completely overlooked that. It seemed too easy... – icurays1 Dec 3 '12 at 4:07
I hate to do this (mostly because this is really close to being a beautiful way to prove the maximum principle in 2-D which I hadn't thought of before), but can we really conclude that a harmonic function with $0$ Hessian is locally constant? If I'm not mistaken $u(x,y)=xy$ has an isolated critical point at $(0,0)$ with Hessian $0$, but every neighborhood of $(0,0)$ contains both positive and negative (and zero) values. – Matt Dec 3 '12 at 4:44
show 4 more comments
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 49, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9369820356369019, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/22767/is-the-area-of-a-polygonal-linkage-maximized-by-having-all-vertices-on-a-circle/22776
|
## Is the area of a polygonal linkage maximized by having all vertices on a circle?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Consider a (non-stellated) polygon in the plane. Imagine that the edges are rigid, but that the vertices consist of flexible joints. That is, one is allowed to move the polygon around in such a way that the vertices stay a fixed distance from their adjacent neighbors. Such a system is called a polygonal linkage.
As the linkage varies in its embedding in the plane, the area of the interior varies. The question is, When is the area maximized?
I have a specific answer I suspect is correct, but I am having trouble showing. I believe it is true that every polygonal linkage has as embedding where all the vertices lie on one circle (this isn't hard to show in the case when the linkage starts non-stellated). My claim is that the area is maximized exactly when all the vertices lie on a circle.
I can show this for a 4-sided polygon, but with techniques that do not generalize.
Also, my requirement that the polygon be non-stellated was only so that it was clear that there was a way to flex it to have all vertices on a circle. This question extends to the stellated case, but the question there is whether every stellated linkage can be flexed to one which is non-stellated.
-
1
Could you sketch your argument in the quadrilateral case? – Steven Gubkin Apr 27 2010 at 19:58
1
Since this question has been answered positively, the following one is natural: How does one compute the radius of the circle with an inscribed maximal solution? This radius is clearly a symmetric function of the lengths. Equivalently, given n strictly positive real numbers $l_1,...,l_n$ such that $2\max(l_1,...,l_n)<l_1+...+l_n$, compute the radius $\rho$ such that one can inscribe a polygon with $n$ sides of length $l_1,\dots,l_n$ inside a circle of radius $\rho$. – Roland Bacher Apr 28 2010 at 8:02
From a computational point of view, the following sequence converges fairly quickly to the correct value $\rho$ of the radius of the circle with maximal inscribed solution. Set $\rho_0=1/(2\pi)\sum_{i=1}^n l_i$ (where $l_1,\dots,l_n$ are the lengths) and define $\rho_1,\rho_2$ recursively by $\rho_{m+1}=1/\pi\sum_{i=1}^n\mathop{arcsin}(l_i/(2\rho_m))$. – Roland Bacher Apr 28 2010 at 11:39
## 5 Answers
This is a theorem of Cramer. See here
For the quadrilateral case the quickest proof is using Brahmagupta's formula
$$Area=\sqrt{(s-a)(s-b)(s-c)(s-d)-abcd\cos^2 \theta}$$ where $a,b,c,d$ are the sides, $s$ is the half perimeter and $\theta$ is half the sum of opposite angles.
Edit: I wonder if this argument works: Pick four consecutive vertices and move the linkage made of these four vertices till it's cyclic. There will be a subsequence of the polygons we get after such operations which converges, by the Weierstrass theorem. In the limit the polygon will be cyclic otherwise you can find four consecutive vertices not on a circle and increase the area again.
-
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
If you can show it for a quadrilateral, then it is true in general.
Proof Sketch: by induction on the number of sides. The base case is $n=4$, which you say you have done. Let $X$ be the set of joints. By a compactness argument, the maximum is achieved; pick a particular placement which achieves the maximum.
Consider a new linkage $X'$ with $n-1$ vertices. The distance between $x'_i$ and $x'_{i+1}$ is the same as that between $x_i$ and $x_{i+1}$; the distance between $x'_1$ and $x'_{n-1}$ is the same as the distance between $x_1$ and $x_{n-1}$ in the chosen optimal placement. Then the chosen placement of the $x_i$'s must also be an optimal placement of the $x'_i$'s; if not, we could keep $(x_1, x_{n-1}, x_n)$ in the same place and move the other $x_j$'s to a better placement for linkage $X'$, obtaining a better placement of $X$.
By induction, we see that $\{ x_1, x_2, \ldots, x_{n-1} \}$ lie on a circle. But the same applies to any index, so $X \setminus \{ x_i \}$ lies on a circle $C_i$ for any $i$. We see that $C_i$ and $C_j$ have $n-2$ points in common so (for $n>4$) they must be the same circle. So all the points lie on a circle.
As regards your other question, as to whether every closed linkage to be unfolded to lie on a circle, I believe the answer is also yes. Let $d_1$, $d_2$, ..., $d_n$ be the lengths of the sides. Without loss of generality, let $d_n = \max(d_i)$. Since your linkage closes, we have $d_n < \sum_{i <n} d_i$ by the triangle inequality.
Consider the function $f(R) = \sum 2 \sin^{-1}(d_i/(2R))$. When $R=d_n/2$, all the inverse sines are defined, and we have $$f(R) > \pi + 2 \sin^{-1} (\sum_{i<n} d_i/d_n) \geq 2 \pi,$$ where we have used the inequality $\sin^{-1}(x) + \sin^{-1}(y) \geq \sin^{-1}(x+y)$, following from the convexity of $\sin^{-1}$.
As $R$ goes to infinity, $f(R)$ goes to $0$. By the intermediate value theorem, $f(R)$ is $2 \pi$ somewhere; your linkage can be unfolded onto a circle of this radus $R$.
-
+1, you beat me to the induction proof :) – Gjergji Zaimi Apr 27 2010 at 20:35
I just realized that I may have misunderstood the question in the second part: I prove that there is a way to place the points on a circle so they are at the requisite distances, but I don't show that, from some other placement, you can get to the circular placement by a continuous motion. I have a nagging suspicion that the intent was to ask about the latter question, to which I don't know the answer. – David Speyer Apr 27 2010 at 20:51
1
Well from the work of Conelly, Demaine and Rote mentioned below one can always convexify a linkage. Why not use induction to place all vertices but one on a circle and then "push" the remaining vertex towards the circle? (this is really sketchy..) – Gjergji Zaimi Apr 27 2010 at 21:11
Here is a general way to think about these kind of problems. The Minkowski theorem says that a polytope is uniquely determined by its normals and volumes of the facets. You can loose some of these conditions and ask for the optimum isoperimetric ratio. In this case, in $\Bbb R^2$ you forget normals and conclude that inscribed polygon with given side lengths is optimal. A classical Lindelöf theorem does the opposite: in $\Bbb R^d$, it says that the optimal polytope with prescribed normals are circumscribed around the sphere (see e.g. here, Section 18.3).
-
A theorem of Connelly, Demaine, and Rote shows that a variety of plane linkages can be convexified or straightened:
There's a simple argument showing that from the quadrilateral case we can easily deduce the $n$-vertex case (at least in the convex case). We claim that in any configuration locally maximizing the area the vertices are concyclic. Take any four consectutive vertices $A$, $B$, $C$ and $D$. Then in at a local maximum of area $A$, $B$, $C$ and $D$ are concyclic lest we could raise the area by moving the edges $AB$, $BC$ and $CD$ (by the quadrilateral case). Hence the circumcircles of $ABC$ and $BCD$ coincide. So all the circumcircles of three consecutive vertices are the same circle: the vertices are concyclic.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 71, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.937903106212616, "perplexity_flag": "head"}
|
http://en.wikipedia.org/wiki/Operation_(mathematics)
|
# Operation (mathematics)
Common mathematical operators:
$+$ - plus (addition)
$-$ - minus (subtraction)
$\times$ - times (multiplication)
$\div$ - obelus (division)
The general operation as explained on this page should not be confused with the more specific operators on vector spaces. For a notion in elementary mathematics, see arithmetic operation.
In its simplest meaning in mathematics and logic, an operation is an action or procedure which produces a new value from one or more input values, called "operands". There are two common types of operations: unary and binary. Unary operations involve only one value, such as negation and trigonometric functions. Binary operations, on the other hand, take two values, and include addition, subtraction, multiplication, division, and exponentiation.
Operations can involve mathematical objects other than numbers. The logical values true and false can be combined using logic operations, such as and, or, and not. Vectors can be added and subtracted. Rotations can be combined using the function composition operation, performing the first rotation and then the second. Operations on sets include the binary operations union and intersection and the unary operation of complementation. Operations on functions include composition and convolution.
Operations may not be defined for every possible value. For example, in the real numbers one cannot divide by zero or take square roots of negative numbers. The values for which an operation is defined form a set called its domain. The set which contains the values produced is called the codomain, but the set of actual values attained by the operation is its range. For example, in the real numbers, the squaring operation only produces nonnegative numbers; the codomain is the set of real numbers but the range is the nonnegative numbers.
Operations can involve dissimilar objects. A vector can be multiplied by a scalar to form another vector. And the inner product operation on two vectors produces a scalar. An operation may or may not have certain properties, for example it may be associative, commutative, anticommutative, idempotent, and so on.
The values combined are called operands, arguments, or inputs, and the value produced is called the value, result, or output. Operations can have fewer or more than two inputs.
An operation is like an operator, but the point of view is different. For instance, one often speaks of "the operation of addition" or "addition operation" when focusing on the operands and result, but one says "addition operator" (rarely "operator of addition") when focusing on the process, or from the more abstract viewpoint, the function +: S×S → S.
## General definition
An operation ω is a function of the form ω : V → Y, where V ⊂ X1 × … × Xk. The sets Xk are called the domains of the operation, the set Y is called the codomain of the operation, and the fixed non-negative integer k (the number of arguments) is called the type or arity of the operation. Thus a unary operation has arity one, and a binary operation has arity two. An operation of arity zero, called a nullary operation, is simply an element of the codomain Y. An operation of arity k is called a k-ary operation. Thus a k-ary operation is a (k+1)-ary relation that is functional on its first k domains.
The above describes what is usually called a finitary operation, referring to the finite number of arguments (the value k). There are obvious extensions where the arity is taken to be an infinite ordinal or cardinal, or even an arbitrary set indexing the arguments.
Often, use of the term operation implies that the domain of the function is a power of the codomain (i.e. the Cartesian product of one or more copies of the codomain),[1] although this is by no means universal, as in the example of multiplying a vector by a scalar.
## Notes
1. See e.g. Chapter II, Definition 1.1 in: S. N. Burris and H. P. Sankappanavar, A Course in Universal Algebra, Springer, 1981. [1]
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9173815846443176, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/109356/interpretation-of-a-parameter-in-forming-a-pseudodifferential-operator/109610
|
## Interpretation of a parameter in forming a pseudodifferential operator
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
In Zworski's Semiclassical Analysis, he defines the following method of quantization: for a symbol $a = a(x,\xi) \in \mathscr{S}(\mathbb{R}^{2n})$ and $u \in \mathscr{S}(\mathbb{R}^n)$,
$$Op_t(a)u(x) : = \frac{1}{(2\pi h)^n} \int_{\mathbb{R}^n \times \mathbb{R}^n} e^{\tfrac{i}{h}\langle x - y, \xi \rangle} a(tx + (1 - t)y, \xi) u(y) dy d\xi$$
where $t \in [0,1]$ is some parameter. The presence of $h$ is only because this form of quantization is motivated by quantum mechanics, and the Weyl Quantization is equal to $Op_{1/2}(a)$, with $t = 1/2$. It's also useful to notice that the above family of quantizations obeys the adjoint formula $$Op_t(a)^* = Op_{1 - t}(\bar{a}),$$ from which we can see that Weyl quantization on real symbols give self-adjoint operators, as desired in quantum mechanics. However, for $t \neq 1/2$, we lose this self-adjointness, and the only other value of $t$ treated in the text (as far as I know) is $t = 1$, because the quantization formula is very simple, and it is also one of the traditional ways to form pseudodifferential operators. My question is:
What are contexts in which we might use a quantization of the above form with $t \neq \tfrac{1}{2}, 1$? Are there any natural situations in which they arise?
Furthermore, perhaps I'm really more interested in
How can we interpret the role of the parameter $t$ in forming the pseudodifferential operators?
-
## 1 Answer
Let me answer to your second query and make $h=1$. You have $$Op_1(a(x) \xi)= a(x) D_x,\quad \text{with $D_x=-i\partial_x$},$$ $$Op_0(a(x) \xi)= D_x a(x),$$ $$Op_{1/2}(a(x) \xi)= \frac 12D_x a(x)+\frac 12a(x)D_x.$$ With $t=1$, you start with the derivations and then you multiply by the coefficients (in the case of a differential operator).
With $t=0$, you start with the multiplications and then you take derivatives.
$t=1/2$ is a symmetric compromise between the two bad solutions above. Note that the most important property of Weyl quantization is its symplectic invariance and not only the fact that real-valued symbols (Hamiltonians) get quantized by (formally) selfadjoint operators.
-
Any ideas where interesting problems/applications might arise where we use the quantization with various $t$ besides the ones mentioned? – Christopher A. Wong Oct 20 at 2:54
This business has to do with the $J^t=\exp{itD_x\cdot D_\xi}$: the adjoint of $op_t(a)=op(J^t a)$ is $op_{1-t}(\overline a)$, so somehow the only way to have full stability with respect to taking adjoints is indeed the Weyl choice. Of course, you have always asymptotic stability for good classes of symbols, say up to $h^\infty$ in the semi-classical case, but it is not very satisfactory at the algebraic level. – Bazin Oct 21 at 21:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9374237060546875, "perplexity_flag": "head"}
|
http://www.physicsforums.com/showthread.php?p=3871062
|
Physics Forums
## Gauss's Law for Magnetism Question
Hey everyone, I'm new to these forums. Being an electrical engineering major, most of my teachers aren't very concerned with the "physics" side of things. I'm hoping I can gain some insight on Maxwell's equations.
When first stating Gauss's Law for Magnetism, the only reason my electromagnetics text gives for this is that all magnetic field lines close upon themselves. Therefore, the flux due to the B field over a closed surface is zero. This makes perfect sense to me, and I thought that this fact would be true for the H field as well. However, when deriving magnetic boundary conditions, if you assume that the flux due to the B field is always zero, it is impossible that the flux due to the H field is always zero as well. If your Gaussian surface is in free space or in one medium, then both equations can be true, but not if the volume enclosed by your Gaussian surface contains an interface.
My confusion may be a result of not understanding exactly what the difference between B and H is on a fundamental level (I know the constitutive relationships).
What is so special about the B field? Why isn't the flux due to the H field always zero?
PhysOrg.com physics news on PhysOrg.com >> Promising doped zirconia>> New X-ray method shows how frog embryos could help thwart disease>> Bringing life into focus
First material reasoning ( You may know all): One of the Maxwell equations states that $\nabla . B =0$ eveywhere From this equation , using divergence theorem ( which is purely mathematical) one concludes that $\oint B.ds=0$ . However from $B= \mu H$ then we have $\nabla . B =\nabla \mu.H+\mu \nabla .H=0$ Since $\mu$ is is discontinuous on the interface, it's divergence is NOT zero everywhere , hence $\nabla .H≠0$ $\Rightarrow$ $\oint H.ds$ may not be zero. In fact that the normal component of $B$ is continuous everywhere, including on the interfaces and it's the normal component which contribute to the flux. This is the result of $\nabla . B =0$ and is not necessarily true for $H$. The physical difference between $B$ and $H$ arises from magnetization $M$ of the magnetized media . In fact $H$ has no physical meaning but is defined to make the Maxwell's equations simpler. We just define $H=\frac{1}{\mu_{0}}B-M$. The discontinuity of the normal component of $H$ is due to discontinuity of the normal component of $M$. As for the reason given in your textbook, and that you expect the same for H, I don't think we can talk about the lines of H because the lines are the force lines which depends on B. However of I want to do an analogy, I can say the H lines ARE closed on themselves too but they may return " weaker" or "stronger" than when they left! This means the the net flux may not be zero.
Thank you very much for your reply, Hassan! You start your explanation by stating that the divergence of B is zero. My book derives this fact from the assumption that the flux of the B field over a closed surface is zero. If instead derive the integral form of Gauss's Law for Magnetism from the differential form, where did the differential form come from? I.e., by what reasoning is Div(B) = 0 instead of Div(H) = 0? You have started to clear up some of my confusion, though. My book never stated that B was the "real" physical field. Also, my book never defined H as you stated- although from what I can tell, there may be some logical gaps in my text when moving into material space. In fact, my book starts off by defining the H field in terms of the Biot-Savart Law, and defines the B field in free space as mu_0 * B. So B is the only "real" (physically) field? Is the fundamental definition of B in terms of the Biot-Savart Law, or is it defined in some other way? Thank you again for the response. Neither of my last two electromagnetics teachers (EE dept.) knew the answer to this question.
## Gauss's Law for Magnetism Question
Quote by Only a Mirage Thank you very much for your reply, Hassan! You start your explanation by stating that the divergence of B is zero. My book derives this fact from the assumption that the flux of the B field over a closed surface is zero.
That's why I said mathematical reasoning.
The differential form comes from the assumption ( law) that the filed lines are closed on themselves.
Seems our textbooks have a different approach to electromagnetism. In my textbook, even the Biot-Savart Law is for B. And the physical meaning of B is understood from Lorentz force. F=qvB
For a better understanding of the relation between B and H , read tiny-tim's post in the following thread:
http://www.physicsforums.com/showthread.php?p=3787165 .
Thanks for the link. I'll check it out.
Tags
b field, electro magnetism, gauss's law, h field, maxwell equations
Thread Tools
| | | |
|---------------------------------------------------------|-------------------------------|---------|
| Similar Threads for: Gauss's Law for Magnetism Question | | |
| Thread | Forum | Replies |
| | Classical Physics | 7 |
| | Classical Physics | 36 |
| | Introductory Physics Homework | 2 |
| | Introductory Physics Homework | 1 |
| | Introductory Physics Homework | 1 |
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9436812400817871, "perplexity_flag": "head"}
|
http://mathhelpforum.com/calculus/188127-finite-number-discontinuety-points.html
|
# Thread:
1. ## finite number of discontinuety points
i have a function
f(x)= 1 for x=1/n
f(x)=0 for otherwise
n is a natural number
prove that f(x) is defferentiable [0,1]
?
first i tried to prove that there are finite number of discontenuety points
n>1/a so 0<1/n<a
then the book says that there at most n-1 discontinuty ponts
dont know how they got it from the last innequality
?
2. ## Re: finite number of discontinuety points
Originally Posted by transgalactic
i have a function
f(x)= 1 for x=1/n
f(x)=0 for otherwise
n is a natural number
prove that f(x) is differentiable [0,1]
?
first i tried to prove that there are finite number of discontenuety points
n>1/a so 0<1/n<a
then the book says that there at most n-1 discontinuty ponts
dont know how they got it from the last innequality
I cannot figure out if you are translating these questions or if you are just unfortunately in a very poor course.
I don't think I have ever seen a poorer written question.
There is absolutely no part of that question that is correct.
Even the definition of the function is nonsense.
Moreover, if a function is differentiable at a point, then it must be continuous at that point. Do you see why it is all nonsense?
3. ## Re: finite number of discontinuety points
Originally Posted by transgalactic
i have a function
f(x)= 1 for x=1/n
f(x)=0 for otherwise
n is a natural number
prove that f(x) is defferentiable [0,1]
?
first i tried to prove that there are finite number of discontenuety points
n>1/a so 0<1/n<a
then the book says that there at most n-1 discontinuty ponts
dont know how they got it from the last innequality
?
The function you have defined is 1 at x= 1, 1/2, 1/3, 1/4, 1/5, 1/6, etc. 0 everywhere else. Since for any number that is NOT of the form 1, 1/2, 1/3, etc., there exist an interval around it such none of those are in the interval. As a result, the function is continuous and differentiable (with derivative 0) at all x except 1, 1/2, 1/3, 1/4, etc. It is not continuous at those points so there is an infinite number of points at which it is not continuous.
But, since you say "there are at most n-1 discontinuity point", which only makes sense if n is a fixed number, did you mean to define the function $f_n$ as $f_n(x)= 1$ if x= 1/n, $f(x)= 0$ otherwise, for a specific n? That is, $f_2$ is 0 everywhere except x= 1/2 where it is 1, $f_3(x)$ is 0 everywhere except at x= 1/3 where it is 1, etc? That also doesn't quite make sense- any such function differentiable everywhere except at the single point $x= 1/n$.
4. ## Re: finite number of discontinuety points
Originally Posted by HallsofIvy
The function you have defined is 1 at x= 1, 1/2, 1/3, 1/4, 1/5, 1/6, etc. 0 everywhere else. Since for any number that is NOT of the form 1, 1/2, 1/3, etc., there exist an interval around it such none of those are in the interval. As a result, the function is continuous and differentiable (with derivative 0) at all x except 1, 1/2, 1/3, 1/4, etc. It is not continuous at those points so there is an infinite number of points at which it is not continuous.
Even under your reading of this question, how can $f$ be differentiable on $[0,1]~?$
5. ## Re: finite number of discontinuety points
Originally Posted by Plato
Even under your reading of this question, how can $f$ be differentiable on $[0,1]~?$
May be that the 'true' question is: is f differentiable in x=0?...
Kind regards
$\chi$ $\sigma$
6. ## Re: finite number of discontinuety points
sorry i ment
n-1 discontinuety point in [a,1]
we need to prove defferentiability in [a,1]
0<a<1
cant imagine why there are at most n-1 discontinuety points
7. ## Re: finite number of discontinuety points
You appear to be writing the function incorrectly. You do NOT mean "f(1/n)= 1, for all n, f(x)= 0 otherwise" nor do you mean $f_n(1/n)= 1$, f(x)= 0 otherwise" for a specific n. You seem to be saying " $f_n(1/i)= 1$ for all [tex]i\le n[/itex], f(x)= 0 otherwise..
If that is correct, look at some examples: $f_2(x)= 0$ for all x except 1/2- one point of discontinuity.
$f_3(x)= 0$ for all x except 1/2 and 1/3- two points of discontinuity.
$f_4(x)= 0$ for all x except 1/2, 1/3, and 1/4- three points of discontinuity.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9157068133354187, "perplexity_flag": "middle"}
|
http://mathematica.stackexchange.com/questions/10079/what-strategies-can-i-use-to-evaluate-a-limit-when-limit-returns-unevaluated?answertab=active
|
# What strategies can I use to evaluate a limit when Limit[] returns unevaluated
I'm trying to find the following limit using Mathematica:
$$\lim_{N\to\infty}\sum_{k=1}^N\left(\frac{k-1}{N}\right)^N$$
The problem is taken from here and is known to converge to $\displaystyle\frac{1}{e-1}$. However, using `Limit` in a straightforward manner returns unevaluated:
````Limit[Sum[((k - 1)/n)^n, {k, 1, n}], n -> ∞]
(* Limit[Sum[((k - 1)/n)^n, {k, 1, n}], n -> ∞] *)
````
How can I explore this problem using Mathematica and obtain the limit?
-
## 1 Answer
Identifying the sum as ($N$ times) a Riemann sum should inspire us to look at the integral of the function $x^N$ for $0\le x \lt 1$, whose value is $1/(N+1)$, of which here are a few examples for $N=1,4,16,64$:
````Plot[Evaluate@Table[x^n, {n, {1, 4, 16, 64}}], {x, 0, 1}, Filling -> Axis, PlotStyle -> Thick]
````
Noticing that this area becomes more and more concentrated near $x \approx 1$, we should then suspect that virtually all of the sum's value is coming from the last few terms. Why not, then, use Mathematica to explore this?
````Table[Limit[Sum[((k - 1)/n)^n, {k, n - i, n}], n -> \[Infinity]], {i, 0, 4}]
````
$\left\{\frac{1}{e},\frac{1+e}{e^2},\frac{1+e+e^2}{e^3},\frac{1+e+e^2+e^3}{e^4},\frac{1+e+e^2+e^3+e^4}{e^5}\right\}$
The pattern is clear. Mathematica will identify it:
````FindSequenceFunction[%][i]
````
$\frac{e^{-i} \left(e^i-1\right)}{e-1}$
That is, we can speculate from this evidence that
$$\lim_{n\to \infty } \, \sum_{k=n-i}^n\left(\frac{k-1}{n}\right)^n = \frac{e^{-i} \left(e^i-1\right)}{e-1}.$$
It looks good when we compare the sums to the limiting value of the right hand side, easily seen (or computed by Mathematica) to be $1/(e-1)\approx 0.581977$:
````DiscretePlot[1/(E-1) - Sum[((k-1)/n)^n, {k, 1, n}], {n, 10, 1000, 20}, PlotStyle -> PointSize[0.015]]
````
If we can mathematically justify taking the double limit--first with respect to $n$, then with respect to $i$--then we can conclude the right hand side converges to $1/(e-1)$. I leave that reasoning to the interested reader.
-
lang-mma
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8506132364273071, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-algebra/341-need-help.html
|
# Thread:
1. ## Need help
Is there a difference between solving a system of equations by the algebraic method and the graphical method? Why?
2. Originally Posted by JUSFOREL
Is there a difference between solving a system of equations by the algebraic method and the graphical method? Why?
With an algebraic method, you can hope to solve equations to arbitrary accuracy and also systems with more than two unknowns. On the other hand, graphical methods (for equations with two unknowns) work sometimes when you cannot solve algebraically. Example:
$<br /> x^4 + 2y^4 + x^2y^2 = 2 \,\,\,\,\,<br /> sin(x+y) + cos(2x-y) = 1<br />$
Impossible to solve algebraically, you can however find approximate solutions graphically.
Numerical (approximate) solution methods are superior to both in this case.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9536377787590027, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-statistics/66567-probability-generating-function.html
|
# Thread:
1. ## probability generating function
Is the discrete random variable X has p.g.f Gx(s) what is the p.g.f of mX + n?
I know the p.g.f is given by sum over k (p(k)*s^k) but I'm not sure where to go from here
2. Originally Posted by James0502
Is the discrete random variable X has p.g.f Gx(s) what is the p.g.f of mX + n?
I know the p.g.f is given by sum over k (p(k)*s^k) but I'm not sure where to go from here
Let $Y = mX + n$. Your definition for $G_X(s)$, i.e. sum over k (p(k)*s^k) can alternatively be written as $\mathbb{E}(s^X)$ where $\mathbb{E}(.)$ denotes expectation.
$G_Y(s) = \mathbb{E}(s^Y) = \mathbb{E}(s^{mX+n}) = s^n\mathbb{E}((s^m)^X) = s^n G_X(s^m)$
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8924121260643005, "perplexity_flag": "middle"}
|
http://electronics.stackexchange.com/questions/54997/how-can-i-measure-back-emf-to-infer-the-speed-of-a-dc-motor
|
# How can I measure back-EMF to infer the speed of a DC motor?
I'm interested in measuring the back-EMF of a motor to determine a motor's speed because it's cheap and requires no additional mechanical parts. How can I measure the back-EMF when I'm driving the motor?
-
– Nick Alexeev Mar 26 at 18:55
## 1 Answer
One way to do this is to briefly stop driving the motor, long enough to let any residual current from the driving voltage die down, and then simply measure the voltage. The time it takes the current to settle will depend on the inductance of the windings. This is simple to understand, and the undriven interval can be made quite short, but this has obvious disadvantages.
Another method involves a clever use of Ohm's law. A motor can be modeled as a series circuit of an inductor, a resistor, and a voltage source. The inductor represents the inductance of the motor's windings. The resistor is the resistance of that wire. The voltage source represents the back-EMF, and it is directly proportional to the speed of the motor.
If we can know the resistance of the motor, and we can measure the current in the motor, we can infer what the back-EMF must be while the motor is being driven! Here's how:
We can ignore $L_m$ so long as the current through the motor is not changing much, because the voltage across an inductor proportional to the rate of change of current. No change in current means no voltage across the inductor.
If we are driving the motor with PWM, then the inductor serves to keep the current in the motor relatively constant. All we care about then, is really the average voltage of $V_{drv}$, which is just the supply voltage multiplied by the duty cycle.
So, we have an effective voltage we are applying to the motor, which we are modeling as a resistor and a voltage source in series. We also know the current in the motor, and the current in the resistor of our model must be the same because it is a series circuit. We can use Ohm's law to calculate what the voltage across this resistor must be, and the difference between the voltage drop over the resistor and our applied voltage must be the back-EMF.
Example:
motor winding resistance $= R_m = 1.5\Omega$
measured motor current $= I = 2A$
supply voltage $= V_{cc} = 24V$
duty cycle $= d = 80\%$
calculation:
voltage effectively applied to the motor $= \overline{V_{drv}} = dV_{cc} = 80\% \cdot 24V = 19.2V$
voltage drop over motor resistance $= V_{R_m} = IR_m = 2A \cdot 1.5\Omega = 3V$
back-EMF $= V_m = \overline{V_{drv}} - V_{R_m} = 19.2V - 3V = 16.2V$
Putting it all together into one equation:
$V_m = dV_{cc} - R_m I$
-
+1, nicely explained, good reference answer. – Anindo Ghosh Jan 14 at 16:15
A point that's worth noting is that, except to the extent that an inductor has parallel resistance or other leakage, the average voltage across an inductor over any given time interval must be proportional to the difference in current between the start and end of that interval. If an inductor has the same amount of current flowing through it at the start and end of some time interval, the average voltage across the inductor must be zero. That rule applies both to discrete inductors, and also the inductor one models as being in series with an ideal motor. – supercat Jan 14 at 20:06
@supercat: hm, interesting point. I can see how I could use that to further justify ignoring the inductor, but then I thought of something else. If the current actually is changing (during periods of load change or speed change, perhaps) this would introduce an error into this method, wouldn't it? I wonder if this is significant enough to merit consideration. – Phil Frost Jan 14 at 20:22
Also, note that if one is PWM'ing a motor at a decent frequency, efficiency will be best if the current in its inductance does not die down between cycles. Rather than open-circuiting the motor, short-circuit it unless or until the current drops to nothing (hopefully the PWM rate will be fast enough that it won't). If one short-circuits the motor long enough, the current will fall to nothing and then reverse. Reverse current will kill efficiency, so open the circuit at that point (or short through a transistor that only allows one direction of current). Note that... – supercat Jan 14 at 20:30
...if the stall current exceeds the amount one's supply can output without sagging, PWM'ing the motor may actually increase the available starting or slow-speed torque. Note also that if the motor is turning faster than the speed "requested" by the PWM, some of the excess energy will be dumped back into the supply (good for efficiency, if one can safely harness it). – supercat Jan 14 at 20:36
show 1 more comment
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9240055680274963, "perplexity_flag": "middle"}
|
http://www.physicsforums.com/showthread.php?t=164489&page=2
|
Physics Forums
Page 2 of 3 < 1 2 3 >
Recognitions:
Science Advisor
Staff Emeritus
## How fast is gravity?
OK, I see where you're coming from now: if you assume that G and hbar remain constant, the planck length is just
$$\sqrt{\frac{G \bar{h}}{c^3}}$$
so that's where your factor of sqrt(8) came from.
As far as what I had in mind, if 1 new meter = 2 old meters, then
c = 3e8 old meter / second = 1.5e8 new meter / second
so doubling the meter halves the "speed of light" from 3e8 "old meters" per second to 1.5e8 "new meters"/ second.
Quote by pervect OK, I see where you're coming from now: if you assume that G and hbar remain constant, the planck length is just $$\sqrt{\frac{G \bar{h}}{c^3}}$$ so that's where your factor of sqrt(8) came from. As far as what I had in mind, if 1 new meter = 2 old meters, then c = 3e8 old meter / second = 1.5e8 new meter / second so doubling the meter halves the "speed of light" from 3e8 "old meters" per second to 1.5e8 "new meters"/ second.
that doesn't quite work for me. i think that, if all of the dimensionless parameters remain constant,
c = 299792458 old_meters/old_second = 299792458 new_meters/new_second
and the new_second cannot be the same as the old_second if the meter had changed.
but i think we (as well as Duff) agree: ain't no operational difference. a change in c (or in G or h or any other sole dimensionful "constant") is not merely impossible, but is functionally meaningless.
i still don't know what to think of this inflationary universe theory where the universe expands faster than c at some time in its past.
Several of the comments re the speed of fields are not established by experiment - the speed of light in a vacuum is c, we all know that, but the speed with which a closed non-divergent magnetic field propagates in a loop of magnetic material is not readily explainable in terms of the field starting out at each pole of the energized magnet and meeting itself somewhere in middle - waves go from place to place - we do not know the mechanism by which fields make their forces felt at a distance - It seems when physics needs to explain quantum entanglements and virtual photons the speed barrier is shunted to the side. In the case of gravity, it is usually assumed there is a graviton exchange between attracted particles - but gravity and inertia may be the result of global dynamics - the cosmological constant or, like expansion, an ongoing change that does not happen at one place and travel to another, but rather something that affects spacetime continuously. The curvature of GR may be the result of local mass interaction therewith, in which case it may not be meaningful to assign a propagation velocity to the curvature.
Recognitions:
Science Advisor
Staff Emeritus
Quote by yogi Several of the comments re the speed of fields are not established by experiment
The speeds are certainly established by theory, though. And the theory has survived every experimental test thrown at it, to date.
For instance, if Maxwell's equations were wrong, we'd start to see disagreement with experiment, even if that experiment wasn't directly designed to measure some sort of "speed".
Maxwell's equations certainly give us a good reason to expect that electromagnetism, in general, travels at 'c' in the general sense that if you change something "here", it won't have any effect "there" until after a delay of at least c/distance.
Some care does need to be taken as to what means by speed. Specifically, one has to use the above defintion, and not try and guess the speed from the direction of the coulomb force, a common sorce of confusion that is also often repeated in "speed of gravity" threads.
GR is no different as far as the theoretical aspects go. (However, we don't have any direct measurements of the speed or even the existence of gravity waves, while of course we do have direct observations of light).
The equations are a lot messier than Maxwell's equation, but there is proof that GR is a well posed initial value problem, which implies that the "fields" propagate at less than 'c'. (You can regard the "fields" as changes in the metric, which will also change the Christoffel symbols and the curvature tensor).
The details of the proof that GR is a well posed initial value problem are rather complicated and I'm not especially familiar with them, but you can find the proof in Wald, "General Relativity". I've written a little about this in the past, as to what it means to be a well-posed initial value problem and what this implies about propagation speed.
Quote by pervect The speeds are certainly established by theory, though. And the theory has survived every experimental test thrown at it, to date. For instance, if Maxwell's equations were wrong, we'd start to see disagreement with experiment, even if that experiment wasn't directly designed to measure some sort of "speed". Maxwell's equations certainly give us a good reason to expect that electromagnetism, in general, travels at 'c' in the general sense that if you change something "here", it won't have any effect "there" until after a delay of at least c/distance.
Would concur - there is much indirect/consequential evidence of c as the limiting communication velocity - but being the eternal skeptic, I always find myself compelled to comment when absolute assertions are made about propagation rates of fields
I sort of expected Eugene to jump into this thread somewhere as he has written a couple of papers on the subject
Quote by keinve gravity's influence is technically finite ... once you take out black holes, gravity is finite.
Do you mean in terms of distance? No it's not. It's infinite.
Quote by keinve gravity's influence is technically finite, though not if you count black holes. a singularity is infinintly dense, so it's gravitational influence is infinite. it's jsut the range that the gravity works on that is affected. once you take out black holes, gravity is finite.
Quote by DaveC426913 Do you mean in terms of distance? No it's not. It's infinite.
i thought he meant in terms of magnitude of field (or the degree of curvature of space-time).
Recognitions: Science Advisor I haven't read the whole topic, but wat I was wondering, is if there are people who did some calculations about the speed of gravitational waves without the linearization, so for arbitrary large gravitational fields. The calculations for linear fields I understand, but how would one be sure if this speed is the same for arbitrary fields? Why is it still possible to write down a wave equation for the metric field ?
Recognitions:
Science Advisor
Staff Emeritus
Quote by haushofer I haven't read the whole topic, but wat I was wondering, is if there are people who did some calculations about the speed of gravitational waves without the linearization, so for arbitrary large gravitational fields. The calculations for linear fields I understand, but how would one be sure if this speed is the same for arbitrary fields? Why is it still possible to write down a wave equation for the metric field ?
Yes, Wald talks about this in the context of whether or not gravity is "a well posed initial value problem".
Gravity is one if the unexplained "forces", and we know it has a symmetry with EM and "charge". We also know that matter -waves, give off waves -photons from bound electrons (and electrons can do this if they move fast enough); and we know about this other extremely unstable property (superposition) that, unlike the others, seems to ignore space (it's null-spatial). Is there possibly some symmetry between gravity (an extremely stable, spatial "force" of matter), and superposition -an extremely unstable, non-spatial "force" of some kind??
Recognitions:
Science Advisor
Quote by pervect Yes, Wald talks about this in the context of whether or not gravity is "a well posed initial value problem".
Ok, I can remember such discussions, Carroll also pays attention to it. I will take a look at those texts. If i remember it correctly it was about cutting the space time into slices, and that one tries to describe the evolution at every hypersurface. But how can one proof that one is always able to write down wave equations for the metric, from the field equations? If this question is answered in Wald, I will find out soon :)
Recognitions: Science Advisor Staff Emeritus Yep, that's the sort of thing. Part of what comprises "well posedness" includes the "domain of dependency" on initial values.
some symmetry between gravity (an extremely stable, spatial "force" of matter), and superposition -an extremely unstable, non-spatial "force"
There's a symmetry in our understanding (or lack thereof), of the two...?
But it's "broken" if gravity "acts" at the speed of light which is distance-dependent, and superposition, which is independent of distance, acts instantaneously? A true symmetry would mean both were instantaneous "forces" (independent of spatiality) -as Sir Isaac believed...
Reading this old thread..but..what is the correct answer to the OP question ?: OP Question: How fast is gravity? == Does gravity have a "speed" ? Is it fast or slow ? It does not seem correct to me that we can say "gravity" is fast or slow. Seems to me it would be the object of motion (particle and/or wave) that is fast or slow. Thus, a fast particle/wave is one that moves much in a short period of time, slow particle/wave moves little in a long period of time. Some particles/wave (such as photons) always move the same distance in any period of time and are thus neither fast or slow, they move at c = speed of light. What am I missing in my understanding ?
Recognitions: Gold Member Science Advisor Staff Emeritus Salman2, this thread has been dead since 2007. The speed being referred to by the OP was the speed at which gravitational waves propagate, not the speed of material particles.
Quote by bcrowell Salman2, this thread has been dead since 2007. The speed being referred to by the OP was the speed at which gravitational waves propagate, not the speed of material particles.
OK thanks.
What was the conclusion of the discussion--what is the speed at which gravitational waves propagate--is it c, the same speed that photon wave propagates ?
Recognitions:
Gold Member
Science Advisor
Staff Emeritus
Quote by Salman2 What was the conclusion of the discussion--what is the speed at which gravitational waves propagate--is it c, the same speed that photon wave propagates ?
I haven't read the whole discussion, but that is the correct answer.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9543293118476868, "perplexity_flag": "middle"}
|
http://www.citizendia.org/Dispersion_(optics)
|
In a prism, material dispersion (a wavelength-dependent refractive index) causes different colors to refract at different angles, splitting white light into a rainbow. In Physics wavelength is the distance between repeating units of a propagating Wave of a given Frequency. The refractive index (or index of Refraction) of a medium is a measure for how much the speed of light (or other waves such as sound waves is reduced inside the medium Refraction is the change in direction of a Wave due to a change in its Speed. A rainbow is an optical and meteorological phenomenon that causes a spectrum of Light to appear in the Sky when the Sun
In optics, dispersion is the phenomenon in which the phase velocity of a wave depends on its frequency. The phase velocity (or phase speed) of a Wave is the rate at which the phase of the wave propagates in space [1] Media having such a property are termed dispersive media.
The most familiar example of dispersion is probably a rainbow, in which dispersion causes the spatial separation of a white light into components of different wavelengths (different colors). A rainbow is an optical and meteorological phenomenon that causes a spectrum of Light to appear in the Sky when the Sun In Physics wavelength is the distance between repeating units of a propagating Wave of a given Frequency. However, dispersion also has an impact in many other circumstances: for example, it causes pulses to spread in optical fibers, degrading signals over long distances; also, a cancellation between dispersion and nonlinear effects leads to soliton waves. In Signal processing, the term pulse has the following meanings A rapid transient change in the amplitude of a signal from a baseline value to a higher An optical fiber (or fibre) is a Glass or Plastic fiber that carries Light along its length This article describes the use of the term nonlinearity in mathematics In Mathematics and Physics, a soliton is a self-reinforcing solitary wave (a wave packet or pulse that maintains its shape while it travels at constant speed Dispersion is most often described for light waves, but it may occur for any kind of wave that interacts with a medium or passes through an inhomogeneous geometry (e. Light, or visible light, is Electromagnetic radiation of a Wavelength that is visible to the Human eye (about 400–700 g. a waveguide), such as sound waves. A waveguide is a structure which guides waves such as Electromagnetic waves Light, or Sound waves Sound' is Vibration transmitted through a Solid, Liquid, or Gas; particularly sound means those vibrations composed of Frequencies Dispersion is sometimes called chromatic dispersion to emphasize its wavelength-dependent nature.
There are generally two sources of dispersion: material dispersion and waveguide dispersion. Material dispersion comes from a frequency-dependent response of a material to waves. For example, material dispersion leads to undesired chromatic aberration in a lens or the separation of colors in a prism. In Optics, chromatic aberration is caused by a lens having a different Refractive index for different Wavelengths of Light A lens is an optical device with perfect or approximate Axial symmetry which transmits and refracts Light, converging or diverging In Geometry, a triangular prism or three-sided prism is a type of prism; it is a Polyhedron made of a triangular base a translated Waveguide dispersion occurs when the speed of a wave in a waveguide (such as an optical fiber) depends on its frequency for geometric reasons, independent of any frequency dependence of the materials from which it is constructed. An optical fiber (or fibre) is a Glass or Plastic fiber that carries Light along its length More generally, "waveguide" dispersion can occur for waves propagating through any inhomogeneous structure (e. g. a photonic crystal), whether or not the waves are confined to some region. Photonic crystals are periodic Optical (nanostructures that are designed to affect the motion of Photons in a similar way that periodicity of a Semiconductor In general, both types of dispersion may be present, although they are not strictly additive. Their combination leads to signal degradation in optical fibers for telecommunications, because the varying delay in arrival time between different components of a signal "smears out" the signal in time. An optical fiber (or fibre) is a Glass or Plastic fiber that carries Light along its length
## Material dispersion in optics
The variation of refractive index vs. wavelength for various glasses. The wavelengths of visible light are shaded in red.
Influences of selected glass component additions on the mean dispersion of a specific base glass (nF valid for λ = 486 nm (blue), nC valid for λ = 656 nm (red))[2]
Material dispersion can be a desirable or undesirable effect in optical applications. The dispersion of light by glass prisms is used to construct spectrometers and spectroradiometers. A spectrometer is an Optical instrument used to measure properties of Light over a specific portion of the Electromagnetic spectrum, typically used Spectroradiometers are designed to measure the Spectral power distributions of Illuminants They operate almost like Spectrophotometers in the visible Holographic gratings are also used, as they allow more accurate discrimination of wavelengths. Holography (from the Greek, ὅλος - hólos whole + γραφή - grafē writing drawing is a technique that allows the However, in lenses, dispersion causes chromatic aberration, an undesired effect that may degrade images in microscopes, telescopes and photographic objectives. In Optics, chromatic aberration is caused by a lens having a different Refractive index for different Wavelengths of Light
The phase velocity, v, of a wave in a given uniform medium is given by
$v = \frac{c}{n}$
where c is the speed of light in a vacuum and n is the refractive index of the medium. The phase velocity (or phase speed) of a Wave is the rate at which the phase of the wave propagates in space The refractive index (or index of Refraction) of a medium is a measure for how much the speed of light (or other waves such as sound waves is reduced inside the medium
In general, the refractive index is some function of the frequency f of the light, thus n = n(f), or alternately, with respect to the wave's wavelength n = n(λ). In Physics wavelength is the distance between repeating units of a propagating Wave of a given Frequency. The wavelength dependency of a material's refractive index is usually quantified by an empirical formula, the Cauchy or Sellmeier equations. Cauchy's equation is an Empirical relationship between the Refractive index n and Wavelength of light λ for a particular transparent In Optics, the Sellmeier equation is an Empirical relationship between Refractive index n and Wavelength λ for a
The most commonly seen consequence of dispersion in optics is the separation of white light into a color spectrum by a prism. In Optics, a dispersive prism is a type of optical prism, normally having the shape of a geometrical triangular prism. From Snell's law it can be seen that the angle of refraction of light in a prism depends on the refractive index of the prism material. In Optics and Physics, Snell's law (also known as Descartes' law or the law of refraction) is a formula used to describe the relationship Refraction is the change in direction of a Wave due to a change in its Speed. Since that refractive index varies with wavelength, it follows that the angle that the light is refracted will also vary with wavelength, causing an angular separation of the colors known as angular dispersion.
For visible light, most transparent materials (e. g. glasses) have:
$1 < n(\lambda_{\rm red}) < n(\lambda_{\rm yellow}) < n(\lambda_{\rm blue})\ ,$
or alternatively:
$\frac{{\rm d}n}{{\rm d}\lambda} < 0,$
that is, refractive index n decreases with increasing wavelength λ. In this case, the medium is said to have normal dispersion. Whereas, if the index increases with increasing wavelength the medium has anomalous dispersion.
At the interface of such a material with air or vacuum (index of ~1), Snell's law predicts that light incident at an angle θ to the normal will be refracted at an angle arcsin( sin (θ) / n) . Thus, blue light, with a higher refractive index, will be bent more strongly than red light, resulting in the well-known rainbow pattern. A rainbow is an optical and meteorological phenomenon that causes a spectrum of Light to appear in the Sky when the Sun
## Group and phase velocity
Another consequence of dispersion manifests itself as a temporal effect. The formula above, v = c / n calculates the phase velocity of a wave; this is the velocity at which the phase of any one frequency component of the wave will propagate. In Physics, velocity is defined as the rate of change of Position. The phase of an oscillation or wave is the fraction of a complete cycle corresponding to an offset in the displacement from a specified reference point at time t = 0 This is not the same as the group velocity of the wave, which is the rate that changes in amplitude (known as the envelope of the wave) will propagate. The group velocity of a Wave is the Velocity with which the variations in the shape of the wave's amplitude (known as the modulation or envelope Amplitude is the magnitude of change in the oscillating variable with each Oscillation, within an oscillating system The group velocity vg is related to the phase velocity by, for a homogeneous medium (here λ is the wavelength in vacuum, not in the medium):
$v_g = c \left( n - \lambda \frac{dn}{d\lambda} \right)^{-1}.$
The group velocity vg is often thought of as the velocity at which energy or information is conveyed along the wave. In most cases this is true, and the group velocity can be thought of as the signal velocity of the waveform. The signal velocity is the speed at which a Wave carries information In some unusual circumstances, where the wavelength of the light is close to an absorption resonance of the medium, it is possible for the group velocity to exceed the speed of light (vg > c), leading to the conclusion that superluminal (faster than light) communication is possible. In Physics, absorption of electromagnetic radiation is the process by which the Energy of a Photon is taken up by matter typically the electrons of an In practice, in such situations the distortion and absorption of the wave is such that the value of the group velocity essentially becomes meaningless, and does not represent the true signal velocity of the wave, which stays less than c.
The group velocity itself is usually a function of the wave's frequency. This results in group velocity dispersion (GVD), which causes a short pulse of light to spread in time as a result of different frequency components of the pulse travelling at different velocities. GVD is often quantified as the group delay dispersion parameter (again, this formula is for a uniform medium only):
$D = - \frac{\lambda}{c} \, \frac{d^2 n}{d \lambda^2}.$
If D is less than zero, the medium is said to have positive dispersion. If D is greater than zero, the medium has negative dispersion. If a light pulse is propagated through a normally dispersive medium, the result is the higher frequency components travel slower than the lower frequency components. Frequency is a measure of the number of occurrences of a repeating event per unit Time. The pulse therefore becomes positively chirped, or up-chirped, increasing in frequency with time. A chirp is a signal in which the Frequency increases ('up-chirp' or decreases ('down-chirp' with time Conversely, if a pulse travels through an anomalously dispersive medium, high frequency components travel faster than the lower ones, and the pulse becomes negatively chirped, or down-chirped, decreasing in frequency with time. A chirp is a signal in which the Frequency increases ('up-chirp' or decreases ('down-chirp' with time
The result of GVD, whether negative or positive, is ultimately temporal spreading of the pulse. This makes dispersion management extremely important in optical communications systems based on optical fiber, since if dispersion is too high, a group of pulses representing a bit-stream will spread in time and merge together, rendering the bit-stream unintelligible. An optical fiber (or fibre) is a Glass or Plastic fiber that carries Light along its length This limits the length of fiber that a signal can be sent down without regeneration. One possible answer to this problem is to send signals down the optical fibre at a wavelength where the GVD is zero (e. g. around ~1. 3-1. 5 μm in silica fibres), so pulses at this wavelength suffer minimal spreading from dispersion—in practice, however, this approach causes more problems than it solves because zero GVD unacceptably amplifies other nonlinear effects (such as four wave mixing). The Chemical compound silicon dioxide, also known as silica or silox (from the Latin " Silex " is an Oxide Fiber or fibre is a class of Materials that are continuous filaments or are in discrete elongated pieces similar to lengths of thread. Four-wave mixing is an Intermodulation distortion in Optical systems similar to the Third-order intercept point in electrical systems Another possible option is to use soliton pulses in the regime of anomalous dispersion, a form of optical pulse which uses a nonlinear optical effect to self-maintain its shape—solitons have the practical problem, however, that they require a certain power level to be maintained in the pulse for the nonlinear effect to be of the correct strength. In Optics, the term soliton is used to refer to any Optical field that does not change during propagation because of a delicate balance between nonlinear Nonlinear optics (NLO is the branch of Optics that describes the behaviour of Light in nonlinear media, that is media in which the dielectric polarization Instead, the solution that is currently used in practice is to perform dispersion compensation, typically by matching the fiber with another fiber of opposite-sign dispersion so that the dispersion effects cancel; such compensation is ultimately limited by nonlinear effects such as self-phase modulation, which interact with dispersion to make it very difficult to undo. Self-phase modulation (SPM is a nonlinear optical effect of Light - Matter interaction
Dispersion control is also important in lasers that produce short pulses. A laser is a device that emits Light ( Electromagnetic radiation) through a process called Stimulated emission. In Optics, an ultrashort pulse of light is an Electromagnetic pulse whose time duration is on the order of the femtosecond (10^{-15} second The overall dispersion of the optical resonator is a major factor in determining the duration of the pulses emitted by the laser. A laser is constructed from three principal parts An energy source (usually referred to as the pump or pump source) A A pair of prisms can be arranged to produce net negative dispersion, which can be used to balance the usually positive dispersion of the laser medium. In Optics, a prism is a transparent optical element with flat polished surfaces that refract Light. Diffraction gratings can also be used to produce dispersive effects; these are often used in high-power laser amplifier systems. Diffraction is normally taken to refer to various phenomena which occur when a wave encounters an obstacle A grating is any regularly spaced collection of essentially identical Parallel, elongated elements Recently, an alternative to prisms and gratings has been developed: chirped mirrors. A chirped mirror is a Dielectric mirror with chirped spaces—spaces of varying depth designed to reflect varying wavelengths of lights—between the dielectric layers These dielectric mirrors are coated so that different wavelengths have different penetration lengths, and therefore different group delays. The coating layers can be tailored to achieve a net negative dispersion.
## Dispersion in waveguides
Optical fibers, which are used in telecommunications, are among the most abundant types of waveguides. An optical fiber (or fibre) is a Glass or Plastic fiber that carries Light along its length Dispersion in these fibers is one of the limiting factors that determine how much data can be transported on a single fiber.
The transverse modes for waves confined laterally within a waveguide generally have different speeds (and field patterns) depending upon their frequency (that is, on the relative size of the wave, the wavelength) compared to the size of the waveguide. A waveguide is a structure which guides waves such as Electromagnetic waves Light, or Sound waves In Physics wavelength is the distance between repeating units of a propagating Wave of a given Frequency.
In general, for a waveguide mode with an angular frequency ω(β) at a propagation constant β (so that the electromagnetic fields in the propagation direction z oscillate proportional to ei(βz − ωt)), the group-velocity dispersion parameter D is defined as:[3]
$D = -\frac{2\pi c}{\lambda^2} \frac{d^2 \beta}{d\omega^2} = \frac{2\pi c}{v_g^2 \lambda^2} \frac{dv_g}{d\omega}$
where λ = 2πc / ω is the vacuum wavelength and vg = dω / dβ is the group velocity. Do not confuse with Angular velocity In Physics (specifically Mechanics and Electrical engineering) angular frequency The propagation constant of an Electromagnetic wave is a measure of the change undergone by the amplitude of the wave as it propagates in a given direction This formula generalizes the one in the previous section for homogeneous media, and includes both waveguide dispersion and material dispersion. The reason for defining the dispersion in this way is that |D| is the (asymptotic) temporal pulse spreading Δt per unit bandwidth Δλ per unit distance travelled, commonly reported in ps / nm km for optical fibers. To help compare Orders of magnitude of different Times this page lists times between 10&minus12 seconds and 10&minus11 seconds (1 Pico A nanometre ( American spelling: nanometer, symbol nm) ( Greek: νάνος nanos dwarf; μετρώ metrό count) is a The kilometre ( American spelling: kilometer) symbol km is a unit of Length in the Metric system, equal to one thousand
A similar effect due to a somewhat different phenomenon is modal dispersion, caused by a waveguide having multiple modes at a given frequency, each with a different speed. Modal dispersion is a distortion mechanism occurring in Multimode fibers and other Waveguides in which the signal is spread in time because the propagation A special case of this is polarization mode dispersion (PMD), which comes from a superposition of two modes that travel at different speeds due to random imperfections that break the symmetry of the waveguide. Polarization mode dispersion (PMD is a form of Modal dispersion where two different Polarizations of light in a waveguide which normally travel at the same speed
## Dispersion in gemology
In the technical terminology of gemology, dispersion is the difference in the refractive index of a material at the B and G Fraunhofer wavelengths of 686. Technical terminology is the specialized Vocabulary of a field Gemology ( gemmology outside the United States) is the Science, Art and Profession of identifying and evaluating Gemstones In Physics and Optics, the Fraunhofer lines are a set of Spectral lines named for the German physicist Joseph von Fraunhofer ( 1787 In Physics wavelength is the distance between repeating units of a propagating Wave of a given Frequency. 7 nm and 430. A nanometre ( American spelling: nanometer, symbol nm) ( Greek: νάνος nanos dwarf; μετρώ metrό count) is a 8 nm and is meant to express the degree to which a prism cut from the gemstone shows "fire", or color. A gemstone or gem, also called a precious or semi-precious stone, is a piece of attractive Mineral, which &mdash when cut and polished &mdash Dispersion is a material property. Fire depends on the dispersion, the cut angles, the lighting environment, the refractive index, and the viewer.
## Dispersion in imaging
In photographic and microscopic lenses, dispersion causes chromatic aberration, distorting the image, and various techniques have been developed to counteract it. In Optics, chromatic aberration is caused by a lens having a different Refractive index for different Wavelengths of Light
## Dispersion in pulsar timing
Pulsars are spinning neutron stars that emit pulses at very regular intervals ranging from milliseconds to seconds. Pulsars are highly magnetized rotating Neutron stars that emit a beam of Electromagnetic radiation in the form of radio waves It is believed that the pulses are emitted simultaneously over a wide range of frequencies. However, as observed on Earth, the components of each pulse emitted at higher radio frequencies arrive before those emitted at lower frequencies. This dispersion occurs because of the ionised component of the interstellar medium, which makes the group velocity frequency dependent. The extra delay added at frequency ν is
$D = 4.15 ms (\frac{\nu}{GHz})^{-2} \times (\frac{DM}{cm^{-3} pc})$
where the dispersion measure DM is
$DM = \int_0^d{n_e\;dl}$
is the integrated free electron column density ne out to the pulsar at a distance d[4].
Of course, this delay cannot be measured directly, since the emission time is unknown. What can be measured is the difference in arrival times at two different frequencies. The delay ΔT between a high frequency νhi and a low frequency νlo component of a pulse will be
$\Delta T = 4.15 ms [(\frac{\nu_{lo}}{GHz})^{-2} - (\frac{\nu_{hi}}{GHz})^{-2} ] \times (\frac{DM}{cm^{-3} pc})$
and so DM is normally computed from measurements at two different frequencies. This allows computation of the absolute delay at any frequency, which is used when combining many different pulsar observations into an integrated timing solution.
## See also
• Dispersion relation
• Sellmeier equation
• Cauchy's equation
• Abbe number
• Kramers–Kronig relation
• Group delay
• Calculation of glass properties incl. Dispersion relations describe the ways that wave propagation varies with the Wavelength or Frequency of a wave. In Optics, the Sellmeier equation is an Empirical relationship between Refractive index n and Wavelength λ for a Cauchy's equation is an Empirical relationship between the Refractive index n and Wavelength of light λ for a particular transparent In Physics and Optics, the Abbe number, also known as the V-number or constringence of a transparent material is a measure The Kramers–Kronig relations are mathematical properties connecting the real and imaginary parts of any complex function which is analytic Group delay is a measure of the transit time of a signal through a Device under test (DUT versus frequency The calculation of glass properties ( glass modeling) is used to predict Glass properties of interest or glass behavior under certain conditions (e dispersion
• Linear response function
• Green-Kubo relations
• Fluctuation theorem
## References
1. ^ Born, Max (October 1999). A linear response function describes the input-output relationshipof a signal transducer such as a radio turning electromagnetic waves into musicor a neuron turning synaptic input into Green–Kubo relations give exact mathematical expression for transport coefficients in terms of integrals of time correlation functions The fluctuation theorem (FT is a theorem from Statistical mechanics dealing with the relative probability that the Entropy of a system which is currently away from Max Born (11 December 1882 &ndash 5 January 1970 was a German Physicist and Mathematician who was instrumental in the development of Quantum Principle of Optics. Cambridge: Cambridge University Press, pp. Cambridge University Press (known colloquially as CUP is a Publisher given a Royal Charter by Henry VIII in 1534 14-24. ISBN 0521642221.
2. ^ Calculation of the Mean Dispersion of Glasses
3. ^ Rajiv Ramaswami and Kumar N. Sivarajan, Optical Networks: A Practical Perspective (Academic Press: London 1998).
4. ^ Lorimer, D. R. , and Kramer, M. , Handbook of Pulsar Astronomy, vol. 4 of Cambridge Observing Handbooks for Research Astronomers, (Cambridge University Press, Cambridge, U. K. ; New York, U. S. A, 2005), 1st edition.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 9, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9184602499008179, "perplexity_flag": "middle"}
|
http://mathoverflow.net/revisions/80477/list
|
## Return to Answer
2 added 33 characters in body
A sufficient condition is if the Riemannian manifold is conformally flat, this implies that the Weyl curvature vanishes, and the Riemann curvature tensor is a linear combination of the identity operator on two forms and the operator formed by the Kulkarni-Nomizu product of the Ricci curvature and the metric. Using that the Ricci curvature is a symmetric bilinear form, you can diagonalize it relative to the metric, and explicitly show (as in the 3 dimensional case) that the Kulkarni-Nomizu product of Ricci and the metric can be diagonalized over a basis formed by ${e_i\wedge e_j}$.
On the other hand, there are also large classes of manifolds for which it is impossible to satisfy your requirement. For example, consider the four dimensional (anti)-self-dual Einstein manifolds with nonvanishing Weyl curvature. The Einstein equation $Ric = \lambda g$ means that the Ricci and scalar parts of the curvature are just multiplies of the identity. But the self-duality of the Weyl part means any eigen-twoform of the curvature operator must be either self-dual or anti-self-dual, which rules them out from being rank two.
Here are also some possibly relevant papers.
• Vilms considered in this paper conditions related to the curvature operator having bounded rank.
• In this paper the same author studied curvature operators of the form $R = b\wedge b$, where $b$ is symmetric bilinear. In general one sees that a necessary and sufficient condition for curvature operators to be diagonalisable in your sense is that $R = \sum_{i = 1}^{M} b_i\wedge b_i$, where the $b_i$'s are symmetric bilinear forms that can all be simultaneously diagonalised.
1
A sufficient condition is if the Riemannian manifold is conformally flat, this implies that the Weyl curvature vanishes, and the Riemann curvature tensor is a linear combination of the identity operator on two forms and the operator formed by the Kulkarni-Nomizu product of the Ricci curvature and the metric. Using that the Ricci curvature is a symmetric bilinear form, you can diagonalize it relative to the metric, and explicitly show (as in the 3 dimensional case) that the Kulkarni-Nomizu product of Ricci and the metric can be diagonalized over a basis formed by ${e_i\wedge e_j}$.
On the other hand, there are also large classes of manifolds for which it is impossible to satisfy your requirement. For example, consider the four dimensional (anti)-self-dual Einstein manifolds. The Einstein equation $Ric = \lambda g$ means that the Ricci and scalar parts of the curvature are just multiplies of the identity. But the self-duality of the Weyl part means any eigen-twoform of the curvature operator must be either self-dual or anti-self-dual, which rules them out from being rank two.
Here are also some possibly relevant papers.
• Vilms considered in this paper conditions related to the curvature operator having bounded rank.
• In this paper the same author studied curvature operators of the form $R = b\wedge b$, where $b$ is symmetric bilinear. In general one sees that a necessary and sufficient condition for curvature operators to be diagonalisable in your sense is that $R = \sum_{i = 1}^{M} b_i\wedge b_i$, where the $b_i$'s are symmetric bilinear forms that can all be simultaneously diagonalised.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9313740730285645, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/53036/books-you-would-like-to-read-if-somebody-would-just-write-them/53808
|
## Books you would like to read (if somebody would just write them…)
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
I think that the title is self-explanatory but I'm thinking about mathematical subjects that have not received a full treatment in book form or if they have, they could benefit from a different approach. (I do hope this is not inappropriate for MO).
Let me start with some books I would like to read (again with self-explanatory titles)
1) The Weil conjectures for dummies
2) 2-categories for the working mathematician
3) Representations of groups: Linear and permutation representations made side by side
4) The Burnside ring
5) A functor of points approach to algebraic geometry
6) Profinite groups: An approach through examples
Any other suggestions ?
-
7
I really like this question... hopefully someone will take a hint and write number (5) and (2) sometime soon! – Dylan Wilson Jan 24 2011 at 10:30
4
Steve Lack wrote something approximating (2): arxiv.org/abs/math/0702535 – Tom Leinster Jan 24 2011 at 11:31
7
Regarding the Weil conjectures, have you read the appendix to Hartshorne that discusses these? If so, you could also try Nick Katz's exposition on Deligne's work in the Hilbert's Problems book (in the Proceedings of Symposia in Pure Math series) from the 1970s. Also, Deligne's article Weil I is less technical than you might guess, and there is also the textbook by Freitag and Kiehl. – Emerton Jan 24 2011 at 12:44
8
Qiaochu: Demazure and Gabriel wrote a book using the functor of points approach over 3 decades ago. Some people love this book, while others... – Donu Arapura Jan 24 2011 at 17:55
10
Maybe there is a place for the dual question: "Books you would like to write (if somebody would just read them)" so people can mention their book ideas and get some feedback. – Gil Kalai Feb 1 2011 at 15:03
show 10 more comments
## 35 Answers
I don't know for certain that this doesn't exist, so I'm in a no-lose situation: if this is a rubbish answer then it means that a book that I want to exist does exist. Many mathematicians of a pure bent have taken it upon themselves to get a good understanding of theoretical physics. And many have actually managed this. But it seems to me that they usually go native in the process, with the result that I cease to be able to understand what they are saying. It could be that this is just an irreducibly necessary feature of physics, but I doubt it. Out there in book space I believe there exists a book that explains theoretical physics in a way that physicists would dislike intensely but mathematicians would find much easier to read. It may well be that if you want to do serious work in mathematical physics then you have to understand the subject as physicists do. However, this book would be aimed at pure mathematicians who were not necessarily intending to do serious work in mathematical physics but just wanted to understand what was going on from a distance.
I used to have a similar view about explanations of forcing, but I think Timothy Chow's wonderful Forcing for Dummies has filled that gap now.
-
6
Michael Spivak has recently written a book called "Physics for Mathematicians: Mechanics I". I haven't seen it and it's a bit expensive on Amazon, but it might be just what you want (but as far as I can tell it's "only" about classical mechanics...) – Gonçalo Marques Jan 24 2011 at 15:07
3
"Physics for Mathematicians: Mechanics I" is apparently a reworked and expanded version of these notes: math.uga.edu/~shifrin/Spivak_physics.pdf. Now that I know about it, I'm really looking forward to reading it!!! +1 – Vectornaut Jan 24 2011 at 15:36
17
+lots. Physics books are usually written in a way that teaches the mathematics through physical intuition... The trouble is that I have no physical intuition. I'd like a book that teaches the physics through mathematical intuition. – Dylan Wilson Jan 24 2011 at 16:32
9
Have you read Vladimir Arnold's "Mathematical Methods in Classical Mechanics"? I would say that it fits the bill, but maybe you've read it and it falls short in some way. – arsmath Jan 24 2011 at 16:41
3
I really like Folland's book "Quantum field theory, A tourist guide for mathematicians". – Rob Harron Feb 2 2011 at 2:43
show 9 more comments
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
• "(Counter)examples in Algebraic Topology"
There are many good textbooks in homology and elementary homotopy theory, but the supply of instructive examples they offer is usually appallingly small (spheres and projective spaces are the standard examples, but often there is little beyond). One reason is that to discuss interesting examples, one needs a lot of machinery, whose development consumes time and space. The books by Hatcher or Bredon offer a lot of examples; and I also like Neil Stricklands bestiary:
http://neil-strickland.staff.shef.ac.uk/courses/bestiary/bestiary.pdf,
and together with the unwritten chapter "things left to do", it is pretty close to what I would love to see as a book.
-
I would have killed for this a couple of years ago: a big book on Floer homology, written to be understandable for graduate students. Includes all the analytical details.
-
8
I should say that very recently such a book has been written by Audin and Damian - "Théorie de Morse et homologie de Floer", which is a beautiful and comprehensive introduction to the easiest parts of Floer homology. My only complaint with this book is that it doesn't go quite far enough - I guess I'm thinking more of a book the size of McDuff and Salamon's wonderful "J-holomorphic curves and symplectic topology" - but written specifically for Floer theory. – Will Merry Jan 24 2011 at 14:32
show 1 more comment
Three views of differential geometry
I have in mind the most rigorous modern view, the most intuitive undergraduate calculus view, and the physicist's tensor calculus view. These perspectives can be so different that it's hard to keep in mind that they're all ultimately concerned with the same thing.
Take one concept at a time examine it from a rigorous, intuitive, and computational viewpoint. For example, take a gradient and define it as a differential form, as a vector perpendicular to a surface, and as a tensor. Or here's how a differential geometer, a calculus student, and a physicist all view integrating over a surface. Here's how they each view Stokes' theorem etc.
-
4
You should read Spivak's 5 volume "A Comprehensive Introduction to Differential Geometry." In particular, the first 3 volumes. He makes sure to treat almost every single aspect 3 ways: in local coordinates (what you call the physicist's "tensor calculus"), with moving frames (the Cartan/Chern approach), and the modern "invariant" formulation. In my opinion, all differential geometers should be comfortable moving back and forth between all three, because they're all useful in various different situations. – Spiro Karigiannis Jan 24 2011 at 13:14
1
I've read Spivak's 1st volume. I had good intentions of going further but never made it. What I have in mind is a little different from Spivak in that I'd like to see the comparisons from the beginning. Maybe start with geometry from the viewpoint of Schey's book "Div, Grad, Curl and All That" and show how the vast machinery of differential geometry makes these concepts rigorous. – John D. Cook Jan 24 2011 at 13:51
show 2 more comments
Spaces of Diffeomorphisms
For 60+ years this has been a foundation of differential topology, featuring prominently in work of Smale, Cerf, Hatcher, Thurston, and many others; but I don't know any adequate reference. Indeed, it seems only a handful of brilliant people know this stuff, and everyone else uses their work as if it were a collection of black boxes.
My dream book would include, among other things, a modern introduction to Cerf theory from the perspective of Igusa's theory of framed functions, leading up to a readable and self-contained proof of Kirby's Theorem. It would also contain exposition and simplification of theorems of Hatcher, Cerf, Kirby, and Seibenmann.
This is a cheerful prod to a certain prospective author of such a book, that when it is written it will surely become an instant classic; I, for one, will pre-order.
-
2
@Maxime. That's a good reference if you want to know about the group theory of Diff, but not if you want to know about its algebraic topology (e.g. what can one say about the homotopy-type of $Diff(S^n)$?). I think Daniel is right that we are missing a book on that topic. – Tim Perutz Jan 31 2011 at 17:03
1
@Tim: "I think Daniel is right that we are missing a book on that topic". So do I (I would love this book to explain the links between the cohomology of these groups and foliations, à la Mather-Thurston and the few known results about these cohomology groups). – Maxime Bourrigan Jan 31 2011 at 17:27
show 1 more comment
The construction of galoisian representations associated to primitive cuspidal eigenforms. I hope the user BCnrd gets the hint.
-
The Springer Correspondence
Tonny Springer developed a subtle correspondence between Weyl group representations (say over `$\mathbb{C}$`) and nilpotent orbits of the related semisimple Lie algebra, showing in particular how to realize the finite group representations in the top cohomology of fibers in his special desingularization of the nilpotent variety. By now the ideas involved have permeated much of the work in Lie theory due to Lusztig and many other people. But there is no systematic treatise on the subject and its connections with other areas of Lie theory, algebraic geometry, combinatorics. In my 1995 book Conjugacy Classes in Semisimple Algebraic Groups I included toward the end a very short survey of Springer theory, following a treatment of the unipotent and nilpotent varieties. But I realized at the time that I didn't understand the subject deeply enough to write a comprehensive account. (I still don't.)
My first exposure to Springer's ideas unfortunately didn't take hold right away. I recall making a short visit to Utrecht around 1975, where I had lunch with Springer at an Indonesian restaurant and he jotted down the new ideas he was excited about. No napkin or other scrap of paper survives, but anyway I understood only later how amazing his insights were. They deserve a thorough treatment in book form.
-
Counterexamples in scheme theory
-
Galois representations.
I know about Serre's Abelian $\ell$-adic representations and elliptic curves, but I am sure that a more general theory has been established since then. There are a few people who have notes on Galois representations on their web pages, but no book that I know of.
-
61
Working on it...! – Laurent Berger Jan 24 2011 at 11:57
8
While waiting for Laurent's book (!) , you could try reading Modular Forms and Fermat's Last Theorem (Cornell, Silverman, Stevens eds.), which is a fantastic graduate level introduction to the subject. – Emerton Jan 24 2011 at 12:37
show 2 more comments
You forestalled some of what I would have posted...
• Quillen's K-theory without topology
• Steenrod algebras through combinatorics and representation theory (as opposed to, through topology)
• Ext and Tor defined constructively, with Haskell code
• Weyl's "Classical Groups" with the proofs of 1938 but the notations of 2010
• The definitive guide to Hochschild homology
• Henri Lombardi's "Algèbre Commutative" in English
• A documentation to Agda
-
4
+1 for the third bullet point (I know a few more that fall into this chapter). – Theo Buehler Jan 24 2011 at 12:03
1
Sorry, it's the fourth one now ;) – darij grinberg Jan 24 2011 at 12:04
3
Does such a definition of higher K-groups without topology actually exist? I have never heard about that, so it sounds more like an ambitious research project. – Johannes Ebert Jan 25 2011 at 8:54
2
Concerning Weyl's The Classical Groups, an argument can be made in favor of either modern text: Goodman & Wallach Symmetry, Representations, and Invariants (2nd ed., Springer GTM 255, 2009) and Procesi Lie Groups (Springer Universitext, 2007). I won't try to make the argument, since what you mean by asking for the same proofs as in Weyl's book might need further discussion. – Jim Humphreys Jan 30 2011 at 18:14
1
I don't believe Goodman-Wallach can really supersede Weyl. For example, where are Capelli's identities in Goodman-Wallach? I only see Theorem 5.7.1, which neither gives an explicit form nor applies to the classical case (Goodman-Wallach require $V=S^2(\mathbb C^n)$ or $V=\wedge^2(\mathbb C^n)$, which lead to the Turnbull rsp. Howe-Umeda-Kostant-Sahi identities rather than the actual Capelli ones), let alone an explicit proof "from the definitions". Procesi's text could do the trick indeed. – darij grinberg Jan 30 2011 at 21:25
show 8 more comments
Categories for the Working Mathematician
I know Saunders Mac Lane already wrote a book by that name, but in my opinion his book doesn't live up to its title. His book would perhaps be better named "Category theory for the working algebraist." I'd like to see a book with more examples, especially examples outside of algebra and algebraic topology.
-
8
I think Steve Awodey's "Category theory" might be just right for you. – Gonçalo Marques Jan 24 2011 at 14:42
1
Actually, as a non-categorist, I think Mac Lane's title is apt if treated as an introduction to the theory rather than as a handbook for practical reference. But each to their own – Yemon Choi Jan 24 2011 at 19:32
show 1 more comment
Somewhat frivolous/exasperated suggestion:
The Homology of Banach and Topological Algebras, Vol. II: Collected folklore and missing bookwork.
I only suggest this because I have been needing to cite this book, on and off, for much of the last five years, and the fact it's not been written hasn't really helped.
-
"Examples in complex geometry."
The algebraic and differential geometry and Hodge theory side of complex geometry is well established in many books, but I've had real trouble finding examples that are worked out in detail (which would be perfect as exercises, perhaps if given with hints) that show how the theory works in practise and provide counterexamples to some implications. For example, an ample line bundle does not have to admit any global sections, but I've never seen an example of such a bundle given in a textbook.
-
My answer is quite simple and stupid. I don't know French; so I would like to read EGA, SGA, and BBD in English (or in Russian:)). I also suspect that these books could be updated in the process of translation.:)
-
2
I have heard once that Yuri Manin had translated SGA (and EGA?) into Russian. If anyone, he knews if that is correct and if translations of BBD exist too. – Thomas Riepe Jan 26 2011 at 20:34
2
... and, of course, French is a very nice language! – Thomas Riepe Jan 26 2011 at 20:35
5
Mikhail, you will be better off learning the little French required to read EGA, SGA, FGA, BBD, DPP, GAGA, SAGA, etc., than waiting for English or Russian translations to appear. – Chandan Singh Dalawat Jan 27 2011 at 5:03
8
The problem is that these books would be complicated reading for me even in English. – Mikhail Bondarko Jan 27 2011 at 10:07
As I have been telling many people involved in mathematical publishing, the one book I would like to read is The Serre-Tate correspondence.
-
• "Faltings explained" : Several of his articles are very hard to read and existing surveys on his concepts don't really fill the gap. I would like to read a book about his work, his themes, background ideas and techniques which is a readable walk through all that, something like Connes' "NCG"-book + Connes/Marcolli's "noncommutative garden".
• "Morava explained" : The same as above on Morava's work, containing a (for the arithmetic geometry inclined reader) readable description of the homotopy theory background. With comments from Manin, Kontsevich and Connes, and a (sci-fi ?) chapter on how homotopy theory and number theory may mutually interfuse (e.g. through "brave new rings").
• Mumford suggested in a letter to Grothendieck to publish a suitable edited selection of letters by Grothendieck to his friends, because the letters he received from him were "by far the most important things which explained your ideas and insights ... vivid and unencumbered by the customary style of formal french publications ... express(ing) succintly the essential ideas and motivations and often giv(ing) quite complete ideas about how to overcome the main technical problems ... a clear alternative (to the existing texts) for students who wish to gain access rapidly to the core of your ideas". (Found in the very beautifull 2nd collection)
-
1
@Sean: Much of the subject since the 1970s could be viewed as "Morava explained". – Tyler Lawson Oct 25 2011 at 23:24
show 2 more comments
There are precisely two books on Arakelov geometry. One by Lang and one by Soule. I would love to see a book written on the subject which focuses mainly on the two dimensional (and one-dimensional) case. Sections 8.3 and 9.1 of Liu's book do this greatly for example (but considers only intersection multiplicities at the finite points). It should include all the theorems done so far. Something like
Chapter 0. Prerequisites
Chapter 1. Arithmetic curves (Riemann-Roch, slopes method, etc. One should include a paragraph or appendix on algebraic curves stating all the theorems that can and have been generalized.)
(N.B. An arithmetic curve is the spec of a ring of integers.)
Chapter 2. Arithmetic surfaces (This would contain all the "arithmetic" analogues of the theorems mentioned in the Appendix. For example, there has been a lot of work on Riemann-Roch theorems, trace formulas, Dirichlet's higher-dimensional unit theorem, Bogomolov inequalities, etc. Also, there are four intersection theories (which are compatible) I know of at the moment. The one developed by Arakelov-Faltings, then Gillet-Soule, then Bost and then Kuhn. The book should include a detailed description of them.
Appendix A. Algebraic surfaces. (A survey of all the classical theorems for algebraic surfaces that have an analogue in Arakelov geometry. This includes Faltings' generalizations of the Riemann-Roch theorem, Noether theorem, etc. but also the theorems generalized to Arakelov theory by Gasbarri, Tang, Rossler, Kuhn, Moriwaki, Bost, etc.)
Appendix B. Riemann surfaces (Just the necessary. Differential forms and Green functions basically.)
-
Remark: Several items below refer to the formalism of locales. Although consistent usage of the language of locales allows one to get rid of the axiom of choice in almost all cases, my main reasons for it are purely pragmatic: The formalism of locales allows one to obtain equivariant and family versions of many theorems without any additional effort, as opposed to the formalism of topological spaces (think of Hahn-Banach theorem, for example).
• A general topology textbook written in the language of locales, with no mention of topological spaces.
• Textbooks on commutative algebra and algebraic topology written in the language of locales. In particular, such textbooks can usually avoid mentioning maximal ideals, the axiom of choice, or Zorn's lemma.
• A measure theory textbook written in the language of locales and commutative von Neumann algebras, with no mention of the set-theoretical approach. The textbook should also have a conceptual exposition of Lp-spaces.
• A linear algebra textbook that does not mention coordinates, bases, or matrices.
• A textbook on smooth manifolds that never mentions coordinates, charts, or atlases. Such a textbook should have a conceptual exposition of integration and use supermanifolds consistently whenever it makes sense, e.g., for differential forms.
• Textbooks on algebraic topology and homological algebra written in the language of (∞,1)-categories.
• Higher categories for the working mathematician. This book should contain a lot of examples of higher categories that are actually used in mathematics outside of category theory. (For example, the bicategory of algebras, bimodules, and intertwiners, the tricategory of conformal nets, defects, sectors, and morphisms of sectors etc.)
• A textbook on topological vector spaces (in particular, on locally convex, Banach, and nuclear spaces) written from the categorical viewpoint. For example, such a textbook would define a nuclear morphism as a morphism that can be factorized in a certain way (see a recent paper by Stephan Stolz and Peter Teichner). The textbook should consistently use the language of locales. For example, this allows one to prove Hahn-Banach, Gelfand-Neumark, or Banach-Alaoglu theorems without using the axiom of choice.
-
1
Also: I thought it was acknowledged that while you can (and to some extent, should) set up linear algebra without coordinates and bases and matrices, getting things done in functional analysis rather often needs you to choose bases, etc. (Cf. the difference between categories of Hilbert spaces and categories of RKHS) – Yemon Choi Jan 24 2011 at 19:23
2
@Michael: Could you please be more precise? What kind of theorem or definition do you have in mind? – Dmitri Pavlov Jan 26 2011 at 2:42
2
@Yemon: One size does not fit all. You and darij seem to subtly imply (or at least this is my feeling when I read your comments) that for any mathematical theory there is the best way to expose it, whereas I am more inclined towards diversity of expositions. Some people (like me) like coordinate-free expositions, while others prefer bases and matrices. There are plenty of linear algebra textbooks written using bases and matrices, but very few or none are written in a coordinate-free way. That's why I included linear algebra in my list. – Dmitri Pavlov Jan 27 2011 at 19:05
3
@Dmitri: I think I for one have not learned anything "literally in a few minutes". Do you have experimental evidence for this claim? Anyway, I agree with you that most people take a lot longer to learn the abstract approach. But moreover, for many people understanding concrete examples is a necessary route to abstraction. If you're going to teach people about dualizable objects in categories, you can go ahead and teach them about bases and matrices first, I think, without wasting anyone's time. – Pete L. Clark Jan 31 2011 at 4:24
3
Having thought things through a bit more, I wish to affirm the principle that the author of a math book ought not to be required to include any material beyond that which is of firm personal interest to herself. (Diligent application of this principle could lead to better books.) So I don't want to discourage anyone from writing this particular take on linear algebra. Rather what I mean to say is that such a book should be used for good rather than ill: raising a generation of mathematicians for whom bases and matrices are no more than an afterthought would be nothing to be proud of. – Pete L. Clark Jan 31 2011 at 15:46
show 42 more comments
Whittaker and Watson with a Facelift
There are a number of classic books, such as Whittaker and Watson's Modern Analysis, that I'd like to see typeset in TeX and updated slightly. Sometimes notation or terminology have changed and a little footnote would help greatly.
Also by Watson, I'd like to see his 1922 book "A Treatise on the Theory of Bessel Functions" with updated typography and notation. A scan of the book is available here. Apparently the book has entered the public domain and so there would be no legal barrier to producing an updated version.
-
6
"We shall now shew..." for instance (W&W, p.13 and many other places.) – Stopple Jan 24 2011 at 20:50
1
Exactly. I had no idea anyone wrote "shew" in the 20th century until I saw that. – John D. Cook Jan 24 2011 at 21:44
1
But what about Watson's 1944 2nd Edition or the reprinted version: books.google.com/… perhaps these are typographically still similar (or the same) as the 1922 versions, but don't know if the copyright lapse applies anymore? – S. Sra Jan 25 2011 at 10:21
1
@John D. Cook: There's also George Bernard Shaw. – Nate Eldredge Jan 27 2011 at 5:30
1
@Stopple I guess that you are hoping for the taming of the ‘shew’? – L Spice Jul 15 2011 at 14:21
I would like to read an SGA-like book on Étale cohomology to replace as a reference SGA 4½. I also have an idea about who could write such a text: Luc Illusie. I'd really love that.
-
3
Dear Lorenzo, What is your objection to SGA 4.5 (which is my personal favourite of the SGAs)? – Emerton Jan 25 2011 at 3:36
1
Dear Emerton, wasn't someone (I think Verdier) originally assigned by Grothendieck the project of replacing the spectral-sequence-laden arguments of SGA4.5 with simpler arguments using derived categories, but it was never finished? – Harry Gindi Jan 31 2011 at 15:17
Algebraic groups by example
There are currently several books on Lie theory which take a very concrete approach, containing many examples (e.g. Rossmann, Hall, Stillwell). Basically they can be read by a student with some knowledge in calculus, linear algebra and perhaps some mathematical maturity. However, I have yet to find a book on the theory of (linear) algebraic groups which doesn't delve into topics from commutative algebra and algebraic geometry before even defining what an algebraic group is, and even then, most texts take a very abstract approach - most proofs seem like general nonsense to me, but maybe that's just because I'm not an algebraist in heart. In any case, I would very like to see a book on the subject which takes a very concrete approach through examples and constructive proofs.
-
Algebraic Geometry from a Homotopical Viewpoint: For the topologist who really wants to like geometry but doesn't know where to start.
-
2
have you seen amazon.com/…? – Sean Tilson Jan 25 2011 at 2:35
5
One of the great things about this question is that it secretly allows us to ask/answer a bunch of questions of the form "Is there a book like blah about bleh?" Thanks Sean, that book looks great!! – Dylan Wilson Jan 25 2011 at 4:07
show 2 more comments
"Quiver varieties with a wealth of examples" ?
-
22
I'd vote for nearly any book title containing the phrase "with a wealth of examples." – John D. Cook Jan 24 2011 at 19:13
14
"with a variety of examples" would be better, though. – darij grinberg Jan 24 2011 at 20:18
1
Some example varieties are not wealthy. Go for the wealth. Gerhard "Ask Me About System Design" Paseman, 2011.01.24 – Gerhard Paseman Jan 24 2011 at 21:53
I would like to read a comprehensive, step-by-step introduction to the Langlands Programme written for non-experts. An Introduction to the Langlands Program (edited by Joseph Bernstein and Stephen Gelbart) is good, but it is a collection of articles, not a textbook or monograph. Stephen Gelbart's "An Elementary Introduction to the Langlands Program" (Bulletin of the AMS, Vol. 10, No. 2, 1984, pp. 177-219) has the right approach, but while quite long, is not a book-length treatment. David Nadler's excellent new article "The Geometric Nature of the Fundamental Lemma" is another example of the sort of expository approach I would like to see in a full-length book about the Langlands Programme.
-
I'm surprised that nobody has expressed the desire to read Bourbaki's Théorie des nombres.
-
26
Some might be surprised that anyone has the desire to read Bourbaki at all ;-) – Johannes Hahn Jan 27 2011 at 14:42
3
@Chandan: I honestly think that my current musings about "abstract algebraic number theory" are highly in the spirit of the text you name above. For instance, one of the points is the generalization of the Dirichlet Unit Theorem to a wider class of rings, and in this regard there is indeed a Samuel Unit Theorem. In general, Samuel's little book on the algebraic theory of numbers often feels like a little coda to Bourbaki. There are exceptions: for some reason I feel confident that Nicolas would die before mentioning the Minkowski Convex Body Theorem. (Perhaps he has.) – Pete L. Clark Jan 30 2011 at 21:22
1
Johannes, =p! – Harry Gindi Jan 31 2011 at 15:12
Introduction to algebraic cycles.
With lots of examples...
-
AD${}^+$ by Hugh Woodin.
-
3
Actually, I'd vote for the "The collected unpublished works of the California Set-theorists" multiple times, if it were allowed! – Todd Eisworth Oct 25 2011 at 20:50
"The proof of the Shimura-Taniyama conjecture, for people who aren't professional algebraists but are willing to try pretty hard."
-
11
Dear David, This exists in textbook form: as I noted in another comment, there is the book Modular forms and Fermat's Last Theorem (Cornell, Silverman, Stevens eds.). – Emerton Jan 24 2011 at 12:46
1
There are also the DDT notes, now available on Darmon's website, along with other related material. – Chandan Singh Dalawat Jan 24 2011 at 14:16
2
Emerton, I have this book but unfortunately haven't had the time to dive into it yet. My impression (horribly mistaken?) was that the last fifteen years have seen some simplifications and improvements to the proof - e.g. appeal to base change to avoid level lowering, appeal to Jacquet-Langlands to study the Hecke algebras in a more hands-on way, the Diamond-Fujiwara version of patching and concomitant avoidance of appeal to multiplicity one, etc. – David Hansen Jan 25 2011 at 16:53
2
Dear David, Yes, but these improvements are amply documented in the research literature; I don't see the need for another text at the moment, given the existence of Cornell, Silverman, and Stevens. After all, the paper of Diamond in Inventiones is well-written, so if one understands everything in Cornell, Silverman, and Stevens except the mult. one statements, it is no trouble to modify things so as to incorporate the results of Diamond's article. As for replacing the geometric arguments for level lowering by base change, this is very powerful in those contexts where one doesn't have ... – Emerton Jan 27 2011 at 4:33
1
... the same tight control of the geometry as one has in the context of modular curves, but it's a matter of one's predilections as to whether it counts as a simplification. (This comment just reflects my own training, which finds Ribet's arithmetic geometry arguments quite a bit easier to follow than the proof of base change.) I think that, with the sole exception of Diamond's paper, which really does count as an unambiguous simplification, these other approaches to the argument just reflect modifications of technique in order to ... – Emerton Jan 27 2011 at 4:39
show 2 more comments
"Cardinal Arithmetic: The New Corrected Edition (including index)" by Saharon Shelah...
-
C.P. Snow once used such persuasion as he had to get G.H.Hardy to write another book, which Hardy promised him to do. It was to be called 'A Day at the Oval' and was to consist of himself watching cricket for a whole day, spreading himself in disquisitions on the game, human nature, his reminiscences, life in general. Unfortunately Hardy's final years of his life were not of delight and the book, though destined to be an eccentric minor classic was never written.
I would love to see such a book, written with incomparable style and mathematical touch.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9419002532958984, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/46446/the-jacobian-ideal-generates-the-socle-of-a-complete-intersection/52309
|
## The Jacobian ideal generates the socle of a complete intersection
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
This is with reference to theorem 5.20 in Vasconcelos book linked (google books) here: http://tinyurl.com/2967eov
I shall restate the theorem here for easy reference: "If $A=k[[x_1,x_2,...,x_n]]/I$ is a complete intersection and $dim_k A$ is not divisible by $char(k)$ then the Jacobian ideal generates the socle of $A$".
I am looking for a proof of this theorem. Vasconcelos references three places to look for one. One is a result of Tate - I have looked at this. One is supposed to be in Kunz's - Introduction to commutative algebra and algebraic geometry" - I could not find a result similar to this in there (it's not a pointed reference). Finally there is a Scheja-Storch paper linked below. http://www.reference-global.com/doi/abs/10.1515/crll.1975.278-279.174 I am specifically looking for a proof similar to Scheja-Storch (Tate seems to use a different approach), but the above paper is in German and I am not fluent at it. It's probably unlikely, but if anyone has an english reference on this proof, I would really appreciate it.
-
@Graham: Thanks, fixed. – Timothy Wagner Nov 18 2010 at 2:52
This doesn't probably help really (since it's not a proof), but this seems to be an exercise on page 382 of Kunz's book, "Kahler differentials". Perhaps reading near there will suggest the proof? – Karl Schwede Nov 18 2010 at 4:16
Ah, thanks Karl. I guess Vaconcelos referenced a different Kunz's book in error. But I checked two versions and in both the above one was cited. I shall look into the Kahler differentials one. Thank you. – Timothy Wagner Nov 18 2010 at 4:42
2
This paper by Eisenbud, Huneke, and Vasconcelos (msri.org/people/staff/de/papers/pdfs/1992-001.pdf) attributes the result to Scheja and Storch, Cor. 4.7 of the paper you linked, but gives no indications of proof. Aha! Prop.2 of this paper by Eisenbud (projecteuclid.org/euclid.bams/1183541138) attributes it 'essentially' to Berger, and has a sketch of a proof. – Graham Leuschke Nov 18 2010 at 14:11
@Graham: Thanks a lot for the references. I will have to look more closely at the second one. – Timothy Wagner Nov 19 2010 at 23:39
## 1 Answer
I'm promoting this comment to an answer, since it appears no one else is jumping in with a proof. Prop. 2 of this paper by Eisenbud attributes it 'essentially' to Berger, and has a sketch of a proof.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9293563961982727, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/114972/norm-of-inverse-confluent-vandermonde-matrix/115094
|
## Norm of inverse confluent Vandermonde matrix
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Let $\{x_1,\dots,x_n\}$ be pairwise distinct complex numbers and $l_1+l_2+\dots+l_n=N$. The $N\times N$ confluent Vandermonde matrix is defined as $$V= \begin{bmatrix} v_{1,0}&v_{2,0}&\dots&v_{n,0}\\ v_{1,1}&v_{2,1}&\dots&v_{n,1}\\ \vdots\\ v_{1,N-1}&v_{2,N-1}&\dots&v_{n,N-1} \end{bmatrix}$$ where $v_{j,k}=\begin{bmatrix}x_j^k,&kx_j^{k-1},&\dots&k(k-1)\times\dots\times (k-l_j+1) x_j^{k-l_j+1}\end{bmatrix}$. Let $\|\cdot\|$ denote the row sum matrix norm. In some applications (e.g. interpolation, signal processing) one would like to estimate the quantity $\|V^{-1}\|$.
Gautschi [1] has shown that for $l_1=\dots=l_n=2$ one has $$\|V^{-1}\| \leq \max_{1\leq \lambda\leq n} \beta_{\lambda} \prod_{\nu=1,\nu\neq\lambda}^n \biggl(\frac{1+|x_{\lambda}|}{|x_{\nu}-x_{\lambda}|}\biggr)^2$$ where $\beta_{\lambda}=\max\biggl(1+|x_{\lambda}|,1+2(1+|x_{\lambda}|)\sum_{\nu\neq\lambda}{1\over |x_\nu-x_\lambda|}\biggr)$.
I am interested in a somewhat cruder estimates, as follows: if $|x_j|\leq 1$ and $|x_i-x_j|\geq \delta$, then for the above case we have $$\|V^{-1}\| \leq C n 2^N\delta^{-N+1}\qquad (*)$$ for some absolute constant $C$.
Is it true that something like $(*)$ holds for the general configuration $\{l_1,\dots,l_n\}$?
EDIT: using [2], this seems to boil down to the following. Let $$h_j(x)=\prod_{i \neq j}(x-x_i)^{-l_i}.$$ For $t=0,1,\dots,l_j$ evaluate $h_j^{(t)}(x_j).$
[1] W.Gautschi, "On Inverses of Vandermonde and Confluent Vandermonde matrices II", Numerische Mathematik 5, 425-430, 1963.
[2] R.Schapelle, "The Inverse of the Confluent Vandermonde Matrix", IEEE Trans. on Automatic Control, October 1972, pp.724-725.
-
Also posted on MSE. – Didier Piau Dec 1 at 12:09
## 1 Answer
For $j=1,\dots,n$ and $k=0,1,\dots,l_j-1$ denote by $u_{j,k}$ the row with index $l_1+\dots +l_{j-1}+k$ of the matrix $V^{-1}$. By using a generalization of the Hermite interpolation formula (see [3]), in [2] it is shown that the elements of $u_{j,k}$ are the coefficients of the polynomial $${1\over k!} \sum_{t=0}^{l_j-1-k} {1\over t!} h_j^{(t)}(x_j) (x-x_j)^{k+t} \prod_{i\neq j} (x-x_i)^{l_i}$$
Now thanks to the answer to this MSE question, one has $$|h_j^{(t)}(x_j)|\leq N(N+1)\cdots (N+t-1)\delta^{-N-t}.$$
The sum of absolute values of the coefficients of the polynomials $(x-x_j)^{k+t} \prod_{i\neq j} (x-x_i)^{l_i}$ is at most (see [4, Lemma]) $$(1+|x_j|)^{k+t} \prod_{i\neq j}(1+|x_i|)^{l_i} \leq 2^{N-(l_j-k-t)}.$$
So now $$\|u_{j,k}\| \leq {1\over k!}\sum_{t=0}^{l_j-1-k} {1\over t!} {N(N+1)\cdots (N+t-1) \over {\delta^{N+t}}}2^{N-l_j+k+t}\\ \leq \biggl({2\over \delta}\biggr)^N {1\over {2^{l_j-k}k!}}\sum_{t=0}^{l_j-1-k} {l_j-1-k \choose t} {N(N+1)\cdots(N+t-1)\over (l_j-k-t)\cdots(l_j-k-2)(l_j-k-1)} \biggl({2\over \delta}\biggr)^t\\ \leq \biggl({2\over \delta}\biggr)^N {1\over {2^{l_j-k}k!}} \biggl(1+{2N\over \delta}\biggr)^{l_j-1-k}\\ =\biggl({2\over \delta}\biggr)^N {2\over k!} \biggl({1\over 2}+{N\over\delta}\biggr)^{l_j-1-k}.$$
[3] A.Spitzbart, "A Generalization of Hermite's Interpolation Formula", The American Mathematical Monthly, Vol.67 No.1, p.42-46, 1960.
[4] W.Gautschi, "On Inverses of Vandermonde and Confluent Vandermonde matrices", Numerische Mathematik 4, p.117-123, 1962.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8352583646774292, "perplexity_flag": "head"}
|
http://verso.mat.uam.es/web/index.php?option=com_content&view=article&id=818%3Afebrero-2012&catid=126%3Acoloquios&Itemid=122&lang=en
|
### Private Area
There are no translations available.
#### On the hidden shifted power problem
(joint work of Jean Bourgain, Moubariz Garaev and Sergei Konyagin)
Igor Shparlinski, Macquarie University
10 de febrero, 2012, 12:00 h., Sala Naranja ICMAT (cartel).
We consider the problem of recovering a hidden element s of a finite field $F_q$ of $q$ elements from queries to an oracle that for a given $x \in F_q$ returns $(x+s)^e$ for a given divisor $e|q-1$. This question is motivated by some applications to pairing based cryptography. Using Largange interpolation one can recover s in time $ep^{o(1)}$ on a classical computer. In the case of $e = (q - 1)/2$ an efficient quantum algorithm has been given by W. van Dam, S. Hallgren and L. Ip. We describe some techniques from additive combinatorics and analytic number theory that lead to more efficient classical algorithms than the naive interpolation algorithm, for example, they use substantially fewer queries to the oracle. We formulate some questions and discuss whether quantum algorithms can give further improvement.
Volver
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 7}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8157467842102051, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/tagged/differential-geometry+hamiltonian-formalism
|
# Tagged Questions
2answers
108 views
### Are Poisson brackets of second-class constraints independent of the canonical coordinates?
Say we have a constraint system with second-class constraints $\chi_N(q,p)=0$. To define Dirac brackets we need the Poisson brackets of these constraints: $C_{NM}=\{\chi_N(q,p),\chi_M(q,p)\}_P$ . Is ...
5answers
541 views
### What does symplecticity imply?
Symplectic systems are a common object of studies in classical physics and nonlinearity sciences. At first I assumed it was just another way of saying Hamiltonian, but I also heard it in the context ...
4answers
347 views
### Hamiltonian and the space-time structure
I'm reading Arnold's "Mathematical Methods of Classical Mechanics" but I failed to find rigorous development for the allowed forms of Hamiltonian. Space-time structure dictates the form of ...
3answers
181 views
### What are some mechanics examples with a globally non-generic symplecic structure?
In the framework of statistical mechanics, in books and lectures when the fundamentals are stated, i.e. phase space, Hamiltons equation, the density etc., phase space seems usually be assumed to be ...
1answer
170 views
### A question regarding particle trajectories in the symplectic manifold formalism
How to solve a free particle on a 2-sphere using symplectic manifold formalism of classical mechanics ? Is there a way to get coriolis effect directly, without going into Newton mechanics? And is ...
2answers
349 views
### Lorentz invariance of the 3 + 1 decomposition of spacetime
Why is allowed decompose the spacetime metric into a spatial part + temporal part like this for example $$ds^2 ~=~ (-N^2 + N_aN^a)dt^2 + 2N_adtdx^a + q_{ab}dx^adx^b$$ ($N$ is called lapse, $N_a$ is ...
3answers
566 views
### Why is the symplectic manifold version of Hamiltonian mechanics used in Newtonian mechanics?
Books such as Mathematical methods of classical mechanics describe an approach to classical (Newtonian/Galilean) mechanics where Hamiltonian mechanics turn into a theory of symplectic forms on ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8847168684005737, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/204042/simple-test-if-point-is-above-or-below-sine-curve/204193
|
# Simple test if point is above or below sine curve
Is there any simple formula or algorithm for determining if a point lies above or below the sine curve? For instance, if I have a point $(x, y)$, how can I test whether or not $y > \sin(x)$? Obviously taking the actual $\sin(x)$ (or $\cos(x)$) is not an option otherwise I wouldn't be asking.
All three angles, $A, B$, and $x$ are first-quadrant angles in $[0, 90°]$.
Additionally, I know two reference points, A and B such that A < x < B, and I know both the sine and cosine of A and B. I thought perhaps comparing slopes might be useful, for instance, I know that the slope of the tangent at A is greater than the slope of the secant from A to x, which is greater than the slope of the secant from x to B, which is greater than the slope of the tangest at B. But I haven't been able to come up with a way to actually use any of that.
### Background
To clarify what I'm after: I'm working on doing rapid estimations of various useful functions like sines, cosines, exponents, logs, etc., for the many cases that arise in which fast, approximate answers are useful (e.g., assumption checking during debugging or feasibility evaluation).
I'm currently able to estimate sines and cosines of any angle in degrees to within $10\%$ error, but I'd really like to be able to take those initial estimates and then refine them further with some kind of simple iterative process that can be carried out relatively quickly with pencil and paper. I find this useful in various situations, for instance in the lab or in group brainstorming sessions, in which a calculator is not readily available.
-
Why can't you take the actual sin(x) value? – fosho Sep 28 '12 at 17:04
Because I'm doing this by hand, without a calculator or look up table. I know the sines of certain landmark angles (the multiples of 10-degrees) and I'm trying to figure out how to use these to find the sine of other angles by hand. – sh1ftst0rm Sep 28 '12 at 17:10
What is the reason for the downvote? Please explain it for us to improve our questions. – Makoto Kato Sep 28 '12 at 18:36
## 3 Answers
For $0\le x\le\frac\pi2$, you have
• $\sin x\le 1$
• $\sin x \le x$
• $\sin x \le \sin \alpha+(x-\alpha)\cos\alpha$ for suitable $\alpha\in[0,\frac\pi2]$
• $\sin x \ge 1-\frac{(\pi-x)^2}2$
and several other simple approximations that may cover many cases. However, if $y\approx \sin x$, you can hardly avoid calculating $\sin x$.
-
I'm not familiar with that third point: can you describe where this comes from and how I would choose an appropriate alpha? (Or give me the name of a theorem, for instance). Thanks! – sh1ftst0rm Sep 28 '12 at 17:14
Well, $|\sin x| \le 1$ for all $x \in \mathbb{R}$. It follows that if $y < -1$ then $(x,y)$ is below the sine curve, while if $y > 1$ then $(x,y)$ is above the sine curve. The case where $-1 \le y \le 1$ is more complicated, and may well require direct calculation.
-
If there was a simple way to do this, that worked reliably on all inputs, then it would have to work even if $y$ was very close to $\sin(x)$. But to do that, it would effectively have to calculate $\sin(x)$.
So no, there is no simple way.
-
Hi, Tony. Is that more than just a gut instinct? I know primality is a lot different than a continuous function, but just as an illustration you can definitively test if a number is prime, but those tests won't actually produce a prime number on their own. And for sine, for instance, it is always the case the $\sin^2x + \cos^2x = 1$, so that is a definitive test for sine which just happens to not be useful because it requires me to know the cosine a priori. – sh1ftst0rm Oct 1 '12 at 12:19
Yes, it's more than just a gut instinct. If $|y-\sin(x)| < \epsilon$, then you have to be able to calculate $\sin(x)$ to an accuracy of $\epsilon$ to decide whether $y<\sin(x)$. Which is really just what I wrote in my answer :-) – TonyK Oct 1 '12 at 22:31
But that's my point, you don't have to: $y < sqrt(1-\cos^2x)$ implies that $y < \sin(x)$. – sh1ftst0rm Oct 2 '12 at 1:24
But then you have to calculate $\cos (x)$ instead. Which is just as hard. – TonyK Oct 2 '12 at 6:26
Agreed, which is why that particular test isn't useful. But it's a counter example to the idea that there's no way to test it without calculating $\sin(x)$. So I guess the question would be, is it provable that there's no way to perform the test I'm looking for which is easier than calculating $\sin(x)$. This particular example is not easier, but that doesn't mean there aren't any. – sh1ftst0rm Oct 2 '12 at 12:02
show 2 more comments
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9528712630271912, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/107960/how-to-compute-the-picard-rank-of-a-k3-surface/107963
|
## How to compute the Picard rank of a K3 surface?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
I'm curious about the following question:
Given a K3 surface, how does one proceed to compute its rank?
Of course the answer may depend on the form of the input, i.e. how the K3 is "given". So
For a given way of writing down a K3 surface, (e.g. quartics in $\mathbb{P}^3$)
How does one compute the Picard rank of the K3 surface?
(Aside: What I've seen people sometimes did is avoiding this question by nailing down a K3 surface $X$ with its $NS(X)$ together with the intersection form. Then find an embedding given by the ample class.)
-
Somewhat similar: mathoverflow.net/questions/26438/… Are there any sofware packages for computing Picard numbers? – joro Sep 24 at 8:05
Picard lattices of families of K3 surfaces by Belcastro, xxx.lanl.gov/abs/math/9809008, may be of interest. – Balazs Sep 24 at 11:47
1
You might also try looking at some of the papers of Matthias Schütt. – Artie Prendergast-Smith Sep 24 at 16:04
1
Find for example a symplectic action of your K3 surface. Then you will have an lower bound of the Picard lattice. Check for example K. Hashimoto's paper, where he classified invariant lattices of such K3 surfaces. Likewise you can use non-symplectic action in some cases. – Atsushi Kanazawa Sep 24 at 18:12
## 2 Answers
There are some papers of van Luijk, where he computes the ranks of some K3s over number fields. The trick is to note that $NS(X) \hookrightarrow NS(X_p)$, where $X_p$ is the reduction of $X$ modulo a prime ideal $p$. One can determine the rank of $NS(X_p)$ by counting eigenvalues of Frobenius which differ from $q$ (the size of the residue field) by a root of unity. If you want to find rank 1 K3s, you can reduce modulo two different primes and hope to find rank 2 reductions which have lattices which are incompatible in some sense, forcing $NS(X)$ to be rank 1. (The issue here is that the rank of $NS(X_p)$ will always be even, so you can't win by using a single prime.)
I'm not sure how this works when you want to find K3s of larger rank though, unless you had a way of exhibiting linearly independent divisor classes. Anyhow, van Luijk uses this technique to find rank 1 quartics in $\mathbb{P}^3$ and I think others have done the same with genus 2 K3s defined over $\mathbb{Q}$.
I should add that the situation is much easier for Kummer surfaces. If I'm not mistaken, the rank of $X = K(A)$ ($A$ is an abelian surface) is 16 plus the Picard rank of $A$. The 16 comes from the 16 exceptional divisors you get when you blow up $A$ at its 2-torsion points. The rank of $A$ is usually not hard to figure out: a generic $A$ has rank 1, if $A$ is a product of elliptic curves then its rank is 2,3 or 4 depending on whether the curves are isogenous and whether they have CM or not, and there are a few other cases which one can probably figure out...
-
1
In fact, there are tricks to decide rank $1$ by reducing modulo one single prime only: arxiv.org/abs/1006.1972 - check out also the other papers of these authors circling around Picard groups of K3s – Christian Liedtke Sep 25 at 10:30
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
In Theorem 6 of the following paper : http://arxiv.org/abs/1111.4117, building on Van Lujik's work, François Charles explains a (theoretical) algorithm that computes the rank of a K3 surface $X$ defined over a number field. This algorithm terminates conjecturally, for instance if $X\times X$ satisfies the Hodge conjecture.
The main new feature of this article, that allows him to obtain an algorithm, is that the discrepancy between the rank of $X$ and the rank of the reduction of $X$ at a typical prime may be read off the algebra of endomorphisms of the transcendental lattice of $X$.
-
1
for this algorithm, we first have to find divisors on the K3 and codimension 2 cylces on the self-product of the K3 by 'going through Hilbert schemes of a suitable projective space' in order to get a lower bound on the rank. I am not an expert, but I'd be interested in how complicated this is in practice, especially, if the rank is large. – Christian Liedtke Sep 25 at 10:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9228470921516418, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/300911/solving-the-recurrences-of-algorithms
|
# Solving the recurrences of algorithms
Im having some trouble understanding recurrences.
I have an assignment where I have to solve some recurrences, theyre generally in the form of: $$T(n) = aT(n/b) + f(n)$$
I have 3 general formulas I can use:
1. If $f(n) = O(n^{\log_b a-\varepsilon})$ For some constant $\varepsilon > 0$, then $T(n) = \Theta(n^{\log_b a})$
2. If $f(n) = \Theta(n^{\log_b a}lg^kn), k \ge 0$, then $T(n) =\Theta(n^{\log_b a}lg^{k+1}n)$
3. If $f(n) = \Omega(n^{\log_b a+\varepsilon})$ For some constant $\varepsilon > 0$, and satisfies the regulatory property, i.e. if $af(n/b) \le cf(n)$ for some constant $c < 1$ and all sufficiently large n, then $T(n) = \Theta(f(n))$
My problem is, I dont really understand how I'd use these to find the recurrence, or more specifically I dont know how to choose which formula to use...or maybe theres a simpler way of finding a recurrence
For example the questions I have are:
Solve the following recurrences:
1. $T(n) = 9T(\frac{n}{3}) + n^2lgn + 2n$
2. $T(n) = 3T(\frac{n}{2}) + n^2lg^3n$
3. $T(n) = 2T(\frac{n}{2}) + \sqrt{n}$
4. $T(n) = 3T(\frac{n}{3} + 5) + \frac{n}{2}$
Btw this isnt me asking for answers for my homework, those are just what I eventually need to solve.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.926251232624054, "perplexity_flag": "head"}
|
http://mathhelpforum.com/calculus/27891-differential-equation-question.html
|
# Thread:
1. ## Differential Equation question
The problems in the section of the book list the questions in this way:
dy/dx = 4y+e^4xsin(5x) and y(0)=1
First I get P(x)= -4 and then Q(x) = e^4x sin(5x)
then we put it in standard form and put the y on the left and the x on the right. Take the derivative first and then integrate. I understand the problems from #'s 18 on but the first 18 are set up like this:
y'-2xy= e^x^2 In the directions it states: primes denote derivatives with respect to x. The teacher explained the part in the book and with his example I understood and could do the later problems but I guess sometimes the hang up is what the question is asking. If someone has time to show me an example of the type y' -2xy= e^x^2 I would appreciate it. I assume we would do the same, get y on the left and x on the right and follow the same steps but not sure how to start. It seems that math is seeing patterns and if I could see one problem worked out I should be able to see the pattern and finish the rest of them.
Thank You,
Keith
2. Well, we're talking about linear ODEs. Do you know what the integrating factor is?
Originally Posted by keith
The problems in the section of the book list the questions in this way:
dy/dx = 4y+e^4xsin(5x) and y(0)=1
This is $y'-4y=e^{4x}\sin5x.$ (See my signature for LaTeX typesetting.)
Your integrating factor is $e^{-4x},$ so just multiply the entire equation by this term and go from there.
Originally Posted by keith
If someone has time to show me an example of the type y' -2xy= e^x^2 I would appreciate it.
The same idea: your integrating factor in this case is given by $\mu(x)=\exp\left\{\int-2x\,dx\right\}.$ Get it and do similar things with the first one.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9490623474121094, "perplexity_flag": "middle"}
|
http://unapologetic.wordpress.com/2011/10/13/the-divergence-operator/?like=1&source=post_flair&_wpnonce=c589d5bcbb
|
# The Unapologetic Mathematician
## The Divergence Operator
One fact that I didn’t mention when discussing the curl operator is that the curl of a gradient is zero: $\nabla\times(\nabla f)=0$. In our terms, this is a simple consequence of the nilpotence of the exterior derivative. Indeed, when we work in terms of $1$-forms instead of vector fields, the composition of the two operators is $*d(df)$, and $d^2$ is always zero.
So why do we bring this up now? Because one of the important things to remember from multivariable calculus is that the divergence of a curl is also automatically zero, and this will help us figure out what a divergence is in terms of differential forms. See, if we take our vector field and consider it as a $1$-form, the exterior derivative is already known to be (essentially) the curl. So what else can we do?
We use the Hodge star again to flip the $1$-form back to a $2$-form, so we can apply the exterior derivative to that. We can check that this will be automatically zero if we start with an image of the curl operator; our earlier calculations show that $*^2$ is always the identity mapping — at least on $\mathbb{R}^3$ with this metric — so if we first apply the curl $*d$ and then the steps we’ve just suggested, the result is like applying the operator $d**d=dd=0$.
There’s just one catch: as we’ve written it this gives us a $3$-form, not a function like the divergence operator should! No matter; we can break out the Hodge star once more to flip it back to a $0$-form — a function — just like we want. That is, the divergence operator on $1$-forms is $*d*$.
Let’s calculate this in our canonical basis. If we start with a $1$-form $\alpha=Pdx+Qdy+Rdz$ then we first hit it with the Hodge star:
$\displaystyle*\alpha=Pdy\wedge dz+Qdz\wedge dx+Rdx\wedge dy$
Next comes the exterior derivative:
$\displaystyle d*\alpha=\left(\frac{\partial P}{\partial x}+\frac{\partial Q}{\partial y}+\frac{\partial R}{\partial z}\right)dx\wedge dy\wedge dz$
and then the Hodge star again:
$\displaystyle*d*\alpha=\frac{\partial P}{\partial x}+\frac{\partial Q}{\partial y}+\frac{\partial R}{\partial z}$
which is exactly the definition (in coordinates) of the usual divergence $\nabla\cdot F$ of a vector field $F$ on $\mathbb{R}^3$.
### Like this:
Posted by John Armstrong | Differential Geometry, Geometry
## 3 Comments »
1. [...] interesting to look at what happens when we apply the Hodge star twice. We just used the fact that in our special case of we always get back exactly what we started with. That is, in [...]
Pingback by | October 18, 2011 | Reply
2. [...] a Hodge star to work with, this tells us that we always have some functions on which are not the divergence of any vector field on . LD_AddCustomAttr("AdOpt", "1"); LD_AddCustomAttr("Origin", "other"); [...]
Pingback by | November 24, 2011 | Reply
3. [...] again with Maxwell’s equations, we see all these divergences and curls which, though familiar to many, are really heavy-duty equipment. In particular, they rely [...]
Pingback by | February 22, 2012 | Reply
« Previous | Next »
## About this weblog
This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”).
I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
• ## Feedback
Got something to say? Anonymous questions, comments, and suggestions at Formspring.me!
%d bloggers like this:
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 23, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9072321057319641, "perplexity_flag": "head"}
|
http://www.physicsforums.com/showthread.php?t=131643
|
Physics Forums
## Proof that the boundary and the closure of a subset are closed.
Hello,
I am currently working on proving the following theorem
The boundary $$\partial A$$ and the closure $$\overline{A}$$ of a subset A of $$\mathbb C$$ are closed sets.
Proof: Let $$A \subset \mathbb C.$$ We want to show the set $$\partial A \cap \overline{A}$$ is closed. To show that $$\partial A \cap \overline{A}$$ is closed, we will show that the complement of $$\partial A \cap \overline{A}$$ is open. So the complement of $$\partial A \cap \overline{A}$$ is the set of all points not in $$\partial A \cap \overline{A},$$ i.e. $$\mathbb C \sim (\partial A \cap \overline{A} ).$$ Since $$\overline{A} = A\cup \partial S$$ and the set $$\partial A \cap \overline{A}$$ contains all of its boundary points by definition of the intersection. So the complement of $$\partial A \cap \overline{A}$$ cannot contain any of its boundary points by the definition of the complement. Since the complement does not contain any points on its boundary, then the complement is open. Therefore, since the complement of $$\partial A \cap \overline{A}$$ is open then the set $$\partial A \cap \overline{A}$$ is closed.
I am really poor at doing proofs, any help, insight, or questions would be greatly appreciated. Please let me know. Thanks.
PhysOrg.com science news on PhysOrg.com >> Front-row seats to climate change>> Attacking MRSA with metals from antibacterial clays>> New formula invented for microscope viewing, substitutes for federally controlled drug
Recognitions: Homework Help The boundary and the closure are both closed. Their intersection is also closed, but their intersection is just the boundary. So what are you trying to prove?
StatusX, thanks for the reply. I am trying to show the theorem: Thm. The boundary $$\partial A$$ and the closure $$\overline{A}$$ of a subset $$A$$ of $$\mathbb C$$ are closed.
Recognitions:
Homework Help
## Proof that the boundary and the closure of a subset are closed.
Then why are you looking at their intersection? Also, what are you using as the definition of these terms?
I misunderstood the statement, I was thinking and = intersection. I realized that the minute I read your first post. I'm using the following definitions. "z is an interior point of A if there exists an r>0 such that the open disk is contained in A." The open set consists of the set of all points of a set that are interior to to that set. The boundary point is so called if for every r>0 the open disk has non-empty intersection with both A and its complement (C-A). The closure of a set A is the union of A and its boundary. So I need to show that both the boundary and the closure are closed sets. So, to show the sets are closed, do I consider the complement of the boundary for both cases? I can see how the closure of A is closed, since its complement is what I was essentially arguing in my first post. I am a little unsure about just the boundary part. Am I at least on the right track now?
Recognitions: Homework Help Yes, the complement of the closure is the interior of the complement, so is open (prove this). Remember the intersection of closed sets is closed, so can you find a couple closed sets whose intersection is the boundary?
Thanks, StatusX. This is what I have. I know my proof isn't complete, I think I am having a problem with the boundary argument. Let $$A \subset \mathbb C.$$ We want to show that $$\partial A$$ is closed and $$\overline{A}$$ is closed. To show that both the boundary and the closure are closed, we need to show that the complement of each is open. So first we will show that the boundary is closed and then we will show the closure is closed. A point $$z\in \mathbb C$$ which for every r>0 there is an open disk $$\Delta(z,r)$$ which has a non-empty intersection with A and its complement $$\mathbb C\sim A$$ is said to be boundary point. Thus, the collection of all such points is the boundary. The complement of the boundary $$\mathbb C \sim \partial A$$ then cannot contain any points for which there exists an open disk who has a non-empty intersection with both A and $$\mathbb C\sim A.$$ Since the complement must have empty intersections with $$\mathbb C\sim A$$ and A this implies there exists an r>0 such that every open disk lies entirely inside the complement. Since for every point in the complement we can find an r>0 such that the open disk about that point lies entirely in the complement, then every point in the complement is an interior point to the complement. Since all points interior to the complement are interior points, then the complement is open. Since the complement of the boundary is open then the boundary is closed. To show the closure of a subset of the complex plane is closed, we need to show that its complement is open. Since $$\overline{A}$$ contains all points in the union of A and its boundary $$\partial A$$ then the complement of the closure is all points outside of this union. Since the complement cannot contain any points in A then we need only worry about the boundary of A. By the previous paragraph we showed the boundary is closed. Since the complement cannot contain any points of A and the boundary is closed, then the complement of $$\overline{A}$$ is open. Since the complement of $$\overline{A}$$ is open then $$\overline{A}$$ is closed. Let me know what you think.
Thread Tools
| | | |
|--------------------------------------------------------------------------------------|--------------------------------------------|---------|
| Similar Threads for: Proof that the boundary and the closure of a subset are closed. | | |
| Thread | Forum | Replies |
| | Calculus | 7 |
| | Calculus & Beyond Homework | 4 |
| | Set Theory, Logic, Probability, Statistics | 2 |
| | Calculus | 5 |
| | Calculus & Beyond Homework | 2 |
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 33, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9711737632751465, "perplexity_flag": "head"}
|
http://mathoverflow.net/revisions/73571/list
|
## Return to Question
2 added 12 characters in body
Dear All!
I tried for several evenings to find an answer to the following basic question and I cannot see what is the answer:
Given an integer $n\geq 3$, does there exist a an (infinite) group with exactly $n$ normal subgroups?
If "yes", what about the same questions for finitely generated groups, finitely presented groups?
I guess this must have been done.
1
# (F.g., f.p.) groups with exactly $n$ normal subgroups
Dear All!
I tried for several evenings to find an answer to the following basic question and I cannot see what is the answer:
Given an integer $n\geq 3$, does there exist a group with exactly $n$ normal subgroups?
If "yes", what about the same questions for finitely generated groups, finitely presented groups?
I guess this must have been done.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9578783512115479, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/tagged/recreational-mathematics+sequences-and-series
|
# Tagged Questions
1answer
42 views
### Number of Distinct Resistances that can be produced from n equal resistance resisters
Here is an interesting problem: The number of distince resistances that can be produced from n equal resistance resisters is given below. The Sequence Surprisingly this is also equal to the number ...
2answers
536 views
### Predicting Real Numbers
Here is an astounding riddle that at first seems impossible to solve. I'm certain the axiom of choice is required in any solution, and I have an outline of one possible solution, but would like to ...
2answers
50 views
### Lengths of increasing/decreasing subsequences of a finite sequence of real numbers
Let $x_1,\ldots,x_n$ be a finite sequence of real numbers. Let $f(\{x_i\}_{i=1}^n)=f(\{x_i\})$ be the length of the largest non-decreasing subsequence, and let $g(\{x_i\})$ be the length of the ...
0answers
22 views
### Given an X and a Y, how to find the equation
I've just been curious lately. If you have an X for lets say 0-50 and corresponding Y values, is there a way to determine the equation without just guessing and checking and trying to find a pattern?
1answer
91 views
### How to calculate $1^k+2^k+3^k+\cdots+N^k$ with given values of $N$ and $k$? [duplicate]
Here $1<N<10^9$ and $0<k<50$ So we have to calculate it in order of $O(\log N)$.
1answer
54 views
### Sequence Question from past post
I recently saw a post about sequences. This made me remember some other post someone had posted here on Math.SE. He did not want answers but wanted general ways to tackle them. I did spend an hour or ...
5answers
179 views
### What is closed-form expression for $F(n)$ when $F(n)=F(n-1)+F(n-2)$ and $F(0)=a$,$F(1)=b$ and $a,b>0$?
What is closed-form expression for $F(n)$ when $F(n)=F(n-1)+F(n-2)$ and $F(0)=a$, $F(1)=b$ and $a,b>0$ ? It seems to be simple generalization of Fibonacci sequence but I can't find closed form for ...
2answers
140 views
### Integer sequences which quickly become unimaginably large, then shrink down to “normal” size again?
There are a number of integer sequences which are known to have a few "ordinary" size values, and then to suddenly grow at unbelievably fast rates. The TREE sequence is one of these sequences, which ...
1answer
61 views
### Question on pathological sine function
Some years ago I came across what was defined as "pathological" function defined as: $$f(x)=\sum_{k=1}^\infty \frac{1}{k^2}\cdot \sin\left(k!\cdot x\right)$$ It was mentioned (in an article I ...
1answer
156 views
### Do roots of a polynomial with coefficients from a Collatz sequence all fall in a disk of radius 1.5?
Consider a modified version of Collatz sequence: $C(n)=\left\{ \begin{array}{ll} \frac{3n+1}{2} & n\ \mathrm{odd} \\ \frac{n}{2}& n\ \mathrm{even}\end{array} \right.$ Let $F_n$ be the ...
1answer
61 views
### Gradually rising or falling numbers
I'm looking for a number series I can use for gradually rising or falling numbers. The number series should not be linear and should converge to a number at some point. (Sorry I'm really scared of ...
3answers
191 views
### Convergence of $x_n = \cos (x_{n-1})$
I define the sequence $x_n = \cos (x_{n-1}), \forall n > 0$. For which starting value of $x_0 \in \mathbb{R}$ does the sequence converge?
1answer
121 views
### Number of combinations of $k$ numbers using arithmetic operations
What is the maximum number of positive rational values that can be obtained by combining $k$ positive integers using only addition, subtraction, multiplication, division, and parentheses? Assume that ...
0answers
164 views
### Largest $x$ such that the power tower (tetration) $x^{x^{x^{x^{…}}}}$ converges? [duplicate]
Possible Duplicate: Infinite tetration, convergence radius Recently in this thread, Pseudo Proofs that are intuitively reasonable, I learned that ...
2answers
217 views
### A sequence of nested fractions with a counter-intuitive limit
Given $a,b\in\mathbb C$, let us construct the following sequence: \begin{align} a+b&=a+b\\ \cfrac a{a+b}+\cfrac b{a+b}&=1\\ \cfrac a{\cfrac a{a+b}+\cfrac b{a+b}}+\cfrac b{\cfrac ...
0answers
150 views
### $2, 5, 13, 17, 29, 421, 401, 53, 281,…,\rightarrow \infty$? $a_{n+1}=\operatorname{ GPF}(qa_n+p)$
I denote by $\operatorname{ GPF}(n)$ the greatest prime factor of $n$, eg. $\operatorname{ GPF}(17)=17$, $\operatorname{ GPF}(18)=3$. Is there a way to prove that the sequence \$a_{n+1}=\operatorname{ ...
3answers
199 views
### A binet-like formula for $\small 1 , 2 , 3 , 6 , 9 , 18 , 27, 54, 81, \ldots$?
This is more a "recreational" problem. By another question I came to the question for a closed-form-formula for this sequence $\small 1 , 2 , 3 , 6 , 9 , 18 , 27, \ldots$ which is just the mixture of ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 38, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9282269477844238, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-algebra/7766-group-proof.html
|
# Thread:
1. ## Group Proof
Hi. I'm stuck on a university level algebra question in my textbook and I was hoping to receive some help on it.
The problem is: Let G be a group and let H1, H2 be subgroups of G. Prove that if H1 union H2 is a subgroup then either H1 is a subset of H2 or H2 is a subset of H1. Sorry about the notation or lack thereof. I don't know all the programming stuff.
Plus I'm just reading something and I was wondering. Are the symbols S3, Q8, and D4 notations for specific groups? I have never seen them before and a problem I have seems to assume I should know them. Anyway, all the numbers after the letters are subscripts.
Thanks for any help. Pvt Bill Pilgrim.
2. Originally Posted by PvtBillPilgrim
Plus I'm just reading something and I was wondering. Are the symbols S3, Q8, and D4 notations for specific groups? I have never seen them before and a problem I have seems to assume I should know them. Anyway, all the numbers after the letters are subscripts.
Yes,
$S_n$ denotes the symettric group, also known as the group of premutations.
$D_4$ denotes the dihedral group on a vertices of a polygon.
I never seen $Q_8$, maybe the octions
3. Originally Posted by PvtBillPilgrim
Hi. I'm stuck on a university level algebra question in my textbook and I was hoping to receive some help on it.
The problem is: Let G be a group and let H1, H2 be subgroups of G. Prove that if H1 union H2 is a subgroup then either H1 is a subset of H2 or H2 is a subset of H1. Sorry about the notation or lack thereof. I don't know all the programming stuff.
I never considered that, it is interesting.
Okay the problem is that when you take the union of two subgroups of a group not necessarily a subgroup, why? Well, there is an identity element, there is an inverse for each element and multiplication is associate. The problem is it fails to be closed.
Let us do this by contradiction...
Assume, that neither $H_1\leq H_2$ and $H_2\leq H_1$ is true. (Note the meaning of $\leq$ is the same as $\subseteq$ because these are groups).
Thus there exists $x,y\in G$ such as,
$x\in H_1 \mbox{ and }x\not \in H_2$
$y\not \in H_2 \mbox{ and }y \in H_1$
But the condition of the problem,
$H_1\cup H_2$ is a group, and $x,y\in (H_1\cup H_2)$
Thus,
$xy\in (H_1\cup H_2)$
Thus,
$xy\in H_1 \mbox{ or }xy\in H_2$
Thus,
$xy=h_1 \, \exists h_1\in H_1$
$xy=h_2 \, \exists h_2\in H_2$
Using group properties we have,
$y=x^{-1}h_1$
$x=h_2y^{-1}$
But,
$x\in H_1$ implies $x^{-1}\in H_1$ and $h_1\in H_1$ thus, $x^{-1}h_1\in H_1$ by closure. Thus, $y\in H_1$
But,
$y\in H_2$ implies $y^{-1}\in H_2$ and $h_2\in H_2$ thus, $h_2y^{-1}\in H_2$ by closure. Thus, $x\in H_2$.
Thus,
$y\in H_2\mbox{ and }x\in H_1$---> Contradiction.
Thus, (de Morgan's negation)
$H_1\leq H_2$ or $H_2\leq H_1$
4. I really appreciate the proof.
Now just a few questions:
How exactly do you define these particular groups, say Q8, S3, D4?
What would be the order of each element of these groups?
What would be the subgroups of these groups (which ones are cyclic?)
I must have missed something in class and now I'm behind. All my notes seem to assume that these are known.
5. Originally Posted by PvtBillPilgrim
I really appreciate the proof.
Now just a few questions:
How exactly do you define these particular groups, say Q8, S3, D4?
I cannot give you a lecture on this. But you should got to Wikipedia and read.
$S_3$ is a group of symettries on $\{1,2,3\}$
Dihedral group is more complicated to explain. $Q_8$ are the octions. These are,
$\{1,-1,i,-i,j,-j,k,-k\}$
And,
$i^2=j^2=k^2=-1$
And, $ijk=-1$
This is non-abelian (in fact all 3 that you mentioned are non-abelian).
What would be the order of each element of these groups?
The order of an element is the smalles positive $n$ (it exists for finite groups) such that,
$a^n=e$.
For example, in the octions
$1^1=1$ Order of 1
$(-1)^2=1$ Order of 2
$(i)^4=(-i)^4=(j)^4=(-j)^4=(k)^4=(-k)^4=1$---> Order of 4
What would be the subgroups of these groups (which ones are cyclic?)
Let us look at the octions.
The Klein 4 group is its subgroup,
$\{1,-1,i,-i\}$
It happens to be cyclic,
$<i>$ is a generator.
6. Just one more question:
What are the subgroups of S3, the symmetric group and which are cyclic?
I figured out everything else, but stuck on this.
7. Have you considered searching for a group table online?
The group $S_3$ is the same as $D_3$ (or $D_6$ depending on your style).
Here is the Dihedral Group D3
8. Here is the group $S_3$.
There are going to be $3!=6$ elements.
Here there are.... (note I given them a name).
$\rho_0=\left( \begin{array}{ccc}1&2&3\\1&2&3 \end{array} \right)$
$\rho_1=\left( \begin{array}{ccc}1&2&3\\2&3&1 \end{array} \right)$
$\rho_2=\left( \begin{array}{ccc}1&2&3\\3&1&2 \end{array} \right)$
$\mu_1=\left( \begin{array}{ccc}1&2&3\\1&3&2 \end{array} \right)$
$\mu_2=\left( \begin{array}{ccc}1&2&3\\3&2&1 \end{array} \right)$
$\mu_3=\left( \begin{array}{ccc}1&2&3\\2&1&3 \end{array} \right)$
These will form a group shown below.
We can see that the set,
$\{\rho_0,\rho_1,\rho_2\}$
Is a subgroup of this group.
Furthermore, since the order of this subgroup is 3, a prime, it is cyclic.
While we have this diagram let us see if we can find any subgroups.
Of course,
$\{\rho_0\}$ is a subgroup, it is the trival subgroup for it contains the identity element.
$S_3$ is the improper subgroup it contains all of these elements.
But are there any other except for the one I mentioned above?
By Lagrange's theorem the order of a subgroup divides the order of the finite group. So if they exist they must be either 2 or 3. (In fact they must exist, it is called Cauchy's theorem but that might be too advanced for you since you are just starting to learn group theory). But still even if you never learned Cauchy's theorem you can still come to a conclusion by looking at the group diagram.
By observing this group we see that $\{\rho_0,\mu_1\},\{\rho_0,\mu_2\},\{\rho_0,\mu_3\}$ are subgroups. It can be shown (again might be too difficult for you yet) that there cannot exist more subgroups of order 2. Or you can just look at all the possibilities and see that these are the only ones.
And all of these are cyclic! Because 2 is a prime.
We have succesfully given all the subgroups and shown they are all cyclic except for the improper subgroup, the group itself.
Attached Thumbnails
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 59, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9548472166061401, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/241369/more-than-99-of-groups-of-order-less-than-2000-are-of-order-1024?answertab=votes
|
# More than 99% of groups of order less than 2000 are of order 1024?
In Algebra: Chapter 0, the author made a remark (footnote on page 82), saying that more than 99% of groups of order less than 2000 are of order 1024.
Is this for real? How can one deduce this result? Is there a nice way or do we just check all finite groups up to isomorphism?
Thanks!
-
2
There are $49487365422$ of order 1024 – Pantelis Damianou Nov 20 '12 at 15:32
– user1729 Nov 20 '12 at 15:34
3
No doubt this about isomorphism classes of groups. – Marc van Leeuwen Nov 20 '12 at 17:28
Is there a rough estimate (say one significant digit) of the number of groups of order 2048? – yatima2975 Nov 20 '12 at 23:06
1
– m. k. Nov 21 '12 at 10:59
show 2 more comments
## 2 Answers
Here is a list of the number of groups of order $n$ for $n=1,\ldots,2015$. If you add up the number of groups of order other than $1024$, you get $423{,}164{,}062$. There are $49{,}487{,}365{,}422$ groups of order $1024$, so you can see the assertion is true. (In fact the percentage is about $99.15\%$.)
As far as I know there is no reasonable way to deduce a priori the number of isomorphism classes of groups of a given order, though I believe that combinatorial group theory has some methods for specific cases. A general rule of thumb is that there are a ton of $2$-groups, and in fact I have heard it said that "almost all finite groups are $2$-groups" (though I cannot cite a reference for this statement).
EDIT: As pointed out in the comments, "almost all finite groups are $2$-groups" is still a conjecture. There is an asymptotic bound on the number of $p$-groups of order $p^n$, however. Denoting by $\mu(p,n)$ the number of groups of order $p^n$, $$\mu(p,n)=p^{\left(\frac{2}{27}+O(n^{-1/3})\right)n^3},$$ which is proven here. This colossal growth along with the results of Besche, Eick & O'Brien seem to be what primarily motivated the conjecture.
-
A while ago I tried to find a reference for this "almost all..." result. I think it is just a folklore statement, with the paper which is the subject of this thread proffered as evidence. – user1729 Nov 20 '12 at 15:42
1
(I wonder if there are more groups of order $3^{10}$ than of order $2^{10}$? Genericity proofs are...unsavoury...at least to my pallet...) – user1729 Nov 20 '12 at 15:44
4
Of course it's possible to deduce the number of isomorphism classes of groups of (finite) order $n$: write down all possible $n$ by $n$ multiplication tables, check which satisfy the group axioms, check every bijection between each pair to see if it's a group isomorphism. Since everything is finite, this can all be computed in finite time. The hard part is finding ways to do it in a sane amount of time. – Chris Eagle Nov 20 '12 at 15:55
4
According to the list linked in the answer, there are 504 groups of order $3^6=729$ and 267 groups of order $2^6=64$. There are 15 groups of order $5^4=625$ and also of order $3^4=81$ and 14 groups of order $2^4=16$. – Mark Bennet Nov 20 '12 at 16:40
5
Almost all groups are infinite. – Marc van Leeuwen Nov 20 '12 at 17:29
show 3 more comments
This is true. The amount of groups of order at most 2000 (up to isomorphism) was calculated precisely for the first time in 2001 by Besche, Eick and O'Brien. Here is the announcement of their result:
We announce the construction up to isomorphism of the $49 910 529 484$ groups of order at most $2000$.
In table 1 the number of groups of order $1024$ is given, it is $49 487 365 422$. Hence ~99.2% of all groups of order at most $2000$ have order $1024$.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.933295726776123, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/tagged/problem-solving+packing-problem
|
# Tagged Questions
1answer
176 views
### Mathematics of Tetris 2.0
Based on the question The Mathematics of Tetris, I was wondering if it is possible to have a series of tetris blocks that is impossible to clear. For example, getting the string TTTSS.. forces the ...
1answer
102 views
### packing boxes inside boxes
given 2 boxes (in 3-space) determine if one of the boxes resides within the other, or if a third box must be constructed that holds them both? given that a box is defined by its center($x,y,z$), and ...
2answers
138 views
### Pack box inside “smaller” box
Now, there is a puzzle that is quite well known as far as I know, concerning the packing of rectangular boxes in 3-dimensional space. You also have a measurement of a box as the sum of the hight, ...
4answers
8k views
### The Mathematics of Tetris
I am a big fan of the oldschool games and I once noticed that there is a sort parity associated to one and only one Tetris piece, the $\color{purple}{\text{T}}$ piece. This parity is found with no ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9484418034553528, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/181012/are-there-n-th-roots-of-differential-operators?answertab=votes
|
# Are there n-th roots of differential operators?
In analogy to a Dirac operator, it seems to me that formally, the equation
$$\frac{\partial^n}{\partial x^n}f(x,y)=D_yf(x,y)$$
is solved by
$$f(x,y)=\exp{(x \sqrt[n]{D_y})}\ g(y).$$
Is there a theory surronding the $\sqrt[n]{D_y}$-idea?
-
In more generality than you're asking about, positive operators always have unique positive $n^{th}$ roots by the continuous functional calculus. But it seems that most differential operators have some negative and complex eigenvalues, so this may not be any use. – Kevin Carlson Aug 10 '12 at 12:41
6
– Clive Newstead Aug 10 '12 at 12:46
## 2 Answers
The short answer is yes, absolutely, and the theory of such operators is part of microlocal analysis. The basic ingredient is that differential operators can be written as integral operators (in an appropriate generalized sense) via the Fourier transform. E.g.
$$\frac{d}{dx} f(x) = \frac{d}{dx} \int e^{2\pi i kx} \hat{f}(k) dk = \int e^{2 \pi i k x} (2\pi ik) \hat{f}(k) dk.$$ Since $\hat{f}(k) = \int e^{-2\pi i k y} f(y) dy$ (forgive me if I forgot a $2\pi$ somewhere), we have $$\frac{d}{dx} = \int \int (2\pi i k) e^{2 \pi i k(x-y)} dy dk.$$ The right hand side has to be interpreted in a certain distributional sense, but if we are careful such formulae are correct and rigorous. Let's consider your example, $$\frac{\partial^n}{\partial x^n} f(x,y) = D_y f(x,y)$$ and let's assume that $D_y$ is an ordinary polynomial differential operator in $y$ with constant coefficients. Since $D_y$ is a polynomial differential operator with constant coefficients, then $\widehat{D_y g}(k) = P(k) \hat{g}(k)$ for some polynomial $P$. This suggests that whatever $\sqrt[n]{D_y}$ might be, it should satisfy $$\widehat{\sqrt[n]{D_y} g}(k) = \sqrt[n]{P(k)} \hat{g}(k).$$ But using the Fourier transform, we can take this as the definition of $\sqrt[n]{D_y}$: $$\sqrt[n]{D_y} g(y) := \int e^{2\pi i ky} \sqrt[n]{P(k)} \hat{g}(k) dk = \int \int e^{2\pi i k(y-y')} \sqrt[n]{P(k)} g(y') dy' dk.$$ This leads to $$\exp(x \sqrt[n]{D_y}) g(y) = \int \int e^{2\pi i k(y-y')} \exp(x\sqrt[n]{P(k)}) \hat{g}(k) dy' dk.$$ As long as $P(k)$ and $g(y)$ are nice enough that this expression makes sense (and converges in an appropriate sense), this will solve the given PDE.
-
Thanks for the answer. With that definition of $\sqrt[n]{D_y}$, why would $\sqrt[1/2]{D_y}\sqrt[1/2]{D_y}g(y)$ be equal to $D_yg(y)$? You have this integration variable $k$ there and so it seems to me as it's different for all the applications of the operators. (I think maybe its a delta-function integral trick though, which makes all the integral variables them equal.) – Nick Kidman Aug 10 '12 at 13:44
Yes, if you write $\sqrt[1/2]{D_y} \sqrt[1/2]{D_y} g(y)$ as a multiple integral, there will be a term like $\int e^{2\pi i y(k-k')} dy$ which is $\delta(k-k')$ and everything takes care of itself. This is what makes the pseudodifferential calculus work. – Jonathan Aug 10 '12 at 13:50
Since you mentioned the Dirac operator, which works not by analysis but by extending the scalars in a noncommutative way; consider $$\left( \begin{array}{ccc} 0 & 0 & D_y \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{array} \right)^3$$ and generalize.
-
Mhm, so the matrix is the root of the differential operator $D_y$ to the power of three (times unity), but I'm not really sure how this helps me to compute the root of a given differential operator like $D_y$. The problem complexity will hight depend on that specific situation, e.g. like having to construc a clifford algebra in case of the d'Alambert operator. – Nick Kidman Aug 10 '12 at 14:38
2
This technique depends sensitively on both the given differential operator, as well as the space of functions on which it acts. In the case of the Dirac operator, it is not the Laplacian on functions that admits an algebraic square root, but rather the Laplacian on spinors. This also works for vector-valued functions, since we can decompose vectors into products of spinors. These kinds of $n$th roots are completely different than what I described in my answer, but both can be useful depending on the situation. – Jonathan Aug 10 '12 at 18:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9470871686935425, "perplexity_flag": "head"}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.