| url | text | metadata |
|---|---|---|
http://mathoverflow.net/questions/81228/is-there-a-way-to-formalize-reflexive-relations-in-a-relation-algebra
|
## Is there a way to formalize reflexive relations in a relation algebra?
I am doing some research on the algebra of relations (Tarski/Givant axiomatization), and I notice that in a proper relation algebra, an element (relation) is an equivalence element if and only if it is transitive and symmetric. I am wondering as to why "reflexive" is not included. Is there even a possible sentence or set of sentences in the language of relation algebras such that in a proper relation algebra, an element is reflexive if and only if it satisfies it (the sentence or set of sentences)? I am trying to figure out if there is a way to formalize reflexivity in the theory of relation algebras. It would be preferable if there were an equational axiom that could capture reflexivity.
-
1
If you have access to the symbols, $x \cap I = I$. $I$ is the unique element that satisfies $\forall y,I\cdot y=y$. If not, you could try $\forall y\neq 0,y \cdot x \cdot y^T\neq 0$. This is part of the general, sort of category-theoretic trick that an element of a set is just a kind of subset, and so a statement about all elements might really be a statement about all subsets. – Will Sawin Nov 18 2011 at 8:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9326163530349731, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/102497/finding-the-midpoint-of-a-right-angle-triangle
|
# Finding the MidPoint of a Right Angle triangle
I have a rather simple question, not sure if this is in the right SO.
I am attempting to find the angle of a side of a right angle triangle, and also to find the mid point of a right angle triangle no matter the rotation (for a webapp I am making). I am attempting to find the angle where the question mark is, then test that my algorithm/methodology to find the position of the mid point of a shape works (this is for my HTML5 Canvas web app).
Is my maths correct?...
````
// This is half code/mathematics
? = tan-1( opposite/adjacent ); // inverse tan
? = tan-1(50/25);
? = 63.43; // is that correct??
````
My math algorithm below attempts to find the mid point when a shape is rotated. In my webapp I will only ever know the x,y pos & the width & height, but it will also be rotated around the x,y point which means the midpoint can be different. Is it correct?
````
// To find the x,y midpoint all I have to do is have
// the hypotenuse of the triangle & the angle of one
// side(the angle I found above). Is that correct?
// h stands for hypotenuse, a stands for the angle I found above
midX = cos(a)*h
midY = sin(a)*h
// so to take the example above, if I use those formulas I should get an mid x,y value of 25,12.5
midX = cos(63.4)*75;
= 33.58; // shouldn't this be 25?
midY = sin(63.4)*75;
= 67.06; // should be 12.5?
````
-
2
Are you interested in finding the "mid-point" of the triangle that is invariant under rotation, or in finding the midpoints of the sides? – user21436 Jan 26 '12 at 1:46
2
– Henry Jan 26 '12 at 1:51
@Henry thanks for the confirmation. I have added more info about the mid points, its really just x=half width, y=half height. – Jake M Jan 26 '12 at 2:16
In order to provide an answer, can you tell us the dimensions of the first triangle? Also, let me know if I understand: if a triangle has width=(0,0)--(0,1) and height=(0,0)--(0,-1), the midpoint that you want is (0.5,-0.5), am I right? – leo Jan 26 '12 at 4:01
## 1 Answer
$63.43^\circ$ is correct to two decimal places.
Your midpoint seems to be halfway along the hypotenuse (the circumcentre of a right angled triangle), rather than being inside the triangle. The hypotenuse has length $\sqrt{50^2+25^2} \approx 55.90$ so half of that is about $27.95$.
So for a given angle $\theta$, the coordinates are about $(27.95 \sin \theta, 27.95 \cos \theta)$.
Plug in the angle you found earlier and you get $(25,12.5)$ within rounding errors.
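As a quick check of this (a sketch I've added, assuming the legs 25 and 50 from the question and taking the "midpoint" to be the midpoint of the hypotenuse, as above):

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    /* Legs from the question: adjacent = 25, opposite = 50. */
    double adjacent = 25.0, opposite = 50.0;

    double angle    = atan(opposite / adjacent);        /* ~1.107 rad, ~63.43 degrees */
    double half_hyp = 0.5 * hypot(adjacent, opposite);  /* ~27.95, half the hypotenuse */

    /* Midpoint of the hypotenuse, using the formulas from the answer. */
    double midX = half_hyp * sin(angle);
    double midY = half_hyp * cos(angle);

    printf("angle    = %.2f degrees\n", angle * 180.0 / acos(-1.0));
    printf("midpoint = (%.2f, %.2f)\n", midX, midY);    /* prints (25.00, 12.50) */
    return 0;
}
```

Compile with `-lm`; for a rotated shape you would plug the rotated angle into the same two formulas, as described above.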
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8859899044036865, "perplexity_flag": "middle"}
|
http://www.physicsforums.com/showthread.php?p=4258830
|
Physics Forums
## Why do some functions not have Anti derivatives??
As the title says, why are some functions, like ##\sqrt{\cot x}## (root cot x), not integrable?
Quote by Hysteria X: As the title says, why are some functions, like ##\sqrt{\cot x}## (root cot x), not integrable?
Are you sure? See http://www.wolframalpha.com/input/?i...8cot%28x%29%29
Can you give us a true example of what you call "not integrable"?
Are you referring to functions such as sin(x^2) or log(log(x))? They do have antiderivatives, it's just impossible to express them finitely in terms of elementary functions.
Hi Hysteria X!
Whovian's remark is pertinent. If that is what you mean, I suggest having a look at a funny example on page 2 of the paper "Sophomore's Dream Function":
http://www.scribd.com/JJacquelin/documents
I believe the only functions that are impossible to integrate are ones with an infinite number of discontinuities.
Such as the Dirichlet function? Yep, "most" functions (meaning most of the functions people regularly encounter) are integrable. "Most" functions (in the sense that we take a random number from f's codomain to be f(x), where x is an arbitrary number in f's domain, for all x in f's domain) are going to be non-integrable, however.
Yes, the Dirichlet function is a good example (considering the Riemann definition of integration). Of course, there are infinitely many functions of the Dirichlet kind. But we also have to take into account the kind of integration considered: the Dirichlet function is Lebesgue integrable.
Quote by tahayassen: I believe the only functions that are impossible to integrate are ones with an infinite number of discontinuities.
If we are speaking of the Riemann integral, it can in fact handle infinitely many discontinuities, as long as the "infinity" is not too large. To be precise, a function is Riemann integrable if and only if it is bounded and its set of discontinuities has Lebesgue measure zero. All countably infinite sets have measure zero, and some uncountably infinite ones do, too, for example the Cantor set.
The Dirichlet function (i.e. the indicator function of the rational numbers) is discontinuous everywhere, so it fails the test.
On the other hand, the function (also, confusingly, sometimes called the Dirichlet function) defined by
$$f(x) = \begin{cases}0 & \textrm{ if }x \textrm{ is irrational} \\ 1/q & \textrm{ if }x \textrm{ is rational and }x = p/q \textrm{ in lowest terms}\end{cases}$$
is continuous at every irrational x, and discontinuous at every rational x, so its set of discontinuities is countable. Therefore this function is Riemann integrable. (Its integral is zero.)
There are three different questions which are often confused: 1) when does a function have an antiderivative? 2) when does a function have a definite integral? 3) when does a function have an antiderivative which is of the same "type" as the original function? We will restrict attention to bounded functions on finite intervals. In general a function with a definite integral also has an indefinite integral, and that indefinite integral is an antiderivative, but there is no guarantee the antiderivative will be as simple or elementary as the original function. E.g. the answer is yes to both questions 1 and 2 when f is continuous.

Question 3 is harder, and there are many familiar continuous functions f such that the antiderivative F given by the indefinite integral of f is not at all familiar. The point is that even when F(x) exists with F'(x) = f(x), the function F is usually more exotic than f was. The basic example is that the indefinite integral of the rational function 1/(1+x), say on [0,1], is the exotic (non rational) function ln(1+x). If we choose more complicated continuous functions, like e^(x^2), then the theorems above guarantee that the definite and indefinite integrals exist, and that the indefinite integral F(x) is an antiderivative of e^(x^2). But there is no guarantee that F can be expressed in terms of the familiar exponential, trig, and polynomial functions, and indeed it cannot. F is so exotic, it is an essentially "new" function, in the sense that we have not met with it in precalculus. But since e^(x^2) has a power series, it is not hard to write F down also as a power series. So the antiderivative certainly exists, but it is a function that most precalculus students have not run across.

The answers to 1) and 2) also depend on what definition of the integral we use, but essentially whenever the answer to 2) is yes, then so is the answer to 1). Recall a "step function" on [a,b] is a function g such that for some subdivision a = a0 < a1 < a2 < ... < an-1 < an = b, the function g has a constant value ci on each open subinterval (ai-1,ai). Everyone agrees that the integral of such a step function is c1(a1-a0) + c2(a2-a1) + ... + cn(an-an-1). I.e. the graph of a step function is the border of a finite collection of rectangles, and the integral is the sum of the signed areas of those rectangles.

In beginning calculus we usually define f to have a definite (Riemann) integral if for every positive number e>0, we can find two "step functions" g,h (for the same subdivision) such that g ≤ f ≤ h and the integral of the step function (h-g) is less than e. This means we can fit rectangles above and below the graph of f such that the area between the rectangles is as small as you ask. The usual theorem proved (or left to the appendix) says that if f is continuous on [a,b], then the Riemann integral of f exists on [a,b]. It follows that it also exists on every subinterval [a,x], for a < x < b. If F(x) is the integral of f over [a,x], we call F the indefinite integral of f. If f is continuous, then at every x in [a,b], F has a derivative at x and F'(x) = f(x).

Riemann proved that not only continuous functions f have integrals, but also all bounded functions f which are continuous "almost everywhere" or "continuous a.e." have (Riemann) integrals. A function is continuous almost everywhere if, for every e>0, the set of discontinuities of f can be covered by a sequence of open intervals of total length < e.
So the answer to 2) is also yes for f which are continuous a.e. (f is always assumed bounded). If f is continuous a.e. then f has a definite integral and thus also an indefinite integral F(x), and we expect the indefinite integral to be an antiderivative of f. This is true almost everywhere, i.e. F is differentiable with F'(x) = f(x) at every x where f is continuous, i.e. almost everywhere. Of course there is still no guarantee that F is at all a familiar function, even though f is a familiar function. One thing we can say is that the indefinite integral F is continuous, and even Lipschitz continuous, i.e. for some constant B we have |F(x)-F(y)| ≤ B|x-y| for all x,y in [a,b]. Now we broaden our definition of an antiderivative of f to be any Lipschitz function F which is differentiable a.e., and with derivative equal to f a.e. Then anytime question 2) has answer yes in the sense of Riemann integration, then also question 1) has answer yes in the new broader sense.

As to question 3), basically there is no hope of enhancing our precalculus course by teaching more and more types of more and more exotic functions so that we will know the antiderivatives of all known functions. No matter how many classes of specific functions we learn, there will always be antiderivatives of these functions which we have never seen before, and which cannot be expressed in terms of ones we have seen. I.e. I am not an expert here, but it seems that although most elementary collections of specific functions are "closed" under differentiation, they tend not to be closed under antidifferentiation. (Check out some articles on "integration in elementary terms" for more details.)

Now one can define the integral of a much larger (abstract) class of bounded functions so that questions 2) and also 1) have answer yes, by changing slightly the definition of the integral. Instead of considering only functions f which can be approximated from above and below by step functions, as in Riemann's integral, consider any bounded function f on [a,b] which is a pointwise limit a.e. of a bounded sequence of step functions. We just remove the requirement that the step functions have to come at f from above and below, and we let them approach from any direction. Then one can prove the limit of the integrals of these step functions always converges to the same limit, independent of the choice of the approximating sequence of step functions, and we can define that limit to be the (Lebesgue) integral of f. Then for a bounded function f on [a,b] which is the pointwise limit a.e. of a bounded sequence of step functions, f has both a definite integral and an indefinite integral F, and again F is a Lipschitz function which has a derivative a.e. and whose derivative equals f a.e. on [a,b]. Conversely, any Lipschitz function F at all on [a,b] is differentiable a.e. on [a,b], and if we define f(x) = F'(x) at such x, and set f(x) = 0 elsewhere, then f has a Lebesgue integral on [a,b] which equals F(b)-F(a). I hope this is correct, as this subject is one of the hardest for me.

Considering the examples in the previous post, if f is the characteristic function of the rationals, with value 1 at each rational and value zero at each non rational, then f is the pointwise limit of the constant sequence of identically zero step functions, except at the rationals, i.e. almost everywhere. Hence the Lebesgue integral is zero. Hence the indefinite integral is also zero, with derivative equal to f at every irrational, i.e. almost everywhere.
The same facts hold for the second, more interesting Dirichlet function, where either the Riemann or the Lebesgue integral gives the same result.
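As a concrete illustration of the power-series remark above (a worked addition, not from the original thread): integrating the series for ##e^{x^2}## term by term gives an explicit antiderivative, even though it is not an elementary function: $$e^{x^2} = \sum_{n=0}^{\infty}\frac{x^{2n}}{n!} \quad\Longrightarrow\quad \int_0^x e^{t^2}\,dt = \sum_{n=0}^{\infty}\frac{x^{2n+1}}{n!\,(2n+1)} = x + \frac{x^3}{3} + \frac{x^5}{10} + \frac{x^7}{42} + \cdots$$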
After all is said and done, I realize you may be primarily interested in question 3) above. Hence I refer you to this article: http://www.claymath.org/programs/out...s05/Conrad.pdf
Since you can always keep on taking the anti-derivative of an anti-derivative and so on and since functions continuously become more exotic, does that mean that there is an infinite number of new functions that we have yet to discover? If we keep taking the anti-derivative, will we ever loop back to something simple?
I don't think so. But it is a little confusing to me, since we can get a hold on the situation by using abstract properties of functions rather than trying to write down specific examples. I.e. abstractly, antiderivatives are more special, and hence less exotic in some sense, than the functions we integrate to get them. I.e. in some sense a piecewise linear function, which is the antiderivative of a step function, is more exotic because it is not piecewise constant, but less exotic because it is continuous. In general an antiderivative is Lipschitz continuous, in particular continuous, whereas integrable functions do not need to be continuous anywhere. So antiderivatives have better properties than integrable functions. But as far as specific functions go, it seems to me that taking antiderivatives is generally a way to get new functions from old. But I am not an expert on question 3). It's 1 and 2 that I have been thinking about.
Quote by tahayassen If we keep taking the anti-derivative, will we ever loop back to something simple?
Sometimes, yes. Maybe you haven't learned about derivatives of exponential and trig functions yet!
As I read Conrad's paper, the existing theory only shows that many functions which are elementary in the usual sense of precalculus do not have elementary antiderivatives. So my assumption that the situation continues as you add more new functions is entirely speculative.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9253213405609131, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/tags/proton/hot
|
Tag Info
Hot answers tagged proton
14
Protons' repulsion within a nucleus
There is an electrostatic repulsion between the protons in the nucleus. However, there is also an attraction due to another kind of force besides electromagnetism, namely the so-called "strong nuclear interaction". The strong nuclear interaction ultimately boils down to the forces between the "colorful" quarks inside the protons - and neutrons. It is ...
14
What is the difference between a neutron and hydrogen?
A neutron is not a proton and an electron lumped together (as your question seems to suggest you think) A hydrogen atom is a bound state of an electron and a proton (bound by the electromagnetic force) whereas a neutron is a bound state of three quarks (bound by the strong force). You might be tempted to think that a neutron is also a bound state of an ...
8
What's with the very slightly larger mass of the neutron compared to the proton?
Masses and couplings between quarks are free parameters in the standard model, so there is no real explanation for that fact. About the measurement: you can have a look at this wikipedia article about Penning traps, which are devices used for precision measurements on nuclei. Through the cyclotron frequency (Larmor factor) we can obtain the mass of the ...
8
What happens if up quarks are replaced by down quarks and down quarks are replaced by up quarks?
The proton ($uud$) turns into a neutron ($ddu$). Up and down quarks don't have equal charges; the up is $+\frac{2}{3}e$ and the down is $-\frac{1}{3}e$. By the way, such an operation has a name - isospin symmetry transformation - corresponding to an approximate SU(2) symmetry that makes the proton and neutron have almost similar masses.
8
What happens to the electron companions of cosmic ray protons?
Great question. The electric field creates such a strong force that it would be very hard to move large amounts of just one type of charge. So astrophysical systems do generally eject equal numbers of protons and electrons. In particular, the solar wind is electrically neutral. So these cosmic rays are created in very nearly equal numbers, but by the ...
7
What color would a proton be if it were visible to the human eye?
Blue. The proton is way smaller than a wavelength of visible light. But blue light has a shorter wavelength than any other visible color, red light is longer wavelength, blue is shorter, other colors in the middle somewhere. White light is a mixture of all the colors of light, all the wavelengths in the visible range. If you illuminate the proton ...
6
Is there a direct relationship between an isotope's neutron count and radioactivity?
As @dmckee says, the problem is complicated. It is complicated because it is not a solution of a potential describing one force, but a balance between electromagnetic forces and the strong force that is keeping the quarks within the nucleons. (In the nucleus the strong force is like a type of van der Waals potential, a higher order interaction, overflowing ...
6
How much would the LHC beam be attenuated by the atmosphere?
Hmmm...some back-of-the-envelope calculations: The depth of the air column at sea level is $14\text{ lbs/in}^2 = 2 \times 10^5\text{ g/cm}^2$, so neglecting space-charge effects and assuming minimum ionization the whole way we get about $4 \times 10^5\text{ MeV} = 0.4\text{ TeV}$ energy loss. We are actually above minimum ionization, so we can multiply that ...
6
What is the difference between a neutron and hydrogen?
A neutron is a fermion, a hydrogen atom is a boson. This is related to the fact that a neutron decays into three fermions rather than two which is what you seem to think. A neutron is composed of three valence quarks, $u,d,d$, while a hydrogen atom is made out of $u,u,d,e^-$. The internal size of a neutron is about $10^{-14}$ meters while the internal size ...
6
What's with the very slightly larger mass of the neutron compared to the proton?
A proton is made of two up quarks and a down quark, whereas a neutron is made of two down quarks and an up quark. The quark masses contribute very little of the actual mass of the proton and neutron, which mostly arises from energy associated with the strong interactions among the quarks. Still, they do contribute a small fraction, and the down quark is ...
6
Proton therapy in cancer treatment
The goal of such a treatment is to induce damage in the cells of the tumor by means of ionizing radiation. This radiation can be X-rays (photons), electrons, protons or things like carbon ions. The problem is: if you try to irradiate a tumor, you first have to go through normal tissue, and the risk is that you damage it too. Photons will transfer energy ...
6
Is there something like Hawking radiation that makes protons emit component quarks?
Such a process is forbidden by energy conservation: the proton is the lightest baryon (that is, the lightest bound state of three quarks). Hawking radiation finds its energy by reducing the energy of the black hole, but there is no lighter baryon state for the proton to go to. Baryon number violating proton decay processes are theorized, but have not been ...
5
Is it possible to destroy proton in proton-proton collision?
Most proton-proton collisions will be elastic: throw in two protons and two protons will come out, deflected at some angle. But the more interesting collisions are those where individual constituents of the proton (quarks, antiquarks, or gluons) interact. For instance, all the interesting high-energy proton-proton collisions at the LHC are really collisions ...
5
Addition of Angular Momementa in deeply bound situations, proton spin crisis
Consider two independent systems, two distant atoms, for example. Each atom has its own angular momentum $J_1$ and $J_2$. If these atoms are different and distant (non interacting), then the QM variables are separated, the total wave function becomes a product of the two atomic wave functions, and the total system angular momentum is a sum of the two. The ...
5
Why is Neutron Heavier than Proton?
The neutron is made of two down quarks and an up quark; the proton of two up quarks and a down quark. This leads to two effects that differentiate their masses. One is that the up and down quark themselves have different masses. The other is that the proton is charged, and so quantum corrections involving virtual photons affect its mass. The details are ...
4
What is the difference between a neutron and hydrogen?
The neutron decays into a proton, an electron and an antineutrino. So even the end components are different from hydrogen, which is just a proton with an electron orbiting around it. The binding forces are also different. The proton and the electron are bound by the electromagnetic force; the neutron is bound by the strong force to the rest of the nucleons in a nucleus. ...
4
Why does amount of protons define how matter is?
Your day to day experience of the material world is governed by chemistry. This is at some level the science of atoms and groups of atoms. Things like hardness, colour, toxicity and others are all largely determined by the interaction of atoms. In particular the outer coating of atoms, the electrons. Obviously the details of why element or compound A is ...
4
Is there a direct relationship between an isotope's neutron count and radioactivity?
Thanks to @dmckee, and the link he suggested: interactive table of the isotopes. Looking at that table, it seems to me that there is not a reliably direct relationship between number of neutrons and radioactivity. Using Calcium (Ca) as an example (assuming I'm reading the chart correctly): Ca-40: stable Ca-41: radioactive (with a relatively long ...
4
Small confusion related to leaving of electrons from atoms
Not to worry :). The electrons that come out with rubbing are electrons that are loosely bound to the material, from the last energy level of the atoms. To get a second one out from the same atom would take a lot more energy, so it usually does not happen. Not to forget that there are a large number of atoms ( about 10^23/mole) making up any matter so ...
4
Why do electron and proton have the same but opposite electric charge?
Because a proton can decay to a positron. It is an experimental fact that the proton and positron charges are very close. To conclude that they are exactly equal requires an argument. If a proton could theoretically decay to a positron and neutral stuff, this is enough. In QED, charge quantization is equivalent to the statement that the gauge group is ...
4
Why do electron and proton have the same but opposite electric charge?
On the level of QED and above, the equality of the charges has no theoretical explanation. But it is extremely well established experimentally, as even small deviations would add up to huge amounts of electricity in bulk matter. On the level of the standard model, the value of the charges of the up and down quark comes from simple arithmetic from those of ...
4
What does a subatomic charge actually mean?
When physicists say that a particle has electric charge, they mean that it is either a source or sink for electric fields, and that such a particle experiences a force when an electric field is applied to them. In a sense, a single pair of charged particles are a battery, if you arrange them correctly and can figure out how to get them to do useful work for ...
4
Empirical bound on sum of electron and proton charge
In $\beta$ decay a neutron turns into a proton, an electron and an electron antineutrino. So if the proton and electron charges were not the same, either the neutron must originally have carried a net charge or the antineutrino must carry a charge. For the neutrino, current limits are reported by the particle data group as less than 10$^{-15}$ of the electron ...
3
magnetic moment of proton
No spin measurement of proton can give a value more or less than $\hbar/2$. But what do we mean when we say that spin of proton is $\hbar/2$ ? Spin is a 'vector' quantity (at least this is what it is classically). So one should also specify its direction. The thing is that in this case direction doesn't matter much. If you think of proton as some sphere and ...
3
'Density' of a proton
A proton is a bound state of three quarks. The quarks themselves are (as far as we know) pointlike, but because you have the three of them bound together the proton has a finite size. It doesn't have a sharp edge any more than an atom has a sharp edge, but an edge is conventionally defined at a radius of 0.8768 femtometres. Protons are spherical in the same ...
3
Is there something like Hawking radiation that makes protons emit component quarks?
dmckee's answer is certainly a reason why quarks can't just tunnel out of protons. However, even if they could tunnel out, the process would be different to that of Hawking radiation. HR arises because the vacuum states of two frames are different. The observer freely falling into the black hole sees no radiation, whereas the observer held stationary ...
2
Is it possible to destroy proton in proton-proton collision?
If changing the protons into something else counts as "destroying" it, then yes, this is what keeps stars burning. In particular, two protons can interact and form a deuteron, a positron and an antineutrino and some energy.
2
How much would the LHC beam be attenuated by the atmosphere?
Extremely unlikely. The beam would diffuse very rapidly. The LHC beam is condensed into a very small location by magnetic fields, and its energy is maintained through the use of an RF field which replenishes the energy each time the beam circulates, while orbiting in a near perfect vacuum. In the absence of such fields the beam would first repel itself ...
2
Why do electron and proton have the same but opposite electric charge?
The answer is "because". It is an experimental fact. It is among the first data that were gathered which supported the atomic theory. If they were not the same the atoms would not be neutral, there would always be left over charge and the chemistry and atomic physics data would be different, if there were chemistry and atoms at all. This fact together ...
2
What is a proton-rich atom?
Well, I will try to give you an intuitive understanding. Consider there are two forces acting on the nucleons, the strong force (attractive, short ranged and acting between all the nucleons) and the electromagnetic force (repulsive, long ranged and acting only between protons). Now if you want to keep your nucleus stable, i.e. attractive forces should be ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9388562440872192, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/61937/how-can-i-prove-that-all-rational-numbers-are-either-terminally-real-or-repeatin/61942
|
# How can I prove that all rational numbers are either terminally real or repeating real numbers?
I am trying to figure out how to prove that every rational number has either a terminating or a repeating decimal expansion, but I am having great difficulty doing so.
Any help will be greatly appreciated.
Thanks!
-
1
Next time, please make the body of your post self-contained. Don't just put the question you're asking in the title. – J. M. Sep 5 '11 at 1:59
## 4 Answers
HINT$\$ Consider what it means for a real $\rm\ 0\: < \: \alpha\: < 1\$ to have a periodic decimal expansion:
$\rm\qquad\qquad\qquad\qquad\ \ \: \alpha\ =\ 0\:.a\:\overline{c}\ =\ 0\:.a_1a_2\cdots a_n\:\overline{c_1c_2\cdots c_k}\ \$ in radix $\rm\:10\:$
$\rm\qquad\qquad\iff\quad \beta\ :=\ 10^n\: \alpha - a\ =\ 0\:.\overline{c_1c_2\cdots c_k}$
$\rm\qquad\qquad\iff\quad 10^k\: \beta\ =\ c + \beta$
$\rm\qquad\qquad\iff\quad (10^k-1)\ \beta\ =\ c$
$\rm\qquad\qquad\iff\quad (10^k-1)\ 10^n\: \alpha\ \in\ \mathbb Z$
Thus to show that a rational $\rm\:\alpha\:$ has such a periodic expansion, it suffices to find $\rm\:k,n\:$ as above, i.e. so that $\rm\ (10^k-1)\ 10^n\:$ serves as a denominator for $\rm\:\alpha\:.\:$ Put $\rm\:\alpha\: = a/b,\:$ and $\rm\: b = 2^i\:5^j\ d,\:$ where $\rm\:2,5\:\nmid d\:.\:$ Choosing $\rm\:n\: >\: i,\ j\:$ ensures that $\rm\:10^n\:\alpha\:$ has no factors of $\rm\:2\:$ or $\rm\:5\:$ in its denominator. Hence it remains to find some $\rm\:k\:$ such that $\rm\:10^k-1\:$ will cancel the remaining factor of $\rm\:d\:$ in the denominator, i.e. such that $\rm\:d\:|\:10^k-1\:,\:$ or $\rm\:10^k\equiv 1\pmod{d}\:.\:$ Since $\rm\:10\:$ is coprime to $\rm\:d\:,\:$ by the Euler-Fermat theorem we may choose $\rm\:k = \phi(d)\:,\:$ which completes the proof sketch. For the converse, see this answer.
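To make the hint concrete, here is a worked instance (my own addition, using the fraction $5/22$ that appears in another answer below): for $\alpha = 5/22$ we have $b = 22 = 2\cdot 11$, so $i = 1$, $j = 0$, $d = 11$. Choose $n = 2 > i,j$, and since $10^2 \equiv 1 \pmod{11}$ we may take $k = 2$ (the choice $k = \phi(11) = 10$ from the proof sketch also works; it simply need not be minimal). Then $$(10^2-1)\,10^2\cdot\frac{5}{22}\ =\ \frac{99\cdot 500}{22}\ =\ 2250\ \in\ \mathbb Z,$$ consistent with the periodic expansion $5/22 = 0.22\overline{72} = 0.2\overline{27}$ having period $k = 2$.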
-
+1 Exactly the kind of answer I was going to write. – Sasha Sep 5 '11 at 1:24
First, it’s clear that you need only look at proper fractions. Now look at the long division algorithm for calculating the decimal expansion of a rational number. At each stage you get a remainder. What happens if you get a remainder of $0$ at some stage? If you don’t ever get a remainder of $0$, can you keep getting different remainders forever, or must a remainder repeat at some point? What happens if you do get a repeated remainder?
-
Recall that in long division, one gets a remainder at each step: $$\begin{array}{cccccccccc} & & & 0 & . & 2 & 2 & 7 & 2 \\ \hline 22 & ) & 5&.&0&0&0&0&0 \\ & & 4 & & 4 \\ & & & & 6 & 0 & \leftarrow \\ & & & & 4 & 4 \\ & & & & 1 & 6 & 0 \\ & & & & 1 & 5 & 4 \\ & & & & & & 6 & 0 & \leftarrow & \text{repeating}\\ \end{array}$$ 6 is a remainder. The next remainder is 16. Then the next is 6. This brings us back to where we were at an earlier step: dividing 60 by 22. We have to get the same answer we got the previous time. Hence we have repetition of "27". The answer is $0.2272727\overline{27}\ldots$, where "27" keeps repeating.
The question then is: Why must we always return to a remainder that we saw earlier? The answer is that the only possible remainders are $0, 1, 2, 3, \ldots, 21$ (if $22$ is what we're dividing by), and there are only finitely many. If we get 0, the process terminates. If we never get 0, we have only 21 possibilities, so we can go at most 21 steps without seeing one that we've seen before. As soon as we get one that we've seen before, the repetition begins.
A related question worth asking is how you know that every repeating decimal corresponds to a rational number. E.g., if you're handed $0.2272727\overline{27}\ldots$ with "27" repeating forever, how do you figure out that it's exactly $5/22$? There's a simple algorithm for that too.
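For reference, here is that simple algorithm worked on this example (my own addition): shift the decimal point past the non-repeating part, then past one full period, and subtract: $$x = 0.2\overline{27},\qquad 10x = 2.\overline{27},\qquad 1000x = 227.\overline{27},\qquad 1000x - 10x = 225\ \Longrightarrow\ x = \frac{225}{990} = \frac{5}{22}.$$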
-
Remainders must be less than the divisor. So, for example, if you divide the numerator by the denominator, n, the only remainders can lie between 0 (in which case the division ends) and n-1. What happens if and when you run out of remainders (i.e. you've used all of the numbers between 1 and n-1)? You will be facing division of some remainder, r, by n again. You've been there and done that so you know what the next remainder (hence, the next dividend) will be. And so on...
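A minimal C sketch of this remainder-tracking argument (my own illustration; the hard-coded $5/22$ is just the example used in the earlier answers):

```c
#include <stdio.h>

/* Long-divide a/b (with 0 < a < b < 1000) and report where the decimal
   expansion starts repeating, by remembering the step at which each
   remainder first appeared.  Remainder 0 means the expansion terminates;
   otherwise some remainder must recur within b-1 steps. */
int main(void) {
    int a = 5, b = 22;       /* the 5/22 example from the answers above */
    int seen[1000] = {0};    /* seen[r] = 1-based step at which remainder r appeared */
    int digits[1000];
    int r = a, step = 0;

    while (r != 0 && !seen[r] && step < b) {
        seen[r] = step + 1;
        r *= 10;
        digits[step++] = r / b;  /* next decimal digit */
        r %= b;                  /* next remainder */
    }

    printf("0.");
    for (int i = 0; i < step; ++i) printf("%d", digits[i]);
    if (r == 0)
        printf("  (terminates)\n");
    else
        printf("  (repeats from digit %d)\n", seen[r]);
    return 0;
}
```

Running it prints `0.227  (repeats from digit 2)`, i.e. the block "27" repeats, matching the worked long division above.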
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9532631039619446, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/35801/classical-mechanics-one-dimensional-motion/39257
|
# Classical mechanics. One-dimensional motion
Here is the task: how can one solve the equation $$m\ddot {x} + ax = F(t), \qquad x(0) = \dot x (0) = 0$$ in quadratures, using two different methods?
I tried to create a system of equations
$$\begin{matrix} \dot v = F(t) - w_{0}^{2}x \\ \dot x = v \\ \end{matrix},$$
but I don't know what to do next without using some vector $\varepsilon = v + \alpha x$. So I know only one way. Can you help me with the second way?
-
Integrating, i.e. taking the quadrature, twice... I believe you are expected to at least show what you have tried, and where you are stuck, when asking homework questions here. – Jaime Sep 6 '12 at 20:27
Ok. I solved the system which I wrote in the question by introducing the vector $\varepsilon = \alpha x + \beta v$. I can write the solution for you. But I need to solve it without that method. – PhysiXxx Sep 6 '12 at 20:37
Because it's not an easy method. – PhysiXxx Sep 6 '12 at 20:58
1
– Jaime Sep 6 '12 at 21:42
Oh, sorry, my equation is $$m\ddot x + ax = F(t).$$ – PhysiXxx Sep 6 '12 at 21:52
## 3 Answers
You might not like this answer, but you want to solve for $x\left(t\right)$ in $$m \ddot{x} + \omega^2 x = F\left(t\right),$$ with $\dot{x}\left(0\right) = x\left(0\right) = 0$. A Laplace transform gives you $$x\left(t\right) = \mathcal{L}^{-1}\left\{\frac{\mathcal{L} \left[F\left(t\right)\right] \left(s\right)}{m s^2 + \omega^2}\right\},$$ where we assume $\mathcal{L} \left[F\left(t\right)\right]\left(s\right)$ exists and $F\left(t\right) = 0$ for $t < 0$. Here, $$\mathcal{L} = \int_{0}^{\infty} dt \ e^{-s t},$$ $$\mathcal{L}^{-1} = \frac{1}{2 \pi i}\int_{\mathcal{L}} ds \ e^{s t},$$ and $$\int_{\mathcal{L}} ds$$ denotes integration in the complex $s$ plane along the Laplace contour $\mathcal{L}$. The integral is then evaluated using closure and residue techniques.
-
Unfortunately, I don't know math well presently. – PhysiXxx Sep 7 '12 at 15:12
First solve $$m\ddot {x} + ax = \delta(t-s)$$ with boundary conditions $$x(-\infty) = 0$$ $$\dot x(-\infty) = 0$$
The solution will be a sine wave starting at $t=s$ (zero before that).
Let's call this solution $G(t;s)$.
Then integrate against the forcing to find $$x(t) = \int_0^t F(s)\,G(t;s)\, ds$$
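For completeness, here is the explicit form this gives (a standard result; writing $\omega = \sqrt{a/m}$ and making the $t$-dependence of the delta response explicit): $$G(t;s) = \frac{1}{m\omega}\,\sin\!\big(\omega(t-s)\big)\,\theta(t-s), \qquad x(t) = \frac{1}{m\omega}\int_0^t F(s)\,\sin\!\big(\omega(t-s)\big)\,ds,$$ which indeed satisfies $x(0) = \dot x(0) = 0$.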
-
A lazy way to solve this problem would be to use the same method, but with Matrix calculation...it's basically the same.
Or you can solve $$m\ddot {x} + ax = 0$$
using classical methods, then use the variation of parameters to get the particular solution.
Other solutions would be to use an approximation method, but it could be too complicated.
What type of equation is this? My guess would be an excited spring or some kind of regulator.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 13, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9331020712852478, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/102547/is-there-a-bijective-function-which-maps-an-integer-vector-onto-a-single-number
|
# Is there a bijective function which maps an integer vector onto a single number?
I am looking for a function that maps an integer vector onto a single number. Actually it is an algorithmic problem I am having. But there must be such functions around, especially when thinking of cryptography.
-
Do you want to map an integer vector to a rational (even Real) number or an integer number? – Bardia Jan 26 '12 at 12:47
I want to map an integer vector to an integer number. – steffi Jan 26 '12 at 15:43
## 5 Answers
Assuming the length is fixed, you can just interleave the bits of the numbers.
For example, if the vector is $(1,2,3)$ we have $1 = 01_2$, $2=10_2$, $3=11_2$, so the result is $53 = 110101_2$. The first bit of $53$ is the first bit of $1$, the second bit is the first bit of $2$, ..., the sixth bit of $53$ is the second bit of $3$. This works essentially the same way if the values are real. A similar construction is used to prove the cardinal equation $\mathfrak{c}^2 = \mathfrak{c}$.
To encode a 3-vector this way in C:
```c
#include <stdio.h>
#include <stdint.h>

/* Interleave the bits of three 8-bit values: bit i of a, b and c ends up
   at bit positions 3i, 3i+1 and 3i+2 of the result, respectively. */
uint32_t zip(uint8_t a, uint8_t b, uint8_t c) {
    int i;
    uint32_t d = 0;
    for (i = 0; i < 8; ++i) {
        d |= (a & (1 << i)) << (0 + (i << 1));  /* bit i of a -> position 3i     */
        d |= (b & (1 << i)) << (1 + (i << 1));  /* bit i of b -> position 3i + 1 */
        d |= (c & (1 << i)) << (2 + (i << 1));  /* bit i of c -> position 3i + 2 */
    }
    return d;
}

int main(int argc, char **argv) {
    printf("%u\n", zip(1, 2, 3));  /* prints 53, as in the example above */
    return 0;
}
```
-
Thanks! The point is that I am actually working on a compression problem. So, by using this transformation, unfortunately I do not save any bits... – steffi Jan 26 '12 at 9:44
5
You can't save any bits if you want a bijection unless you have assumptions about the complexity of the input vector. If you just want to uniquely identify vectors for practical purposes, use a hash function. – Dan Brumleve Jan 26 '12 at 9:57
Here's something that almost works: map the integer vector $(a_1,a_2,\dots,a_n)$ to the (rational) number $2^{a_1}3^{a_2}\times\cdots\times p_n^{a_n}$, where the numbers $2,3,\dots,p_n$ are the first $n$ primes. The Unique Factorization Theorem almost guarantees that no two distinct vectors go to the same number.
The problem is with zeros: $(-5,6,0)$ and $(-5,6)$ both go to $729/32$. We can fix it by letting $f$ be any 1-1 map from the integers to the nonzero integers, for example, $f(x)=x+1$ if $x\ge0$, $f(x)=x$ if $x\lt0$, and then map $(a_1,a_2,\dots,a_n)$ to $2^{f(a_1)}3^{f(a_2)}\times\cdots\times p_n^{f(a_n)}$. This map answers the question.
EDIT: After I posted what's above, OP commented elsewhere that the image is to be an integer. This can be achieved by letting $f$ be any 1-1 map from the integers to the positive integers, for example, $f(n)=2n$ if $n\gt0$, $f(n)=1-2n$ if $n\le0$. Now $(-5,6,0)$ maps to $2^{11}3^{12}5$, and $(-5,6)$ maps to $2^{11}3^{12}$. The map is not quite a bijection, e.g., nothing maps to $3$.
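A small C sketch of this encoding for a 3-vector (my own illustration, added here; it only handles small entries, since the product overflows 64 bits very quickly):

```c
#include <stdio.h>

/* Prime-power encoding from the answer above: (a1,a2,a3) maps to
   2^f(a1) * 3^f(a2) * 5^f(a3), where f sends the integers one-to-one
   into the positive integers. */
static unsigned f(int n) { return (unsigned)(n > 0 ? 2 * n : 1 - 2 * n); }

static unsigned long long ipow(unsigned long long base, unsigned exp) {
    unsigned long long r = 1;
    while (exp--) r *= base;
    return r;
}

int main(void) {
    int v[3] = {-5, 6, 0};
    unsigned long long code = ipow(2, f(v[0])) * ipow(3, f(v[1])) * ipow(5, f(v[2]));
    printf("(%d,%d,%d) -> %llu\n", v[0], v[1], v[2], code);  /* 2^11 * 3^12 * 5 */
    return 0;
}
```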
-
You could use an adaptation of Cantor's method of counting the rational numbers. Take a two-dimensional vector instead of the rational numbers, include the numbers with alternating sign to get the negatives, and you are there. For higher dimensions, you could use the same method recursively.
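Spelled out, one standard way to do this (my own summary of the well-known construction, not part of the answer above) uses the Cantor pairing function $$\pi(x,y) = \frac{(x+y)(x+y+1)}{2} + y,$$ a bijection $\mathbb N^2 \to \mathbb N$. Composing with a bijection $g:\mathbb Z \to \mathbb N$ such as $g(n) = 2n$ for $n \ge 0$ and $g(n) = -2n-1$ for $n < 0$ handles the signs, and nesting, e.g. $\pi(\pi(x,y),z)$, handles higher dimensions.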
-
If the vector contains fixed-length integers, then you can compute a digest for it, say using MD5, SHA1, SHA2, etc. This is certainly not bijective (because it's not injective), but it is unlikely that there will be any collisions.
-
Actually there is an easy way to map a vector of integers to a single integer in a bijective way. First treat the numbers as strings, say 0, 1, 00, 01, 10, 11, ... for 0, 1, 2, 3, 4, 5, ...; then bijectively combine the strings a pair at a time using a method like http://bijective.dogma.net/compres8.htm The method on that page is for bytes, but it is easy to make it work for bits; if you can't see how to do that, you can change each string bijectively to bytes. For example, say you want to map a vector of three integers to a single integer using the method from that page: convert each of the three integers to a string and then to bytes. DSC bijectively adds a number from 0 to N-1 to any file. For the first number let it be ZERO; then, when you add the second number (which you have as bytes), you take N from the length of the previous result, which has to be at least one since it is non-zero. Then for the third integer you combine as before, where N is the length of the previous result. It's that easy. Now that you have this final file, you can bijectively convert it to a string and then that string to a number.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.906857967376709, "perplexity_flag": "head"}
|
http://openwetware.org/index.php?title=User:Pranav_Rathi/Notebook/OT/2010/08/18/CrystaLaser_specifications&diff=655040&oldid=655027
|
# User:Pranav Rathi/Notebook/OT/2010/08/18/CrystaLaser specifications
## Specifications
We are expecting our laser any time now. To get to know the laser better, we are looking forward to investigating a number of things. These specifications are already given by the maker, but we will verify them.
### Polarization
The laser is TM (transverse magnetic), or P, or horizontally linearly polarized (in the specimen plane the laser is still TM polarized when looking into the sample plane from the front of the microscope). We investigated this in two ways: 1) by putting a glass interface at Brewster's angle and measuring the reflected and transmitted power; at this angle all the light is transmitted because the laser is P-polarized; 2) by putting in a polarizing beam splitter, which uses birefringence to separate the two polarizations; P is reflected and S is transmitted, and by measuring and comparing the powers the desired polarization is determined. We performed the experiment at 1.8 W, where P is 1.77 W and S is less than .03 W*
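For reference, the Brewster angle used in the first test is (assuming an ordinary glass interface with n ≈ 1.5; the exact index of our optic may differ):
$\theta_B = \arctan(n_2/n_1) \approx \arctan(1.5) \approx 56.3^\circ$
At this incidence angle the reflection coefficient for P-polarized light is (ideally) zero, so a purely P-polarized beam is fully transmitted.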
### Beam waist at the output window
We used the knife edge method (this method determines the beam waist (not the beam diameter) directly); we measured the input power of 1.86 W at the 86.5% and 13.5% points at the laser head (15 mm). This gave us a beam waist (Wo) of .82 mm (beam diameter = 1.64 mm).
### Possible power fluctuations if any
The power supply temperature is really critical. The laser starts at roughly 1.8 W, but if the temperature of the power supply is controlled very well it reaches 2 W in a few minutes and stays there. It's really stupid of the manufacturer that they do not have any fans inside, so we put two chopper fans on top of it to cool it and keep it cool. If no fans are used, then within an hour the power supply reaches above 50 degrees Celsius, and then not only does the laser output fall but the power supply also turns itself off every few minutes.
### Mode Profile
Higher order modes had been a serious problem in our old laser, which compelled us to buy this one. The success of our experiments depends on the requirement of a TEM00 profile; the efficiency and stiffness of the trap are functions of the profile. So mode profiling is critical; we want our laser to be in TEM00. I am not going to discuss the technique of mode profiling; it can be learned from this link: [1] [2].
As a result it’s confirmed that this laser is TEM00 mode. Check out the pics:
A LabView program is written to show a 3D Gaussian profile, it also contains a MatLab code[3].
## Specs by the Manufacturer
All the laser specs and the manual are in the document: [Specs[4]]
## Beam Profile
The original beam waist of the laser is .2 mm, but since we requested the 4x beam expansion option, the resultant beam waist is .84 mm at the output aperture of the laser. By the nature of a Gaussian beam it still converges in the far field, but we do not know where, so there is a beam waist somewhere in the far field. There are two ways to solve the problem. One is to use the Gaussian formula, but for that we need the beam parameters before the expansion optics and information about the expansion optics, which we do not have. So the only way left is to experimentally measure the beam waist along the z-axis at many points and locate the minimum. Once this is found we put the AOM there. So the experimental data gives us the beam waist and its distance from the laser in the z-direction. We use the scanning knife edge method to measure the beam waist.
### Method
• In this method we used a knife blade on a translation stage with 10 micron accuracy. The blade is moved transverse to the beam and the power of the uneclipsed portion is recorded with a power meter. The cross section of a Gaussian beam is given by:
$I(r)=I_0 exp(\frac {-2r^2}{w_L^2})$
Where I(r) is the intensity as a function of radius (distance in the transverse direction), I0 is the input intensity at r = 0, and wL is the beam radius. Here the beam radius is defined as the radius where the intensity is reduced to 1/e^2 of the value at r = 0. This can be seen by letting r = wL.
(Figures: knife-edge setup and measured power profile)
The experimental data is obtained by gradually moving the blade across from point A to B and recording the power. Without going into the math, the intensity at these points can be obtained. For starting point A:
$\mathbf{I_A(r=0)}=I_0 exp(-2)=I_0*.865$
For stopping point B
$\mathbf{I_B}=I_0 *(1-.865)$
By measuring this distance the beam waist can be measured and beam diameter is just twice of it:
$\mathbf{\omega_0}=r_{.135}-r_{.865}$
this is the method we used below.
• Beam waist can also be measured the same way in terms of the power. The power transmitted by a partially occluding knife edge:
$\mathbf {p(r)}=\frac{P_0}{\omega_0} \sqrt{\frac{2}{\pi}} \int\limits_r^\infty exp(-\frac{2r^2}{\omega^2}) dr$
After integrating for transmitted power:
$\mathbf {p(r)}=\frac{P_0}{2}{erfc}(2^{1/2}\frac{r}{\omega_0})$
Now the two positions where the transmitted power is 90% and 10% are measured, and their values are substituted here:
$\mathbf{\omega_0}=.783(r_{.1} - r_{.9})$
The difference between the methods is that the first measures a value a little higher than the second (power) method, but the difference is still under 13%. So either method is good, but the second is more accurate. Here is a link to a LabView code to calculate the beam waist with the knife edge method [5].
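A minimal sketch of the second (10%/90%) method in C (this is my own illustration, not the linked LabView code, and the two knife positions below are made-up numbers rather than data from this notebook):

```c
#include <stdio.h>

/* Knife-edge estimate of the 1/e^2 beam waist radius from the 10%/90%
   transmitted-power positions, using w0 = 0.783 * (r_10 - r_90) as in the
   text above.  The two positions are illustrative values only. */
int main(void) {
    double r90 = 0.10;   /* mm: knife position where 90% of the power is still transmitted */
    double r10 = 1.17;   /* mm: knife position where only 10% is transmitted */
    double w0  = 0.783 * (r10 - r90);

    printf("beam waist radius w0 = %.3f mm, beam diameter = %.3f mm\n", w0, 2.0 * w0);
    return 0;
}
```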
#### Data
We measured the beam waist every 12.5, 15 and 25 mm over a range of 2000 mm from the output aperture of the laser head. The measurement is a minimum at 612.5 mm from the laser; thus the beam waist is at 612.5±12.5 mm from the laser, and it is found to be 1.26±.1 mm.
#### Analysis
Here the plot of beam diameter vs. Z is presented. Experimental data is shown in blue and the model in red. As can be seen, the model does not fit the data: the experimental beam expands much faster than the model, which proves that the beam waist before the expansion optics must be relatively smaller. We are also missing an important characterization parameter: real world lasers behave differently, in that their beams do not follow the regular Gaussian formula for large propagation lengths (more than the Rayleigh range). That is why we will have to introduce a beam propagation factor called M2.
##### Beam propagation factor M2
The beam propagation factor M2 was specifically introduced to enable accurate calculation of the properties of laser beams which depart from the theoretically perfect TEM00 beam. This is important because it is quite literally impossible to construct a real world laser that achieves this theoretically ideal performance level.
M2 is defined as the ratio of a beam’s actual divergence to the divergence of an ideal, diffraction limited, Gaussian, TEM00 beam having the same waist size and location. Specifically, beam divergence for an ideal, diffraction limited beam is given by:
$\theta_{0}=\frac{\lambda}{\pi w_0}$
This is the theoretical half divergence angle in radians.
$\theta_{R}=M^2\frac{\lambda} {\pi w_0}$
so
$M^2=\frac{\theta_{R}} {\theta_{0}}$
Where:
• λ is the laser wavelength
• θR is the far field divergence angle of the real beam.
• w0 is the beam waist radius and θ0 is the far field divergence angle of the theoretical beam.
• M2 is the beam propagation factor
This definition of M2 allows us to make a simple change to optical formulas, by including the M2 factor as a multiplier, to account for the actual beam divergence. This is the reason why M2 is also sometimes referred to as the "times diffraction limit number". More information about M2 is available at these links: [6][7]
My experimental beam waist radius is
wR = .63 mm, with an ideal (diffraction-limited) divergence of .54 mrad. The data suggest the real far-field divergence angle to be 1.1 mrad (the beam radius divided by z at the far end of the scan). This gives:
M2 ≈ 2
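A quick arithmetic check of this, as a sketch; the wavelength below is an assumed illustrative value (the actual wavelength is whatever laser is specified earlier in this report), while the waist radius and measured divergence are the numbers quoted above:

```python
import math

# Numbers quoted above
w0 = 0.63e-3          # measured beam waist radius, m
theta_real = 1.1e-3   # measured far-field half divergence, rad

# Assumed wavelength, for illustration only
lam = 1064e-9         # m

theta_ideal = lam / (math.pi * w0)   # diffraction-limited divergence
M2 = theta_real / theta_ideal        # beam propagation factor

print(f"theta_0 = {theta_ideal*1e3:.2f} mrad")   # ~0.54 mrad for this wavelength
print(f"M^2     = {M2:.1f}")                     # ~2
```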
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 10, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9179880023002625, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-statistics/182682-estimate-expectation.html
|
# Thread:
1. ## Estimate for an expectation
Hello,
I have the following situation: two stochastic processes X and Y. For X, I know that $E[|X_s|^2]\leq C$ for all $s\in[t,T]$. Now I need an estimate for the expression $E[|X_sY_s|^2]$; the aim is to get rid of the X. Something like $E[|X_sY_s|^2]\leq E[|X_s|^2]E[|Y_s|^2]$ (does this hold???). Then I would obtain $E[|X_sY_s|^2]\leq C E[|Y_s|^2]$, and I would be happy.
I hope, somebody has an idea.
Thanks in advance,
twingeling
2. Hello,
Can't you just use Cauchy-Schwarz inequality ?
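For reference, Cauchy–Schwarz here means $E[|UV|]\le\sqrt{E[U^2]\,E[V^2]}$; applied with $U=X_s^2$ and $V=Y_s^2$ it bounds $E[|X_sY_s|^2]$ by $\sqrt{E[X_s^4]\,E[Y_s^4]}$ (fourth moments, not the product of second moments). A quick numerical sanity check of that inequality, with toy distributions standing in for $X_s$ and $Y_s$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Toy stand-ins for X_s and Y_s at one fixed time s (illustration only)
X = rng.normal(0.0, 1.0, size=n)
Y = rng.standard_t(df=5, size=n)          # heavier tails, but finite 4th moment

lhs = np.mean((X * Y) ** 2)                       # E[|X_s Y_s|^2]
rhs = np.sqrt(np.mean(X ** 4) * np.mean(Y ** 4))  # sqrt(E[X^4] E[Y^4])

# Cauchy-Schwarz holds exactly even for the empirical averages
print(lhs <= rhs, lhs, rhs)
```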
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9118252396583557, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/78033/all-solutions-to-the-matrix-equation-ax-b/78039
|
# All solutions to the matrix equation $AX=B$
Could you please give me some tips or directions on how to find all $3\times 3$ matrices $X$ such that $AX=B$? The matrices $A$ and $B$ are given.
I've come a long way, multiplying $A$ with each column $x_{k}$: $$\begin{align} Ax_{1} &= b_{1}\\ Ax_{2} &= b_{2}\\ Ax_{3} &= b_{3}\end{align}$$ I solved these as linear systems and assembled $X$ from the columns $x_{k}$, but it appears to be wrong: when I substitute $X$ back in, I get a different $B$ matrix (the last column is wrong).
Am I using the wrong strategy? Or is the strategy right and the answer should be right? How can I find all $X$? What does that mean?
Best.
-
1
If $\mathbf A$ is nonsingular, $\mathbf X$ is unique. Just treat the columns of $\mathbf B$ as successive right-hand sides of a linear system, and the solutions of those linear systems are the columns of $\mathbf X$. – J. M. Nov 2 '11 at 1:11
Did you check the rank of $A$? Is $A$ invertible? – user13838 Nov 2 '11 at 1:11
If your last column is wrong you probably made a mistake when you solved the LAST equation. – N. S. Nov 2 '11 at 1:20
Yes, It's invertible (checked it online). @percusse, what should rank tell me? – Lissa Nov 2 '11 at 1:24
2
@Lissa After having rest, please take your time and accept the answers of your questions (which you have not done yet!) by using the tick mark next to your favorite answers. In a way, it wraps up the question. – user13838 Nov 2 '11 at 2:15
## 1 Answer
Your strategy is right: $A X = B$ does mean the product of $A$ with each column of $X$ is the corresponding column of $B$. Perhaps something went wrong in your calculation.
If $A$ is nonsingular, $X$ is unique, and in fact it is $A^{-1} B$. If $A$ is singular (and $3 \times 3$), then depending on $B$ there might not be any solution; and if there is a solution, it will not be unique, because you can add any vector $x$ with $Ax = 0$ to any column of a solution $X$ and get another solution.
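For what it's worth, a minimal numerical sketch of the column-by-column strategy (the matrices below are placeholders, not the ones from the exercise):

```python
import numpy as np

# Placeholder data: any invertible A will do for the illustration
A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])
B = np.array([[1., 0., 2.],
              [0., 1., 1.],
              [3., 0., 0.]])

# Solve A x_k = b_k for each column b_k of B; the solutions are the columns of X.
# np.linalg.solve accepts a matrix right-hand side and does exactly that.
X = np.linalg.solve(A, B)

print(np.allclose(A @ X, B))   # True: X really satisfies AX = B
print(np.linalg.det(A))        # nonzero, so this X is the unique solution
```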
-
Thank you for response. I found the mistake, so I got X-Matrix. Do I have to find inverse matrix to prove, that there's only one 3x3-X-Matrix? – Lissa Nov 2 '11 at 1:37
That's one way to prove it. Another way is to calculate the determinant of $A$: if that is not 0, the matrix is nonsingular. – Robert Israel Nov 2 '11 at 23:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9272303581237793, "perplexity_flag": "head"}
|
http://terrytao.wordpress.com/author/gilkalai/
|
What’s new
Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao
# Author Archive
## (Gil Kalai) The entropy/influence conjecture
16 August, 2007 in guest blog, math.CA, math.PR, question | Tags: boolean functions, entropy, Fourier transform, Gil Kalai, influence | by Gil Kalai | 18 comments
[This post is authored by Gil Kalai, who has kindly “guest blogged” this week’s “open problem of the week”. - T.]
The entropy-influence conjecture seeks to relate two somewhat different measures as to how a boolean function has concentrated Fourier coefficients, namely the total influence and the entropy.
We begin by defining the total influence. Let $\{-1,+1\}^n$ be the discrete cube, i.e. the set of $\pm 1$ vectors $(x_1,\ldots,x_n)$ of length n. A boolean function is any function $f: \{-1,+1\}^n \to \{-1,+1\}$ from the discrete cube to {-1,+1}. One can think of such functions as “voting methods”, which take the preferences of n voters (+1 for yes, -1 for no) as input and return a yes/no verdict as output. For instance, if n is odd, the “majority vote” function $\hbox{sgn}(x_1+\ldots+x_n)$ returns +1 if there are more +1 variables than -1, or -1 otherwise, whereas if $1 \leq k \leq n$, the “$k^{th}$ dictator” function returns the value $x_k$ of the $k^{th}$ variable.
We give the cube $\{-1,+1\}^n$ the uniform probability measure $\mu$ (thus we assume that the n voters vote randomly and independently). Given any boolean function f and any variable $1 \leq k \leq n$, define the influence $I_k(f)$ of the $k^{th}$ variable to be the quantity
$I_k(f) := \mu \{ x \in \{-1,+1\}^n: f(\sigma_k(x)) \neq f(x) \}$
where $\sigma_k(x)$ is the element of the cube formed by flipping the sign of the $k^{th}$ variable. Informally, $I_k(f)$ measures the probability that the $k^{th}$ voter could actually determine the outcome of an election; it is sometimes referred to as the Banzhaf power index. The total influence I(f) of f (also known as the average sensitivity and the edge-boundary density) is then defined as
$I(f) := \sum_{k=1}^n I_k(f).$
Thus for instance a dictator function has total influence 1, whereas majority vote has total influence comparable to $\sqrt{n}$. The influence can range between 0 (for constant functions +1, -1) and n (for the parity function $x_1 \ldots x_n$ or its negation). If f has mean zero (i.e. it is equal to +1 half of the time), then the edge-isoperimetric inequality asserts that $I(f) \geq 1$ (with equality if and only if there is a dictatorship), whilst the Kahn-Kalai-Linial (KKL) theorem asserts that $I_k(f) \gg \frac{\log n}{n}$ for some k. There is a result of Friedgut that if $I(f)$ is bounded by A (say) and $\varepsilon > 0$, then f is within a distance $\varepsilon$ (in $L^1$ norm) of another boolean function g which only depends on $O_{A,\varepsilon}(1)$ of the variables (such functions are known as juntas).
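To make the definitions concrete, here is a small brute-force Python sketch (not from the post; it enumerates all $2^n$ points of the cube, so only for tiny $n$) computing the influences $I_k(f)$ and the total influence $I(f)$ for the dictator and majority examples above:

```python
from itertools import product

def influences(f, n):
    """I_k(f) for k = 1..n, where f maps a tuple in {-1,+1}^n to {-1,+1}."""
    cube = list(product((-1, +1), repeat=n))
    infl = []
    for k in range(n):
        flips = 0
        for x in cube:
            y = x[:k] + (-x[k],) + x[k+1:]   # flip the k-th coordinate
            if f(x) != f(y):
                flips += 1
        infl.append(flips / len(cube))
    return infl

def dictator(x):                 # returns the first voter's preference
    return x[0]

def majority(x):                 # n odd, so no ties
    return 1 if sum(x) > 0 else -1

n = 5
print(sum(influences(dictator, n)))   # total influence 1.0
print(sum(influences(majority, n)))   # comparable to sqrt(n); here 1.875
```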
## (Gil Kalai) The weak epsilon-net problem
22 April, 2007 in guest blog, math.MG, question | Tags: convex geometry, entropy, epsilon-net, Gil Kalai | by Gil Kalai | 13 comments
[This post is authored by Gil Kalai, who has kindly “guest blogged” this week’s “open problem of the week”. - T.]
This is a problem in discrete and convex geometry. It seeks to quantify the intuitively obvious fact that large convex bodies are so “fat” that they cannot avoid “detection” by a small number of observation points. More precisely, we fix a dimension d and make the following definition (introduced by Haussler and Welzl):
• Definition: Let $X \subset {\Bbb R}^d$ be a finite set of points, and let $0 < \epsilon < 1$. We say that a finite set $Y \subset {\Bbb R}^d$ is a weak $\epsilon$-net for X (with respect to convex bodies) if, whenever B is a convex body which is large in the sense that $|B \cap X| > \epsilon |X|$, then B contains at least one point of Y. (If Y is contained in X, we say that Y is a strong $\epsilon$-net for X with respect to convex bodies.)
For example, in one dimension, if $X = \{1,\ldots,N\}$, and $Y = \{ \epsilon N, 2 \epsilon N, \ldots, k \epsilon N \}$ where k is the integer part of $1/\epsilon$, then Y is a weak $\epsilon$-net for X with respect to convex bodies. Thus we see that even when the original set X is very large, one can create a $\epsilon$-net of size as small as $O(1/\epsilon)$. Strong $\epsilon$-nets are of importance in computational learning theory, and are fairly well understood via Vapnik-Chervonenkis (or VC) theory; however, the theory of weak $\epsilon$-nets is still not completely satisfactory.
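As a quick sanity check of this one-dimensional example (a sketch, not part of the post): convex bodies in $\mathbb{R}$ are intervals, so for $X=\{1,\ldots,N\}$ it suffices to test every discrete interval $[a,b]$.

```python
N = 100
eps = 0.1
m = 10                               # eps*N, an integer in this example
X = range(1, N + 1)
Y = set(range(m, N + 1, m))          # {10, 20, ..., 100}: the net from the post

ok = True
for a in X:
    for b in range(a, N + 1):
        hits = b - a + 1             # |B ∩ X| for the interval B = [a, b]
        if hits > eps * N and not any(a <= y <= b for y in Y):
            ok = False
print(ok)                            # True: Y is a weak eps-net for X
```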
One can ask what happens in higher dimensions, for instance when X is a discrete cube $X = \{1,\ldots,N\}^d$. It is not too hard to cook up $\epsilon$-nets of size $O_d(1/\epsilon^d)$ (by using tools such as Minkowski’s theorem), but in fact one can create $\epsilon$-nets of size as small as $O( \frac{1}{\epsilon} \log \frac{1}{\epsilon} )$ simply by taking a random subset of X of this cardinality and observing that “up to errors of $\epsilon$“, the total number of essentially different ways a convex body can meet X grows at most polynomially in $1/\epsilon$. (This is a very typical application of the probabilistic method.) On the other hand, since X can contain roughly $1/\epsilon$ disjoint convex bodies, each of which contains at least $\epsilon$ of the points in X, we see that no $\epsilon$-net can have size much smaller than $1/\epsilon$.
Now consider the situation in which X is now an arbitrary finite set, rather than a discrete cube. More precisely, let $f(\epsilon,d)$ be the least number such that every finite set X possesses at least one weak $\epsilon$-net for X with respect to convex bodies of cardinality at most $f(\epsilon,d)$. (One can also replace the finite set X with an arbitrary probability measure; the two formulations are equivalent.) Informally, f is the least number of “guards” one needs to place to prevent a convex body from covering more than $\epsilon$ of any given territory.
• Problem 1: For fixed d, what is the correct rate of growth of f as $\epsilon \to 0$?
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 59, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.915737509727478, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/statistics/43655-finding-shape-sd-mean.html
|
# Thread:
1. ## finding the shape,sd and mean
I'm having trouble finding (basically everything); I can't seem to figure out the problem with the given numbers.
In the problem it says that 39% of households have a dog in your city and that you take a simple random sample of 50 households.
Basically we have to describe the shape, mean, and standard deviation.
I assume the .39 is the population mean and 50 is the number of sample observations. I could be all wrong though. Help please!
2. Originally Posted by lunaj
I'm having trouble finding (basically everything); I can't seem to figure out the problem with the given numbers.
In the problem it says that 39% of households have a dog in your city and that you take a simple random sample of 50 households.
Basically we have to describe the shape, mean, and standard deviation.
I assume the .39 is the population mean and 50 is the number of sample observations. I could be all wrong though. Help please!
The number of households with a dog in a sample of $50$ has a binomial distribution $B(50,0.39)$, which we know has mean $0.39 \times 50=19.5$ and a standard deviation of $\sqrt{50 \times 0.39 \times (1-0.39)} \approx 3.45$
RonL
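A quick numerical check of those two values, plus the usual rule of thumb for describing the shape (a sketch):

```python
import math

n, p = 50, 0.39
mean = n * p                      # 19.5
sd = math.sqrt(n * p * (1 - p))   # ~3.45

# Common rule of thumb: np and n(1-p) both >= 10 means the distribution of the
# count (or sample proportion) is roughly bell-shaped / approximately normal.
print(mean, round(sd, 2), n * p >= 10 and n * (1 - p) >= 10)
```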
3. thank you so much!
im taking the course online so it keeps getting harder and harder. i dont think we've gone over the binomial thing or how to find a mean the way you did. so far its been given to us or there are multiple numbers and we find the mean.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9518974423408508, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/56895/finding-simply-connected-open-sets-in-a-connected-set?answertab=active
|
# Finding Simply Connected Open Sets in a Connected Set?
I believe that the following statement is true:
Let $E$ be a connected open subset of $\mathbb{R}^2$. For any $n$ distinct points in $E$, there exists a connected and simply connected open set $G \subset E$ that contains those points.
How can I show this? My original idea was to try to show this by induction, by arguing that there is a simple curve from the $n-1$-th point to the $n$-th point that doesn't intersect the set $G_{n-1}$ more than once.
But then, I realized I don't know how to show that given 2 points in an open connected set, there is a simple curve connecting the two. I thought that given any curve connecting two points, one could construct a simple curve, but I wasn't sure of how to deal with the case when there are infinitely many self-intersections.
Note: The statement I'm trying to show is equivalent to showing that given $n$ distinct points, there is a simple curve that connects the $n$ distinct points. But unfortunately I don't know how to show this either.
-
1
"But then, I realized I don't know how to show that given 2 points in an open connected set, there is a simple curve connecting the two." Any path-connected Hausdorff space is arc-connected, meaning that any two distinct points lie in a subspace which is homeomorphic to the unit interval. – LostInMath Aug 11 '11 at 13:05
@LostInMath - Well, I guess that solves for the problem for $n=2$ :) – Braindead Aug 11 '11 at 13:22
@LostInMath - Where can I find a proof of this statement? – Braindead Aug 11 '11 at 16:20
2
It's much simpler to show that an open subset of euclidean space is connected iff it is path connected. This is not very hard at all. – Grumpy Parsnip Aug 11 '11 at 17:46
1
I've included a proof of the stronger statement that any two points can be joined by a PL arc. – Grumpy Parsnip Aug 11 '11 at 17:59
## 1 Answer
Here's a suggestion. Connect the first two points by an arc, as LostInMath mentions. Now draw an arc from the third point to the second point. This might hit the first arc, so actually look at the first place it does, and forget the rest of the arc. This gives you a Y-shaped tree (or an interval if the arc makes it all the way to the second point). Repeat this process to get a tree joining all of the $n$ points. A tree is simply connected so will work just as well as a line going through all the points. Now take a small neighborhood of the tree to get a simply connected open set. To see that this neighborhood is contractible requires some technique, but to make things simpler, I claim that the arcs that we chose to make the tree can be assumed to be piecewise linear. That is, they can be assumed to be a finite union of straight line segments. It is not hard to show (see below) that any connected open set in Euclidean space has the property that any two points can be joined by such PL arcs. Now showing that a neighborhood of a tree comprised of PL arcs is contractible is not difficult.
Here is a proof of the PL connectedness statement. First note that whether two points are connected by a PL line is an equivalence relation. So we can partition the open set into equivalence classes under this relation. Now I claim that an equivalence class is open. This is because for any point $x$, I can find a ball centered around $x$ contained in the big open set. I can join $x$ to any other point in the ball by a radial line segment. Hence all points in the ball are in the same equivalence class. So an equivalence class is open. So by connectivity there can be only one equivalence class.
-
I think that the construction of the simply connected neighborhood is still a non-trivial problem and begs for the rigorous proof. – Ilya Aug 11 '11 at 16:56
@Gortaur: you can triangulate your space so that the tree is a subcomplex. Now take a regular neighborhood. – Steve D Aug 11 '11 at 17:33
@Gortaur: I agree that it is a nontrivial statement. – Grumpy Parsnip Aug 11 '11 at 17:47
@Steve: This now obviously works since I've proven the tree is made of PL arcs. – Grumpy Parsnip Aug 11 '11 at 18:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9530585408210754, "perplexity_flag": "head"}
|
http://cstheory.stackexchange.com/questions/14025/strongly-edge-guarding-a-3d-triangulation
|
# Strongly edge-guarding a 3d triangulation
Let $T$ be a planar triangulation. It is known that one can guard the faces of $T$ using at most $\lfloor n/3 \rfloor$ edge-guards (Worst-case-optimal algorithms for guarding planar graphs and polyhedral surfaces). I am trying to obtain a similar upper bound for an extension of this problem, as follows.
Now, let $T$ be a three-dimensional triangulation (a tetrahedralization), and let $S$ be a subset of its edges. We say that $S$ strongly guards $T$ if, for every tetrahedron in $T$, one of the six edges of that tetrahedron lies in $S$. Is there a known nontrivial upper bound for the number of edges required to strongly guard all tetrahedra of a tetrahedralization?
Obviously, this problem can be solved via edge-coloring with no monochromatic tetrahedra. Is there any upper bound on the edge chromatic number for three-dimensional triangulations better than $\Delta + 1$? Maybe the assumption that $T$ is a Delaunay triangulation could lead to a probabilistic bound.
-
1
I clarified the definition of "guard", but I may have it wrong. In the art-gallery literature, an edge in a 2d triangulation usually guards every triangle that contains at least one endpoint of that edge. In particular, this is the definition used in the paper you cite. – JɛffE Oct 26 '12 at 15:28
A few more questions: (1) Do you mean a triangulation of a convex polytope in $R^3$, of an arbitrary genus-zero polyhedron in $R^3$, of an arbitrary polyhedron in $R^3$, of an arbitrary topological ball, or of an arbitrary 3-manifold? [The paper you cite considers only planar triangulations.] (2) Do you want upper bounds in terms of the number of vertices or the number of edges? [Even if you consider only Delaunay triangulations, an $n$-vertex triangulation can have $\Omega(n^2)$ edges.] – JɛffE Oct 26 '12 at 15:32
@JɛffE, your definition of "guard" is exactly what I meant. The word "triangulation" stands for triangulation of a finite point set $X \subset \mathbf{R}^3$ or, equivalently, triangulation of a convex polytope in $\mathbf{R}^3$, as you suggested. – Vicente Helano Oct 27 '12 at 2:17
I am interested in an upper bound in terms of the number of edges. A 3D Delaunay triangulation can have $\Omega(n^2)$ edges in the worst case, but if we consider points distributed uniformly at random inside the 3-ball there will be an expected linear number of them, right? (Higher-dimensional Voronoi diagrams in expected linear time) – Vicente Helano Oct 27 '12 at 2:41
– JɛffE Oct 27 '12 at 9:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9042631387710571, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/13575/teaching-myself-differential-topology-and-differential-geometry?answertab=oldest
|
# Teaching myself differential topology and differential geometry
I have a hazy notion of some stuff in differential geometry and a better, but still not quite rigorous understanding of basics of differential topology.
I have decided to fix this lacuna once for all. Unfortunately I cannot attend a course right now. I must teach myself all the stuff by reading books.
Towards this purpose I want to know what are the most important basic theorems in differential geometry and differential topology. For a start, for differential topology, I think I must read Stokes' theorem and de Rham theorem with complete proofs.
Differential geometry is a bit more difficult. What is a connection? Which notion should I use? I want to know about parallel transport and holonomy. What are the most important and basic theorems here? Are there concise books which can teach me the stuff faster than the voluminous Spivak books?
Also finally I want to read into some algebraic geometry and Hodge/Kähler stuff.
Suggestions about important theorems and concepts to learn, and book references, will be most helpful.
-
1
I enjoyed do Carmo's "Riemannian Geometry", which I found very readable. Of course there's much more to differential geometry than Riemannian geometry, but it's a start... – Aaron Mazel-Gee Dec 9 '10 at 1:02
2
– Matt Calhoun Dec 9 '10 at 1:10
– Matt Calhoun Dec 9 '10 at 1:20
Also, Griffiths & Harris is a pretty standard "classical algebraic geometry" book. A word of advice: don't get caught up in chapter 0. It's about 100 pages of not-so-easy complex analysis review. (Or, do get caught up in it, if that's your thing.) – Aaron Mazel-Gee Dec 9 '10 at 9:45
Are there any good courses videos of MIT/standford etc.? – 0x90 Feb 22 at 18:20
## 8 Answers
Guillemin and Pollack's "Differential Topology" is about the friendliest introduction to the subject you could hope for. It's an excellent non-course book. Good supplementary books would be Milnor's "Topology from a differentiable viewpoint" (much more terse), and Hirsch's "Differential Topology" (much more elaborate, focusing on the key analytical theorems).
For differential geometry it's much more of a mixed bag as it really depends on where you want to go. I've always viewed Ehresmann connections as the fundamental notion of connection. But it suits my tastes. But I don't know much in the way of great self-learning differential geometry texts, they're all rather quirky special-interest textbooks or undergraduate-level grab-bags of light topics. I haven't spent any serious amount of time with the Spivak books so I don't feel comfortable giving any advice on them.
-
+1 for Guillemin/Pollack-one of the great classic textbooks on any subject by 2 masters. – Mathemagician1234 May 5 '12 at 18:41
For differential topology, I would add Poincare duality to something you may want to know. A good textbook is Madsen and Tornehave's From Calculus to Cohomology. Another nice book is John Lee's Introduction to Smooth Manifolds.
For differential geometry, I don't really know any good texts. Besides the standard Spivak, the other canonical choice would be Kobayashi-Nomizu's Foundations of Differential Geometry, which is by no means easy going. There is a new book by Jeffrey Lee called Manifolds and Differential Geometry in the AMS Graduate Studies series. I have not looked at it personally in depth, but it has some decent reviews. It covers a large swath of the differential topology, and also the basic theory of connections. (As a side remark, if you like doing computations, Kobayashi's original paper "Theory of connections" is not very hard to read, and may be a good starting place before you jump into some of the more special-topic/advanced texts like Kolar, Slovak, and Michor's Natural operations in differential geometry.)
A book I've enjoyed and found useful (though not so much as a textbook) is Morita's Geometry of differential forms.
-
@Willie: As you seem to know a bit (a lot) about this, could you suggest what would be a nice book to start with if someone is interested in harmonic analysis and PDEs and wants to know how to do this kind of stuff on non-Euclidean spaces (I guess that is what Diff Geom is about?)? Also, do you have a reference where there things are applicable in PDE (or harmonic analysis)? – Jonas Teuwen Dec 8 '10 at 23:19
1
@Jonas: I don't actually know much about harmonic analysis on non-Euclidean spaces. AFAIK most of the introductory material in that direction is in the context of symmetric spaces, and a standard reference for that is Helgason's book "Differential Geometry, Lie Groups, and Symmetric Spaces". Then you may want to look at Joseph Wolf's "Harmonic analysis on commutative spaces". In a slightly different direction, you can also look at Eli Stein's "Topics in harmonic analysis related to the Littlewood Paley theory". – Willie Wong♦ Dec 8 '10 at 23:34
1
For PDEs, the information in most advanced texts are perfectly applicable to the case of manifolds (at least in regard to scalar functions; sections of vector bundles can get a bit trickier). So much of Hormander's "Analysis of Linear Partial Differential Operator" is applicable and Taylor's "Partial Differential Equation" also (the latter also explicitly formulate the discussion on manifolds, though the text in general is very dense). You may also want to look at Jost's "Riemannian Geometry and Geometric Analysis". There are in fact lots of words written about PDEs on manifolds... – Willie Wong♦ Dec 8 '10 at 23:41
... so it is hard to give a concrete recommendation. Another problem is that each branch of PDEs has its own trove of literature: from exterior differentiation systems (Cartan and others), to elliptic geometric PDEs (Einstein manifolds, conformal geometry, harmonic maps etc), to parabolic theory (Ricci flow, mean curvature flow), and to hyperbolic theory (wave maps, general relativity), it is hard to give the reference without knowing what your goals are. – Willie Wong♦ Dec 8 '10 at 23:46
1
@Jonas: Then let me give a quick description of differences on the manifold setting. Simple Fourier analysis does not carry well directly to the manifold setting, since Fourier analysis requires some symmetries. You can do it by looking at coordinate patches, but the (pseudo)differential operators you define will depend on the coordinate chart you chose (though usually the principal part is invariant under coordinate change). In the absence of symmetries which allows you to define the Fourier transform group theoretically, you can otherwise do frequency decomposition using spectral theory... – Willie Wong♦ Dec 9 '10 at 1:42
I would like to recommend Modern Differential Geometry of curves and surfaces with Mathematica, by Alfred Gray, Elsa Abbena, and Simon Salamon. You can look at it on Google books to decide if it fits your style. If you are a Mathematica user, I think this is a wonderful avenue for self-study, for you can see and manipulate all the central constructions yourself. I use Gray's code frequently; I was a fan.
PS. Here is how he died: "of a heart attack which occurred while working with students in a computer lab at 4 a.m."!
-
1
Alternatively, if you're a Maple guy, there's Oprea's [ Differential Geometry and Its Applications ](books.google.com/books?id=xb48zk0wJfIC). – J. M. Dec 9 '10 at 2:25
I'm doing exactly the same thing as you right now. I'm self-learning differential topology and differential geometry. To those ends, I really cannot recommend John Lee's "Introduction to Smooth Manifolds" and "Riemannian Manifolds: An Introduction to Curvature" highly enough. "Smooth Manifolds" covers Stokes Theorem, the de Rham theorem and more, while "Riemnannian Manifolds" covers connections, metrics, etc.
The attention to detail that Lee writes with is so fantastic. When reading his texts, you know you're learning things the standard way with no omissions. And of course, the same goes for his proofs.
Plus, the two books are the second and third in a trilogy (the first being his "Introduction to Topological Manifolds"), so they were really meant to be read in this order.
Of course, I also agree that Guilleman and Pollack, Hirsch, and Milnor are great supplements, and will probably emphasize some of the topological aspects that Lee doesn't go into.
-
Like the other posters, I think Lee's books are fantastic. I'd start with his Introduction to Smooth Manifolds.
For differential geometry, I'd go on to his Riemannian Manifolds and then follow up with do Carmo's Riemannian Geometry. (That's what I did.)
For differential topology, after Lee's Smooth Manifolds, I'd suggest Differential Forms in Algebraic Topology by Bott, Tu and anything (and everything) by Milnor.
-
I would recommend Jost's book "Riemannian geometry and geometric analysis" as well as Sharpe's "Differential geometry".
The first book is pragmatically written and guides the reader to a lot of interesting stuff, like Hodge's theorem, Morse homology and harmonic maps.
The second book is mainly concerned with Cartan connection, but before that it has an excellent chapter on differential topology. Furthermore it treats Ehresmann connections in appendix A.
-
ADDITION: I have compiled what I think is a definitive collection of listmanias at Amazon for a best selection of books an references, mostly in increasing order of difficulty, in almost any branch of geometry and topology. In particular the books I recommend below for differential topology and differential geometry; I hope to fill in commentaries for each title as I have the time in the future.
If you want to have an overall knowledge Physics-flavored the best books are Nakahara's "Geometry, Topology and Physics" and above all: Frankel's "The Geometry of Physics" (great book, but sometimes his notation can bug you a lot compared to standards).
If you want to learn Differential Topology study these in this order: Milnor's "Topology from a Differentiable Viewpoint", Jänich/Bröcker's "Introduction to Differential Topology" and Madsen's "From Calculus to Cohomology". Although it is always nice to have a working knowledge of general point set topology which you can quickly learn from Jänich's "Topology" and more rigorously with Runde's "A Taste of Topology".
To start Algebraic Topology these two are of great help: Croom's "Basic Concepts of Algebraic Topology" and Sato/Hudson "Algebraic Topology an intuitive approach". Graduate level standard references are Hatcher's "Algebraic Topology" and Bredon's "Topology and Geometry", tom Dieck's "Algebraic Topology" along with Bott/Tu "Differential Forms in Algebraic Topology."
To really understand the classic and intuitive motivations for modern differential geometry you should master curves and surfaces from books like Toponogov's "Differential Geometry of Curves and Surfaces" and make the transition with Kühnel's "Differential Geometry - Curves, Surfaces, Manifolds". Other nice classic texts are Kreyszig "Differential Geometry" and Struik's "Lectures on Classical Differential Geometry".
For modern differential geometry I cannot stress enough to study carefully the books of Jeffrey M. Lee "Manifolds and Differential Geometry" and Livio Nicolaescu's "Geometry of Manifolds". Both are deep, readable, thorough and cover a lot of topics with a very modern style and notation. In particular, Nicolaescu's is my favorite. For Riemannian Geometry I would recommend Jost's "Riemannian Geometry and Geometric Analysis" and Petersen's "Riemannian Geometry". A nice introduction for Symplectic Geometry is Cannas da Silva "Lectures on Symplectic Geometry" or Berndt's "An Introduction to Symplectic Geometry". If you need some Lie groups and algebras the book by Kirillov "An Introduction to Lie Groups and Lie Algebras" is nice; for applications to geometry the best is Helgason's "Differential Geometry - Lie Groups and Symmetric Spaces".
FOR TONS OF SOLVED PROBLEMS ON DIFFERENTIAL GEOMETRY the best book by far is the recent volume by Gadea/Muñoz - "Analysis and Algebra on Differentiable Manifolds: a workbook for students and teachers". From manifolds to riemannian geometry and bundles, along with amazing summary appendices for theory review and tables of useful formulas.
EDIT (ADDED): However, I would argue that one of the best introductions to manifolds is the old Soviet book published by MIR, Mishchenko/Fomenko - "A Course of Differential Geometry and Topology". It develops everything up from $\mathbb{R}^n$, curves and surfaces to arrive at smooth manifolds and LOTS of examples (Lie groups, classification of surfaces, etc). It is also filled with LOTS of figures and classic drawings of every construction, giving a very visual and geometric motivation. It even develops Riemannian geometry, de Rham cohomology and variational calculus on manifolds very easily, and its explanations are very down to Earth. If you can get a copy of this title for a cheap price (the link above sends you to Amazon marketplace and there are cheap "like new" copies) I think it is worth it. Nevertheless, since its treatment is a bit dated, the modern algebraic formulation is not used (forget about pullbacks and functors, like Tu or Lee mention); that is why an old-fashioned geometrical treatment may be very helpful to complement modern titles. In the end, we must not forget that the old masters were much more visual and intuitive than the modern abstract approaches to geometry. Since this last book is out of print and the publisher no longer exists, you may be very interested in an online "low-quality" copy which can be downloaded here (the 3 files linked in rapidshare).
If you are interested in learning Algebraic Geometry I recommend the books of my Amazon list. They are in recommended order to learn from the beginning by yourself. In particular, from that list, a quick path to understand basic Algebraic Geometry would be to read Bertrametti et al. "Lectures on Curves, Surfaces and Projective Varieties", Shafarevich's "Basic Algebraic Geometry" vol. 1, 2 and Perrin's "Algebraic Geometry an Introduction". But then you are entering the world of abstract algebra.
If you are interested in Complex Geometry (Kähler, Hodge...) I recommend Moroianu's "Lectures on Kähler Geometry", Ballmann's "Lectures on Kähler Manifolds" and Huybrechts' "Complex Geometry". To connect this with Analysis of Several Complex Variables I recommend trying Fritzsche/Grauert "From Holomorphic Functions to Complex Manifolds" and also Wells' "Differential Analysis on Complex Manifolds". Afterwards, to connect this with algebraic geometry, try, in this order, Miranda's "Algebraic Curves and Riemann Surfaces", Mumford's "Algebraic Geometry - Complex Projective Varieties", Voisin's "Hodge Theory and Complex Algebraic Geometry" vol. 1 and 2, and Griffiths/Harris "Principles of Algebraic Geometry".
-
1
That's certainly a nice list! But your amazon link doesn't work. – wildildildlife Feb 18 '11 at 11:53
I have changed the link to the Amazon list, hope now it works – Javier Álvarez Feb 18 '11 at 12:54
1
+1 for a great recommendation list. – Mathemagician1234 Mar 6 '12 at 10:42
@JavierÁlvarez Sorry to bother you on this old post but the highest math class I have taken thus far is linear algebra. Would the list you recommend help me or should I start reading more basics books? I am going to take abstract algebra, complex analysis, and analysis 1 next semester. – diimension Dec 13 '12 at 2:21
@diimension: there is no bother at all! this list, and my other Amazon listmanias, will be very useful to you AFTER your next semester when you get background on rigorous analysis and algebra. Then, books like Runde's and Munkres' on topology will be at your level and you should by all means try them. Pressley or Bär should be your start in differential geometry. Keep studying and everything will be at your reach! At your level right now you could start reading the basic book by "Jänich" on topology at the same time you study next semester courses. – Javier Álvarez Dec 13 '12 at 8:02
I had seen a mention of this work on Differential Geometry by Theodore Shifrin at UGA, with great comments about it on MathOverflow.
It's currently a free and legal download. It's an entry-level text, and the prior responders have put a lot of effort into giving outstanding suggestions, but I thought it might be of interest.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9400691986083984, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/38278/enumerative-algorithm-through-inclusion-exclusion/38286
|
## Enumerative algorithm through inclusion-exclusion
Hello everybody !
I wondered, without really knowing where to search, whether there was a "smart" way to enumerate/iterate over all the elements of a set which can be counted by inclusion-exclusion. For instance, list all the derangements on $n$ elements (I'm not especially interested in enumerating the derangements and I guess there is bound to be a good specific way to enumerate them, but that's the first thing that came to my mind).
In particular, I would like to avoid having to remember all the elements which have already been enumerated (as this can grow very large)...
Thank you for your lights :-)
Nathann
-
I think I do not understand your question very well: It seems to me that inclusion-exclusion is in general a smart way for counting the number of elements in a set. Do you ask for a total order on all elements such that the successor (or predecessor) of an element can be computed efficiently? – Roland Bacher Sep 10 2010 at 8:07
Maybe he is asking for a positive formula or a categorification for inclusion exclusion. – Gjergji Zaimi Sep 10 2010 at 8:35
Inclusion/Exclusion is sometimes a very good way to find the cardinality of a set, but if you want to write a program returning all the elements of a set, the cardinality is not enough. So let us say one is able to compute the cardinality of a set through inclusion-exclusion : is it also possible to list its elements ? – Nathann Cohen Sep 10 2010 at 8:44
1
Nathan, if you wish to enumerate, it seems that a walk over the tree of the search space is the right way to do it. You can optimize the walk over the tree by pruning your search as soon as it becomes apparent that there can be no further chance of a solution along a branch. You should use a backtracking algorithm as I explain in an answer below. Walking the pruned tree with a depth-first algorithm also will let you avoid keeping track of everything which you have already tested. – sleepless in beantown Sep 10 2010 at 19:12
Do you wish to iterate over all of the search space in order to confirm that your counting by inclusion-exclusion is correct? Or do you want to generate a list of all possible answers in your solution space? Why exactly do you wish to write a program to list the elements? Perhaps a better explanation of the motivation or the underlying problem will help me understand your goal. What formula do you have for your inclusion-exclusion counting of the cardinality of this set? – sleepless in beantown Sep 10 2010 at 19:54
## 4 Answers
Both yes and no. Let me illustrate...
Inclusion-exclusion is typically used to find the cardinality of a set A containing all combinatorial objects that avoid a substructure S (either that, or the complement of A).
Suppose you are at object $x \in A$ and the next object is $y \in A$. Whatever method you use for finding y from x would need to find the combinatorial trade T formed from the difference between y and x. The trade T will typically depend on the structure of x and y, that is, you will probably not be able to use the same trade T in going from most x' to y' later in the iterator.
For example, consider (0,1) sequences of length n without two consecutive 1s. Here's the list for n=3.
````000
001
010
100
101
````
These can be counted using inclusion-exclusion. Notice that, no matter which order we choose to iterate in, the trade T that arises in going from 000 to abc will somehow contain the information of which of a,b,c are non-zero -- i.e. which numbers to toggle. This trade can clearly not be used everywhere in the iterator, although in some cases it could: e.g. if 000 -> 001 then the trade could be reused in going from 100 -> 101.
In some areas of combinatorics, such as Latin squares, we start off with one member L of the set A, then store a sequence of trades $t_1,t_2,\ldots$ (this can require much hard-disk space). We iterate through the Latin squares quickly by applying the trades in sequence, that is, $L \mapsto t_i L$ iteratively.
The problem therefore becomes finding a sequence of trades that are quite "small" (and therefore require less storage) -- e.g. in the binary sequences case, toggles only one or two bits.
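For the toy example above one can in fact iterate with essentially no memory beyond the current prefix, using plain backtracking rather than the trade-based scheme; a minimal Python sketch:

```python
def no_consecutive_ones(n, prefix=""):
    """Yield all length-n binary strings with no two consecutive 1s,
    keeping only the current prefix in memory."""
    if len(prefix) == n:
        yield prefix
        return
    yield from no_consecutive_ones(n, prefix + "0")
    if not prefix.endswith("1"):          # a 1 is allowed only after a 0 (or at the start)
        yield from no_consecutive_ones(n, prefix + "1")

print(list(no_consecutive_ones(3)))
# ['000', '001', '010', '100', '101']
```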
-
Consider the smallest nontrivial inclusion-exclusion situation:
$$|A \cup B| = |A| + |B| - |A \cap B|$$
(removing one set of the double counting of $A\cap B$).
In the species interpretation,
$$A \cup B = A \oplus B - A \cap B$$
the $\oplus$ corresponds to disjoint union, but there is no accepted interpretation of $-$ (yes, it is the complement, but there is no systematic way to say which items are being removed).
But one can transform the above equation using grade school arithmetic to get a reasonable correspondence:
$$A \cup B \oplus A \cap B= A \oplus B$$
Form an isomorphism from one side to the other. Then generate and test, that is, assuming you want $A \cup B$, generate $A \oplus B$ sequentially (by unranking, Gray code, whatever), then check if in $A \cap B$ (use a ranking procedure and the isomorphism) and repeatedly try the next one if a member (this is the exclusion step).
Of course, this is not as clean as what one would want (a 'direct' construction of only those items wanted). Also, if the desired set is small or the overlapping is complicated, then lots of generated items will need to be excluded before reaching the next one. However, you don't need to keep around a list of 'items so far' or 'items to avoid' as long as you have a mapping function (the isomorphism) between the two sides and the needed ranking/unranking procedures.
-
Consider using a backtracking and depth-first tree search.
This will iterate over the entire space of solutions. First place or define some ordinality over the alphabet which will compose the components of the elements of the set to be enumerated. This will also allow you to define an ordinality of the sets composed of those elements, described as an ordered list. Then the only thing which you have to keep track of as you iterate is the current element of the set which you are testing. Everything else to be tested will be further down or to the right on the tree (or "left", depending on how you draw the tree...).
Example: using backtracking and a depth-tree to help simplify the 8-queen problem on the 8x8 chessboard. The alphabet in this case is the set {1,2,...64}, each element of the set of solutions to the 8-queen problem is a non-empty set consisting of elements of the alphabet.
Worst approach: iterate over all $2^{64}$ subsets of squares which could contain queens; this includes cases with fewer than or more than $8$ queens, up to $64$.
Better: iterate over all $\binom{64}{8}$ ways to choose 8 elements out of the 64 squares.
Even better: keep track of a current list, and a still-possible list, and a to-be-skipped list. Add a possible candidate alphabet item to the current list. Based on the current list, figure out which alphabet elements are excluded and eliminate them from the still-possible list. Add the next possible candidate alphabet item to the current list. If you run out of candidates, backtrack, remove the last placed alphabet item, move it to the to-be-skipped list, and skip the last used choice and iterate for the next available alphabet item. I've left out some of the details of clearing out the skip-list, etc, but this should give you the gist of the approach.
This sort of program can be interrupted in the middle and continued from a particular point in the tree without having knowledge of the contents of the previously explored parts of the tree. This is because calculating the successor to the current set which you are testing can be defined as a function of the current set.
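A compact sketch of this backtracking recipe for the 8-queens example (simplified by the standard observation that each row holds exactly one queen, so only column indices need to be chosen; nothing already enumerated is stored):

```python
def queens(n, cols=()):
    """Yield all n-queens solutions as tuples of column indices, one per row."""
    row = len(cols)
    if row == n:
        yield cols
        return
    for c in range(n):
        # prune: skip any column on the same column or diagonal as an earlier queen
        if all(c != pc and abs(c - pc) != row - pr for pr, pc in enumerate(cols)):
            yield from queens(n, cols + (c,))

print(sum(1 for _ in queens(8)))   # 92 solutions on the 8x8 board
```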
-
There has been some research on algorithms for approximate inclusion-exclusion, in the following sense. If we know the size of all intersections between any $2 \le k \le n$ sets in some family of $n$ sets, we can compute the size of the union of all $n$ sets in the family exactly, by inclusion-exclusion. But what if we only want to approximate the size of the union?
Linial and Nisan have shown that a good approximation can be found if we just know the size of all $k$-wise intersections, for $k=O(\sqrt{n})$. Sherstov has recently extended this to computing more general functions than just the size of the union (where how big $k$ needs to be depends on the function to be computed).
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9292841553688049, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/295618/interesting-tea-time-problem/295658
|
# Interesting tea-time problem
Problem A: Please fill each blank with a number such that all the statements are true:
0 appears in all these statements $____$ time(s)
1 appears in all these statements $____$ time(s)
2 appears in all these statements $____$ time(s)
3 appears in all these statements $____$ time(s)
4 appears in all these statements $____$ time(s)
5 appears in all these statements $____$ time(s)
6 appears in all these statements $____$ time(s)
7 appears in all these statements $____$ time(s)
8 appears in all these statements $____$ time(s)
9 appears in all these statements $____$ time(s)
Note: the entries are treated as numbers, not digits; e.g. 11 counts as one occurrence of 11, not as two occurrences of 1.
EDIT
How does the number of solutions behave with respect to the number of statements? I need a sketch of a proof.
-
2
For problem B, won't the smallest number always be 0? Also, does $11$ count as two appearances of $1$, or one appearance of $11$? – Erick Wong Feb 5 at 18:29
yes you are right, i will change to smallest positive number – mezhang Feb 5 at 19:50
@ErickWong, lets consider 11 as a number not as two digits. – mezhang Feb 5 at 19:54
## 3 Answers
Note that this carries the assumption that the number at the start of each statement is counted as well, not only the numbers written in the blanks, since there is a bit of interpretation there:
The first problem has at least one solution: filling the blanks for 0 through 9, in order, with 1732111211. There are seven 1s, since a 1 occurs in all but three of the lines (those for 2, 3 and 7). 2 appears 3 times, as it is the value filled in for the 7s and for the 3s, with the label 2 as its third appearance. 3 appears twice: it is the value filled in for the 2s, and its own line supplies the other appearance.
This would appear to generalize easily: if one wanted to take away the line with the 9s, then the number of 1s would drop to 6 and the 2 that was on the 7s would come down one row. This could be repeated a few times, I'd think: the lines for 0, 1, 5 and 6 would still give 4 appearances of a 1, so this can be done for at least 7 rows. One could add lines for 10, 11, and so on, which would increase the number of 1s, and then the 2 that is on the 7s would shift up.
-
For the revised B there are exactly 3 solutions consisting of single digits: $173311121291$, $174121121291$, $191311111391$. No single-digit solutions were found to the original B, and the solution to A is unique within this class.
-
Douglas Hofstadter wrote about these problems years ago. He suggested a useful approach is to fill in the blanks with something, then just count and refill, iterating to convergence. On the first one, I got trapped in a loop between $1741111121$ and $1821211211$ and for the second a loop between $254311311150$ and $262323111160$ where the largest is taken from the current iteration instead of the last.
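Here is a small Python sketch of that count-and-refill iteration for Problem A, under the convention (used in the answers above) that the number at the start of each statement is counted too; it verifies the fixed point 1732111211 and shows what happens from an all-1s start:

```python
def refill(blanks):
    """blanks[k] = number currently written in the statement for k (k = 0..9).
    Recount every k: its own label plus its occurrences among the blanks."""
    return [1 + blanks.count(k) for k in range(10)]

def iterate(start, max_steps=50):
    seen, cur = [], list(start)
    while cur not in seen and len(seen) < max_steps:
        seen.append(cur)
        cur = refill(cur)
    return cur, cur == refill(cur)   # (state reached, is it a fixed point?)

solution = [1, 7, 3, 2, 1, 1, 1, 2, 1, 1]   # i.e. 1732111211
print(refill(solution) == solution)          # True: a genuine solution
print(iterate([1] * 10))                     # this start settles on [1, 10, 1, ...];
                                             # other starts can get trapped in a loop
```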
-
1
For the first one $1732111211$ works. – Erick Wong Feb 5 at 19:05
Did he talk about uniqueness of solution? – mezhang Feb 5 at 19:52
@mezhang: No, and he mentions the possibility of getting stuck in a loop. – Ross Millikan Feb 5 at 19:54
@mezhang See my new answer (by brute force computation) – Erick Wong Feb 5 at 20:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9614351391792297, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/217854/no-probability-density-for-linearly-dependent-random-vector
|
# No probability density for linearly dependent random vector?
For a random vector, I know that if the covariance matrix is not invertible, the random vector doesn't have a pdf. However, is there an intuitive explanation of why linear dependence between the variables in a random vector implies non-existence of a pdf?
Thanks.
-
## 1 Answer
Linear dependence of a random variable $X$ with values in $\mathbb R^n$ means there exists a strict subspace $V$ of $\mathbb R^n$ such that $\mathbb P(X\in V)=1$. Since $V$ has Lebesgue measure zero, the distribution of $X$ has no density.
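A tiny numerical illustration of the answer (a toy example, not from the question): take $Y = 2X$, so the vector $(X, Y)$ lives on a line; its sample covariance matrix is (numerically) singular, and a line has two-dimensional Lebesgue measure zero, so no density can exist.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=100_000)
V = np.column_stack([X, 2 * X])        # every sample lies on the line y = 2x

cov = np.cov(V, rowvar=False)          # variables are the columns of V
print(cov)                             # approximately [[1, 2], [2, 4]]
print(np.linalg.det(cov))              # ~0: the covariance matrix is singular
```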
-
Thanks, I understand that, but is there an explanation with cdfs? I think that it would somehow mean that the $F_X(x_1,x_2,...)$ will not be differentiable. But how is it intuitively shown when the vector is linearly dependent? – ido Oct 21 '12 at 10:19
To me, intuition passes by the explanation in my answer more than by CDFs. One can try to translate things into CDFs but this is awkward. – Did Oct 21 '12 at 10:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9387118816375732, "perplexity_flag": "head"}
|
http://medlibrary.org/medwiki/Magnetic_dipole_moment
|
# Magnetic dipole moment
The magnetic moment of a magnet is a quantity that determines the force that the magnet can exert on electric currents and the torque that a magnetic field will exert on it. A loop of electric current, a bar magnet, an electron, a molecule, and a planet all have magnetic moments.
Both the magnetic moment and magnetic field may be considered to be vectors having a magnitude and direction. The direction of the magnetic moment points from the south to north pole of a magnet. The magnetic field produced by a magnet is proportional to its magnetic moment as well. More precisely, the term magnetic moment normally refers to a system's magnetic dipole moment, which produces the first term in the multipole expansion of a general magnetic field. The dipole component of an object's magnetic field is symmetric about the direction of its magnetic dipole moment, and decreases as the inverse cube of the distance from the object.
## Two definitions of moment
The preferred definition of a magnetic moment has changed over time. Before the 1930s, textbooks defined the moment using magnetic poles. Since then, most have defined it in terms of Ampèrian currents.[1]
### Magnetic pole definition
"Magnetic pole strength" redirects to here. For other uses of "magnetic pole" see Magnetic pole (disambiguation).
An electrostatic analogue for a magnetic moment: two opposing charges separated by a finite distance.
The sources of magnetic moments in materials can be represented by poles in analogy to electrostatics. Consider a bar magnet which has magnetic poles of equal magnitude but opposite polarity. Each pole is the source of magnetic force which weakens with distance. Since magnetic poles always come in pairs, their forces partially cancel each other because while one pole pulls, the other repels. This cancellation is greatest when the poles are close to each other i.e. when the bar magnet is short. The magnetic force produced by a bar magnet, at a given point in space, therefore depends on two factors: the strength p of its poles (the magnetic pole strength), and the vector ℓ separating them. The moment is defined as[1]
$\mathbf{m}=p\boldsymbol{\ell}.$
It points in the direction from South to North pole. The analogy with electric dipoles should not be taken too far because magnetic dipoles are associated with angular momentum (see Magnetic moment and angular momentum). Nevertheless, magnetic poles are very useful for magnetostatic calculations, particularly in applications to ferromagnets.[1] Practitioners using the magnetic pole approach generally represent the magnetic field by the irrotational field H, in analogy to the electric field E.
### Current loop definition
Moment μ of a planar current having magnitude I and enclosing an area S
Suppose a planar closed loop carries an electric current I and has vector area S (x, y, and z coordinates of this vector are the areas of projections of the loop onto the yz, zx, and xy planes). Its magnetic moment m, vector, is defined as:
$\mathbf{m}=I \mathbf{S}.$
By convention, the direction of the vector area is given by the right hand grip rule (curling the fingers of one's right hand in the direction of the current around the loop, when the palm of the hand is "touching" the loop's outer edge, and the straight thumb indicates the direction of the vector area and thus of the magnetic moment). [2]
If the loop is not planar, the moment is given as
$\mathbf{m}=\frac{I}{2}\int\mathbf{r}\times{\rm d}\mathbf{r}.$
where × is the vector cross product. In the most general case of an arbitrary current distribution in space, the magnetic moment of such a distribution can be found from the following equation:
$\mathbf{m}=\frac{1}{2}\int\mathbf{r}\times\mathbf{J}\,{\rm d}V,$
where r is the position vector pointing from the origin to the location of the volume element, and J is the current density vector at that location.
The above equation can be used for calculating a magnetic moment of any assembly of moving charges, such as a spinning charged solid, by substituting
$\mathbf{J}=\rho \mathbf{v},$
where ρ is the electric charge density at a given point and v is the instantaneous linear velocity of that point.
For example, the magnetic moment produced by an electric charge moving along a circular path is
$\mathbf{m}=\frac{1}{2}\, q\, \mathbf{r}\times\mathbf{v}$,
where r is the position of the charge q relative to the center of the circle and v is the instantaneous velocity of the charge.
Practitioners using the current loop model generally represent the magnetic field by the solenoidal field B, analogous to the electrostatic field D.
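As a numerical sanity check of the loop formula $\mathbf{m}=\frac{I}{2}\int\mathbf{r}\times{\rm d}\mathbf{r}$ (a sketch; the discretization of the curve is my own choice, not from the article), the moment of a flat circular loop should come out as $I\pi R^2$ along the loop's axis:

```python
import numpy as np

def loop_moment(points, current):
    """Approximate m = (I/2) * closed line integral of r x dr for a closed
    polyline given as an (N, 3) array of points (the last point joins the first)."""
    r = np.asarray(points, dtype=float)
    dr = np.roll(r, -1, axis=0) - r            # segment vectors around the loop
    return 0.5 * current * np.cross(r, dr).sum(axis=0)

R, I = 0.1, 2.0                                # 10 cm radius, 2 A (arbitrary example)
theta = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
circle = np.column_stack([R * np.cos(theta), R * np.sin(theta), np.zeros_like(theta)])

print(loop_moment(circle, I))                  # ~ [0, 0, 0.0628] = [0, 0, I*pi*R^2]
print(I * np.pi * R**2)                        # 0.06283... A*m^2
```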
#### Magnetic moment of a solenoid
3-D image of a solenoid
A generalization of the above current loop is a coil, or solenoid. Its moment is the vector sum of the moments of individual turns. If the solenoid has N identical turns (single-layer winding) and vector area S,
$\mathbf{m}=N I \mathbf{S}.$
## Units
The unit for magnetic moment is not a base unit in the International System of Units (SI) and it can be represented in more than one way. For example, in the current loop definition, the area is measured in square meters and I is measured in amperes, so the magnetic moment is measured in ampere–square meters (A·m²). In the equation for torque on a moment, the torque is measured in joules and the magnetic field in teslas, so the moment is measured in joules per tesla (J·T⁻¹). These two representations are equivalent:
1 A·m² = 1 J·T⁻¹.
In the CGS system, there are several different sets of electromagnetism units, of which the main ones are ESU, Gaussian, and EMU. Among these, there are two alternative (non-equivalent) units of magnetic dipole moment in CGS:
(ESU CGS) 1 statA·cm² = 3.33564095×10⁻¹⁴ (A·m² or J·T⁻¹)
and (more frequently used)
(EMU CGS and Gaussian-CGS) 1 erg/G = 1 abA·cm² = 10⁻³ (m²·A or J/T).
The ratio of these two non-equivalent CGS units (EMU/ESU) is equal exactly to the speed of light in free space, expressed in cm·s⁻¹.
All formulas in this article are correct in SI units, but in other unit systems, the formulas may need to be changed. For example, in SI units, a loop of current with current I and area A has magnetic moment I×A (see below), but in Gaussian units the magnetic moment is I×A/c.
## Magnetic moment and angular momentum
The magnetic moment has a close connection with angular momentum called the gyromagnetic effect. This effect is expressed on a macroscopic scale in the Einstein-de Haas effect, or "rotation by magnetization," and its inverse, the Barnett effect, or "magnetization by rotation."[3] In particular, when a magnetic moment is subject to a torque in a magnetic field that tends to align it with the applied magnetic field, the moment precesses (rotates about the axis of the applied field). This is a consequence of the angular momentum associated with the moment.
Viewing a magnetic dipole as a rotating charged sphere brings out the close connection between magnetic moment and angular momentum. Both the magnetic moment and the angular momentum increase with the rate of rotation of the sphere. The ratio of the two is called the gyromagnetic ratio, usually denoted by the symbol γ.[4] [5]
For a spinning charged solid with a uniform charge density to mass density ratio, the gyromagnetic ratio is equal to half the charge-to-mass ratio. This implies that a more massive assembly of charges spinning with the same angular momentum will have a proportionately weaker magnetic moment, compared to its lighter counterpart. Even though atomic particles cannot be accurately described as spinning charge distributions of uniform charge-to-mass ratio, this general trend can be observed in the atomic world, where the intrinsic angular momentum (spin) of each type of particle is a constant: a small half-integer times the reduced Planck constant ħ. This is the basis for defining the magnetic moment units of Bohr magneton (assuming charge-to-mass ratio of the electron) and nuclear magneton (assuming charge-to-mass ratio of the proton).
## Effects of an external magnetic field on a magnetic moment
### Force on a moment
See also: force between magnets
A magnetic moment in an externally-produced magnetic field has a potential energy U:
$U=-\mathbf{m}\cdot\mathbf{B}$
In a case when the external magnetic field is non-uniform, there will be a force, proportional to the magnetic field gradient, acting on the magnetic moment itself. There has been some discussion on how to calculate the force acting on a magnetic dipole. There are two expressions for the force acting on a magnetic dipole, depending on whether the model used for the dipole is a current loop or two monopoles (analogous to the electric dipole).[6] The force obtained in the case of a current loop model is
$\mathbf{F}_\text{loop}=\nabla \left(\mathbf{m}\cdot\mathbf{B}\right)$
In the case of a pair of monopoles being used (i.e. electric dipole model)
$\mathbf{F}_\text{dipole}=\left(\mathbf{m}\cdot \nabla \right) \mathbf{B}$
and one can be put in terms of the other via the relation
$\mathbf{F}_\text{loop}=\mathbf{F}_\text{dipole} + \mathbf{m}\times \left(\nabla \times \mathbf{B} \right)$
In all these expressions m is the dipole and B is the magnetic field at its position. Note that if there are no currents or time-varying electrical fields ∇ × B = 0 and the two expressions agree.
An electron, nucleus, or atom placed in a uniform magnetic field will precess with a frequency known as the Larmor frequency. See Resonance.
### Torque on a moment
The magnetic moment can also be defined as a vector relating the aligning torque on the object from an externally applied magnetic field to the field vector itself. The relationship is given by [3]
$\boldsymbol{\tau} = \mathbf{m} \times\mathbf{B}$
where τ is the torque acting on the dipole and B is the external magnetic field.
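A tiny numerical sketch tying the energy and torque relations of the last two subsections together (the numbers are arbitrary, purely for illustration):

```python
import numpy as np

m = np.array([0.0, 0.0, 0.5])      # magnetic moment, A*m^2 (arbitrary)
B = np.array([0.1, 0.0, 0.1])      # external field, T (arbitrary)

U   = -np.dot(m, B)                # U = -m.B       -> -0.05 J
tau = np.cross(m, B)               # tau = m x B    -> [0, 0.05, 0] N*m
print(U, tau)

# When m is parallel to B the torque vanishes and U is at its minimum:
print(np.cross(B, 3.0 * B))        # -> [0, 0, 0]
```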
## Magnetic dipoles
Main article: Magnetic dipole
See also: Dipole
A magnetic dipole is the limit of either a current loop or a pair of poles as the dimensions of the source are reduced to zero while keeping the moment constant. As long as these limits apply only to fields far from the sources, they are equivalent. However, the two models give different predictions for the internal field (see below).
### External magnetic field produced by a magnetic dipole moment
Magnetic field lines around a "magnetostatic dipole" the magnetic dipole itself is in the center and is seen from the side.
Any system possessing a net magnetic dipole moment m will produce a dipolar magnetic field (described below) in the space surrounding the system. While the net magnetic field produced by the system can also have higher-order multipole components, those will drop off with distance more rapidly, so that only the dipolar component will dominate the magnetic field of the system at distances far away from it.
The vector potential of magnetic field produced by magnetic moment m is
${\mathbf{A}}({\mathbf{r}})=\frac{\mu_{0}}{4\pi}\frac{{\mathbf{m}}\times{\mathbf{r}}}{r^{3}},$
and magnetic flux density is
$\mathbf{B}({\mathbf{r}})=\nabla\times{\mathbf{A}}=\frac{\mu_{0}}{4\pi}\left(\frac{3\mathbf{r}(\mathbf{m}\cdot\mathbf{r})}{r^{5}}-\frac{{\mathbf{m}}}{r^{3}}\right).$
Alternatively one can obtain the scalar potential first from the magnetic pole perspective,
$\psi({\mathbf{r}})=\frac{{\mathbf{m}}\cdot{\mathbf{r}}}{4\pi r^{3}},$
and hence magnetic field strength is
${\mathbf{H}}({\mathbf{r}})=-\nabla\psi=\frac{1}{4\pi}\left(\frac{3\mathbf{r}(\mathbf{m}\cdot\mathbf{r})}{r^{5}}-\frac{{\mathbf{m}}}{r^{3}}\right).$
The magnetic field of an ideal magnetic dipole is depicted on the left.
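For completeness, here is a short sketch implementing the flux-density expression above for points away from the source (SI units; it deliberately ignores the delta-function term discussed in the next subsection, so it is only valid for $\mathbf r \neq 0$; the Earth-like moment used in the example is an assumed round number):

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability, T*m/A

def dipole_B(m, r):
    """Magnetic flux density B(r) of a point dipole m sitting at the origin, r != 0."""
    m = np.asarray(m, dtype=float)
    r = np.asarray(r, dtype=float)
    rn = np.linalg.norm(r)
    return MU0 / (4.0 * np.pi) * (3.0 * r * np.dot(m, r) / rn**5 - m / rn**3)

# Example: a roughly Earth-like moment of 8e22 A*m^2, field at one Earth radius
# on the magnetic equator; expect a few times 1e-5 T.
print(dipole_B([0.0, 0.0, 8.0e22], [6.371e6, 0.0, 0.0]))   # ~[0, 0, -3.1e-5] T
```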
### Internal magnetic field of a dipole
The magnetic field of a current loop
The two models for a dipole (current loop and magnetic poles) give the same predictions for the magnetic field far from the source. However, inside the source region they give different predictions. The magnetic field between poles (see figure for Magnetic pole definition) is in the opposite direction to the magnetic moment (which points from the negative charge to the positive charge), while inside a current loop it is in the same direction (see the figure to the right). Clearly, the limits of these fields must also be different as the sources shrink to zero size. This distinction only matters if the dipole limit is used to calculate fields inside a magnetic material.[1]
If a magnetic dipole is formed by making a current loop smaller and smaller, but keeping the product of current and area constant, the limiting field is
$\mathbf{B}(\mathbf{x})=\frac{\mu_0}{4\pi}\left[\frac{3\mathbf{n}(\mathbf{n}\cdot \mathbf{m})-\mathbf{m}}{|\mathbf{x}|^3} + \frac{8\pi}{3}\mathbf{m}\delta(\mathbf{x})\right].$
Unlike the expressions in the previous section, this limit is correct for the internal field of the dipole.[1][7]
If a magnetic dipole is formed by taking a "north pole" and a "south pole", bringing them closer and closer together but keeping the product of magnetic pole-charge and distance constant, the limiting field is[1]
$\mathbf{H}(\mathbf{x}) =\frac{1}{4\pi}\left[\frac{3\mathbf{n}(\mathbf{n}\cdot \mathbf{m})-\mathbf{m}}{|\mathbf{x}|^3} - \frac{4\pi}{3}\mathbf{m}\delta(\mathbf{x})\right].$
These fields are related by $\mathbf{B}= \mu_0\left(\mathbf{H}+\mathbf{M}\right)$, where
$\mathbf{M}(\mathbf{x}) = \mathbf{m}\delta(\mathbf{x})$
is the magnetization.
### Forces between two magnetic dipoles
See also: Magnetic dipole-dipole interaction
As discussed earlier, the force exerted by a dipole loop with moment m1 on another with moment m2 is
$\mathbf{F} =\nabla \left(\mathbf{m}_2\cdot\mathbf{B}_1\right),$
where B1 is the magnetic field due to moment 1. The result of calculating the gradient is[8][9]
$\mathbf{F}(\mathbf{r}, \mathbf{m}_1, \mathbf{m}_2) = \frac{3 \mu_0}{4 \pi r^4}\left[\mathbf{m}_2 (\mathbf{m}_1\cdot \hat{\mathbf{r}}) + \mathbf{m}_1(\mathbf{m}_2\cdot \hat{\mathbf{r}}) + \hat{\mathbf{r}}(\mathbf{m}_1\cdot\mathbf{m}_2) - 5\hat{\mathbf{r}} (\mathbf{m}_1\cdot \hat{\mathbf{r}})(\mathbf{m}_2\cdot \hat{\mathbf{r}})\right],$
where $\hat{\mathbf{r}}$ is the unit vector pointing from magnet 1 to magnet 2 and r is the distance. An equivalent expression is[9]
$\mathbf{F} = \frac {3 \mu_0} {4 \pi r^4} \left[ (\hat{\mathbf{r}} \times \mathbf{m}_1) \times \mathbf{m}_2 + (\hat{\mathbf{r}} \times \mathbf{m}_2) \times \mathbf{m}_1 - 2 \hat{\mathbf{r}}(\mathbf{m}_1 \cdot \mathbf{m}_2) + 5 \hat{\mathbf{r}} (\hat{\mathbf{r}} \times \mathbf{m}_1) \cdot (\hat{\mathbf{r}} \times \mathbf{m}_2) \right].$
The force acting on m1 is in the opposite direction.
The torque of magnet 1 on magnet 2 is
$\boldsymbol{\tau}=\mathbf{m}_2 \times \mathbf{B}_1.$
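A hedged sketch of the first force expression above (force on dipole 2, with $\hat{\mathbf r}$ pointing from dipole 1 to dipole 2); the example values are arbitrary and only meant to show that two coaxial, parallel moments attract:

```python
import numpy as np

MU0 = 4e-7 * np.pi

def dipole_dipole_force(r_vec, m1, m2):
    """Force on dipole 2 due to dipole 1; r_vec points from dipole 1 to dipole 2."""
    r_vec, m1, m2 = (np.asarray(v, dtype=float) for v in (r_vec, m1, m2))
    r = np.linalg.norm(r_vec)
    n = r_vec / r
    return (3.0 * MU0 / (4.0 * np.pi * r**4)) * (
        m2 * np.dot(m1, n) + m1 * np.dot(m2, n) + n * np.dot(m1, m2)
        - 5.0 * n * np.dot(m1, n) * np.dot(m2, n)
    )

m = np.array([0.0, 0.0, 1.0])                      # two identical 1 A*m^2 moments
F_on_2 = dipole_dipole_force([0.0, 0.0, 0.1], m, m)
print(F_on_2)   # ~[0, 0, -6e-3] N: points back toward dipole 1, i.e. attraction
```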
## Examples of magnetic moments
### Two kinds of magnetic sources
Fundamentally, contributions to any system's magnetic moment may come from sources of two kinds: (1) motion of electric charges, such as electric currents, and (2) the intrinsic magnetism of elementary particles, such as the electron.
Contributions due to the sources of the first kind can be calculated from knowing the distribution of all the electric currents (or, alternatively, of all the electric charges and their velocities) inside the system, by using the formulas below. On the other hand, the magnitude of each elementary particle's intrinsic magnetic moment is a fixed number, often measured experimentally to a great precision. For example, any electron's magnetic moment is measured to be −9.284764×10⁻²⁴ J/T.[10] The direction of the magnetic moment of any elementary particle is entirely determined by the direction of its spin (the minus in front of the value above indicates that any electron's magnetic moment is antiparallel to its spin).
The net magnetic moment of any system is a vector sum of contributions from one or both types of sources. For example, the magnetic moment of an atom of hydrogen-1 (the lightest hydrogen isotope, consisting of a proton and an electron) is a vector sum of the following contributions:
1. the intrinsic moment of the electron,
2. the orbital motion of the electron around the proton,
3. the intrinsic moment of the proton.
Similarly, the magnetic moment of a bar magnet is the sum of the intrinsic and orbital magnetic moments of the unpaired electrons of the magnet's material.
### Magnetic moment of an atom
For an atom, individual electron spins are added to get a total spin, and individual orbital angular momenta are added to get a total orbital angular momentum. These two then are added using angular momentum coupling to get a total angular momentum. The magnitude of the atomic dipole moment is then[11]
$m_\text{Atom} = g_J \mu_B \sqrt{J(J+1)}$
where J is the total angular momentum quantum number, gJ is the Landé g-factor, and μB is the Bohr magneton. The component of this magnetic moment along the direction of the magnetic field is then[12]
$m_\text{Atom}(z) = -m g_J \mu_B$
where m is called the magnetic quantum number or the equatorial quantum number, which can take on any of 2J+1 values:[13]
$-J, -(J-1) \cdots 0 \cdots +(J-1), +J$.
The negative sign occurs because electrons have negative charge.
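As a worked number (a sketch; the quantum numbers below are just an assumed example, and the Bohr magneton value is quoted from memory): for a level with $J=2$ and $g_J=1.2$ the two formulas above give

```python
MU_B = 9.274e-24        # Bohr magneton in J/T (approximate, from memory)

J, g_J = 2, 1.2         # example quantum numbers, assumed for illustration

m_atom = g_J * MU_B * (J * (J + 1)) ** 0.5           # magnitude: ~2.7e-23 J/T
m_z = [-m * g_J * MU_B for m in range(-J, J + 1)]    # the 2J+1 allowed projections

print(m_atom)
print(len(m_z))         # 5 = 2J + 1
```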
Due to the angular momentum, the dynamics of a magnetic dipole in a magnetic field differs from that of an electric dipole in an electric field. The field does exert a torque on the magnetic dipole tending to align it with the field. However, torque is proportional to rate of change of angular momentum, so precession occurs: the direction of spin changes. This behavior is described by the Landau-Lifshitz-Gilbert equation:[14][15]
$\frac{1}{\gamma} \frac{{\rm d}\mathbf{m}}{{\rm d}t} = \mathbf{m \times H_\text{eff}} - \frac{\lambda}{\gamma m}\mathbf{m} \times \frac{{\rm d}\mathbf{m}}{{\rm d}t}$
where $\scriptstyle\gamma$ is gyromagnetic ratio, m is magnetic moment, λ is damping coefficient and Heff is effective magnetic field (the external field plus any self-field). The first term describes precession of the moment about the effective field, while the second is a damping term related to dissipation of energy caused by interaction with the surroundings.
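The precession-plus-damping behaviour is easy to see numerically. Below is a rough explicit-Euler sketch of a Landau-Lifshitz-type equation $\dot{\mathbf m} = -\gamma\,\mathbf m\times\mathbf H_\text{eff} - \lambda\,\mathbf m\times(\mathbf m\times\mathbf H_\text{eff})$, an equivalent rearrangement of the implicit form above; the constants, time step, and field are arbitrary illustrative values.

```python
import numpy as np

def ll_step(m, H, gamma=1.0, lam=0.1, dt=1e-3):
    """One explicit-Euler step of dm/dt = -gamma m x H - lam m x (m x H)."""
    dmdt = -gamma * np.cross(m, H) - lam * np.cross(m, np.cross(m, H))
    m = m + dt * dmdt
    return m / np.linalg.norm(m)      # renormalize |m|; plain Euler does not conserve it

m = np.array([1.0, 0.0, 0.1])
m /= np.linalg.norm(m)
H = np.array([0.0, 0.0, 1.0])         # effective field along z

for _ in range(50_000):
    m = ll_step(m, H)

print(m)   # close to [0, 0, 1]: the moment spirals (precesses) in toward H
```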
### Magnetic moment of an electron
See also: Anomalous magnetic dipole moment
Electrons and many elementary particles also have intrinsic magnetic moments, an explanation of which requires a quantum mechanical treatment and relates to the intrinsic angular momentum of the particles as discussed in the article electron magnetic dipole moment. It is these intrinsic magnetic moments that give rise to the macroscopic effects of magnetism, and other phenomena, such as electron paramagnetic resonance.
The magnetic moment of the electron is
$\mathbf{m}_\text{S} = -\frac{g_\text{S} \mu_\text{B} \mathbf{S}}{\hbar},$
where μB is the Bohr magneton, S is electron spin, and the g-factor gS is 2 according to Dirac's theory, but due to quantum electrodynamic effects it is slightly larger in reality: 2.002 319 304 36. The deviation from 2 is known as the anomalous magnetic dipole moment.
Again it is important to notice that m is a negative constant multiplied by the spin, so the magnetic moment of the electron is antiparallel to the spin. This can be understood with the following classical picture: if we imagine that the spin angular momentum is created by the electron mass spinning around some axis, the electric current that this rotation creates circulates in the opposite direction, because of the negative charge of the electron; such current loops produce a magnetic moment which is antiparallel to the spin. Hence, for a positron (the anti-particle of the electron) the magnetic moment is parallel to its spin.
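Plugging numbers into the expression above (a sketch; the Bohr magneton value is quoted from memory, so treat the last digits loosely): for $S_z=\hbar/2$ the moment's magnitude along the quantization axis is $g_\text{S}\mu_\text{B}/2$, which reproduces the $9.28\times10^{-24}\ \mathrm{J/T}$ figure quoted earlier.

```python
MU_B = 9.274e-24          # Bohr magneton, J/T (approximate, from memory)
G_S  = 2.00231930436      # electron g-factor, as quoted in the text

print(G_S * MU_B / 2.0)   # ~9.285e-24 J/T, matching the measured value above
```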
### Magnetic moment of a nucleus
See also: Nuclear magnetic moment
The nuclear system is a complex physical system consisting of nucleons, i.e., protons and neutrons. The quantum mechanical properties of the nucleons include the spin among others. Since the electromagnetic moments of the nucleus depend on the spin of the individual nucleons, one can look at these properties with measurements of nuclear moments, and more specifically the nuclear magnetic dipole moment.
Most common nuclei exist in their ground state, although nuclei of some isotopes have long-lived excited states. Each energy state of a nucleus of a given isotope is characterized by a well-defined magnetic dipole moment, the magnitude of which is a fixed number, often measured experimentally to a great precision. This number is very sensitive to the individual contributions from nucleons, and a measurement or prediction of its value can reveal important information about the content of the nuclear wave function. There are several theoretical models that predict the value of the magnetic dipole moment and a number of experimental techniques aiming to carry out measurements in nuclei along the nuclear chart.
### Magnetic moment of a molecule
Any molecule has a well-defined magnitude of magnetic moment, which may depend on the molecule's energy state. Typically, the overall magnetic moment of a molecule is a combination of the following contributions, in the order of their typical strength:
• magnetic moments due to its unpaired electron spins (paramagnetic contribution), if any
• orbital motion of its electrons, which in the ground state is often proportional to the external magnetic field (diamagnetic contribution)
• the combined magnetic moment of its nuclear spins, which depends on the nuclear spin configuration.
#### Examples of molecular magnetism
• Oxygen molecule, O₂, exhibits strong paramagnetism, due to unpaired spins of its outermost two electrons.
• Carbon dioxide molecule, CO₂, mostly exhibits diamagnetism, a much weaker magnetic moment of the electron orbitals that is proportional to the external magnetic field. In the rare instance when a magnetic isotope, such as ¹³C or ¹⁷O, is present, it will contribute its nuclear magnetism to the molecule's magnetic moment.
• Hydrogen molecule, H₂, in a weak (or zero) magnetic field exhibits nuclear magnetism, and can be in a para- or an ortho- nuclear spin configuration.
### Elementary particles
In atomic and nuclear physics, the symbol μ represents the magnitude of the magnetic moment, often measured in Bohr magnetons or nuclear magnetons, associated with the intrinsic spin of the particle and/or with the orbital motion of the particle in a system. Values of the intrinsic magnetic moments of some particles are given in the table below:
Intrinsic magnetic moments and spins of some elementary particles [16]

| Particle | Magnetic dipole moment in SI units (10⁻²⁷ J·T⁻¹) | Spin quantum number (dimensionless) |
|----------|--------------------------------------------------|-------------------------------------|
| electron | -9284.764 | 1/2 |
| proton | 14.106067 | 1/2 |
| neutron | -9.66236 | 1/2 |
| muon | -44.904478 | 1/2 |
| deuteron | 4.3307346 | 1 |
| triton | 15.046094 | 1/2 |
| helion | -10.746174 | 1/2 |
| alpha particle | 0 | 0 |
For relation between the notions of magnetic moment and magnetization see magnetization.
## References and notes
1. Feynman, Richard P.; Leighton, Robert B.; Sands, Matthew (2006). The Feynman Lectures on Physics, Vol. 2. ISBN 0-8053-9045-6.
2. B. D. Cullity, C. D. Graham (2008). Introduction to Magnetic Materials (2nd ed.). Wiley-IEEE Press. p. 103. ISBN 0-471-47741-9.
3. Uwe Krey, Anthony Owen (2007). Basic Theoretical Physics. Springer. pp. 151–152. ISBN 3-540-36804-3.
4. Richard B. Buxton (2002). Introduction to Functional Magnetic Resonance Imaging. Cambridge University Press. p. 136. ISBN 0-521-58113-3.
5. Boyer, Timothy H. (1988). "The Force on a Magnetic Dipole". American Journal of Physics 56 (8): 688–692. Bibcode:1988AmJPh..56..688B. doi:10.1119/1.15501.
6. Furlani, Edward P. (2001). Permanent Magnet and Electromechanical Devices: Materials, Analysis, and Applications. Academic Press. p. 140. ISBN 0-12-269951-3.
7. K.W. Yung, P.B. Landecker, D.D. Villani (1998). "An Analytic Solution for the Force between Two Magnetic Dipoles" (PDF). Retrieved November 24, 2012.
8. RJD Tilley (2004). Understanding Solids. John Wiley and Sons. p. 368. ISBN 0-470-85275-5.
9. Paul Allen Tipler, Ralph A. Llewellyn (2002). Modern Physics (4th ed.). Macmillan. p. 310. ISBN 0-7167-4345-0.
10. JA Crowther (2007). Ions, Electrons and Ionizing Radiations (reprint of the 1934 Cambridge 6th ed.). Rene Press. p. 277. ISBN 1-4067-2039-9.
11. Stuart Alan Rice (2004). Advances in Chemical Physics. Wiley. pp. 208 ff. ISBN 0-471-44528-2.
12. Marcus Steiner (2004). Micromagnetism and Electrical Resistance of Ferromagnetic Electrodes for Spin Injection Devices. Cuvillier Verlag. p. 6. ISBN 3-86537-176-0.
13. "Search results matching 'magnetic moment'". CODATA internationally recommended values of the Fundamental Physical Constants. National Institute of Standards and Technology. Retrieved 11 May 2012.
## Further reading
• Brown, Jr., William Fuller (1962). Magnetostatic Principles in Ferromagnetism. North-Holland.
• Jackson, John David (1975). Classical Electrodynamics (2nd ed.). New York: Wiley. ISBN 047143132X.
Content in this section is authored by an open community of volunteers and is not produced by, reviewed by, or in any way affiliated with MedLibrary.org. Licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License, using material from the Wikipedia article on "Magnetic dipole moment", available in its original form here:
http://en.wikipedia.org/w/index.php?title=Magnetic_dipole_moment
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 31, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8606914281845093, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-algebra/189839-deduce-lagrange-s-theorem.html
|
# Thread:
1. ## Deduce Lagrange's Theorem
Let H be a group acting on a set A. Prove that the relation ~ on A defined by a~b iff $a=hb$ for some h in H is an equivalence relation.
I have already shown this.
Let H be a subgroup of the finite group G and let H act on G by left multiplication. Let x exist in G and let O be the orbit of x under the action of H. Prove that the map $H\to O$ defined by $h\mapsto hx$ is a bijection.
I have already shown this too.
From these two statements, deduce Lagrange's Theorem: if G is a finite group and $H\leq G$, then $|H|$ divides $|G|$.
I understand Lagrange's Theorem. I need an explanation on how I can deduce his theorem from the above.
2. ## Re: Deduce Lagrange's Theorem
Originally Posted by dwsmith
I understand Lagrange's Theorem. I need an explanation on how I can deduce his theorem from the above.
Hint All the equivalence classes have the same cardinality.
3. ## Re: Deduce Lagrange's Theorem
Originally Posted by FernandoRevilla
Hint All the equivalence classes have the same cardinality.
I know that since each one is a coset but I still don't understand.
4. ## Re: Deduce Lagrange's Theorem
if every coset has the same size, then they all have the same size as the coset containing the identity, which is H.
so the coset containing the identity, H, has cardinality |H|, as does every coset Hg.
so, summing over our partition of G:
|G| = |Hg1| + |Hg2| +....+ |Hgk|, where k is the number of distinct cosets (one of them being H itself, the coset of the identity). and....?
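To make the partition concrete, here is a small Python sketch (my own illustration, not part of the thread): it lists the cosets of $H=\{0,4,8\}$ in $G=\mathbb Z_{12}$, checks that each has size $|H|$, and confirms $|G| = (\text{number of cosets})\cdot|H|$.

```python
# Cosets of H = {0, 4, 8} in G = Z_12 under addition mod 12.
G = list(range(12))
H = [0, 4, 8]

cosets = {frozenset((h + g) % 12 for h in H) for g in G}

print(sorted(sorted(c) for c in cosets))       # 4 pairwise-disjoint cosets of size 3
assert all(len(c) == len(H) for c in cosets)   # every coset has |H| elements
assert len(cosets) * len(H) == len(G)          # Lagrange: |G| = [G:H] * |H|
```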
5. ## Re: Deduce Lagrange's Theorem
Originally Posted by dwsmith
I know that since each one is a coset but I still don't understand.
The set $G$ has the form $G=\cup_{i}C_i$ and the family of cosets $\mathcal{C}=\{C_i\}$ is pairwise disjoint with $|C_i|=|H|$. So, $|G|=|\mathcal{C}||H|$ .
6. ## Re: Deduce Lagrange's Theorem
and the pairwise-disjointness arises because being in the same coset is an equivalence relation (what you proved at the beginning), and equivalence classes partition the set they are on.
(equivalence classes are "quotient sets").
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9320112466812134, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/36150/what-is-the-meaning-of-following-expresion-c-frac-delta-qdt-mathematicly/36153
|
# What is the meaning of the following expression $C=\frac{\delta Q}{dT}$ mathematically
Our professor raised the following question during our lecture in Statistical Physics (even though it's related to Thermodynamics):
Many textbooks (even Wikipedia) write wrong expressions (from a mathematical point of view) for the heat capacity coefficient, and the right way to write it is as follows: $$C=\frac{\delta Q}{dT}$$ But as we see it is neither a usual differential, nor a functional derivative, so the question is: what is this?
I couldn't find the answer in math books, and it is true that many textbooks write it in very different ways, mixing exact and inexact differentials, so does anybody have a clue what the right expression for $C$ is, and why, from a mathematical point of view?
-
## 5 Answers
I) The use of $\delta$ in the derivative
$$C~=~\frac{\delta Q}{dT}$$
is because in thermodynamics, heat $Q$ is not a state function. In particular, the differential $\delta Q$ is inexact.
II) In detail, the heat capacity $C$ is not obtained by differentiation of some ordinary function wrt. temperature $T$. Rather it should be viewed as a ratio
$$C~=~\frac{Q}{\Delta T}$$
where $\Delta T$ is sufficiently small (as seen from all physically relevant purposes).
-
+1 Your state function reference is good... – Killercam Sep 11 '12 at 13:29
Maybe your answer is the clearest one among the others, but it doesn't address the main issue: why don't we use a functional derivative? That $Q$ is not a state function is known; why not use $$\delta T$$ instead? And what is the meaning of dividing a variation by an infinitesimal differential; what is this, mathematically/geometrically? – TMS Sep 11 '12 at 14:24
You can take the expression $C=\frac{\delta Q}{\mathrm dT}$ as the infinitesimal version of $$C=\frac{Q}{\Delta T}$$ or a formal rewrite of $$\delta Q=C\mathrm dT$$ which, however, doesn't make sense in the language of differential forms as division by the form $\mathrm dT$ is not defined.
Let's take a look at the meaning of $\delta Q=C\mathrm dT$ assuming differential forms:
By the second law of thermodynamics, $\delta Q = T\mathrm dS$. The $\delta$ has no special meaning, it's just a reminder that we're dealing with a differential form and not a function (we can't write $\mathrm dQ$ here as the form is not exact, ie not the differential of some state function $Q$).
Thermodynamical systems are in general at least two-dimensional and allow different choices of coordinates, so assume $S$ is represented by a function of temperature and another variable, eg $S=S(V,T)$ or $S=S(P,T)$.
The definition of heat capacity from above assumes that $S$ is a function of $T$ alone as the right-hand side doesn't contain terms with $\mathrm dV$ or $\mathrm dP$. In general, we thus need a further restriction on permitted processes, like $V=\mathrm{const}$ or $P=\mathrm{const}$, which yields $C_V$ or $C_P$ respectively.
Under this assumption, we have $$\mathrm dS = \frac{\partial S}{\partial T} \mathrm dT$$ ie $$C\mathrm dT = \delta Q = T\frac{\partial S}{\partial T} \mathrm dT$$ and finally $$C = T\frac{\partial S}{\partial T}$$
A further note for the more mathematically inclined:
Geometrically, the restrictions $V=\mathrm{const}$ or $P=\mathrm{const}$ define a 1-dimensional submanifold where the pullback of $\delta Q$ via the natural embedding will be (locally) exact. In fact, this pullback needs to be included to make the equations above conform to the notation used in differential geometry:
Let $\nu$ be our embedding with $\mathrm d\tau = \nu^*\mathrm dT$ non-degenerate. There's a function $C_\nu$ and (as $\nu^*\delta Q$ is closed) another function $Q_\nu$ (or rather a family of locally defined functions) with $$\nu^*\delta Q = C_\nu \mathrm d\tau = \mathrm dQ_\nu$$ that is $$C_\nu = \frac{\partial Q_\nu}{\partial\tau}$$ In case of $V=\mathrm{const}$, $Q_\nu$ is the pullback of the internal energy $U$, whereas in case of $P=\mathrm{const}$, $Q_\nu$ is the pullback of the Enthalpy $H$.
In physicist's notation this reads $$C_V = \left(\frac{\partial U}{\partial T}\right)_V \\ C_P = \left(\frac{\partial H}{\partial T}\right)_P$$
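As a concrete check of the relation $C = T\,\partial S/\partial T$ derived above (a sketch using the monatomic ideal gas, for which $S = Nk\left(\tfrac32\ln T + \ln V\right) + \text{const}$ up to terms that do not depend on $T$; only the $T$-dependence matters here):

```python
import sympy as sp

T, V, N, k = sp.symbols('T V N k', positive=True)

S = N * k * (sp.Rational(3, 2) * sp.ln(T) + sp.ln(V))   # entropy up to T-independent terms
C_V = sp.simplify(T * sp.diff(S, T))                    # C = T * dS/dT at constant V

print(C_V)   # 3*N*k/2, i.e. the familiar C_V = (3/2) N k (= (3/2) n R on a molar basis)
```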
-
Following your logic, i see that one must write it as follows:$$C=\frac{\delta Q}{\partial T}\:,\: C_{p}=\left(\frac{\delta Q}{dT}\right)_{p}$$Do you agree? – TMS Sep 11 '12 at 15:30
(I wrote the second one in dT because this expression assumes that P=Const , while the first one because Q(T,P) , if true the question remains why not to write functional derivatives? – TMS Sep 11 '12 at 15:36
@TMS: you can find arguments for most of these notations and even other ones like $$C_P=\frac{\mathrm dQ_V}{\mathrm dT}$$ best find a bunch of physicists and take a vote ;) – Christoph Sep 11 '12 at 17:18
The heat capacity can change with $T$, making this a non-exact differential. This is also the case with other equations in thermodynamics. The heat capacity you reference here of course also varies with pressure and volume and this is what leads to the following definitions of heat capacity at constant pressure $C_{p}$ and constant volume $C_{v}$.
$C_{p} = (\frac{\partial Q}{\partial T})_{p}$
and
$C_{v} = (\frac{\partial Q}{\partial T})_{v}$
I would interpret your original equation simply as
$Q = C\,\Delta T$
That is $Q$ is the amount of heat required by a substance with heat capacity $C$, to change the substance's temperature by $\Delta T$.
I hope this helps.
Extension to Address Comments:
Of course having a partial derivative makes sense in this context. Let's take the constant volume case; when heat is added to a substance (fluid for example) at constant volume no work is done, so the heat added equals the increase in the internal energy of the fluid. Writing $Q_{v}$ for the heat added at constant volume (like in the equations above), we have
$Q_{v} = C_{v} \Delta T$
since $W = 0$ (work), we can write
$Q_{v} = \Delta U + W = \Delta U$.
Thus,
$\Delta U = C_{v} \Delta T$.
Taking the limit as $\Delta T$ approaches zero we find
$\mathrm{d} U = C_{v} \mathrm{d}T$
-
+1: more or less the same things I was getting at – Christoph Sep 11 '12 at 13:35
I was a bit too quick with my upvote - using lower-case letters normally means specific heat capacities and your definitions are bogus - $Q$ is not a state function, thus taking partial derivatives makes no sense... – Christoph Sep 11 '12 at 13:54
See extension to answer. This does make sense in this context... – Killercam Sep 11 '12 at 14:15
thanks for the clarification, it was a thinko on my part (on restriction to a 1-dim submanifold, any form is locally exact, so of course there's a function $Q$ on the submanifold); however, one should note that in case of $V=\mathrm{const}$, $Q$ will be the pullback of the internal energy $U$, whereas in case of $P=\mathrm{const}$ it'll be the pullback of the Enthalpy $H$, so using different symbols $Q_V$ and $Q_P$ (as you did) is probably a good idea; I should add something about that to my own answer... – Christoph Sep 11 '12 at 15:21
Agreed. Thanks for highlighting my error. All the best... – Killercam Sep 11 '12 at 15:31
You see, as was stated $Q$ is not a characteristic of the system (not a state function). It depends on the process. The straightforward way to implement the notion of process is to view $Q$ as a function of time. Thus you could view $C$ as:
$$C(t) = \frac{dQ/dt}{dT/dt}$$
In the case of a reversible process with no exchange of matter with the environment:
$$\frac{dQ}{dt} = T \frac{dS}{dt}$$
Let's consider a monoatomic ideal gas, and a process with constant volume and speeds that allow us to use equilibrium relations and assume reversibility:
$$C_V = \frac{T \; dS/dt}{dT/dt} = \frac{dU/dt}{dT/dt} = \frac{\frac{3}{2} N R \; dT/dt}{dT/dt} = \frac{3}{2} N R$$
To sum up, you can always view $dQ$ as a full differential, but on time, treating $Q$ as a function of time. I learned this trick from the book "Modern Thermodynamics: From Heat Engines to Dissipative Structures" by Kondepudi and Prigogine.
Note, that's perfectly strict mathematically --- just the quotient of two derivatives, no differential forms or some slippery reasoning with infinitesimals.
-
I don't have that book, but I suspect that making Q(t) makes it exact differential or Path function in general, because we still have infinity ways on how we heat up (for example) our system. – TMS Sep 11 '12 at 16:09
there isn't much about it in the book, it is just said how to get rid of $\delta$ ( "inexact differential" as you call them) by viewing functions of time (thus turning them into "exact differential"). – Yrogirg Sep 11 '12 at 16:13
Most of the given answers already describe how to reach the mathematical definition and I can only add a more phenomenological approach.
Experimentally specific heat is defined as the coefficient of thermal energy input to temperature rise of an adiabatic system (with either constant pressure or volume)
$$C = \frac{\Delta Q}{\Delta T},$$ and we also know from experiments that $C = C(T)$. So as not to average over a large interval, we have to reduce the thermal energy as much as possible and measure the response, the temperature increase: $$C(T) = \lim\limits_{\Delta Q\rightarrow 0} \frac{\Delta Q}{\Delta T}$$ If there were a unique function $Q(T,C)$ we would write this as the derivative $$C(T) = \lim\limits_{h\rightarrow 0} \frac{Q(T+h, C)-Q(T, C)}{h} =\frac{dQ}{dT},$$ but it turns out that it matters how the thermal energy is put into the system. This means that there is no unique function $$Q = \int dQ,$$ so we fall back on the inexact differential, denoted by $\delta Q$, and define the specific heat as $$C(T) =\frac{\delta Q}{dT}$$
-
I will repeat myself: I know how to derive it; I am asking what the meaning of such a structure is and whether it is right mathematically, because I found no math book that uses such structures (a variational derivative over an exact differential) – TMS Sep 12 '12 at 16:51
@TMS: Ok, then the question is more what is an inexact differential, as you can also express $\delta Q$ as $f(T,C)dT+g(T,C)dC$ but there is no function $Q(T,C)$, where $f=\partial Q/\partial T$ and $g=\partial Q/\partial C$? – Alexander Sep 12 '12 at 17:34
Sorry, I didn't understand your comment. – TMS Sep 12 '12 at 17:57
– Alexander Sep 12 '12 at 18:13
Bearing in mind that there are huge math books on variational methods, forms, etc., I suspect that it is not as simple as you state. – TMS Sep 12 '12 at 18:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 78, "mathjax_display_tex": 22, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9469529390335083, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/tagged/charge+voltage
|
# Tagged Questions
1answer
24 views
### About the electrostatic voltage
What's the difference between electrostatic voltage and normal voltage, like the battery's voltage. How to calculate the charge on a charged plate if we knew its electrostatic voltage?
1answer
129 views
### Electron volt and Voltage
Voltage is the work done per unit charge. Given by: V = W/q Electron volt is the maximum kinetic energy gained by the electron in falling through a potential difference of 1 volt. Given by: K.E ...
1answer
119 views
### About voltage and charge of van de graff generator
I have read that in case of Van de graff generator $V=kQ/r$ where $r$ is radius of the sphere. If that's the case, does the same voltage results in bigger charges in bigger radii?
1answer
46 views
### Charge of an electrolytic capacitors
I can't understand the electrolytic capacitors, when a capacitor has a capacitance of 100 microfarads, does that mean that when it is charged with 100 volts will the charge of the plate be 0.01 ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8896408677101135, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/50154?sort=votes
|
## Reachability for Markov process
Let $X$ be a Markov process (in continuous or discrete time) and define an event $$R(T,A) = (\exists t\leq T: X_t \in A).$$ I have seen in one paper that $$\Pr[R(\infty,A)] = \sup\limits_{\tau} \mathbb{E}[I_A(X_\tau)],$$ where $I_A(x) = 1$ for $x\in A$ and $I_A(x)=0$ otherwise is an indicator function and supremum is taken over all stopping times $\tau:\Pr[\tau<\infty] = 1$.
Unfortunately the author did not provide a proof it, so I wonder is it right (and so obvious)? Also, does it imply that $$\Pr[R(T,A)] = \sup\limits_{\tau\leq T} \mathbb{E}[I_A(X_\tau)]?$$
-
It says that if you blindfold someone moving at random and tell him when to stop and look, his chance to see a cow will be less than his chance to see it if he roams forever with open eyes but you can come close if you tell him to stop the first time you see a cow nearby or after a very long time whichever comes first. I doubt I can make it more obvious than that. The supremum can actually be taken over all stop-functions, but stopping times are enough to achieve it. As always, there are some measurability issues with continuous time, but, I guess, they are taken care of in the paper. – fedja Dec 22 2010 at 15:51
It's not a proof ) – Ilya Dec 22 2010 at 16:01
What particular phrase do you have difficulty with when translating back into the formal language? – fedja Dec 22 2010 at 18:52
That's an interesting way of phrasing this. Have you tried it for longer arguments? – Omer Dec 22 2010 at 19:21
Haha, have you noticed that the definition of $R(T,A)$ does not coincide with the definition of $I_A(X_\tau)$? Of course you can say it's obvious that if two triangles have the same sides, they are equal - but maybe you remember that this simple fact also needs to be proved. – Ilya Dec 23 2010 at 8:49
## 1 Answer
It is right, and not so obvious.
The question of whether or not a Markov process hits particular sets is usually studied using the concept of capacity.
For a continuous time parameter Markov process taking values in a general topological state space, this leads to non-trivial problems of measurability. For instance, for a Borel $A$ there is no guarantee that the set $R(T,A)\in{\cal F}$ where $(\Omega,{\cal F},\Pr)$ is the probability space. However, under suitable conditions, capacity theory can be used to show that $R(T,A)$ is universally measurable, and hence that $\Pr[R(T,A)]$ makes sense.
Let's assume that the state space and process are "nice"; say, the state space is a locally compact, separable metric space, and the process has right continuous sample paths. For fixed $T<\infty$, the formula $\phi(A)=\Pr[R(T,A)]$ defines a Choquet capacity on the Borel sets $A$. Therefore, $$\phi(A)=\sup(\phi(K): K\subseteq A,\ K\mbox{ compact}).$$
For a compact $K$, define the stopping time $\tau(\omega):=\inf(t\geq 0: X_t(\omega)\in K)$. Since the sample paths of $(X_t)$ are right continuous and $K$ is closed, we have $R(T,K) = (X_{\tau\wedge T} \in K).$
Therefore, $$\Pr[R(T,K)]\leq \mathbb{E}[I_A(X_{\tau\wedge T})]\leq \Pr[R(T,A)].$$
Taking the supremum over compact subsets of $A$ gives $$\Pr[R(T,A)]=\sup_{\tau}\ \mathbb{E}[I_A(X_{\tau\wedge T})],$$ which gives your desired result. Letting $T\to\infty$ gives the infinite version.
The result hinges on the fact that, as far as the process goes, the Borel set $A$ can be well approximated from the inside by compact sets.
You can find more details in Chapter I, Section 10 of Blumenthal and Getoor's Markov Processes and Potential Theory, or in Section 3.3 of Kai Lai Chung's Lectures from Markov Processes to Brownian Motion.
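As a quick sanity check of the finite-horizon identity (my own Monte Carlo sketch, for a lazy simple random walk on $\mathbb Z$ started at $0$, with $A=\{3\}$ and $T=20$): stopping at $\tau\wedge T$, where $\tau$ is the first entry time into $A$, the event $(X_{\tau\wedge T}\in A)$ coincides path by path with $R(T,A)$, so the two empirical frequencies agree.

```python
import random

random.seed(1)
A, T, n_paths = {3}, 20, 200_000

hit_by_T = 0        # estimates Pr[R(T, A)]
stopped_in_A = 0    # estimates E[I_A(X_{tau ^ T})] with tau = first hitting time of A

for _ in range(n_paths):
    x = 0
    hit = False
    for _ in range(T):
        x += random.choice((-1, 0, 1))   # lazy simple random walk step
        if x in A:
            hit = True
            break                        # this is exactly stopping at tau ^ T
    hit_by_T += hit
    stopped_in_A += (x in A)             # state at the stopped time

print(hit_by_T / n_paths, stopped_in_A / n_paths)   # identical, as the identity predicts
```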
-
Thank you very much, I proved it in almost the same way yesterday for finite time $T$. I'm just wondering if it is admissible to state that we can let $T\to\infty$ and say that it is a proof for the infinite time horizon? – Ilya Dec 23 2010 at 13:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9446962475776672, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/129093/evaluating-int-dfrac-2x-x2-6x-13dx/129095
|
# Evaluating $\int \dfrac {2x} {x^{2} + 6x + 13}dx$
I am having trouble understanding the first step of evaluating $$\int \dfrac {2x} {x^{2} + 6x + 13}dx$$
When faced with integrals such as the one above, how do you know to manipulate the integral into:
$$\int \dfrac {2x+6} {x^{2} + 6x + 13}dx - 6 \int \dfrac {1} {x^{2} + 6x + 13}dx$$
After this first step, I am fully aware of how to complete the square and evaluate the integral, but I am having difficulties seeing the first step when faced with similar problems. Should you always look for what the $b$ term is in a given $ax^{2} + bx + c$ function to know what you need to manipulate the numerator with? Are there any other tips and tricks when dealing with inverse trig antiderivatives?
-
The first step is to get rid of the x at the top by creating the derivative. So if you have $\frac{dx+e}{ax^2+bx+c}$ you need to write the top as $f(2ax+b)+g$. Note that $2af=d$ and $fb+g=e$, you can find $f$ from the first equation and $g$ from the second. – N. S. Apr 7 '12 at 20:41
Ah, can't believe I glossed over that! After making the $u$ the bottom, you still need the $+6$ for your $du!$ – Joe Apr 7 '12 at 20:43
Exactly ;) And all you need is to create the du... – N. S. Apr 7 '12 at 20:44
Maybe you should first consider if the denominator factors into linear terms (it doesn't here, of course). – David Mitra Apr 7 '12 at 20:48
Any reason for the downvote? – Joe Jun 26 '12 at 14:55
## 4 Answers
I look at that fraction and see that the numerator differs from the derivative of the denominator by a constant, $6$. If the numerator were $2x+6$ instead of $2x$, the fraction would be of the form $u'/u$, and I’d be very happy. So I simply make it $2x+6$, subtracting $6$ to compensate:
$$\frac{2x}{x^2+6x+13}=\frac{(2x+6)-6}{x^2+6x+13}=\frac{2x+6}{x^2+6x+13}-\frac6{x^2+6x+13}\;.$$
Then I consider whether I can integrate the correction term. In this case I recognize it as the derivative of an arctangent, so I know that I’ll be able to handle it, though it will take a little algebra.
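If you want to check the final answer symbolically (a sketch, not part of the original answer), a computer algebra system recovers an antiderivative equivalent to $\ln(x^2+6x+13)-3\arctan\frac{x+3}{2}$:

```python
import sympy as sp

x = sp.symbols('x')
f = 2*x / (x**2 + 6*x + 13)

F = sp.integrate(f, x)
print(F)                               # equivalent to log(x**2 + 6*x + 13) - 3*atan((x + 3)/2)
print(sp.simplify(sp.diff(F, x) - f))  # 0, so F is indeed an antiderivative
```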
-
+1. Lovely answer, thank you. – Joe Apr 7 '12 at 20:48
Just keep in mind which "templates" can be applied. The LHS in your second line is "prepped" for the $\int\frac{du}{u}$ template. Your choices for a rational function with a quadratic denominator are limited to polynomial division and then partial fractions for the remainder, if the denominator factors (which it always will over $\mathbb{C}$). The templates depend on the sign of $a$ and the number of roots. Here are some relevant "templates": $$\eqalign{ \int\frac1{ax+b}dx &= \frac1{a}\ln\bigl|ax+b\bigr|+C \\ \int\frac{dx}{(ax+b)^2} &= -\frac1{a}\left(ax+b\right)^{-1}+C \\ \int\frac1{x^2+a^2}dx &= \frac1{a}\arctan\frac{x}{a}+C \\ \int\frac1{x^2-a^2}dx &= \frac1{2a}\ln\left|\frac{x-a}{x+a}\right|+C \\ }$$ So, in general, to tackle $$I = \int\frac{Ax+B}{ax^2+bx+c}dx$$ you will want to write $Ax+B$ as $\frac{A}{2a}\left(2ax+b\right)+\left(B-\frac{Ab}{2a}\right)$ to obtain $$\eqalign{ I & = \frac{A}{2a}\int\frac{2ax+b}{ax^2+bx+c}dx + \left(B-\frac{Ab}{2a}\right) \int\frac{dx}{ax^2+bx+c} \\& = \frac{A}{2a}\ln\left|ax^2+bx+c\right| + \left(\frac{B}{a}-\frac{Ab}{2a^2}\right) \int\frac{dx}{x^2+\frac{b}{a}x+\frac{c}{a}} }$$ and to tackle the remaining integral, you can find the roots from the quadratic equation or complete the squares using the monic version (which is easier to do substitution with). If $a=0$, use the first "template" above. If you complete the squares and it's a perfect square, or if you get one double root, then use the second. If the roots are complex or there are two distinct real roots, then (after substituting $u=x+\frac{b}{2a}$) use the third or fourth "template".
-
+1. Thanks for the quick, succinct, yet reliable answer. – Joe Apr 7 '12 at 20:49
You want to make the substitution $u=x^2+6x+13$. Thus $du=(2x+6)\,dx$, and the rest follows easily from there.
-
It's not necessary to write it out like that beforehand. It comes up naturally when you try to solve it.
$\displaystyle \int \dfrac {2x} {x^{2} + 6x + 13}dx = \int \dfrac{2x}{x^2 + 6x + 9 + 4}dx = \int \dfrac{2x}{(x + 3)^2 + 4}dx$
This sounds like a job for... u-substitution! (da-da-daaaa!)
Let $u = (x+3)^2$, so that $du = 2(x+3)dx = (2x + 6)dx$
Now is when we see that we wish we had that 6. There's only one way to get it - add and subtract it. So we get
$\displaystyle \int \dfrac {2x} {x^{2} + 6x + 13}dx = \int \dfrac{2x + 6 - 6}{(x + 3)^2 + 4}dx = \int \dfrac{du}{u + 4} + \int \dfrac{-6}{(x+3)^2 + 4}dx$
At least, that's how I think of it.
-
+1. Definitely the way I would like to approach problems like this rather than having to see it immediately. Thanks for mentioning it. – Joe Apr 7 '12 at 20:55
Or similarly let $x=y-3$, changing the integral to $\frac{2y-6}{y^2+4}dy$, then split the numerator. – Mike Apr 7 '12 at 21:12
Absolutely - or that. There are many ways to approach this integral, it turns out. – mixedmath♦ Apr 7 '12 at 21:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9279651045799255, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/52628/could-there-be-a-star-orbiting-around-a-planet/52647
|
# Could there be a star orbiting around a planet?
I wonder if there ever could be a star (really small) which may orbit around a planet (really big)?
-
## 6 Answers
One thing to keep in mind is that objects that are bound gravitationally actually revolve around each other, about a point called the barycenter. The fact that the Earth looks like it is revolving around the Sun is because the Sun is much more massive and its radius is large enough that it encompasses the barycenter. This is a similar situation with the Earth and Moon. If there were three bodies, where two bodies were of similar size (like a binary star system plus a massive planet), then an analysis of three-body systems shows that there are stable configurations where the objects will be in very complicated orbits where it would be difficult to say one orbits the other.
Update: The short answer is yes, it is possible when you look at the complete dynamical system, for the reasons stated above. More evidence of this can be found in the study of regular star orbits where very complicated orbits are possible and can be stable. Currently the cut off for classification of a planet and a brown dwarf is 13 Jupiter masses, which is arbitrary to some degree. The lightest main sequence stars have a mass of 75 Jupiters. This will put the barycenter well outside the radius of either body for binary systems.
A quick check of the two body system using the equation:
$$R = \dfrac{1}{m_1 + m_2}(m_1r_1 + m_2r_2)$$
Setting $m_1 = 75$, $r_1 = 1$, $m_2 = 13$, $r_2 = 2$ gives:
$$\dfrac{75 + 26}{75+13} = 1.147$$
Indicating a barycenter at roughly $\dfrac{1}{7}$ the distance between the objects. More bodies will cause more complicated orbits, where again, it would be difficult to say which object orbits which. It should be noted that if the system was composed of 3 objects, 2 of which had similar mass, it would be possible to develop a system that appears to have two larger objects orbiting a third smaller object. A quick check reveals:
$$R = \dfrac{1}{m_1 + m_2 + m_3}(m_1r_1 + m_2r_2+ m_3r_3)$$
Setting $m_1 = 75$, $r_1 = 1$, $m_2 = 13$, $r_2 = 2$ $m_3 = 75$, $r_3 = 3$ gives:
$$\dfrac{75 + 26 + 225}{75+13+75} = 2$$
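For convenience, here is the same pair of barycenter calculations as a short Python sketch (masses in Jupiter masses, positions in the same arbitrary units as above; the helper name is just illustrative):

```python
# Barycenter R = (sum of m_i * r_i) / (sum of m_i), matching the two examples above.
def barycenter(masses, positions):
    return sum(m * r for m, r in zip(masses, positions)) / sum(masses)

# Two bodies: 75 M_J star at r = 1, 13 M_J companion at r = 2.
print(barycenter([75, 13], [1, 2]))         # ~1.148, i.e. ~1/7 of the separation from the star
# Three bodies: 75 M_J at r = 1, 13 M_J at r = 2, 75 M_J at r = 3.
print(barycenter([75, 13, 75], [1, 2, 3]))  # 2.0, i.e. at the middle object
```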
Whether such an orbit system is realizable when you consider the full dynamics of a natural system is debatable, but I am not aware of a specific proof that would rule it out.
UPDATE
It should be noted that there are new periodic solutions to 3-body problems when the objects have the same mass.
-
Anything the mass of a star is going to get hot like a star and fuse hydrogen like a star. In other words it will be a star not a planet!
While it's technically possible to have a rocky planet the mass of a star, in practice when stellar systems form there aren't enough metals available to build such a large object. Large objects are invariably built from hydrogen (and helium) and would therefore form a star.
There are plenty of binary systems with a star orbiting a white dwarf or neutron star, but even a dead star is still a star and not a planet.
-
I would say mass AND density of a star; if I have the mass of the sun stretched out over a billion light years it won't be dense enough to be a star – RhysW Jan 31 at 15:29
@RhysW: gather the mass together and gravity will take care of the density! After all, even the biggest stars started life as a nebulous cloud of gas. – John Rennie Jan 31 at 15:55
@JohnRennie tell that to Jupiter (and yes, I know there are people who consider Jupiter to be either a brown dwarf star or a protostar). – jwenting Feb 1 at 7:09
Typically, a star (or stellar remnant, such as a neutron star, white/black dwarf, or black hole) will be the most massive thing in the area, by far. Planets, even gas giants, are a small fraction of the mass of a typical main sequence star.
Now, as in Hal's answer, the relative mass of the planet and its star does make the center of mass, the barycenter, of the planet-star system a point that is different from the center of mass of either body alone. This will cause the star to appear to "wobble" as its planet moves around it. Tracking this wobble over time is how we have discovered most of the extrasolar planets we know of (which is why most of the exoplanets we know of are huge gas giants several times Jupiter's mass; the wobble's easier to see). However, as orbital motion's primary determinant is relative mass (another is relative distance, and the third is tangential velocity), the more massive star will be very close to the barycenter of the system, and the planet will be further away.
Our best-known example, Jupiter, the largest planet in our solar system, has a mass of about 1.9e27 kg. Our Sun has a mass of about 2e30 kg. In other words, Jupiter is about 1/1000 the mass of the Sun. Thus, while Jupiter does indeed have an effect on the position of the Sun as it orbits the Sun, the center of mass of the two objects is still much closer to the Sun than to Jupiter (pretty much on the Sun's surface, as explained by the comment). In fact, all 9 planets of our Solar system (throwing Pluto a bone here), all their moons and rings, and all other orbiting celestial objects such as asteroids and comets, all pooled together into one super-planet, would still be only about .15% of the Sun's mass. That would bring the barycenter of this dual-body system out into open space a few million miles (depending on the distance between the two), but still far closer to the Sun than the super-planet.
While Jupiter is not even close to the most massive planet we have discovered (http://xkcd.com/1071/large/), we have not yet found a planet more massive than any star we've ever found, much less a planet more massive than its own star. The most massive non-stellar body we have discovered is about 55 times Jupiter's mass (still only about 5% of the mass of our Sun), and is a rogue body (not orbiting a star) that blurs the line between planet and star; it's dense enough to generate temperatures that cause deuterium combustion (not quite true fusion) and thus it produces its own thermal energy. Masses of this type are known as "brown dwarfs". As objects become more massive, they become progressively hotter, until they reach the threshold of true fusion at about 80 MJ and become "red dwarfs".
Thus, not only is there no known planet more massive than its star, it's thought to be impossible for any non-stellar body to gain enough mass for any true star to orbit it, without it becoming a star itself. As the mass of something like a gas giant increases, by attracting nearby wisps of gas, aging comets, etc, the density of the mass also increases as gravity does. This increases the core temperature of that body. Eventually, as with these brown dwarves, the temperature increases to a state of dense pre-fusion plasma, and then from there, things just continue to transition toward true fusion. It's conceivable that a planet could aggregate a mass primarily consisting of something other than hydrogen, that wouldn't fuse until much higher temperatures had been reached, but given what we know of our galaxy it is extremely unlikely for there to be enough of anything but hydrogen available to give a planet that kind of mass.
It is possible, though we haven't seen it yet, for a planet that's not quite a brown dwarf (maybe 40 MJ) to be found orbiting a red dwarf (about 80 MJ); this would meet our definition of a "planetary system" of a planet and star, and not a "binary system" of two stars. However, with the gas giant being only about half the mass of its star, the center of mass, and thus the barycenter of orbit, would be well out into open space between them, and you would, more or less, see them orbit each other. That's about as close as you could get to a geocentric system, and we have not yet observed it.
-
Just one tiny little nitpick for an otherwise great answer: the barycenter of the Sun-Jupiter system is pretty much on the Sun's surface - compare the mass ratio to the separation-radius ratio. – Chris White Jan 31 at 19:35
Edited to correct this - thanks. – KeithS Jan 31 at 20:03
John Rennie has already covered most of it in his answer, I just wanted to add a few explanatory notes.
Usually when stars are "born", they form from dense clouds of hydrogen and helium. Once the cloud gets dense enough, it starts to get really hot, and eventually fusion occurs.
The universe doesn't have an abundance of heavier elements, like calcium and iron. These are the by-products of stellar fusion, so for a planet to form, there needs to be (or have been) a star at some point to generate those elements. If the star that created those elements is still there when the planet is being formed, it is fully possible that the star has a smaller radius than the planet orbiting it (like a neutron star), but the star will also be much denser than the planet, ensuring that the centre of the orbit is nearer to the star than the planet. A star that has a smaller radius and a smaller density would have never turned into a star in the first place, it would have just remained as a nebula or a brown dwarf.
But as has already been pointed out, it's much more likely that a planet that massive will just get really hot and start its own process of fusion, turning into a star.
-
Planets appear because of stars:
• First you have some huge cloud of cosmic dust with a very big total mass
• At some point some third-party gravitational field may disturb it, and the cloud develops a center with stronger gravity which pulls in more and more of the surrounding dust
• At some point the gravity of the center becomes very strong; it pulls the surrounding particles in faster and faster, and the stronger the gravity, the higher the temperature inside becomes
• And then a thermonuclear reaction starts because of the highly concentrated energy. This is a star. There is a chance that the newborn star doesn't blow up, and here we are.
• During the process of cloud concentration, particles in other parts of the cloud also start sticking to each other (again because of gravity) - thus planets start to form.
So you have a 'big guy' in the center and 'small guys' in other parts of the cloud. If, hypothetically, one of those small guys turned out to be larger than the 'big guy', then it would become a star instead, right?
So you can't have a star going around a planet, because the star was the larger body from the start. For instance, take a look at the Solar System: the Sun accounts for well over 99% of its mass.
And even if we could have a body that is heavier than a star and located close to it, it would start to strip the outer layers of the smaller star. This actually happens, but in that case both participants are stars, and the result is a nova.
-
Theoretically YES, but with a constraint attached, so only barely. This would be possible only if such a system already existed: if the mass of the small orbiting star were low enough for it to be tidally locked with the "big" planet, the system could persist.
But still, this is far-fetched, because such a thing cannot form on its own.
So practically, it is NO for sure. To understand this, let's skim through the definitions. Though both celestial objects have a common origin (nebulae), a planet is an object whose core hasn't fused enough hydrogen to sustain the reaction, so it can be regarded as a kind of "inactive star". A star, on the other hand, satisfies all the necessary conditions and sustains the fusion reactions.
When a planet has acquired enough mass that the core fusion reactions are sustained, it becomes a star. That's all. A star-planet system is somewhat different, because the two have a common center of mass. In a sense, the star is already in a negligible orbit around the common center of mass, while the planet orbits the star (which can also be read as the star orbiting the planet slightly).
-
Maybe phrase it better? You've said theoretically yes, and then contradicted yourself at the end of the sentence. That's misleading. This isn't even "theoretically" possible. – Kitchi Jan 31 at 8:29
Hey @Kitchi: I disagree with your quote "This isn't even theoretically possible". Why do you say so? – Ϛѓăʑɏ βµԂԃϔ Jan 31 at 10:46
Have a look at my answer to this question for a fuller explanation. :) – Kitchi Jan 31 at 11:05
Any calculations that it is theoretically possible? – Anixx Feb 1 at 9:11
"Nice pic" Befunky did that for me. I think I used the "Inkify2" tool for that. There are several other really neat effects in the free teasers. – dmckee♦ Apr 23 at 15:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9602701663970947, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/pre-calculus/175408-polar-form-1-i.html
|
# Thread:
1. ## Polar form of -1 + i.
Write $z=i-1$ in the form ${re}^{iv}$
2. It's a math exercise! And it is asking you to change a complex number written in Cartesian form to its "polar form". The simplest way to do that is to graph i- 1 as (-1, 1) (using the x-axis as the real axis, the y-axis as the imaginary axis). Now, your "r" is the distance from (0, 0) to (-1, 1) and your "v" is the angle the line from (0,0) to (-1, 1) makes with the positive real (x) axis. Can you calculate those?
3. Well $r=\sqrt{1^2 + 1^2}=\sqrt{2}$
Angle is $315$
4. r is $\sqrt{1^2+1^2} = \sqrt{2}$
While $\theta = \tan^{-1}\left(\frac{-1}{1}\right), \theta \in (-\pi, \pi)$
5. I should type answer in radians?
Then $z=\sqrt{2}\,e^{-0.758 i}$ ?
6. I get $\displaystyle z= \sqrt{2}e^{\frac{3\pi}{4}i}$
7. Originally Posted by Critter314
Write $z=i-1$ in the form ${re}^{iv}$
$\text{Arg}(-1+\imath)=\dfrac{3\pi}{4}$.
So $\sqrt{2}\exp\left(\dfrac{3\imath\pi}{4}\right).$
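(An aside, not part of the original thread: Python's cmath gives a quick numerical cross-check of the modulus and principal argument.)

```python
import cmath

z = -1 + 1j
r, v = cmath.polar(z)          # modulus and principal argument of z
print(r, 2 ** 0.5)             # both ~1.4142
print(v, 3 * cmath.pi / 4)     # both ~2.3562
print(r * cmath.exp(1j * v))   # ~(-1+1j), recovering z = r*e^{iv}
```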
8. How do I find an argument?
315 degrees in radians is $\frac{7\pi}{4}$
9. Originally Posted by Critter314
How do I find an argument?
The principal value of the argument of a complex number $z=a+bi$ not on any axis is found by the following.
$$\operatorname{Arg}(z) = \begin{cases} \arctan\left(\dfrac{b}{a}\right), & a > 0 \\[1ex] \arctan\left(\dfrac{b}{a}\right) + \pi, & a < 0 \ \&\ b > 0 \\[1ex] \arctan\left(\dfrac{b}{a}\right) - \pi, & a < 0 \ \&\ b < 0 \end{cases}$$
Please note that $i-1=-1+i$, so here $a=-1~\&~b=1$.
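A small numerical check of this case formula (the helper function and example are mine; points on the axes are excluded, as in the statement above — `math.atan2` handles those directly):

```python
import math

def principal_arg(a, b):
    """Principal argument of z = a + b*i for a != 0, via the case formula above."""
    base = math.atan(b / a)
    if a > 0:
        return base
    return base + math.pi if b > 0 else base - math.pi

# z = -1 + i:  a = -1, b = 1  ->  3*pi/4
print(principal_arg(-1, 1), 3 * math.pi / 4)
# the built-in two-argument arctangent gives the same value:
print(math.atan2(1, -1))
```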
10. Originally Posted by Critter314
How do I find an argument?
315 degrees in radians is $\frac{7\pi}{4}$
315= 7(45) which is just 45 degrees short of the full circle. That would correspond to 1- i, not -1+ i.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9199767708778381, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/118626/real-symmetric-matrix-has-real-eigenvalues-elementary-proof/118657
|
## real symmetric matrix has real eigenvalues - elementary proof
Every real symmetric matrix has at least one real eigenvalue. Does anyone know how to prove this elementary, that is without the notion of complex numbers?
-
This is a very weird notion of "elementary", isn't it? Defining the complex numbers using the reals takes hardly more than 1 page. There is a real-analysis proof of the spectral theorem which never uses complex numbers; instead, it uses induction and Lagrange multipliers to find the maximum of $\left|\left|Ax\right|\right|$ over $x\in S\left(0,1\right)$ (the sphere with center $0$ and radius $1$). This maximum is then shown to be an eigenvalue of $A$, and the vector $x$ for which the maximum is achieved is an eigenvector. ... – darij grinberg Jan 11 at 13:28
What does "has real eigenvalues" mean? Apparently it is not to be understood as "has no nonreal eigenvalues", since mention of complex numbers is forbidden. Does it mean "has at least one real eigenvalue"? Does it mean: (where the size is $n \times n$) "has $n$ linearly independent eigenvectors with real eigenvalues"? – Gerald Edgar Jan 11 at 14:16
I'm with Gerald in not being sure exactly what the question's asking. By definition, the eigenvalues of a matrix over a field $k$ are elements of $k$. So strictly speaking, the question is trivial; looking for a nontrivial interpretation, I guess it must be one of the two possibilities that Gerald mentions. @Z254R: yes, I think Gerald is helping to formulate the problem. – Tom Leinster Jan 11 at 17:08
@Z254R: As Gerald points out, it is still unclear whether by "has real eigenvalues" the OP means "has at least one real eigenvalue" or "has $n$ real eigenvalues". – Mark Meckes Jan 11 at 20:54
@marjeta: the point is that we shouldn't have to spend time guessing exactly what your question means, which is what many of these comments are trying to do. You should make it clear what your question means. – Tom Leinster Jan 12 at 22:23
## 9 Answers
If "elementary" means not using complex numbers, consider this.
1. First minimize the Rayleigh ratio $R(x)=(x^TAx)/(x^Tx).$ The minimum exists and is real. This is your first eigenvalue.
2. Then you repeat the usual proof by induction in dimension of the space.
3. Alternatively you can consider the minimax or maximin problem with the same Rayleigh ratio, (find the minimum of a restriction on a subspace, then maximum over all subspaces) and it will give you all eigenvalues.
But of course any proof requires some topology. The standard proof requires the Fundamental Theorem of Algebra; this proof requires the existence of a minimum.
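A crude numerical illustration of step 1 (not part of the proof; the matrix, step size, and iteration count are arbitrary choices): projected gradient descent on the unit sphere drives the Rayleigh ratio down to the smallest eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = (A + A.T) / 2                          # a generic real symmetric matrix

def rayleigh(A, x):
    return x @ A @ x / (x @ x)

x = rng.standard_normal(5)
x /= np.linalg.norm(x)
for _ in range(5000):
    g = 2 * (A @ x - rayleigh(A, x) * x)   # gradient of R at a unit vector x
    x -= 0.01 * g
    x /= np.linalg.norm(x)                 # project back onto the sphere

print(rayleigh(A, x))                      # numerical minimum of R
print(np.linalg.eigvalsh(A)[0])            # smallest eigenvalue, for comparison
```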
-
Alexander, when you said that the minimum is an eigenvalue, did you mean to prove it by applying the Lagrange multiplier equation to the function $f(x)=x^tAx$ restricted to a level set of $g(x)=x^tx$, or did you have a different idea in mind? – Marcos Cossarini Jan 13 at 23:56
Marcos. In that case, you can prove the Lagrange multiplier relation by hand: if $\lambda$ is the minimum, then for every $y$, $(x+y)^TA(x+y)\geq \lambda (x+y)^T(x+y)$, that is, $2\,y^T (Ax-\lambda x)\geq \lambda y^Ty - y^TAy$. The LHS is homogeneous of degree $1$ in $y$, the RHS of degree $2$. So the LHS has to be zero for every $y$. This implies $Ax=\lambda x$. – ACL Jan 14 at 0:15
Marcos: yes. ACL's explanation is one way to do it. – Alexandre Eremenko Jan 14 at 21:31
See, e.g., Folkmar Bornemann, "Teacher's Corner - kurze Beweise mit langer Wirkung" ("short proofs with long-lasting effect"), DMV-Mitteilungen 3-2002, p. 55 (in German, sorry). I don't have the original reference, sorry.
The idea is simple, define $\Sigma(A)=\sum_{i=1}^n\sum_{j=i+1}^n a_{ij}^2$ for $A=(a_{ij})$ a symmetric real matrix. Then minimize the function $O(n)\ni J \mapsto \Sigma(J^TAJ)$ over the orthogonal group $O(n)$. The function is continuous and bounded below by zero, and $O(n)$ is compact, so the minimum is attained. But it can not be strictly positive, because if there is an $a_{ij}\not=0$, $i\not=j$, then you can make it zero by a rotation that acts only on the $i$th and $j$th row and column, so that it decreases $\Sigma$ (this is a simple little calculation with $2\times 2$ matrices). Therefore the minimum is zero and it is attained in a matrix $J$ for which $J^TAJ$ is diagonal.
The eigenvalues of $A$ are now the (diagonal) entries of $J^TAJ$. No complex numbers are used, but you have to know that the minimum exists. We get the existence of an orthonormal basis consisting of eigenvectors with real eigenvalues.
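The minimization over $O(n)$ described above is, in effect, the classical Jacobi eigenvalue iteration. A minimal numpy sketch of that iteration (the pivot strategy, tolerance, and iteration cap are my own choices):

```python
import numpy as np

def jacobi_diagonalize(A, max_rotations=100):
    """Repeatedly zero the largest off-diagonal entry of a symmetric matrix
    with a 2x2 rotation, as in the minimization argument above."""
    A = A.astype(float).copy()
    n = A.shape[0]
    J = np.eye(n)
    for _ in range(max_rotations):
        off = np.abs(A - np.diag(np.diag(A)))
        i, j = np.unravel_index(np.argmax(off), off.shape)
        if off[i, j] < 1e-12:
            break
        # rotation angle that annihilates A[i, j]
        theta = 0.5 * np.arctan2(2 * A[i, j], A[j, j] - A[i, i])
        c, s = np.cos(theta), np.sin(theta)
        G = np.eye(n)
        G[i, i] = G[j, j] = c
        G[i, j], G[j, i] = s, -s
        A = G.T @ A @ G
        J = J @ G
    return np.diag(A), J       # approximate eigenvalues, and J with J^T A J diagonal

vals, J = jacobi_diagonalize(np.array([[2.0, 1.0], [1.0, 3.0]]))
print(np.sort(vals))           # close to the exact eigenvalues (5 ± sqrt(5))/2
```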
-
To add a little more detail: The total energy $\frac 12\sum a_{ij}$, which is the sum of the energy on the diagonal and $\Sigma$, is invariant by orthogonal conjugation, so we want to move it to the diagonal. When you apply a rotation $J$ in the plane spanned by the canonic vectors $e_i$ and $e_j$, which only affects the $i$th and $j$th rows and columns, the resulting coefficients $ii$, $ij$, $ji$, $jj$ of $J^tAJ$ depend only on the same coefficients of $A$, so the problem is reduced to increasing the energy on the diagonal of a $2\times 2$ matrix. – Marcos Cossarini Jan 13 at 2:00
I meant $\frac 12\sum a_{ij}^2$. – Marcos Cossarini Jan 13 at 14:21
This feels so wrong! :-) – Mariano Suárez-Alvarez Jan 14 at 4:29
Let me give it a try. This one only uses the existence of a maximum in a compact set, and the Cauchy-Schwarz inequality.
Let $T$ be a selfadjoint operator in a finite dimensional inner product space.
Claim: $T$ has an eigenvalue $\pm\|T\|$.
Proof: Let $v$ in the unit sphere be such that $\|Tv\|$ attains its maximum value $M=\|T\|$. Let $w$ also in the unit sphere be such that $Mw=Tv$ (which is like saying that $w=\frac{Tv}M$, except in the trivial case $T=0$).
This implies that $\langle w,Tv\rangle=M$. In fact, the only way that two unit vectors $v$ and $w$ can satisfy this equation is to have $Mw=Tv$. (Since we know that $\|w\|=1$ and $\|Tv\|\leq M$, the Cauchy-Schwarz inequality tells us that $|\langle w,Tv\rangle|\leq M$, and the equality case is only attainable when $Tv$ is a scalar multiple of $w$, with $M$ being the only possible value of the scalar.)
But by selfadjointness of $T$, we also know that $\langle v,Tw\rangle=M$, so that $Mv=Tw$.
Now, one of the two vectors $v\pm w$ is nonzero, and we can compute
$T(v\pm w)=Tv\pm Tw=Mw\pm Mv=M(w\pm v)=\pm M(v\pm w)$.
This concludes the proof that $\pm\|T\|$ is eigenvalue with eigenvector $v\pm w$. The reality of the other eigenvalues can be proved by induction, restricting to $(v\pm w)^\bot$ as in the usual proof of the spectral theorem.
Remark: The proof above works with real or complex spaces, and also for compact operators in Hilbert spaces.
Comment: I would like to know if this proof can be found in the literature. I obtained it while trying to simplify a proof of the fact that if $T$ is a bounded selfadjoint operator, then $\|T\|=\sup_{\|v\|\leq 1} \langle Tv,v\rangle$ (as found, for example, on p.32 of Conway J.B., "An Introduction to Functional Analysis"). In the case of non-compact operators, one can only prove that $T$ has as an approximate eigenvalue one of the numbers $\pm\|T\|$. The argument is similar to the one above, but knowledge of the equality case of Cauchy-Schwarz is not enough. One has to know that near-equality implies near-dependence. More precisely, let $v$ be a fixed unit vector, $M\geq 0$ and $\varepsilon\in[0,M]$. If $z$ is a vector with $\|z\|\leq M$ such that $|\langle v,z\rangle|\geq \sqrt{M^2-\epsilon^2}$, then it can be proved that $z$ is within distance $\varepsilon$ of $\langle v,z\rangle v$.
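For completeness, the bound stated at the end of the remark is just Pythagoras: since $\|v\|=1$, the vector $\langle v,z\rangle v$ is the orthogonal projection of $z$ onto the span of $v$ (with the inner-product convention that makes this the projection; in the real case this is automatic), so
$$\|z-\langle v,z\rangle v\|^{2}=\|z\|^{2}-|\langle v,z\rangle|^{2}\le M^{2}-(M^{2}-\varepsilon^{2})=\varepsilon^{2}.$$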
-
I don't see why this is different from Alexander Eremenko's answer. – Deane Yang Jan 13 at 22:10
I don't understand Alexander's answer. How do you prove that if $R(x)=\frac{x^tAx}{x^tx}$ is maximum, then $x$ is an eigenvector? I got nowhere by derivating $R$, and the only easy way that I see to complete his proof is to normalize $x$ to get a maximum of $x^tAx$ in the unit sphere, and then write the Lagrange multipliers equation that tells you that $x$ is an eigenvector. – Marcos Cossarini Jan 13 at 23:35
But Lagrange multipliers is, in my opinion, different from the argument above, which in fact was originally designed to deal with bounded operators, as explained in the comment. Can Lagrange multiplier be used to prove that $\pm\|T\|$ is an approximate eigenvalue of a bounded operator T? If not, is this enough to conclude that the proofs are different? – Marcos Cossarini Jan 13 at 23:45
I think that the main difference is that Alexander extremises $x^tAx$ and I extremise $y^tAx$. That the two situations are not trivially equal is the subject of p.32 of Conway. – Marcos Cossarini Jan 14 at 0:33
We can do it in two steps.
Step 1: show that if $A$ is a real symmetric matrix, there is an orthogonal matrix $L$ such that $A=LHL^T$, where $H$ is tridiagonal and its off-diagonal entries are non-negative. (Apply Gram-Schmidt to sets of vectors of the form $\{x,Ax,\ldots,A^mx\}$, or use Householder transformations, which is the same thing.)
Step 2. We need to show that the eigenvalues of tridiagonal matrices with non-negative off-diagonal entries are real. We can reduce to the case where $H$ is indecomposable. Assume it is $n\times n$ and let $\phi_{n-r}$ be the characteristic polynomial of the matrix we get by deleting the first $r$ rows and columns of $H$. Then $$\phi_{n-r+1} = (t-a_r)\phi_{n-r} -b_r \phi_{n-r-1},$$ where $b_r>0$. Now prove by induction on $n$ that the zeros of $\phi_{n-r}$ are real and are interlaced by the zeros of $\phi_{n-r-1}$. The key here is to observe that this induction hypothesis is equivalent to the claim that all poles and zeroes of $\phi_{n-r-1}/\phi_{n-r}$ are real, and in its partial fraction expansion all numerators are positive. From this it follows that the derivative of this rational function is negative everywhere it is defined and hence, between each consecutive pair of zeros of $\phi_{n-r-1}$ there must be a real zero of $\phi_{n-r}$.
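The three-term recurrence in Step 2 is easy to experiment with numerically. A small sketch (random tridiagonal matrix; here the coefficient $b_r$ is taken to be the square of the corresponding off-diagonal entry, and numpy's root finder is used only to display the interlacing):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
a = rng.standard_normal(n)           # diagonal entries of the tridiagonal matrix
c = rng.uniform(0.1, 1.0, n - 1)     # positive off-diagonal entries

# phi[k] = characteristic polynomial of the trailing k x k principal submatrix,
# built from the recurrence phi_k = (t - a) phi_{k-1} - c^2 phi_{k-2}.
phi = [np.poly1d([1.0]), np.poly1d([1.0, -a[-1]])]
for k in range(2, n + 1):
    t_minus_a = np.poly1d([1.0, -a[n - k]])
    phi.append(t_minus_a * phi[-1] - c[n - k] ** 2 * phi[-2])

# the zeros are real, and the zeros of consecutive phi's interlace
for k in range(1, n + 1):
    print(k, np.sort_complex(phi[k].roots).round(3))
```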
-
Might it be done using eigenvalue interlacing on the original matrix rather than reducing to tridiagonal form first? – Brendan McKay Jan 13 at 5:54
I can do it if I am allowed to use spectral decomposition. Write $A$ as $A_1 + bb^T$, where the first row and column of $A_1$ are both zero. (If needed replace $A$ by $-A$.) Then $$\det(tI-A) = \det(tI-A_1-bb^T) = \det(tI-A_1)\det(I-(tI-A_1)^{-1}bb^T)$$ and since $\det(I-uv^T)=1-v^Tu$, we get that $\det(tI-A)/\det(tI-A_1)$ is equal to $1-b^T(tI-A_1)^{-1}b$. Now use spectral decomposition to deduce that the numerators in $b^T(tI-A_1)^{-1}b$ are real. (This argument is logical, but it might not be a lot of fun in a classroom.) – Chris Godsil Jan 13 at 15:38
This is just the details of the first step of Alexander Eremenko's answer (so upvote his answer if you like mine), which I think is by far the most elementary. You only need two facts: A continuous function on a compact set in $R^n$ achieves its maximum (or minimum), and the derivative of a smooth function vanishes at a local maximum. And there's no need for Lagrange multipliers at all.
Let $C$ be any closed annulus centered at $0$. The function $$R(x) = \frac{x\cdot Ax}{x\cdot x},$$ is continuous on $R^n\backslash{0}$ and therefore achieves a maximum on $C$. Since $R$ is homogeneous of degree $0$, any maximum point $x \in C$ is a maximum point on all of $R^n\backslash{0}$. Therefore, for any $v \in R^n$, $t = 0$ is a local maximum for the function $$f(t) = R(x + tv).$$ Differentiating this, we get $$0 = f'(0) = \frac{2}{x\cdot x}[Ax - R(x) x]\cdot v$$ This holds for any $v$ and therefore $x$ is an eigenvector of $A$ with eigenvalue $R(x)$.
-
(You could add this to his answer, probably) – Mariano Suárez-Alvarez Jan 14 at 4:26
Another elementary proof, based on the order structure of symmetric matrices. Let me first recall the basic definitions and facts to avoid misunderstandings: we define $A\ge B$ iff $(A-B)x\cdot x\ge0$ for all $x\in\mathbb{R}^n$. Also, a lemma:
A symmetric matrix $A$, which is positive and invertible, is also positive definite (that is, $A\ge \epsilon I$ for some $\epsilon > 0$).
We may say, equivalently: if $A$ is positive but, for any $\epsilon >0$, the matrix $A-\epsilon I$ is not, then $A$ is not invertible. (A quick proof passes through the square root of $A$: $(Ax\cdot x)=\|A^{1/2} x\|^2 \ge \|A^{-1/2}\|^{-2} \| x\|^2$; one has to construct $A^{1/2}$ before, without diagonalization, of course).
As a consequence, $\alpha^*:=\sup_{|x|=1}(Ax \cdot x)$ is an eigenvalue of $A$, because $\alpha^*I-A$ is positive and $\alpha^*I-A-\epsilon I$ is not (and $\alpha _ *:=\inf _ {|x|=1}(Ax \cdot x)$ too, for analogous reasons).
The complete diagonalization is then performed inductively, as in other proofs.
-
This is quite an interesting question, perhaps a research problem. I think an elementary answer should be a high school algebra answer in the sense of Abhyankar and it would have to be in the spirit of what follows. But first a little story.
I was teaching linear algebra and had just covered eigenvalues and characteristic polynomials but was not yet at the chapter on the spectral theorem for real symmetric matrices. I was looking for problems to assign for my students as homework in the textbook we were using. One of the exercises was to show that a real matrix $$A=\left[ \begin{array}{cc} \alpha & \beta \\ \beta & \gamma \end{array} \right]$$ only had real eigenvalues. Not too hard. Write the characteristic polynomial $$\chi(\lambda)=\det(\lambda I-A)=\lambda^2-(\alpha+\gamma)\lambda+\alpha\gamma-\beta^2$$ then its discriminant is $$\Delta=(\alpha+\gamma)^2-4(\alpha\gamma-\beta^2)=(\alpha-\gamma)^2+4\beta^2\ge 0\ .$$ Hence two real roots.
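(The $2\times 2$ computation is easy to confirm symbolically; a small sympy sketch, assuming sympy is available:)

```python
import sympy as sp

a, b, g, t = sp.symbols('alpha beta gamma t', real=True)
A = sp.Matrix([[a, b], [b, g]])
chi = (t * sp.eye(2) - A).det()                       # characteristic polynomial in t
disc = sp.discriminant(sp.expand(chi), t)
print(sp.expand(disc - ((a - g) ** 2 + 4 * b ** 2)))  # 0: the discriminant is a sum of squares
```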
The next problem in the book was to do the same for $$A=\left[ \begin{array}{ccc} \alpha & \beta & \gamma\\ \beta & \delta & \varepsilon \\ \gamma & \varepsilon & \zeta \end{array} \right]$$ and (silly me) I also assigned it...
Here is the solution in the 3X3 case. All roots are real if the discriminant (for a binary cubic) is nonnegative. The discriminant of the characteristic polynomial is $$\Delta = (\delta \varepsilon ^{2} + \delta \zeta ^{2} - \zeta \delta ^{2} - \zeta \varepsilon ^{2} + \zeta \alpha ^{2} + \zeta \gamma ^{2} - \alpha \gamma ^{2} - \alpha \zeta ^{2} + \alpha \beta ^{2} + \alpha \delta ^{2} - \delta \alpha ^{2} - \delta \beta ^{2})^{2} \\ \mbox{} + 14(\delta \gamma \varepsilon - \beta \varepsilon ^{2} + \beta \gamma ^{2} - \alpha \gamma \varepsilon )^{2} \\ \mbox{} + 2(\delta \alpha \gamma + \delta \beta \varepsilon + \delta \gamma \zeta - \gamma \delta ^{2} - \gamma \varepsilon ^{2} + \gamma ^{3} - \alpha \beta \varepsilon - \alpha \gamma \zeta )^{2} \\ \mbox{} + 2(\delta \beta \gamma + \delta \varepsilon \zeta - \varepsilon ^{3} + \varepsilon \alpha ^{2} + \varepsilon \gamma ^{2} - \alpha \beta \gamma - \alpha \delta \varepsilon - \alpha \varepsilon \zeta )^{2} \\ \mbox{} + 2(\zeta \alpha \beta + \zeta \beta \delta + \zeta \gamma \varepsilon - \beta \varepsilon ^{2} - \beta \zeta ^{2} + \beta ^{3} - \delta \alpha \beta - \alpha \gamma \varepsilon )^{2} \\ \mbox{} + 14(\zeta \beta \varepsilon - \gamma \varepsilon ^{2} + \gamma \beta ^{2} - \alpha \beta \varepsilon )^{2} \\ \mbox{} + 2(\zeta \beta \gamma + \delta \varepsilon \zeta - \varepsilon ^{3} + \varepsilon \alpha ^{2} + \varepsilon \beta ^{2} - \alpha \beta \gamma - \alpha \delta \varepsilon - \alpha \varepsilon \zeta )^{2} \\ \mbox{} + 14(\varepsilon \beta ^{2} + \zeta \beta \gamma - \delta \beta \gamma - \varepsilon \gamma ^{2})^{2} \\ \mbox{} + 2(\zeta \alpha \beta + \zeta \beta \delta + \zeta \gamma \varepsilon - \beta \gamma ^{2} - \beta \zeta ^{2} + \beta ^{3} - \delta \alpha \beta - \delta \gamma \varepsilon )^{2} \\ \mbox{} + 2(\alpha \gamma \zeta + \zeta \beta \varepsilon - \gamma ^{3} + \gamma \beta ^{2} + \gamma \delta ^{2} - \delta \alpha \gamma - \delta \beta \varepsilon - \delta \gamma \zeta )^{2}\ .$$
This formula comes from a paper by Ilyushechkin in Mat. Zametki, 51, 16-23, 1992.
I suspect the elementary answer should be as follows. First find a list of invariants or covariants of binary forms $C_1,C_2,\ldots$ such that a form with real coefficients has only real roots iff these covariants are nonnegative. Apply this to the characteristic polynomial of a general real symmetric matrix and show that you get sums of squares. I suppose these covariants, via Sturm's sequence type arguments, should correspond to subresultants or rather subdiscriminants. This seems also related to Part 2) of Godsil's answer.
-
The fact that real symmetric matrix is ortogonally diagonalizable can be proved by induction. The crucial part is the start. Namely, the observation that such a matrix has at least one (real) eigenvalue. But this can be done in three steps.
(1) An easy observation (using direct matrix multiplication) shows that all columns of a matrix `$\mathbf{A}\in\mathbb{R}_{m\times n}$` are orthogonal to any vector $z\in\mathbb{R}_{m\times 1}$ iff $z$ belongs to the null space of the transpose $\mathbf{A}^{\sf T}$, i.e. $\mathcal{N}(\mathbf{A}^{\sf T})=\mathcal{R}(\mathbf{A})^{\perp}$.
(2) If $\mathbf{S}^{\sf T}=\mathbf{S}$ and $\mathbf{S}x \neq 0$ for every $x\neq 0$, then the dot product $\langle\mathbf{S}x,x\rangle\neq 0$ for any $x\neq 0$ as well. Otherwise, if $\langle \mathbf{S}z,z\rangle=0$ for some $z\neq 0$, then we have, using (1), $z\in\mathcal{R}(\mathbf{S})^{\perp}=\mathcal{N}(\mathbf{S}^{\sf T})= \mathcal{N}(\mathbf{S})$, i.e. a contradiction $z\ne0$ and $\mathbf{S}z=0$.
(3) If matrix $\mathbf{A}=\mathbf{A}^{\sf T}\in\mathbb{R}_{n\times n}$ has no (real) eigenvalue, then $(t\mathbf{I}-\mathbf{A})x\neq 0$ for any $x\neq 0$ and every $t\in\mathbb{R}$. Consequently, according to (2), we have $\langle(t\mathbf{I}-\mathbf{A})y,y\rangle\neq 0$ for fixed $y\neq 0$ and $t\in\mathbb{R}$. Therefore $t\|y\|^2 -\langle\mathbf{A}y,y\rangle \neq 0$ for every $t\in\mathbb{R}$, which is impossible.
-
I'm confused by your (2)...doesn't putting $S=\left(\begin{array}{cc} 0 & 1 \\ 1 & 0\end{array}\right)$ and $z=\left(\begin{array}{c} 1 \\ 0\end{array}\right)$ give a counterexample? The statement that $\langle Sz,z\rangle=0$ isn't enough to imply that $z$ is orthogonal to the range. – Mike Usher Jan 12 at 18:40
Thank you for the comment. Unfortunately, the relation $z\perp\mathbf{S}z$ does not imply $z\perp y$ for *every* $y\in\mathcal{R}(\mathbf{S})$. I apologize for the false proof. Vito Lampret – Vito Jan 13 at 13:48
Just found in Godsil-Royle's Algebraic graph theory: One first proves that two eigenvectors associated with two different eigenvalues are necessarily orthogonal to each other (pretty standard), then observes that if $u$ is an eigenvector associated with eigenvalue $\lambda$, then $\bar u$ is an eigenvector associated with eigenvalue $\bar\lambda$. Now the eigenvalues $\lambda,\bar\lambda$ cannot be different, for otherwise by the above observation $0=u^T \bar u=\|u\|^2$ although $u\not=0$.
(It does contain complex numbers, but is still amazingly straightforward).
-
This is what I would call the standard approach (going through operators on ${\mathbb C}^n$) and as such I don't think it really fulfils the requirements of the original question. – Yemon Choi Feb 26 at 23:45
Yes, this is how an operator theorist would do it. But the question was also the existence of an eigenvalue (possibly without the fundamental theorem of algebra). Is there an argument for it too? – András Bátkai Feb 27 at 7:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 200, "mathjax_display_tex": 10, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9260616302490234, "perplexity_flag": "head"}
|
http://cms.math.ca/cmb/v53/n4/
|
Volume 53 Number 4 (Dec 2010)
577
Asgharzadeh, Mohsen; Tousi, Massoud
This paper discusses the connection between the local cohomology modules and the Serre classes of $R$-modules. This connection has provided a common language for expressing some results regarding the local cohomology $R$-modules that have appeared in different papers.
587
Birkenmeier, Gary F.; Park, Jae Keol; Rizvi, S. Tariq
We investigate the behavior of the quasi-Baer and the right FI-extending right ring hulls under various ring extensions including group ring extensions, full and triangular matrix ring extensions, and infinite matrix ring extensions. As a consequence, we show that for semiprime rings $R$ and $S$, if $R$ and $S$ are Morita equivalent, then so are the quasi-Baer right ring hulls $\widehat{Q}_{\mathfrak{qB}}(R)$ and $\widehat{Q}_{\mathfrak{qB}}(S)$ of $R$ and $S$, respectively. As an application, we prove that if unital $C^*$-algebras $A$ and $B$ are Morita equivalent as rings, then the bounded central closure of $A$ and that of $B$ are strongly Morita equivalent as $C^*$-algebras. Our results show that the quasi-Baer property is always preserved by infinite matrix rings, unlike the Baer property. Moreover, we give an affirmative answer to an open question of Goel and Jain for the commutative group ring $A[G]$ of a torsion-free Abelian group $G$ over a commutative semiprime quasi-continuous ring $A$. Examples that illustrate and delimit the results of this paper are provided.
602
Boij, Mats; Geramita, Anthony
The bigraded Hilbert function and the minimal free resolutions for the diagonal coinvariants of the dihedral groups are exhibited, as well as for all their bigraded invariant Gorenstein quotients.
614
Böröczky, Károly J.; Schneider, Rolf
For a given convex body $K$ in ${\mathbb R}^d$, a random polytope $K^{(n)}$ is defined (essentially) as the intersection of $n$ independent closed halfspaces containing $K$ and having an isotropic and (in a specified sense) uniform distribution. We prove upper and lower bounds of optimal orders for the difference of the mean widths of $K^{(n)}$ and $K$ as $n$ tends to infinity. For a simplicial polytope $P$, a precise asymptotic formula for the difference of the mean widths of $P^{(n)}$ and $P$ is obtained.
629
Chinen, Naotsugu; Hosaka, Tetsuya
In this paper, we investigate a proper CAT(0) space $(X,d)$ that is homeomorphic to $\mathbb R^2$ and we show that the asymptotic dimension $\operatorname{asdim} (X,d)$ is equal to $2$.
639
Coykendall, Jim; Dutta, Tridib
In this paper, we explore a generalization of the notion of integrality. In particular, we study a near-integrality condition that is intermediate between the concepts of integral and almost integral. This property (referred to as the $\Omega$-almost integral property) is a representative independent specialization of the standard notion of almost integrality. Some of the properties of this generalization are explored in this paper, and these properties are compared with the notion of pseudo-integrality introduced by Anderson, Houston, and Zafrullah. Additionally, it is shown that the $\Omega$-almost integral property serves to characterize the survival/lying over pairs of Dobbs and Coykendall
654
Elliott, P. D. T. A.
It is shown that an old direct argument of Erdős and Heilbronn may be elaborated to yield a result of the current inverse type.
661
Johnstone, Jennifer A.; Spearman, Blair K.
We give an infinite family of congruent number elliptic curves each with rank at least three.
667
Khashyarmanesh, Kazem
Let $R$ be a commutative Noetherian ring and $\mathfrak{a}$ a proper ideal of $R$. We show that if $n:=\operatorname{grade}_R\mathfrak{a}$, then $\operatorname{End}_R(H^n_\mathfrak{a}(R))\cong \operatorname{Ext}_R^n(H^n_\mathfrak{a}(R),R)$. We also prove that, for a nonnegative integer $n$ such that $H^i_\mathfrak{a}(R)=0$ for every $i\neq n$, if $\operatorname{Ext}_R^i(R_z,R)=0$ for all $i >0$ and $z \in \mathfrak{a}$, then $\operatorname{End}_R(H^n_\mathfrak{a}(R))$ is a homomorphic image of $R$, where $R_z$ is the ring of fractions of $R$ with respect to a multiplicatively closed subset $\{z^j \mid j \geqslant 0 \}$ of $R$. Moreover, if $\operatorname{Hom}_R(R_z,R)=0$ for all $z \in \mathfrak{a}$, then $\mu_{H^n_\mathfrak{a}(R)}$ is an isomorphism, where $\mu_{H^n_\mathfrak{a}(R)}$ is the canonical ring homomorphism $R \rightarrow \operatorname{End}_R(H^n_\mathfrak{a}(R))$.
674
Kristály, Alexandru; Papageorgiou, Nikolaos S.; Varga, Csaba
We study a semilinear elliptic problem on a compact Riemannian manifold with boundary, subject to an inhomogeneous Neumann boundary condition. Under various hypotheses on the nonlinear terms, depending on their behaviour in the origin and infinity, we prove multiplicity of solutions by using variational arguments.
684
Proctor, Emily; Stanhope, Elizabeth
We construct a Laplace isospectral deformation of metrics on an orbifold quotient of a nilmanifold. Each orbifold in the deformation contains singular points with order two isotropy. Isospectrality is obtained by modifying a generalization of Sunada's theorem due to DeTurck and Gordon.
690
Puerta, M. E.; Loaiza, G.
The classical approach to studying operator ideals using tensor norms mainly focuses on those tensor norms and operator ideals defined by means of $\ell_p$ spaces. In a previous paper, an interpolation space, defined via the real method and using $\ell_p$ spaces, was used to define a tensor norm, and the associated minimal operator ideals were characterized. In this paper, the next natural step is taken, that is, the corresponding maximal operator ideals are characterized. As an application, necessary and sufficient conditions for the coincidence of the maximal and minimal ideals are given. Finally, the previous results are used in order to find some new metric properties of the mentioned tensor norm.
706
Roberts, R.; Shareshian, J.
We exhibit infinitely many hyperbolic $3$-manifold groups that are not right-orderable.
719
Stasyuk, I.; Tymchatyn, E. D.
We consider the problem of simultaneous extension of continuous convex metrics defined on subcontinua of a Peano continuum. We prove that there is an extension operator for convex metrics that is continuous with respect to the uniform topology.
730
Theriault, Stephen D.
The fiber $W_{n}$ of the double suspension $S^{2n-1}\rightarrow\Omega^{2} S^{2n+1}$ is known to have a classifying space $BW_{n}$. An important conjecture linking the $EHP$ sequence to the homotopy theory of Moore spaces is that $BW_{n}\simeq\Omega T^{2np+1}(p)$, where $T^{2np+1}(p)$ is Anick's space. This is known if $n=1$. We prove the $n=p$ case and establish some related properties.
737
Vougalter, Vitali
A new and elementary proof is given of the recent result of Cuccagna, Pelinovsky, and Vougalter based on the variational principle for the quadratic form of a self-adjoint operator. It is the negative index theorem for a linearized NLS operator in three dimensions.
746
Werner, Caryn
We construct new examples of surfaces of general type with $p_g=0$ and $K^2=5$ as ${\mathbb Z}_2 \times {\mathbb Z}_2$-covers and show that they are genus three hyperelliptic fibrations with bicanonical map of degree two.
757
Woo, Alexander
We extend the idea of interval pattern avoidance defined by Yong and the author for $S_n$ to arbitrary Weyl groups using the definition of pattern avoidance due to Billey and Braden, and Billey and Postnikov. We show that, as previously shown by Yong and the author for $\operatorname{GL}_n$, interval pattern avoidance is a universal tool for characterizing which Schubert varieties have certain local properties, and where these local properties hold.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 74, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8925490379333496, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/203890/how-to-solve-cos-pi-2t-ge-0/203893
|
# How to solve $\cos(\pi/2+t)\ge 0$?
I have a trig question. How do I solve this? I would appreciate it if you could show it step by step. Find all the values of $t$ in the interval $[0,2\pi]$ for which $\cos(\pi/2+t)\ge 0$.
-
The answer in my solution book is: $t \in [\pi,3\pi/2]$ or $t \in [3\pi/2,2\pi]$. Can anyone tell me how to reach the answer? – Michael Sep 28 '12 at 9:56
## 3 Answers
\begin{align} \cos(\frac{\pi}{2}+t)&=-\sin(t)\ge0\\ \sin(t)&\le0 \end{align}
From the sine graph, the solution is $[\pi,2\pi]$.
Or if you plot $\cos(\frac{\pi}{2}+t)$ (graph omitted here),
the solution is also $[\pi,2\pi]$.
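A quick numerical sanity check of both the identity and the resulting interval, sampling $t$ on a fine grid just inside $(0,2\pi]$:

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 7)
print(np.allclose(np.cos(np.pi / 2 + t), -np.sin(t)))   # True: the identity used above

t = np.linspace(1e-6, 2 * np.pi, 200000)
sol = t[np.cos(np.pi / 2 + t) >= 0]
print(sol.min() / np.pi, sol.max() / np.pi)              # ~1.0 and 2.0
```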
-
Thanks for the fast reply, but the answer in my solution book is $t \in [\pi,3\pi/2]$ or $t \in [3\pi/2,2\pi]$. I don't know how to get it. – Michael Sep 28 '12 at 9:49
@Michael: your book has the same answer, it's just written complicated. Think about it; $t \in [\pi, 3\pi/2]$ or $t \in [3\pi/2, 2\pi]$ is the same thing as $t \in [\pi, 2\pi]$. – Javier Badia Sep 28 '12 at 12:32
Either something from your book got copied wrong or the answer given was incorrect. Let's find the answer through a simple substitution. Let $u=t+\frac\pi2$. If $t\in[0,2\pi]$, then $u\in[\frac\pi2,\frac{5\pi}2]$. Now where on this interval is $\cos u$ positive? From $\frac{3\pi}2$ to $\frac{5\pi}2$. $u\in[\frac{3\pi}2,\frac{5\pi}2]$ corresponds to $t\in[\pi,2\pi]$.
-
I double check it. It is. – Michael Sep 28 '12 at 10:20
$\cos(\frac{\pi}2+t)\ge 0$ $\implies 2n\pi-\frac{\pi}2\le \frac{\pi}2+t\le 2n\pi+\frac{\pi}2$ where $n$ is any integer as the angle must lie in the 1st and 4th quadrant.
$\implies (2n-1)\pi\le t\le 2n\pi$
The special values are
$-\pi\le t\le 0$ for $n=0$,
$\pi \le t \le 2\pi$ for $n=1$,
$3\pi \le t \le 4\pi$ for $n=2$,
As $t$ lies in $[0,2\pi]$, the solution should be $\pi \le t \le 2\pi$.
Alternatively,
as $t\ge 0, (2n-1)\pi \ge 0 \implies n\ge 1$
as $t \le 2\pi, 2n\pi\le 2\pi \implies n\le 1$
So, $n=1, \pi \le t \le 2\pi$
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9180035591125488, "perplexity_flag": "middle"}
|
http://cs.stackexchange.com/questions/9095/is-resolution-complete-or-only-refutation-complete
|
# Is resolution complete or only refutation-complete?
Going through some knowledge representation tutorials on resolution at the moment, and I came across slide 05.KR, no77.
There it is mentioned that "the procedure is also complete".
I think this completeness cannot mean that if a sentence is entailed by KB, then it will be derived by resolution. For example, resolution cannot derive $(q \lor \neg q)$ from a KB with the single clause $\neg p$. (Example from KRR, Brachman and Levesque, page 53.)
Could anyone help me figure out what is meant in this slide? Does the completeness in the slide refer to being refutation-complete rather than being a complete proof procedure?
-
Have you read the fine print on the slide? If KB entails $f$, then you can refute KB$\land\lnot f$ using resolution. – Yuval Filmus Jan 22 at 18:09
I was able to remove some jargon, but what are "KB" and "KRR"? – Raphael♦ Jan 22 at 21:05
@Raphael probably Knowledge Base (set of true sentences) and Knowledge Representation and Reasoning. – Pål GD Jan 22 at 21:23
## 2 Answers
Resolution is complete as a refutation system. That is, if $S$ is a contradictory set of clauses, then resolution can refute $S$, i.e. $S \vdash \bot$.
This is sufficient since $T \vdash A$ is equivalent to $T \cup \{\lnot A\} \vdash \bot$. So if we want to see whether a formula $A$ is derivable from $T$, we only need to check whether there is a refutation proof for $T \cup \{\lnot A\}$, which can be done using resolution.
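To make the reduction concrete, here is a minimal propositional resolution sketch (the clause encoding and the example are mine, not from the slides): a clause is a set of literals, and we saturate under resolution until the empty clause appears or nothing new can be derived.

```python
from itertools import combinations

def resolve(c1, c2):
    """All resolvents of two clauses; a clause is a frozenset of literals like 'p' or '~p'."""
    out = []
    for lit in c1:
        comp = lit[1:] if lit.startswith('~') else '~' + lit
        if comp in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {comp})))
    return out

def refutable(clauses):
    """Saturate under resolution; True iff the empty clause is derivable."""
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:              # empty clause derived
                    return True
                new.add(r)
        if new <= clauses:             # nothing new: saturated without refutation
            return False
        clauses |= new

# KB = {p, p -> q} entails q: refute KB together with ~q.
kb = [{'p'}, {'~p', 'q'}]
print(refutable(kb + [{'~q'}]))   # True:  KB |- q
print(refutable(kb + [{'q'}]))    # False: KB together with q is consistent, so no contradiction
```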
-
Resolution is only refutationally complete, as you mentioned. This is intended and very useful, because it drastically reduces the search space. Instead of having to eventually derive every possible consequence (to find a proof of some conjecture), resolution is only trying to derive the empty clause.
-
Thank you all for your replies – BingWen Hui Jan 25 at 18:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.932623565196991, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/2508/rotation-angle-of-a-giant-lily-when-a-child-crawls-on-its-rim?answertab=active
|
# Rotation angle of a giant lily when a child crawls on its rim
Below is a picture of a Giant Water Lily (scientific name: Victoria amazonica). Leaves of some of these can be as big as 3 m in diameter, can carry a weight of 45 kg spread evenly, and can support a child. Now the problem:
Suppose that a leaf of such a plant, with a child on it, is floating freely on water. The child crawls along the edge of the leaf until he arrives back at the starting point. In other words, he makes a full circle in the reference frame of the leaf. Question:
What is the total angle $\theta$ that the leaf turns through in time child crawls? (in the reference frame of water). Assume that the leaf is a large rigid circular disk. Ignore air and water resistance.
Edit:
ftiaronsem's solution is absolutely correct if we assume that the leaf can only rotate freely about its geometrical center. However, I had in mind that the leaf is not connected to the ground and can move freely in any direction.
Data given:
$m$ (mass of child)
$M$ (mass of the leaf)
-
@Sklivvz: the answer can be computed from conservation of angular momentum. The ratio of the angular speeds of the child and the disc will therefore be the same as ratio of their moment of inertia relative to the disc's center. This ratio can be pretty much arbitrary and so can be the resulting total angle. – Marek Jan 4 '11 at 12:05
@Sklivvz: just consider the case when the disc is infinitely heavy (and suppose it would still float somehow for the sake of argument). Then it wouldn't move at all. – Marek Jan 4 '11 at 12:06
Should we consider giving this a new title? (I'm not saying we should, I'm asking). The current title is attractive, but it gives absolutely no idea of what's inside. – Bruce Connor Jan 4 '11 at 14:50
@Bruce Connor: You are correct. A "cute" title might entertain browsers briefly but it does nothing to bring search users to this site. Search only brings about 30% of your traffic but should eventually be about 60-80% of visitors. Please fix. – Robert Cartaino♦ Jan 4 '11 at 16:00
Tried a better title - please revert or change to something better if you disagree. – Sklivvz♦ Jan 4 '11 at 16:30
## 6 Answers
This answer is just for the purpose of discussion. It will be edited from time to time and it may contain wrong conclusions.
@Martin. I wanted to add another argument that one has a spinning motion as well as an orbital motion. In order to keep the arguments separated, I chose a new answer. Please consider the following.
If one has a force acting anywhere on a rigid body, this force is causing an acceleration of the center of mass of the body (described by $F=ma$) and it is simultaneously causing an angular acceleration of the body (described by $M=r\times F$).
So in the case of our baby, the force acting upon the pad causes the center of mass of the pad to translate and causes the pad to spin. Using the above principle, it should be pretty obvious that both a spinning motion (around the center of mass of the pad) and a translational motion (an orbital one in our case) of the pad take place.
In order to further clarify the intended motion I have drawn new pictures, illustrating the situation. The red line is marking the starting position of the baby. The curved arrow is indicating the angular velocity of the pad (counterclockwise). In these pictures, the baby itself is always moving clockwise.
In these pictures, one can see an orbital motion of the center of the pad around the center of mass of the system (COM). This motion is caused by $F=ma$. Furthermore one can see a spinning motion of the pad around the center of the pad, which is caused by $M=r\times F$. As one can see in these pictures, both angular motions contribute to the displacement between the child's current and its original position.
@Mark. If you are still following this question, feel free to include or modify any of my pictures in your answer.
-
@ftiaronsem. Ok, let's consider only a spinning motion. But in this case the pad does not rotate around the center of mass of the system (the COM). The center of the pad is stationary relative to the COM. Only rotation around the COM contributes to the angular momentum of the whole system. You cannot add the spinning momentum to the angular momentum of the whole system. – Martin Gales Feb 18 '11 at 9:05
@Martin. One isn't allowed to consider only one motion. A fundamental principle of classical mechanics states that every force on a rigid body, is causing both motions (translational and rotational) to happen at the same time (described by the equations above). In my new pictures you can then observe how these two motions sum up to the total displacement of the child relative to its starting position. – ftiaronsem Feb 18 '11 at 18:06
@ftiaronsem. Your last response is an excellent one. This makes things much clearer to me as well. Quote: "In these pictures, one can see an orbital motion of the center of the pad around the center of mass of the system (COM). This motion is caused by F=ma". Thus this motion contributes to the linear momentum, not to the angular momentum. I was also incorrect when I thought that this translational motion contributes to the angular momentum. I should have understood it earlier: translational motion contributes only to the linear momentum. You have shown yourself (inadvertently) that... – Martin Gales Feb 19 '11 at 9:56
...your solution is incorrect. – Martin Gales Feb 19 '11 at 9:56
Ahh, great, this discussion is really making progress. I will start with a quote from wikipedia concerning angular momentum: The angular momentum L of a particle about a given origin is defined as $L = r \times p$, where $r$ is the position vector of the particle relative to the origin and $p$ is the linear momentum of the particle. There are several important things to notice here. First, angular momentum is always defined, no matter whether we have a real rotation or simply a linear motion. One can define an angular momentum about every given point in space. – ftiaronsem Feb 19 '11 at 17:39
This answer is just for the purpose of discussion. It will be edited from time to time and it may contain wrong conclusions.
@Martin: I am trying to imagine the motion you are describing. Unfortunately I have some severe difficulties, since I always run into some kind of contradiction. Therefore I have drawn two motions in which the pad is not rotating. Please tell me which one you think is appropriate for this problem.
Motion one:
Motion two:
Sorry for drawing the baby's motion clockwise in one figure and counterclockwise in the other. By the time I noticed, it was already too late to change. But this should make no difference for the sake of the argument.
Please also say if you have a different motion of the pad in mind, and try describing or drawing it.
Thanks
ftiaronsem
-
@ftiaronsem, Yes, motion two is the motion I have in mind. – Martin Gales Feb 16 '11 at 9:35
@Martin. Hmm, but you notice that the center of the pad and the child are always opposite to the COM. They do not change their relative positions. So with this motion the baby is never leaving his initial position on the pad. – ftiaronsem Feb 16 '11 at 15:28
@ftiaronsem, but you said that the pad does not rotate (relative to the coordinate system in the figure). In this case the angular displacement of the child relative to the pad is the same as the angular displacement of the center of the pad relative to the COM, and the center of the pad and the child are always opposite to the COM. Note that the angular displacement of the center of the pad relative to the COM does not count as an angular displacement of the pad relative to the coordinate system (which is still zero). – Martin Gales Feb 17 '11 at 6:48
@Martin. I am very sorry, but I again have difficulties following your reasoning. You say: In this case the angular displacement of the child relative to the pad is the same as the angular displacement of the center of the pad relative to the COM. I have difficulties understanding the last part, angular displacement of the center of the pad relative to the COM: there can be no angular displacement between two points. Which lines in my figure did you mean? – ftiaronsem Feb 17 '11 at 17:57
@ftiaronsem Look at the right figure. The straight line between the center of the pad and the child coincides with the horizontal axis. Now look at the figure in the middle. The line has turned counterclockwise by an angle. You said that the pad does not rotate (that is a translational motion). The only way the child can get to the position in this figure is to move along the rim by that angle relative to the pad. – Martin Gales Feb 18 '11 at 8:11
This first part assumes the following two things:
1) The flower is connected to the ground, e.g. by its shaft, so that its center of mass is not moving.
2) The child stops as soon as it reaches its original position on the flower (not its original position in the ground reference frame).
As Marek correctly pointed out:
$I\omega_l=mR^2\omega_c$
where $I$ is the moment of inertia of the flower, $\omega_l$ its angular velocity, $R$ its radius, and where $\omega_c$ is the angular velocity of the child.
This can directly be derived from the conservation of angular momentum.
By integrating this over the time $t$ the child is crawling, you get:
$I\int_0^t\omega_l\,dt'=mR^2\int_0^t\omega_c\,dt'$
$I\omega_lt=mR^2\omega_c t$
Now, one might be tempted to think that $\omega_ct = 2 \pi$. However, this is not true (credits go to Sklivvz), since the flower is also spinning. What we do know is that $\omega_ct + \omega_lt = 2\pi$:
$I\omega_lt=mR^2(2\pi - \omega_lt)$
$I\omega_lt + m R^2 \omega_lt =mR^22\pi$
Now, knowing that the moment of inertia of a disk is $\frac{M}{2}R^2$ (I can add a proof if you want me to), we get:
$\frac{M}{2}R^2 \omega_lt + m R^2 \omega_lt =mR^22\pi$
$\frac{M}{2} \theta + m \theta =m2\pi$
$M \theta + 2m \theta =m4\pi$
$\theta=\frac{m4\pi}{M+2m}$
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As discussed in the comments, not only the child has an angular velocity around the center of mass, but the entire system is rotating around the center of mass.
In my first attempt to solve this general case of the problem (which I have deleted due to it being nonsense), I assumed that the translational part of the flower is covered by applying the parallel axis theorem. However, as Mark pointed out correctly, the parallel axis theorem is not applicable here. For a detailed explanation watch Mark's great video and read through the book chapter he linked in the comments.
Please see Mark's Post for a detailed and correct analysis of this situation.
-
Well, at least you are partially right, I think. I don't know how you get to $\theta=\frac{m}{M-2m}\pi$. I think the disk turning 180 degrees would only be correct if the entire mass of the disk were at its edge. However, since the mass distribution is homogeneous, I think it makes sense that the disk moves more than 180 degrees if both masses are equal. After all, the mass elements nearer the center contribute less to the moment of inertia. – ftiaronsem Jan 4 '11 at 14:23
Ah, wait, now I realize what you meant. No, the child is not crawling a full circle. It is just crawling until it reaches its starting point. That's the difference. You assumed the former, I the latter. From the task, you can read that the child crawls until reaching its starting point. (I do not believe that the child is aware of the ground reference frame, so it will be content as soon as it reaches its point of origin on the plant ^^) – ftiaronsem Jan 4 '11 at 14:59
@ftiaronsem If the baby crawls one way, then the entire pad must move the opposite way. Further, if the baby has angular momentum about the center of mass of the system, then the pad must have opposite angular momentum about the center of mass of the system. The result is that the pad moves in a circle around the center of mass, but that it rotates in the opposite direction that it moves. If the pad goes around the center of mass counterclockwise, the pad rotates clockwise. – Mark Eichenlaub Jan 6 '11 at 10:18
@Martin The center of the pad moves in a circle around the center of mass going one way. The pad rotates about its own center the other way. I down voted because I think the answer is wrong. If it changes or if I'm convinced I'm incorrect, I'll remove the down vote. – Mark Eichenlaub Jan 6 '11 at 10:51
@ftiaronsem Since the OP "accepted" your answer, it has the Physics.se stamp of approval. I'd appreciate if you could change it to the correct answer at your earliest convenience. – Mark Eichenlaub Jan 10 '11 at 14:11
Note: to illustrate the motion of the pad more clearly, I made a video.
Let the pad have radius $R$.
Imagine looking down on the pad from above. The baby is on the right, crawling counterclockwise.
The center of mass of the (baby + pad) system can't move. In the picture below, the green circle represents the starting position of the pad. The starting position of the baby is marked with an arrow pointing in the direction of its motion. $C$ is the center of the pad, pointing in its direction of motion (opposite the baby's by conservation of momentum). $M$ is the center of mass of the system. $d = R\frac{m}{m+M}$ is the distance from the center of the pad to the center of mass of the system.
As time goes on, the baby moves around the edge, and the lily pad moves to stay perfectly opposite the baby. This ensures that the center of mass doesn't move. Because the length $d$ is fixed, $C$ must always stay the same distance from $M$, so $C$ must move in a circle around $M$. The baby moves in another circle of radius $R-d$ so that the distance between the center of the pad and the baby remains $R$. We can draw in the trajectories of the point $C$ and the baby, and add a radius, like this:
Together, the baby and the center of the pad have only one degree of freedom. Either of them can choose to be at any given point on their trajectories, but the other one is then forced to be opposite them. Let's describe their positions by an angle $\theta$ from the horizontal. To show this, I'll move the baby up a bit and draw in $\theta$.
The lily pad has one more degree of freedom - its rotation. Let's call its rotation relative to the water (relative to north) $\phi$.
The total angular momentum of the system is zero. There are contributions from the translational angular momentum of the pad and the baby, and the rotational angular momentum of the pad. This gives
$$Md^2\dot{\theta} + m(R-d)^2\dot{\theta} + \frac{MR^2}{2}\dot{\phi} = 0$$
Simplifying the algebra using the expression for $d$, then integrating over time and using the initial condition $\phi(0) = \theta(0) = 0$, we get
$$\phi = \frac{-2m}{m+M}\theta$$
$\phi$ is what we're after, but we want to know $\phi$ after the baby has crawled far enough to return to its starting point. The minus sign on $\phi$ indicates that the pad spins counter to the rotation of pad and baby around the center of mass.
To find $\phi$, we need to use the information about the total amount the baby crawls. Introduce a new angle $\alpha$ that represents how much the baby has crawled around the pad from the pad's point of view - we are done when the baby gets to $\alpha = 2\pi$.
The true speed of the baby (relative to the water) is $(R-d)\dot{\theta}$. Another way to calculate this is to find the baby's speed relative to the pad, then add the pad's speed relative to the water. We don't need to worry about vectors here because all motion is tangent to the baby's trajectory. This gives
$$(R-d)\dot{\theta} = R\dot{\alpha} + R \dot{\phi} - d\dot{\theta}$$
(note: I originally left out the term $R \dot{\phi}$, which led the initial answer to be off) This simplifies to
$$\theta = \alpha + \phi$$
again using the initial condition $\theta(0) = \alpha(0) = 0$. We want $\alpha = 2\pi$, so set $\theta = 2\pi + \phi$. After simplifying, this gives the final answer
$$\phi = \frac{-4\pi m}{3m+M}$$
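As a sanity check on the algebra (a small Python sketch of my own, using the symbols defined above), the closed form is consistent with zero total angular momentum and with $\alpha = 2\pi$:

```python
# Check: with phi = -4*pi*m/(3m+M) and theta = 2*pi + phi (i.e. alpha = 2*pi),
# the integrated angular momentum  M d^2 theta + m (R-d)^2 theta + (M R^2 / 2) phi  vanishes.
from math import pi, isclose

def pad_rotation(m, M):
    return -4 * pi * m / (3 * m + M)

for m, M in [(1.0, 1.0), (1.0, 10.0), (10.0, 1.0), (1.0, 1e-9)]:
    R = 1.0
    d = R * m / (m + M)                  # distance from the pad's center to the COM
    phi = pad_rotation(m, M)
    theta = 2 * pi + phi                 # since alpha = theta - phi = 2*pi
    L = M * d**2 * theta + m * (R - d)**2 * theta + (M * R**2 / 2) * phi
    assert isclose(L, 0.0, abs_tol=1e-9)
    print(f"m={m}, M={M}: pad turns {phi:+.3f} rad ({phi * 180 / pi:+.1f} deg)")
```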
-
Sorry about double-using $M$ for pad mass and point of the center of mass. I don't really want to go back and re-do all the pictures. I hope it doesn't cause too much confusion. – Mark Eichenlaub Jan 6 '11 at 14:12
If the child is infinitely heavy, you should get a rotation of $2\pi$, but this results gives $4\pi$. =/ – Bruce Connor Jan 6 '11 at 14:21
@Bruce That is not the case. What is your justification? – Mark Eichenlaub Jan 6 '11 at 14:32
@mark: My mistake. If the child is infinitely heavy, then it will stay in place as the plant spins. But I just realised: the plant will turn around the kid AND turn around its own axis. Each of these rotations will be $2\pi$, summing to a total of $4\pi$. So your limit matches. – Bruce Connor Jan 6 '11 at 14:55
@Mark: +1 for the video :D Which programs did you use? – Robert Filter Jan 13 '11 at 21:56
ftiaronsem is right, I think!
The law of conservation of angular momentum for this system must be expressed as $I_m\omega_m+I_M\omega_M=0$ where $I_m$ and $I_M$ are the moments of inertia of the child and the pad about the axis which passes through the center of mass of the whole system, and $\omega_m$ and $\omega_M$ are the corresponding angular velocities in the reference frame of the ground (or stationary water). More specifically, $\omega_M$ is the angular velocity of the center of the pad.
Mark Eichenlaub argues that this is not sufficient and we need to include yet the angular momentum of the pad's translation as well as the angular momentum of the pad's spinning. I think this is a fundamental mistake. This is best explained by analogs in rotational and linear motion, i think.
Let's consider the case where the child moves along a diameter of the pad. This is a linear motion and the law of conservation of linear momentum for this system must be expressed as $mv_m+Mv_M=0$ where $m$ and $M$ are the masses of the child and the pad, and $v_m$ and $v_M$ are the corresponding velocities in the reference frame of the ground (or stationary water). Let's gather the results together:
$$mv_m+Mv_M=0$$ $$I_m\omega_m+I_M\omega_M=0$$ The moments of inertia $I_m,I_M$ and angular velocities $\omega_m,\omega_M$ are the rotational analogs of $m,M$ and $v_m,v_M$, and the corresponding equations must take the same form.
ftiaronsem's final formula
$$\theta = \frac{m\left(1 - \frac{m}{m+M}\right)^22\pi}{\left(\frac{M}{2} + M\left(\frac{m}{m+M}\right)^2 + m\left(1 - \frac{m}{m+M}\right)^2\right)}$$
can be simplified. For brevity, let's introduce the ratio $x=\frac{m}{M}$. Then finally the angular displacement of the pad is: $$\theta=\frac{2x}{(1+x)(1+3x)}2\pi$$ An amazing thing about this result is that the angular displacement of the pad has a maximum absolute value. It reaches the maximum displacement at $x=\frac{m}{M}=\frac{1}{\sqrt{3}}$, where the maximum is:
$$\theta_{max}=(2-\sqrt{3})2\pi=96.5^\circ$$ Another surprise is that if the child is infinitely heavy then the pad's rotation is zero!
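Taking the simplified expression above at face value, the stated maximum checks out numerically (a quick Python scan, my own addition):

```python
# theta(x) = 4*pi*x / ((1+x)(1+3x)), with x = m/M.
from math import pi, sqrt

def theta(x):
    return 4 * pi * x / ((1 + x) * (1 + 3 * x))

xs = [i / 10000 for i in range(1, 100001)]   # scan x over (0, 10]
x_best = max(xs, key=theta)
print(x_best, 1 / sqrt(3))                   # both are about 0.577
print(theta(x_best) * 180 / pi)              # about 96.5 degrees
print((2 - sqrt(3)) * 360)                   # the closed form, also about 96.5 degrees
```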
-
– Mark Eichenlaub Jan 9 '11 at 17:42
To be more specific, you can't always write $L = I\omega$. You can write $L = r\times p$. Only in special simple cases can that be simplified too $L = I\omega$. – Mark Eichenlaub Jan 9 '11 at 22:29
@Mark: The orbital motion case is not a good comparison. The planet's spin is independent of its orbital motion, and the rule "the angular momentum of the orbital motion of the CM around the sun, plus the angular momentum of its spinning motion around its CM" applies. However, in the given case the pad's spin is not independent of its "orbital" motion, and you cannot apply this rule as you did. The formula $L = I\omega$ applies in this case because $\vec{r}$ and $\vec{\omega}$ are perpendicular to each other. – Martin Gales Jan 10 '11 at 8:09
Also, Mark, consider the specific case where the mass of the pad is zero (or the child is infinitely heavy). Then the pad contributes nothing to the angular momentum of the system, whether it rotates or not. So there is no reason for it to start spinning at all. – Martin Gales Jan 10 '11 at 9:56
And third, Mark: in the case of zero pad mass your solution assumes that the pad turns around the child by the angle $2\pi$. This is impossible without sliding between the pad and the child. But the sliding friction is not zero. So the rotation of the pad around the child is impossible. – Martin Gales Jan 10 '11 at 12:34
This problem can be solved in two steps:
1. Prove that the lily will only spin and not translate
2. Find the spinning velocity and therefore the angle
The lily only rotates
This can be resolved geometrically. The baby must keep turning to stay on its circular trajectory with respect to the lily. It will therefore exert a force on the lily equal and opposite to its centripetal force (i.e. the force it uses to turn).
The force on the baby is of constant magnitude and directed towards the center of the lily. The reaction force on the lily is also of constant magnitude but directed away from the center of the lily.
What is the net effect of the force on the baby as time passes? We can obtain it by subtracting the centripetal forces at small intervals of time. Geometry and common sense show that the net effect is a vector tangential to the trajectory of the baby (obviously).
Now we can think about the net effect of the force on the lily. The force is exactly the same magnitude as the one on the baby, but with opposite direction. We can also see it as the same direction and negative magnitude. So the forces will compose over time in the same way as the baby, but with the opposite sign. So in other words the net effect is that the lily spins backwards.
As there are no other forces acting on the system, the lily will only spin on its axis.
Calculating the angle
Let $\theta$ be the angle of rotation of the baby, and $\phi$ the angle of rotation of the lily. Let the ratio of the masses, $\frac{M}{m}=\mu$
Since the baby always keeps the same distance from the center of the lily, both $\ddot\theta=0=\ddot\phi$ and therefore
$$\dot\theta = \omega_{b}$$ and $$\dot\phi = \omega_{l}$$
Both are constants.
In the frame of reference of the pond, the lily will rotate with angular velocity $\omega_l$ and the baby will rotate with angular velocity $\omega_b+\omega_l$.
Angular momentum must be conserved, so:
$$(\omega_b+\omega_l) m r^2 = -I\omega_{l}$$
Substituting $I=\frac{Mr^2}{2}$ and solving for $\omega_l$ gives
$$\omega_l = -\omega_b\frac{2}{\mu+2}$$
Now, the time that is necessary for a full rotation of the baby is $t_{f}=2\pi/\omega_b$. The angle $\phi$ rotated by the lily is therefore equal to
$$\phi=\omega_l t_f=-\frac{4\pi}{\mu+2}$$
-
@sklivvz When you say "the lily will only spin on its axis" what axis are you referring to? Is that an axis through the geometrical center of the lily pad? – Mark Eichenlaub Jan 6 '11 at 19:51
@Mark Yes, all the forces are centripetal (or centrifugal), so the net effect is that. – Sklivvz♦ Jan 6 '11 at 19:52
@Sklivvz So you think the following two statements are true: 1) The lily pad alone, not considering the child, experiences a net force. 2)The center of mass of the lily pad does not accelerate. Is that right? – Mark Eichenlaub Jan 6 '11 at 19:56
@Mark if you have something to say, then say it... :-) – Sklivvz♦ Jan 6 '11 at 20:01
@Sklivvz Okay. If the lily pad's center stays in one spot and the baby moves, then the center of mass of the system moves. By conservation of linear momentum, the center of mass of the system cannot move. Therefore, your answer is incorrect. From the point of view of forces, if there is any net force on the lily pad, its center must accelerate. – Mark Eichenlaub Jan 6 '11 at 20:05
http://mathhelpforum.com/algebra/204547-rearranging-quadratric-equation.html
# Thread:
1. ## Rearranging to quadratic equation
Here I've got a sum that I have to reorder into a quadratic equation so I can find the two roots of it, but I'm struggling to understand how. Once into the quadratic form, I can easily find the roots, but the problem for me is getting it into the quadratic form.
18/x^4 + 1/x^2 = 4
Now what I did was put the x^4 on the other side. So it's:
x^4 - x^-2 - 18 = 0
Is this right?
According to the mark scheme, I had to add the two fractions together, but I don't see why that is the right option. Any help? :-)
2. ## Re: Rearranging to quadratic equation
Originally Posted by yorkey
Here I've got a sum that I have to reorder into a quadratic equation so I can find the two roots of it, but I'm struggling to understand how. Once into the quadratic form, I can easily find the roots, but the problem for me is getting it into the quadratic form.
18/x^4 + 1/x^2 = 4
Now what I did was put the x^4 on the other side. So it's:
x^4 - x^-2 - 18 = 0
Is this right?
According to the mark scheme, I had to add the two fractions together, but I don't see why that is the right option. Any help? :-)
Not quite. What you need to do is get rid of the fractions. To do this, multiply both sides by the largest order denominator, in this case, $\displaystyle \begin{align*} x^4 \end{align*}$. This will give
$\displaystyle \begin{align*} \frac{18}{x^4} + \frac{1}{x^2} &= 4 \\ x^4 \left( \frac{18}{x^4} + \frac{1}{x^2} \right) &= 4x^4 \\ 18 + x^2 &= 4x^4 \\ 0 &= 4x^4 - x^2 - 18 \\ 0 &= 4X^2 - X - 18 \textrm{ if we let } X = x^2 \end{align*}$
Now solve for $\displaystyle \begin{align*} X \end{align*}$, and use this to solve for $\displaystyle \begin{align*} x\end{align*}$.
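As a quick check of the roots (a sympy snippet, not required for the exercise):

```python
# 18/x^4 + 1/x^2 = 4  is equivalent to  4X^2 - X - 18 = 0  with X = x^2.
import sympy as sp

X = sp.symbols('X')
print(sp.solve(4 * X**2 - X - 18, X))                # X = -2 or X = 9/4

x = sp.symbols('x', real=True)
print(sp.solve(sp.Eq(18 / x**4 + 1 / x**2, 4), x))   # x = -3/2 or x = 3/2 (X = -2 gives no real x)
```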
3. ## Re: Rearranging to quadratic equation
It's strange, I don't know this rule. Sorry to keep you, but could you just lay out the whole "multiply both sides by the lowest order denominator" in a more general way so I can remember it?
So: if I have 2 denominators with variables of different exponents, I multiply the entire LHS and the entire RHS by this value, and then simplify. Is that it?
4. ## Re: Rearranging to quadratic equation
Originally Posted by yorkey
It's strange, I don't know this rule. Sorry to keep you, but could you just lay out the whole "multiply both sides by the lowest order denominator" in a more general way so I can remember it?
So: if I have 2 denominators with variables of different exponents, I multiply the entire LHS and the entire RHS by this value, and then simplify. Is that it?
Well, the reason is that if you were to try to add the fractions, you would need a common denominator. Then, once they're added, to simplify so that you can solve the equation, you need to multiply both sides by the denominator. Try it.
http://mathhelpforum.com/trigonometry/29256-satellite-find-time-takes-call-between-two-cities.html
# Thread:
1. ## satellite, find time it takes a call between two cities.
A communications satellite is in an orbit that is 4.10 x 10^7 m directly above the equator. Consider the moment when the satellite is located midway between Quito, Ecuador, and Belem, Brazil; two cities almost on the equator that are separated by a distance of 3.40 x 10^6 m.
Find the time it takes for a telephone call to go by the way of satellite between these cities. Ignore the curvature of the earth.
in seconds.
heres what I believe it looks like: http://img518.imageshack.us/img518/6439/46101437mc6.png
also, i know that electromagnetic waves propagate through a vacuum at a speed given by c = 3.00 x 10^8
i wasn't sure why the equation would not be one of the given distances d/c.
Anyone who can explain this one please.
2. Originally Posted by rcmango
A communications satellite is in an orbit that is 4.10 x 10^7 m directly above the equator. Consider the moment when the satellite is located midway between Quito, Ecuador, and Belem, Brazil; two cities almost on the equator that are separated by a distance of 3.40 x 10^6 m.
Find the time it takes for a telephone call to go by the way of satellite between these cities. Ignore the curvature of the earth.
in seconds.
heres what I believe it looks like: http://img518.imageshack.us/img518/6439/46101437mc6.png
also, i know that electromagnetic waves propagate through a vacuum at a speed given by c = 3.00 x 10^8
i wasn't sure why the equation would not be one of the given distances d/c.
Anyone who can explain this one please.
This is nearly a trick question: You have to "harmonize" all given distances.
I take as a distance unit $10^6\ m$
Let a denote the distance of the satellite above the equator: $a = 41 \cdot 10^6 \ m$
Let g denote the distance on the ground between Quito and Belem: $g = 3.4 \cdot 10^6\ m$
The communication signal is running from Quito via satellite to Belem. Use Pythagorean Theorem to calculate the distance s of the signal:
$s = 2 \cdot \sqrt{\left(\frac12 \cdot 3.4 \cdot 10^6 \right)^2 + \left(41 \cdot 10^6\right)^2}$
And now calculate the time needed by the signal to cover the distance s.
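In numbers (a small Python check using the figures above):

```python
from math import sqrt

g = 3.4e6     # ground distance between the two cities, in m
a = 4.10e7    # height of the satellite above the equator, in m
c = 3.0e8     # speed of light, in m/s

s = 2 * sqrt((g / 2)**2 + a**2)   # up to the satellite and back down
print(s)       # about 8.207e7 m
print(s / c)   # about 0.274 s
```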
Okay, I get 82070457.53 m for s, the distance.
Now, using the formula, I divided s by c = 3.00 x 10^8
to get a final answer of about 0.274 seconds.
Thank you!
http://planetmath.org/ModalLogicS4
# modal logic S4
The modal logic S4 is the smallest normal modal logic containing the following schemas:
• (T) $\square A\to A$, and
• (4) $\square A\to\square\square A$.
In this entry, we show that 4 is valid in a frame iff the frame is transitive; recall that T is valid in a frame iff the frame is reflexive.
###### Proposition 1.
4 is valid in a frame $\mathcal{F}$ iff $\mathcal{F}$ is transitive.
###### Proof.
First, suppose $\mathcal{F}$ is a frame validating 4, with $wRu$ and $uRt$. Let $M$ be a model with $V(p)=\{v\mid wRv\}$, where $p$ a propositional variable. So $\models_{w}\square p$. By assumption, we have $\models_{w}\square p\to\square\square p$. Then $\models_{w}\square\square p$. This means $\models_{v}\square p$ for all $v$ such that $wRv$. Since $wRu$, $\models_{u}\square p$, which means $\models_{s}p$ for all $s$ such that $uRs$. Since $uRt$, we have $\models_{t}p$, or $t\in V(p)$, or $wRt$. Hence $R$ is transitive.
Conversely, let $\mathcal{F}$ be a transitive frame, $M$ a model based on $\mathcal{F}$, and $w$ any world in $M$. Suppose $\models_{w}\square A$. We want to show $\models_{w}\square\square A$, or for all $u$ with $wRu$, we have $\models_{u}\square A$, or for all $u$ with $wRu$ and all $t$ with $uRt$, we have $\models_{t}A$. If $wRu$ and $uRt$, $wRt$ since $R$ is transitive. Then $\models_{t}A$ by assumption. Therefore, $\models_{w}\square A\to\square\square A$. ∎
As a result,
###### Proposition 2.
S4 is sound in the class of preordered frames.
###### Proof.
Any theorem of S4 is deducible from a finite sequence consisting of tautologies, which are valid in any frame; instances of T, which are valid in reflexive frames; instances of 4, which are valid in transitive frames by the proposition above; and applications of modus ponens and necessitation, both of which preserve validity in any frame. The result follows. ∎
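The correspondence used in Proposition 1 can also be checked by brute force on small frames (a Python sketch of my own; a frame is a finite set of worlds with a relation `R`, and validity of 4 is tested against every valuation of a single propositional variable `p`):

```python
from itertools import product

def box(worlds, R, S):
    """Worlds where 'box A' holds, given the set S of worlds where A holds."""
    return {w for w in worlds if all(v in S for v in worlds if (w, v) in R)}

def validates_4(worlds, R):
    """Frame validity of 'box p -> box box p' over all valuations of p."""
    for bits in product([False, True], repeat=len(worlds)):
        V = {w for w, b in zip(worlds, bits) if b}      # worlds where p is true
        bp = box(worlds, R, V)
        if not bp <= box(worlds, R, bp):
            return False
    return True

def transitive(R):
    return all((w, t) in R for (w, u) in R for (u2, t) in R if u == u2)

worlds = [0, 1, 2]
pairs = [(w, v) for w in worlds for v in worlds]
for bits in product([False, True], repeat=len(pairs)):   # all 512 frames on three worlds
    R = {p for p, b in zip(pairs, bits) if b}
    assert validates_4(worlds, R) == transitive(R)
print("4 is valid exactly on the transitive three-world frames")
```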
In addition, using the canonical model of S4, which is preordered, we have
###### Proposition 3.
S4 is complete in the class of preordered frames.
###### Proof.
Since S4 contains T, its canonical frame $\mathcal{F}_{{\textbf{S4}}}$ is reflexive. We next show that the canonical frame $\mathcal{F}_{{\Lambda}}$ of any consistent normal logic $\Lambda$ containing the schema 4 must be transitive. Suppose $wR_{{\Lambda}}u$ and $uR_{{\Lambda}}v$. If $A\in\Delta_{w}:=\{B\mid\square B\in w\}$, then $\square A\in w$, so $\square\square A\in w$ by modus ponens on 4 and the fact that $w$ is closed under modus ponens. Hence $\square A\in\Delta_{w}$, so $\square A\in u$ since $wR_{{\Lambda}}u$, so $A\in\Delta_{u}$, so $A\in v$ since $uR_{{\Lambda}}v$. As a result, $wR_{{\Lambda}}v$, and therefore $\mathcal{F}_{{\textbf{S4}}}$ is a preordered frame. Since S4 is complete with respect to its canonical model and its canonical frame is preordered, S4 is complete in the class of preordered frames. ∎
By a proper translation, one can map intuitionistic propositional logic PL${}_{i}$ into S4, so that a wff of PL${}_{i}$ is a theorem iff its translate is a theorem of S4.
## Mathematics Subject Classification
03B42 Logics of knowledge and belief (including belief change)
03B45 Modal logic (including the logic of norms)
http://crypto.stackexchange.com/questions/1339/where-is-the-proof-of-security-of-diffies-cipher?answertab=oldest
# Where is the proof of security of Diffie's cipher?
There is an apparently provably secure cipher that was proposed by Diffie and later enhanced by R.A. Rueppel. The scheme, which is mentioned in Applied Cryptography, works like this:
1. Measure the length of the plain-text, $n$.
2. Multiply it by $128$.
3. Generate this much ($128·n$ bytes of) real random data and split it into 128 byte arrays, each of length equal to the plaintext. This can be thought of as a two-dimensional array:
1. One of the indices gives the sequence number ($0\dots 127$).
2. One of the indices gives the position in the sequence, $0 \dots n-1$.
4. Use a 128-bit key to choose which of these streams to XOR together. Each bit of the key corresponds to "yes/no" on whether to use particular sequence. All the selected sequences are XORed together to make a single keystream, $K$.
5. Compute $P \oplus K$ to give the cipher-text $C$.
6. Serialize the two dimensional array and append it to the cipher text.
7. Send the whole package to Bob, who can then decrypt by de-serializing the matrix and selecting the same rows.
Apparently, this scheme is completely secure. The attacker has to examine every possible combination of sequences ($2^{127}$ on average) in order to break the encryption scheme.
What is the proof of this? I can't find the paper that discusses this anywhere.
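For concreteness, here is a minimal Python sketch of the construction as described above (the helper names and byte-level details are my own, not taken from Diffie or Rueppel):

```python
import secrets

def encrypt(plaintext: bytes, key_bits: list) -> tuple:
    """key_bits is a list of 128 bits; each bit says whether to XOR in the matching stream."""
    n = len(plaintext)
    streams = [secrets.token_bytes(n) for _ in range(128)]    # the 128*n bytes of true randomness
    keystream = bytes(n)
    for bit, stream in zip(key_bits, streams):
        if bit:
            keystream = bytes(a ^ b for a, b in zip(keystream, stream))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))
    return ciphertext, streams                                 # both are sent to Bob

def decrypt(ciphertext: bytes, streams: list, key_bits: list) -> bytes:
    keystream = bytes(len(ciphertext))
    for bit, stream in zip(key_bits, streams):
        if bit:
            keystream = bytes(a ^ b for a, b in zip(keystream, stream))
    return bytes(c ^ k for c, k in zip(ciphertext, keystream))

key = [secrets.randbits(1) for _ in range(128)]
ct, mat = encrypt(b"attack at dawn", key)
assert decrypt(ct, mat, key) == b"attack at dawn"
```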
-
## 2 Answers
This looks totally weak. If you know 128 bits of known plaintext, you can infer the corresponding 128 bits of keystream. The keystream being the multiplication of the random matrix by the key (in the vector space $\mathbb{F}_2^{128}$), the key is then revealed through a basic matrix inversion.
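A sketch of that attack (my own Python, reusing the toy `encrypt` from the sketch under the question): each known keystream bit is one GF(2)-linear equation in the 128 key bits, so a few hundred known plaintext bits pin the key down by Gaussian elimination.

```python
import secrets

def keystream_bit_rows(streams, positions):
    """One equation row per known keystream bit; bit j of the row is stream j's bit at that position."""
    rows = []
    for byte_idx, bit_idx in positions:
        row = 0
        for j, stream in enumerate(streams):
            if (stream[byte_idx] >> bit_idx) & 1:
                row |= 1 << j
        rows.append(row)
    return rows

def solve_gf2(rows, rhs, n_vars=128):
    """Gaussian elimination over GF(2); rows are bit-mask ints, rhs is a list of 0/1."""
    rows, rhs, key = rows[:], rhs[:], [0] * n_vars
    r = 0
    pivots = []
    for col in range(n_vars):
        piv = next((i for i in range(r, len(rows)) if (rows[i] >> col) & 1), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        rhs[r], rhs[piv] = rhs[piv], rhs[r]
        for i in range(len(rows)):
            if i != r and (rows[i] >> col) & 1:
                rows[i] ^= rows[r]
                rhs[i] ^= rhs[r]
        pivots.append((r, col))
        r += 1
    for r_i, col in pivots:
        key[col] = rhs[r_i]
    return key

key = [secrets.randbits(1) for _ in range(128)]
known = bytes(range(32))                               # 32 known plaintext bytes = 256 known bits
ct, mat = encrypt(known + b"the rest of the message", key)
positions = [(i, b) for i in range(32) for b in range(8)]
ks_bits = [((known[i] ^ ct[i]) >> b) & 1 for i, b in positions]
recovered = solve_gf2(keystream_bit_rows(mat, positions), ks_bits)
assert recovered == key   # succeeds whenever the 256x128 bit matrix has full column rank (essentially always)
```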
-
Interesting, this may be completely wrong then. The algorithm is mentioned in Applied Cryptography. The original Diffie algorithm used 2^k strings, and the key just selected what string to use. The modification was to use a linear combination of just k strings - but the paper discussing the modification was unreferenced in the text. It said that the paper used a linear combination of the k strings and it was equally secure. I used XOR as the linear combination in my example. Perhaps this is the source of my mistake. – Simon Johnson Nov 28 '11 at 17:36
Can something be perfectly secure and still vulnerable to a known-plaintext key recovery? – Ethan Heilman Nov 28 '11 at 20:03
I have done quite a bit of Google searching and apparently the description of the algorithm along with its proof of security is detailed in Contemporary cryptology: the science of information integrity.
At this stage I have no idea whether the algorithm matches what is listed above. Probably not as the construction is insecure. I'll keep digging and fix this comment when I have some more information.
-
http://stats.stackexchange.com/questions/13810/threshold-for-correlation-coefficient-to-indicate-statistical-significance-of-a
# Threshold for correlation coefficient to indicate statistical significance of a correlation in a correlation matrix
I have computed a correlation matrix of a data set which contains 455 data points, each data point containing 14 characteristics. So the dimension of the correlation matrix is 14 x 14.
I was wondering whether there is a threshold for the value of the correlation coefficient which points out that there is a significant correlation between two of those characteristics.
I have values ranging from -0.2 to 0.85, and I was thinking that the important ones are those which are above 0.7.
• Is there a general value for the correlation coefficient which should be considered for the threshold or is just context dependent to the data type which I am investigating?
-
– user603 Aug 3 '11 at 16:07
@user603 Good catch: it's practically the same question. The innovation here is to ask whether tests for significant correlation might depend on the "data type" (read: data distribution). Let's hope that the replies focus on this aspect instead of going over old ground. – whuber♦ Aug 3 '11 at 16:44
## 2 Answers
### Significance tests for correlations
There are tests of statistical significance that can be applied to individual correlations, which indicate the probability of obtaining a correlation as large as or larger than the sample correlation, assuming the null hypothesis is true.
The key point is that what constitutes a statistically significant correlation coefficient depends on:
• Sample size: bigger sample sizes will lead to smaller thresholds
• alpha: often set to .05, smaller alphas will lead to higher thresholds for statistical significance
• one-tailed / two-tailed test: I'm guessing that you would be using two-tailed so this probably doesn't matter
• type of correlation coefficient: I'm guessing you are using Pearson's
• distributional assumptions of x and y
In common circumstances, where alpha is .05, using two-tailed test, with Pearson's correlation, and where normality is at least an adequate approximation, the main factor influencing the cut-off is sample size.
• Here's an online calculator
• `cor.test` will calculate statistical significance of a correlation in R
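For instance, the cutoff for n = 455, alpha = .05, two-tailed, Pearson's r can be computed directly (a Python/scipy sketch):

```python
# Smallest |r| that reaches significance, via  t = r * sqrt(n - 2) / sqrt(1 - r^2).
from math import sqrt
from scipy import stats

n, alpha = 455, 0.05
t_crit = stats.t.ppf(1 - alpha / 2, n - 2)
r_crit = t_crit / sqrt(n - 2 + t_crit**2)
print(round(r_crit, 3))   # about 0.092, far below the 0.7 rule of thumb mentioned in the question
```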
### Threshold of importance
Another way of interpreting your question is to consider that you are interested not in whether a correlation is statistically significant, but rather whether it is practically important.
Some researchers have offered rules of thumb for interpreting the meaning of correlation coefficients, but these rules of thumb are domain specific.
### Multiple significance testing
However, because you are interested in flagging significant correlations in a matrix, this changes the inferential context. You have $k(k-1)/2$ correlations where $k$ is the number of variables (i.e., $14(13)/2=91$. If the null hypothesis were true for all correlations in the matrix, then the more significance tests you run, then the more likely you are of making a Type I error. E.g., in your case you would on average make $91 * .05 = 4.55$ Type I errors if the null hypothesis were true for all correlations.
As @user603 has pointed out, these issues were well discussed in this earlier question.
In general, I find it useful when interpreting a correlation matrix to focus on higher level structure. This can be done in an informal way by looking at general patterns in the correlation matrix. This can be done more formally by using techniques like PCA and factor analysis. Such approaches avoid many of the issues associated with multiple significance testing.
-
One option would be simulation or permutation testing. If you know the distribution that your data comes from you could simulate from that distribution, but with all the observations independent. If you don't know the distribution then you can permute each of your variables independently of each other, and that will give you the same general marginal distribution of each variable, but with any correlation removed.
Do either of the above (keeping the sample size and matrix dimensions the same) a whole bunch of times (10,000 or so) and look at the maximum absolute correlation, or another high quantile that may be of interest. This will give you the distribution from the null hypothesis that you can then compare the maximum of your actual observed correlations to (and the other high quantiles of interest).
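A rough sketch of that procedure (Python/numpy; each column is permuted independently and the largest absolute off-diagonal correlation is the statistic):

```python
import numpy as np

def max_abs_offdiag_corr(x):
    r = np.corrcoef(x, rowvar=False)
    np.fill_diagonal(r, 0.0)
    return np.max(np.abs(r))

def permutation_null(data, n_sim=10000, seed=0):
    """Null distribution of the largest |r| when every column is shuffled independently."""
    rng = np.random.default_rng(seed)
    sims = np.empty(n_sim)
    for i in range(n_sim):
        shuffled = np.column_stack([rng.permutation(col) for col in data.T])
        sims[i] = max_abs_offdiag_corr(shuffled)
    return sims

# Fake data with the same shape as in the question (455 observations, 14 characteristics):
data = np.random.default_rng(1).normal(size=(455, 14))
null = permutation_null(data, n_sim=2000)
print(np.quantile(null, 0.95))   # compare the observed maximum |r| against this quantile
```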
-
http://math.stackexchange.com/questions/162177/building-transformation-matrix-from-spherical-to-cartesian-coordinate-system
# building transformation matrix from spherical to cartesian coordinate system
How does one arrive at the following, given $x = r\sin \theta \cos \phi, y = r\sin \theta \sin \phi, z=r\cos\theta$
$$\begin{bmatrix} A_x\\ A_y\\ A_z \end{bmatrix} = \begin{bmatrix} \sin \theta \cos \phi & \cos \theta \cos \phi & -\sin\phi\\ \sin \theta \sin \phi & \cos \theta \sin \phi & \cos\phi\\ \cos\theta & -\sin\theta & 0 \end{bmatrix} \begin{bmatrix} A_r\\ A_\theta\\ A_\phi \end{bmatrix}$$
Also, how does one show that $$\begin{bmatrix} \hat i\\ \hat j\\ \hat k \end{bmatrix} = \begin{bmatrix} \sin \theta \cos \phi & \cos \theta \cos \phi & -\sin\phi\\ \sin \theta \sin \phi & \cos \theta \sin \phi & \cos\phi\\ \cos\theta & -\sin\theta & 0 \end{bmatrix} \begin{bmatrix} \hat e_r\\ \hat e_\theta\\ \hat e_\phi \end{bmatrix}$$ How does one change $(a,b,c)$ into spherical polar coordinates and $(r ,\theta, \phi)$ into Cartesian coordinates using this matrix? Thank you!!
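Not a full derivation, but a quick numerical way to convince yourself (a numpy sketch; theta is the polar angle from the z-axis, phi the azimuthal angle, matching the convention above). Since the matrix is orthogonal, its inverse is its transpose, which is what lets you go back and forth between the two coordinate systems:

```python
import numpy as np

def sph_to_cart_matrix(theta, phi):
    """Columns are the Cartesian components of e_r, e_theta, e_phi."""
    st, ct = np.sin(theta), np.cos(theta)
    sf, cf = np.sin(phi), np.cos(phi)
    return np.array([[st * cf, ct * cf, -sf],
                     [st * sf, ct * sf,  cf],
                     [ct,      -st,      0.0]])

theta, phi = 0.7, 1.2
M = sph_to_cart_matrix(theta, phi)
print(np.allclose(M @ M.T, np.eye(3)))   # True: orthogonal, so the inverse transformation is M.T
print(M @ np.array([1.0, 0.0, 0.0]))     # e_r in Cartesian components: (sin t cos p, sin t sin p, cos t)
```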
-
Find a transformation from Cartesian to cylindrical coordinates. Find a transformation from cylindrical to spherical coordinates. Compose them. – Potato Jun 24 '12 at 0:36
http://www.pvk.ca/Blog/2012/08/13/engineering-a-list-merge-sort/
# Engineering a List Merge Sort
Aug 13th, 2012
Back in November 2011, Takeru Ohta submitted a very nice patch to replace our (SBCL’s) in-place stable merge sort on linked lists with a simpler, much more efficient implementation. It took me until last May to whip myself into running a bunch of tests to estimate the performance improvements and make sure there weren’t any serious regression, and finally commit the patch. This post summarises what happened as I tried to find further improvements. The result is an implementation that’s linear-time on nearly sorted or reverse-sorted lists, around 4 times as fast on slightly shuffled lists, and up to 30% faster on completely shuffled lists, thanks to design choices guided by statistically significant effects on performance (… on one computer, my dual 2.8 GHz X5660).
I believe the approach I used to choose the implementation can be applied in other contexts, and the tiny tweak to adapt the sort to nearly-sorted inputs is simple (much simpler than explicitly detecting runs like Timsort), if a bit weak, and works with pretty much any merge sort.
## A good starting point
The original code is reproduced below. The sort is parameterised on two functions: a comparator (test) and a key function that extracts the property on which data are compared. The key function is often the identity, but having it available is more convenient than having to pull the calls into the comparator. The sort is also stable, so we use it for both stable and regular sorting; I’d like to keep things that way to minimise maintenance and testing efforts. This implementation seems like a good foundation to me: it’s simple but pretty good (both in runtime and in number of comparisons). Trying to modify already-complicated code is no fun, and there’s little point trying to improve an implementation that doesn’t get the basics right.
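(The patch itself is Common Lisp and is not shown here; purely as an illustration of the general shape under discussion, a top-down, length-driven, stable merge sort over linked lists parameterised on test and key, here is a Python sketch. It is not the SBCL source.)

```python
# Cons cells modeled as two-element Python lists: [value, next]; nil is None.
def from_iterable(xs):
    head = None
    for x in reversed(list(xs)):
        head = [x, head]
    return head

def to_list(head):
    out = []
    while head is not None:
        out.append(head[0])
        head = head[1]
    return out

def merge(a, b, test, key):
    """Stable destructive merge: relinks existing cells, preferring `a` on ties."""
    dummy = [None, None]
    tail = dummy
    while a is not None and b is not None:
        if test(key(b[0]), key(a[0])):   # b's element is strictly smaller: take it
            tail[1] = b
            b = b[1]
        else:                            # a <= b: take a, which keeps the sort stable
            tail[1] = a
            a = a[1]
        tail = tail[1]
    tail[1] = a if a is not None else b
    return dummy[1]

def merge_sort(head, n, test, key):
    """Sort the first n cells; returns (sorted head, remaining cells)."""
    if n == 1:
        rest = head[1]
        head[1] = None
        return head, rest
    left, rest = merge_sort(head, n // 2, test, key)
    right, rest = merge_sort(rest, n - n // 2, test, key)
    return merge(left, right, test, key), rest

def stable_sort(head, test, key=lambda x: x):
    n = len(to_list(head))               # the initial call to length discussed below
    return merge_sort(head, n, test, key)[0] if head is not None else None

print(to_list(stable_sort(from_iterable([3, 1, 2, 1]), test=lambda a, b: a < b)))  # [1, 1, 2, 3]
```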
There are a few obvious improvements to try out: larger base cases, recognition of sorted subsequences, converting branches to conditional moves, finding some way to avoid the initial call to length (which must traverse the whole linked list), … But first, what would interesting performance metrics be, and on what inputs?
## Brainstorming an experiment up
I think it works better to first determine our objective, then the inputs to consider, and, last, the algorithmic variants to try and compare (decision variables). That’s more or less the reverse order of what’s usually suggested when defining mathematical models. The difference is that, in the current context, the space of inputs and algorithms are usually so large that we have to winnow them down by taking earlier choices into account.
### Objective functions
A priori, three basic performance metrics seem interesting: runtimes, number of calls to the comparator, and number of calls to the key functions. On further thought, the last one doesn’t seem useful: if it really matters, a schwartzian transform suffices to reduce these calls to a minimum, regardless of the sort implementation.
There are some complications when looking at runtimes. The universe of test and key functions is huge, and the sorts can be inlined, which sometimes enables further specialisation on the test and key. I’ve already decided that calls to key don’t matter directly. Let’s suppose it’s very simple, the identity function. The number of comparisons will correlate nicely with performance when comparisons are slow. Again, let’s suppose that the comparator is simple, a straight `<` of fixnums. The performance of sorts, especially with a trivial key and a simple comparator, can vary a lot depending on whether the sort is specialised or not, and both cases are relevant in practice. I’ll have to test for both cases: inlined comparator and key functions, and generic sorts with unknown functions.
This process lead to a set of three objective functions: the number of calls to the comparator, the runtime (number of cycles) of normal, generic sort, and the number of cycles for a specialised sort.
### Inputs
The obvious thing to vary in the input (the list to sort) is the length of the list. The lengths should probably span a wide range of values, from short lists (e.g. 32 elements) to long ones (a few million elements). Programs that are sort-bound on very long lists should probably use vectors, if only around the sort, and then invest in a sophisticated sort.
In real programs, sort is sometimes called on nearly sorted or reverse-sorted sequences, and it’s useful to sort such inputs faster, or with fewer comparisons. However, it’s probably not that interesting if the adaptivity comes at the cost of worse performance on fully shuffled lists. I decided to test on sorted and fully shuffled inputs. I also interpolated between the two by flipping randomly-selected subranges of the list a few times.
Finally, linked lists are different than vectors in one key manner: contiguous elements can be arbitrarily scattered around memory. SBCL’s allocation scheme ensures that consecutively-allocated objects will tend to be located next to each other in memory, and the copying garbage collector is hacked to copy the spine of cons lists in order. However, a list can still temporarily exhibit bad locality, for example after an in-place sort. Again, I decided to go for ordered conses, fully scattered conses (only the conses were shuffled, not the list’s values), and to interpolate, this time by swapping randomly-selected pairs of consecutive subranges a couple times.
### Code tweaks
The textbook way to improve a recursive algorithm is to increase the base cases’ sizes. In the initial code the base case is a sublist of size one; such a list is trivially sorted. We can easily increase that to two (a single conditional swap suffices), and an optimal sorting network for three values is only slightly more complicated. I decided to stop there, with base cases of size one to three. These simple sorts are implemented as a series of conditional swaps (i.e. pairs of max/min computations), and these can be executed branch-free, with only conditional moves. There’s a bit of overhead, and conditional moves introduce more latency than well predicted branches, but it might be useful for the specialised sort on shuffled inputs, and otherwise not hurt too much.
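For reference, the size-3 base case is just three conditional swaps; in the Lisp version each swap can compile down to a pair of min/max-style conditional moves when the comparison is inlined. A sketch of the idea (mirroring the discussion, not the actual patch):

```python
def cswap(a, b, test, key):
    """Conditional swap: order the pair (a, b); swap only on a strictly smaller b, which preserves stability."""
    return (b, a) if test(key(b), key(a)) else (a, b)

def sort3(x, y, z, test, key=lambda v: v):
    # Optimal 3-input sorting network: compare (x,y), (y,z), (x,y); always exactly 3 comparisons.
    x, y = cswap(x, y, test, key)
    y, z = cswap(y, z, test, key)
    x, y = cswap(x, y, test, key)
    return x, y, z

print(sort3(3, 1, 2, test=lambda a, b: a < b))   # (1, 2, 3)
```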
The merge loop could cache the result of calls to the key functions. This won’t be useful in specialised sorts, and won’t affect the number of comparisons, but it’ll probably help the generic sort without really affecting performance otherwise.
With one more level of indirection, the merge loop can be branch-free: `merge-one` can be executed on references to the list heads, and these references can be swapped with conditional moves. Again, the additional complexity makes it hard to guess if the change would be a net improvement.
Like I hinted back in May, we can accelerate the sort on pre-sorted inputs by keeping track of the last cons in each list, and tweaking the merge function: if the first value in one list is greater than (or equal to) the last in the other, we can directly splice them in order. Stability means we have to add a bit of complexity to handle equal values correctly, but nothing major. With this tiny tweak, merge sort is linear-time on sorted or reverse-sorted lists (the recursive step is constant-time, and merge sort recurses on both halves); it also works on recursively-processed sublists, and the performance is thus improved on nearly-sorted inputs in general. There’s little point going through additional comparisons to accelerate the merger of two tiny lists; a minimal length check is in order. In addition to the current version, without any quick merge, I decided to try quick merges when the length of the two sublists summed to at least 8, 16 or 32. I didn’t try limits lower than 8 because any improvement would probably be marginal: trying to detect opportunities for quicker merge introduces two additional comparisons when it fails.
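The tweak itself is tiny once each sorted run carries a pointer to its last cons. A sketch over (head, last, length) run descriptors, reusing the merge from the earlier sketch (my own notation, not the SBCL code):

```python
def merge_runs(a, b, test, key, cutoff=8):
    """a and b are (head, last, length) triples of sorted runs; a precedes b, for stability."""
    head_a, last_a, n_a = a
    head_b, last_b, n_b = b
    if n_a + n_b >= cutoff:
        # All of a <= all of b: splice a ++ b.  Using "not (b < a)" keeps equal elements stable.
        if not test(key(head_b[0]), key(last_a[0])):
            last_a[1] = head_b
            return head_a, last_b, n_a + n_b
        # All of b < all of a: splice b ++ a.  Strict comparison here, again for stability.
        if test(key(last_b[0]), key(head_a[0])):
            last_b[1] = head_a
            return head_b, last_a, n_a + n_b
    merged = merge(head_a, head_b, test, key)   # fall back to the ordinary element-wise merge
    last = last_a if test(key(last_b[0]), key(last_a[0])) else last_b
    return merged, last, n_a + n_b
```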
Finally, I tried to see if the initial call to `length` (which has to traverse the whole list) could be avoided, e.g. by switching to a bottom-up sort. The benchmarks I ran in May made me realise that’s probably a bad idea. Such a merge sort almost assuredly has to split its inputs in chunks of power of two (or some other base) sizes. These splits are suboptimal on non-power-of-two inputs; for example, when sorting a list of length `(1+ (ash 1 n))`, the final merge is between a list of length `(ash 1 n)` and a list of length … one. Knowing the exact length of the list means we can split optimally on recursive calls, and that eliminates bumps in runtimes and in the number of comparisons around “round” lengths.
## How can we compare all these possibilities?
I usually don’t try to do anything clever, and simply run a large number of repetitions for all the possible implementations and families of inputs, and then choose a few interesting statistical tests or sic an ANOVA on it. The problem is, I’d maybe want to test with ten lengths (to span the wide range between 32 and a couple million), a couple shuffledness values (say, four, between sorted and shuffled), a couple scatteredness values (say, four, again), and around 48 implementations (size-1 base case, size-3 base case, size-3 base case with conditional moves, times cached or uncached key, times branchful or branch-free merge loop, times four possibilities for the quick merge). That’s a total of 7680 sets of parameter values. If I repeated each possibility 100 times, a reasonable sample size, I’d have to wait around 200 hours, given an average time of 1 second/execution (a generous estimate, given how slow shuffling and scattering lists can be)… and I’d have to do that separately when testing for comparison counts, generic sort runtimes and specialised sort runtimes!
I like working on SBCL, but not enough to give its merge sort multiple CPU-weeks.
Executing multiple repetitions of the full cross product is overkill: that actually gives us enough information to extract information about the interaction between arbitrary pairs (or arbitrary subsets, in fact) of parameters (e.g. shuffledness and the minimum length at which we try to merge in constant-time). The thing is, I’d never even try to interpret all these crossed effects: there are way too many pairs, triplets, etc. I could instead try to determine interesting crosses ahead of time, and find a design that fits my simple needs.
Increasing the length of the list will lead to longer runtimes and more comparisons. Scattering the cons cells around will also slow the sorts down, particularly on long lists. Hopefully, the sorts are similar enough to be affected comparably by the length of the list and by how its conses are scattered in memory.
Pre-sorted lists should be quicker to sort than shuffled ones, even without any clever merge step: all the branches that depend on comparisons are trivially predicted. Hopefully, the effect is more marked when sorted pairs of sublists are merged in constant time.
Finally, the interaction between the remaining algorithmic tweaks is pretty hard to guess, and there are only 12 combinations. I feel it’s reasonable to cross the three parameter sets.
That’s three sets of crossed effects (length and scatteredness, shuffledness and quick merge switch-over, remaining algorithmic tweaks), but I’m not interested in any further interaction, and am actually hoping these interactions are negligible. A Latin square design can help bring the sample to a much more reasonable size.
### Quadrata Latina pro victoria
An NxN Latin square is a square of NxN cells, with one of N symbols in each cell, with the constraint that each symbol appears once in each row and column; it’s a relaxed Sudoku.
When a first set of parameters values is associated with the rows, a second with the columns, and a third with the symbols, a Latin square defines N2 triplets that cover each pair of parameters between the three sets exactly once. As long as interactions are absent or negligible, that’s enough information to separate the effect of each set of parameters. The approach is interesting because there are only N2 cells (i.e. trials), instead of N3. Better, the design can cope with very low repetition counts, as low as a single trial per cell.
Latin squares are also fairly easy to generate. It suffices to fill the first column with the symbols in arbitrary order, the second in the same order, rotated by one position, the third with a rotation by two, etc. The square can be further randomised by shuffling the rows and columns (with Fisher-Yates, for example). That procedure doesn’t sample from the full universe of Latin squares, but it’s supposed to be good enough to uncover pairwise interactions.
Latin squares only make sense when all three sets of parameters are the same size. Latin rectangles can be used when one of the sets is smaller than the two others, by simply removing rows or columns from a random Latin square. Some pairs are then left unexplored, but the data still suffices for uncrossed linear fits, and generating independent rectangles helps cover more possibilities.
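That construction takes only a few lines (a Python sketch):

```python
import random

def latin_square(symbols, rng=random):
    """Cyclic Latin square, then shuffle rows and columns for some randomisation (Fisher-Yates via shuffle)."""
    n = len(symbols)
    square = [[symbols[(i + j) % n] for j in range(n)] for i in range(n)]
    rng.shuffle(square)                      # shuffle rows
    cols = list(range(n))
    rng.shuffle(cols)                        # shuffle columns
    return [[row[c] for c in cols] for row in square]

def latin_rectangle(symbols, n_rows, rng=random):
    """Drop rows (or, symmetrically, columns) from a random Latin square."""
    return latin_square(symbols, rng)[:n_rows]

sq = latin_square(list("ABCD"))
assert all(len(set(row)) == 4 for row in sq)            # each symbol once per row
assert all(len(set(col)) == 4 for col in zip(*sq))      # and once per column
```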
I’ll treat all the variables as categorical, even though some take numerical values: it’ll work better on non-linear effects (and I have no clue what functional form to use).
### Optimising for comparison counts
Comparison counts are easier to analyse. They’re oblivious to micro-optimisation issues like conditional moves or scattered conses, and results are deterministic for fixed inputs. There are much fewer possibilities to consider, and less noise.
Four values for the minimum length before checking for constant-time merger (8, 16, 32 or never), and ten shuffledness values (sorted, one, two, five, ten, 50, 100, 500 or 1000 flips, and full shuffle) seem reasonable; when the number of flips is equal to or exceeds the list length, a full shuffle is performed instead. That’s 40 values for one parameter set.
There are only two interesting values for the remaining algorithmic tweaks: size-3 base cases or not (only size-1).
This means there should be 40 list lengths to balance the design. I chose to interpolate from 32 to 16M (inclusively) with a geometric sequence, rounded to the nearest integer.
The resulting Latin rectangles comprise 80 cells. Each scenario was repeated five times (starting from the same five PRNG states), and 30 independent rectangles were generated. In total, that's 12 000 executions. There are probably smarter ways to do this that better exploit the fact that there are only two algorithmic-tweak variants; I stuck to a very thin Latin rectangle to stay closer to the next two settings. Still, a full cross product with 100 repetitions would have called for 320 000 executions, nearly 30 times as many.
I wish to understand the effect of these various parameters on the number of times the comparison function is called to sort a list. Simple models tend to suppose additive effects. That doesn’t look like it’d work well here. I expect multiplicative effects: enabling quick merge shouldn’t add or subtract to the number of comparisons, but scale it (hopefully by less than one). A logarithmic transformation will convert these multiplications into additions. The ANOVA method and the linear regression I’ll use are parametric methods that suppose that the mean of experimental noise roughly follows a normal distribution. It seems like a reasonable hypothesis: variations will be caused by a sum of many small differences caused by the shuffling, and we’re working with many repetitions, hopefully enough for the central limit theorem to kick in.
The Latin square method also depends on the absence of crossed interactions between rows and columns, rows and symbols, or columns and symbols. If that constraint is violated, the design is highly vulnerable to Type I errors: variations caused by interactions between rows and columns could be assigned to rows or columns, for example.
My first step is to look for such interaction effects.
The main effects are statistically significant (in order, list length, shuffling and quick merge limit, and the algorithmic tweaks), with p < 2e-16. That’s reassuring: the odds of observing such results if they had no effects are negligible. Two of the pairs are, as well. Their effects, on the other hand, don’t seem meaningful. The `Sum Sq` column reports how much of the variance in the data set is explained when the parameters corresponding to each row (one for each degree of freedom `Df`) are introduced in the fit. Only the Size.Scatter:Shuffle.Quick row really improves the fit, and that’s with 1159 degrees of freedom; the mean improvement in fit, `Mean Sq` (per degree of freedom) is tiny.
The additional assumption that interaction effects are negligible seems reasonably satisfied. The linear model should be valid, but, more importantly, we can analyse each set of parameters independently. Let’s look at a regression with only the main effects.
The fit is only slightly worse than with pairwise interactions. The coefficient table follows. What we see is that half of the observations fall within 12% of the linear model’s prediction (the worst case is off by more than 100%), and that nearly all the coefficients are statistically significantly different than zero.
The Size.Scatter coefficients are plotted below. The number of comparison grows with the length of the lists. The logarithmic factor shows in the curve’s slight convexity (compare to the linear interpolation in blue).
The Shuffle.Quick values are the coefficients for the crossed effect of the level of shuffling and the minimum length (cutoff) at which constant-time merge may be executed; their values are reported in the next histogram, with error bars corresponding to one standard deviation. Hopefully, a shorter cutoff lowers the number of comparisons when lists are nearly pre-sorted, and doesn’t increase it too much when lists are fully shuffled. On very nearly sorted lists, looking for pre-sorted inputs as soon as eight or more values are merged divides the number of comparisons by a factor of 4 (these are base-2 logarithms), and the advantage smoothly tails off as lists are shuffled better. Overall, cutting off at eight seems to never do substantially worse than the other choices, and is even roughly equivalent to vanilla merges on fully shuffled inputs.
The coefficient table tells us that nearly all of the Shuffle.Quick coefficients are statistically significant. The statistical significance values are for a null hypothesis that each of these coefficients is actually zero: the observation would be extremely unlikely if that were the case. That test tells us nothing about the relationship between two coefficients.
Comparing differences with standard deviations helps us detect hugely significant differences, but we can use statistical tests to try and make finer distinctions. Tukey’s Honest Significant Difference (HSD) method gives intervals on the difference between two coefficients for a given confidence level. For example, the 99.99% confidence interval between cutoff at 8 and 32 on lists that were flipped 50 times is [-0.245, -0.00553]. This result means that, if the hypotheses for Tukey’s HSD method are satisfied, the probability of observing the results I found is less than .01% when the actual difference in effect between cutoff at 8 and 32 is outside that interval. Since even the upper bound is negative, it also means that the odds of observing the current results are less than .01% if the real value for cutoff at 8 isn’t lower than that of cutoff at 32: it’s pretty sure that looking for quick merges as early as length eight pays off compared to only doing so for merges of length 32 or more. One could also just prove that’s the case. Overall, cutting off at length eight results in fewer comparisons than the other options at nearly all shuffling levels (with very high confidence), and the few cases where it doesn’t aren’t statistically significant at a 99.99% confidence level – of course, absence of evidence isn’t evidence of absence, but the differences between these estimates tend to be tiny anyway.
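As an illustration only (same hypothetical column names as above, and not the author's actual tooling), a Tukey HSD comparison of the cutoff levels at one shuffling level could look like this with statsmodels:

```python
# Pairwise Tukey HSD intervals between quick-merge cutoffs, restricted to one
# shuffling level ("flip50" is a made-up label).
import numpy as np
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("comparison_counts.csv")   # hypothetical data file
sub = df[df["Shuffle"] == "flip50"]
res = pairwise_tukeyhsd(endog=np.log2(sub["comparisons"]),
                        groups=sub["Quick_Cutoff"],
                        alpha=0.0001)        # 99.99% confidence intervals
print(res.summary())
```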
The last row, Leaf.Cache.BranchMergeTxFxT, reports the effect of adding base cases that sort lists of length 2 and 3. Doing so causes 4% more comparisons. That’s a bit surprising: adding specialised base cases usually improves performance. The issue is that the sorting networks are only optimal for data-oblivious executions. Sorting three values requires, in theory, 2.58 ($$\lg 3!$$) bits of information (comparisons). A sorting network can’t do better than the ceiling of that, three comparisons, but if control flow can depend on the comparisons, some lists can be sorted in two comparisons.
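To make the distinction concrete, here is a small, generic illustration (in Python, purely for exposition — not the code under test): a three-element sorting network always spends three comparisons, while a branchful sort sometimes finishes after two.

```python
def sort3_network(a, b, c):
    """Data-oblivious: always three compare-exchanges."""
    if a > b: a, b = b, a
    if b > c: b, c = c, b
    if a > b: a, b = b, a
    return a, b, c

def sort3_branchful(a, b, c):
    """Adaptive: two comparisons suffice for some inputs."""
    if a <= b:
        if b <= c:
            return a, b, c              # already sorted: 2 comparisons
        return (a, c, b) if a <= c else (c, a, b)
    if a <= c:
        return b, a, c                  # 2 comparisons
    return (b, c, a) if b <= c else (c, b, a)

print(sort3_network(3, 1, 2), sort3_branchful(3, 1, 2))
```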
It seems that, if I wish to minimise the number of comparisons, I should avoid sorting networks for the size-3 base case, and try to detect opportunities for constant-time list merges. Doing so as soon as the merged list will be of length eight or more seems best.
### Optimising the runtime of generic sorts
I decided to keep the same general shape of 40x40xM parameter values when looking at the cycle count for generic sorts. This time, scattering conses around in memory will affect the results. I went with conses laid out linearly, 10 swaps, 50 swaps, and full randomisation of addresses. These 4 scattering values leave 10 list lengths, in a geometric progression from 32 to 16M. Now, it makes sense to try all the other micro-optimisations: trivial base case or base cases of size up to 3, with branches or conditional moves (3 choices), cached calls to key during merge (2 choices), and branches or conditional moves in the merge loop (2 choices). This calls for Latin rectangles of size 40x12; I generated 10 rectangles, and repeated each cell 5 times (starting from the same 5 PRNG seeds). In total, that’s 24 000 executions. A full cross product, without any repetition, would require 19 200 executions; the Latin square design easily saved a factor of 10 in terms of sample size (and computation time) for equivalent power.
I’m interested in execution times, so I generated the inputs ahead of time, before sorting them; during both generation and sorting, the garbage collector was disabled to avoid major slowdowns caused by the mprotect-based write barrier.
Again, I have to apply a logarithmic transformation for the additive model to make sense, and first look at the interaction effects. The situation is similar to the previous section on comparison counts: one of the crossed effects is statistically significant, but it’s not overly meaningful. A quick look at the coefficients reveals that the speed-ups caused by processing nearly-sorted lists in close to linear time are overestimated on short lists and slightly underestimated on long ones.
We can basically read the ANOVA with only main effects by skipping the rows corresponding to crossed effects and instead adding their values to the residuals. There are statistically significant coefficients in there, and they’re reported below. Again, I’m quite happy to be able to examine each set of parameters independently, rather than having to understand how, e.g., scattering cons cells around affects quick merges differently than the vanilla merge. Maybe I just didn’t choose the right parameters, or was really unlucky; I’m just trying to do the best I can with induction.
The coefficients for list length crossed with scattering level are plotted below. Sorting seems to be slower on longer lists (surprise!), especially when the cons cells are scattered; sorting long scattered lists is about twice as slow as sorting nicely laid-out lists of the same length. The difference between linear and slightly scattered lists isn’t statistically significant.
Just as with comparison counts, sorting pre-sorted lists is faster, with or without special logic. Looking for sorted inputs before merging pays off even on short lists, when the input is nearly sorted: the effect of looking for pre-sorted inputs even on sublists of length eight is consistently more negative (i.e. reduces runtimes) than for the other cutoffs. The difference is statistically significant at nearly all shuffling levels, and never significantly positive.
Finally, the three algorithmic tweaks. Interestingly, the coefficients tell us that, overall, the additional overhead of the branch-free merge loop slows it down by 5%. The fastest combination seems to be larger base cases, with or without conditional moves (C or T), cached calls to key (T), and branchful merge loop (T); the differences are statistically significant against nearly all other combinations, except FxTxT (no leaf sort, cached key, and branchful merge loop). Compared with the current code (FxFxT), the speed up is on the order of 5%, and at least 2% with 99.99% confidence.
If I want to improve the performance of generic sorts, it looks like I want to test for pre-sorted inputs when merging into a list of length 8 or more, probably implement larger base cases, cache calls to the key function, and keep the merge loop branchful.
### Optimising the runtime of specialised sorts
I kept the exact same plan as for generic sorts. The only difference is that independent Latin rectangles were re-generated from scratch. With the overhead from generic indirect calls removed, I’m hoping to see more important effects from the micro-optimisations.
Here as well, all the main and crossed effects are statistically significant. The effect of the micro-optimisations (Leaf.Cache.BranchMerge) are now about as influential as the fast merge minimum length. It’s also even more clear that the crossed effects are much less important than the main ones, and that it’s probably not too bad to ignore the former.
The general aspect of the coefficients is pretty much the same as for generic sorts, except that differences are amplified now that the constant overhead of indirect calls is eliminated.
The coefficients for crossed list length and scattering level are plotted below. The graph shows that fully shuffling long lists around slows the sort down by a factor of 8. The initial check for crossed effects gave good reasons to believe that this effect is fairly homogeneous throughout all implementations.
Checking for sorted inputs before merge still helps, even on short lists (of length 8 or more). In fact even on completely shuffled lists, looking for quick merge on short lists very probably accelerates the sort compared to not looking for pre-sorted inputs, although the speed up compared to other cutoff values isn’t significant to a 99.99% confidence level.
The key function is the identity and is inlined away in these measurements. It’s not surprising that the difference between cached and uncached key values is tiny. The versions with larger base cases (C or T) and a branchful merge are quicker than the others at a 99.99% confidence level; compared to the initial code, they’re at least 13% faster with 99.99% confidence.
When the sort is specialised, I probably want to use a merge function that checks for pre-sorted inputs very early, to implement larger base cases (with conditional moves or branches), and to keep the merge loop branchful.
## Putting it all together
Comparison counts are minimised by avoiding sorting networks, and by enabling opportunistic constant-time merges as early as possible. Generic sorts are fastest with larger base cases (with or without branches), cached calls to the key function, a branchful merge loop and early checks for constant-time merges. Specialised sorts are, similarly, fastest with larger base cases, a branchful merge loop and early checks when merging (without positive or negative effect from caching calls to the key function, even if it’s the identity).
Overall, these results point me toward one implementation: branchful size-2 and size-3 base cases that let me avoid redundant comparisons, cached calls to the key function, a branchful merge loop, and checks for constant-time merges when the result is of length eight or more.
The compound effect of these choices is linear time complexity on sorted inputs, speed-ups (and reduction in comparison counts) by factors of 2 to 4 on nearly-sorted inputs, and by 5% to 30% on shuffled lists.
The resulting code follows.
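(The Common Lisp listing that appeared here did not survive text extraction. Purely as a stand-in illustration — not the author's code — the sketch below shows the opportunistic merge idea in Python: check for pre-sorted runs once the merged result would be of length eight or more, and otherwise fall back to a plain, branchful merge loop.)

```python
def merge(left, right, key=lambda v: v, quick_cutoff=8):
    """Stably merge two sorted Python lists."""
    # Opportunistic check: once the merged output is long enough, test whether
    # the runs are already in order and can simply be concatenated (constant
    # time on linked lists; O(n) copying with Python lists).
    if len(left) + len(right) >= quick_cutoff:
        if key(left[-1]) <= key(right[0]):
            return left + right
        if key(right[-1]) < key(left[0]):   # strict: keeps the merge stable
            return right + left
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if key(left[i]) <= key(right[j]):
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out
```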
It’s somewhat longer than the original, but not much more complicated: the extra code mostly comes from the tedious but simple leaf sorts. Particularly satisfying is the absence of a conditional move hack: SBCL only recognizes trivial forms, and only on X86 or X86-64, so the code tends to be ugly and sometimes sacrifices performance on other platforms. SBCL’s bad support for conditional moves may explain the lack of any speed up from converting branches to select expressions: the conditional swaps had to be implemented as pairs of independent tests with T/NIL and conditional moves. Worse, when the comparison is inlined, an additional conditional move converted the result of the integer comparison to a boolean value; in total, three pairs of comparison/conditional move were then executed instead of one comparison and two conditional moves. Previous work [PDF] on out-of-place array merge sort in C found it useful to switch to a conditional move-based merge loop and sorting networks. Some of the difference is probably caused by SBCL’s weaker code generation, but the additional overhead inherent to linked list manipulations (compared to linear accesses in arrays) may also play a part.
Another code generation issue is caused by the way the initial version called the comparison function in exactly one place. This meant that arbitrary comparators would almost always be inlined in the specialised sort’s single call site. We lose that property with accelerated merge and larger base cases. That issue doesn’t worry me too much: functions can be declared inline explicitly, and the key function was already called from multiple sites.
I’m a bit surprised that neither the sorting networks nor the merge loop were appreciably sped-up by rewriting them with conditional moves. I’m a lot more surprised by the fact that it pays off to try and detect pre-sorted lists even on tiny merges, and even when the comparator is inlined. The statistical tests were useful here, with results that defy my initial expectations and let me keep the code simpler. I would be pleasantly surprised if complex performance improvement patches, in SBCL and otherwise, went through similar testing. Code is a long-term liability, and we ought to be convinced the additional complexity is worth the trouble.
Independently of that, the Latin square design was very helpful: it easily saved me a couple CPU-weeks, and I can see myself using it regularly in the future. The approach only works if we already have a rough (and simple) performance model, but I have a hard time interpreting complex models with hundreds of interacting parameters anyway. Between a simplistic, but still useful, model and a complex one with a much stronger fit, I’ll usually choose the former… as long as I can be fairly certain the simple model isn’t showing me a mirage.
More generally, research domains that deal with the real world have probably already hit the kind of scaling issues we’re now facing when we try to characterise how computers and digital systems function. Brute forcing is easier with computers than with interns, but it can still pay off to look elsewhere.
Posted by Paul Khuong Aug 13th, 2012
http://www.nag.com/numeric/CL/nagdoc_cl23/html/S/sintro.html
# NAG Library Chapter Introductions – Approximations of Special Functions
## 1 Scope of the Chapter
This chapter is concerned with the provision of some commonly occurring physical and mathematical functions.
## 2 Background to the Problems
The majority of the functions in this chapter approximate real-valued functions of a single real argument, and the techniques involved are described in Section 2.1. In addition the chapter contains functions for elliptic integrals (see Section 2.2), Bessel and Airy functions of a complex argument (see Section 2.3), complementary error function of a complex argument and various option pricing functions for use in financial applications.
### 2.1 Functions of a Single Real Argument
Most of the functions provided for functions of a single real argument have been based on truncated Chebyshev expansions. This method of approximation was adopted as a compromise between the conflicting requirements of efficiency and ease of implementation on many different machine ranges. For details of the reasons behind this choice and the production and testing procedures followed in constructing this chapter see Schonfelder (1976).
Basically, if the function to be approximated is $f\left(x\right)$, then for $x\in \left[a,b\right]$ an approximation of the form
$f(x) = g(x)\,{\sum_{r=0}}' C_r T_r(t)$
is used (${\sum }^{\prime }$ denotes, according to the usual convention, a summation in which the first term is halved), where $g\left(x\right)$ is some suitable auxiliary function which extracts any singularities, asymptotes and, if possible, zeros of the function in the range in question and $t=t\left(x\right)$ is a mapping of the general range $\left[a,b\right]$ to the specific range [$-1,+1$] required by the Chebyshev polynomials, ${T}_{r}\left(t\right)$. For a detailed description of the properties of the Chebyshev polynomials see Clenshaw (1962) and Fox and Parker (1968).
The essential property of these polynomials for the purposes of function approximation is that ${T}_{n}\left(t\right)$ oscillates between $±1$ and it takes its extreme values $n+1$ times in the interval [$-1,+1$]. Therefore, provided the coefficients ${C}_{r}$ decrease in magnitude sufficiently rapidly the error made by truncating the Chebyshev expansion after $n$ terms is approximately given by
$E(t) \simeq C_n T_n(t).$
That is, the error oscillates between $±{C}_{n}$ and takes its extreme value $n+1$ times in the interval in question. Now this is just the condition that the approximation be a minimax representation, one which minimizes the maximum error. By suitable choice of the interval, [$a,b$], the auxiliary function, $g\left(x\right)$, and the mapping of the independent variable, $t\left(x\right)$, it is almost always possible to obtain a Chebyshev expansion with rapid convergence and hence truncations that provide near minimax polynomial approximations to the required function. The difference between the true minimax polynomial and the truncated Chebyshev expansion is seldom great enough to be of significance.
The evaluation of the Chebyshev expansions follows one of two methods. The first and most efficient, and hence the most commonly used, works with the equivalent simple polynomial. The second method, which is used on the few occasions when the first method proves to be unstable, is based directly on the truncated Chebyshev series, and uses backward recursion to evaluate the sum. For the first method, a suitably truncated Chebyshev expansion (truncation is chosen so that the error is less than the machine precision) is converted to the equivalent simple polynomial. That is, we evaluate the set of coefficients ${b}_{r}$ such that
$y(t) = \sum_{r=0}^{n-1} b_r t^r = {\sum_{r=0}^{n-1}}' C_r T_r(t).$
The polynomial can then be evaluated by the efficient Horner's method of nested multiplications,
$y(t) = b_0 + t\bigl(b_1 + t\bigl(b_2 + \dots + t(b_{n-2} + t\,b_{n-1})\dots\bigr)\bigr).$
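For illustration only (this is not NAG Library code), the nested multiplication can be carried out as follows, assuming the coefficients $b_0,\dots,b_{n-1}$ are held in a list:

```python
def horner(b, t):
    """Evaluate b[0] + b[1]*t + ... + b[n-1]*t**(n-1) by nested multiplication."""
    y = 0.0
    for coeff in reversed(b):
        y = y * t + coeff
    return y

print(horner([1.0, -2.0, 3.0], 0.5))  # 1 - 2*0.5 + 3*0.25 = 0.75
```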
This method of evaluation results in efficient functions but for some expansions there is considerable loss of accuracy due to cancellation effects. In these cases the second method is used. It is well known that if
$b_{n-1} = C_{n-1}, \qquad b_{n-2} = 2t\,b_{n-1} + C_{n-2}, \qquad b_j = 2t\,b_{j+1} - b_{j+2} + C_j, \quad j = n-3, n-4, \dots, 0$
then
${\sum_{r=0}}' C_r T_r(t) = \tfrac{1}{2}(b_0 - b_2)$
and this is always stable. This method is most efficiently implemented by using three variables cyclically and explicitly constructing the recursion.
That is,
$\alpha = C_{n-1}$
$\beta = 2t\alpha + C_{n-2}$
$\gamma = 2t\beta - \alpha + C_{n-3}$
$\alpha = 2t\gamma - \beta + C_{n-4}$
$\beta = 2t\alpha - \gamma + C_{n-5}$
$\vdots$
say
$\alpha = 2t\gamma - \beta + C_2$
$\beta = 2t\alpha - \gamma + C_1$
$y(t) = t\beta - \alpha + \tfrac{1}{2}C_0$
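Again for illustration only (not NAG Library code), the backward recurrence may be implemented as follows, with the first coefficient halved according to the primed-sum convention:

```python
def clenshaw(C, t):
    """Evaluate 0.5*C[0]*T_0(t) + sum_{r>=1} C[r]*T_r(t) by backward recursion."""
    b1 = b2 = 0.0
    for c in reversed(C[1:]):
        b1, b2 = 2.0 * t * b1 - b2 + c, b1
    return t * b1 - b2 + 0.5 * C[0]

print(clenshaw([1.0, 0.5, 0.25], 0.3))
```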
The auxiliary functions used are normally functions compounded of simple polynomial (usually linear) factors extracting zeros, and the primary compiler-provided functions, sin, cos, ln, exp, sqrt, which extract singularities and/or asymptotes or in some cases basic oscillatory behaviour, leaving a smooth well-behaved function to be approximated by the Chebyshev expansion which can therefore be rapidly convergent.
The mappings of [$a,b$] to [$-1,+1$] used range from simple linear mappings to the case when $b$ is infinite, and considerable improvement in convergence can be obtained by use of a bilinear form of mapping. Another common form of mapping is used when the function is even; that is, it involves only even powers in its expansion. In this case an approximation over the whole interval [$-a,a$] can be provided using a mapping $t=2{\left(x/a\right)}^{2}-1$. This embodies the evenness property but the expansion in $t$ involves all powers and hence removes the necessity of working with an expansion with half its coefficients zero.
For many of the functions an analysis of the error in principle is given, namely, if $E$ and $\nabla $ are the absolute errors in function and argument and $\epsilon $ and $\delta $ are the corresponding relative errors, then
$E \simeq f'(x)\,\nabla, \qquad E \simeq x f'(x)\,\delta, \qquad \epsilon \simeq \frac{x f'(x)}{f(x)}\,\delta.$
If we ignore errors that arise in the argument of the function by propagation of data errors, etc., and consider only those errors that result from the fact that a real number is being represented in the computer in floating point form with finite precision, then $\delta $ is bounded and this bound is independent of the magnitude of $x$. For example, on an $11$-digit machine
$\delta \le 10^{-11}.$
(This of course implies that the absolute error $\nabla =x\delta $ is also bounded but the bound is now dependent on $x$.) However, because of this the last two relations above are probably of more interest. If possible the relative error propagation is discussed; that is, the behaviour of the error amplification factor $\left|x{f}^{\prime }\left(x\right)/f\left(x\right)\right|$ is described, but in some cases, such as near zeros of the function which cannot be extracted explicitly, absolute error in the result is the quantity of significance and here the factor $\left|x{f}^{\prime }\left(x\right)\right|$ is described. In general, testing of the functions has shown that their error behaviour follows fairly well these theoretical error behaviours. In regions where the error amplification factors are less than or of the order of one, the errors are slightly larger than the above predictions. The errors are here limited largely by the finite precision of arithmetic in the machine, but $\epsilon $ is normally no more than a few times greater than the bound on $\delta $. In regions where the amplification factors are large, of order ten or greater, the theoretical analysis gives a good measure of the accuracy obtainable.
It should be noted that the definitions and notations used for the functions in this chapter are all taken from Abramowitz and Stegun (1972). You are strongly recommended to consult this book for details before using the functions in this chapter.
### 2.2 Approximations to Elliptic Integrals
Four functions provided here are symmetrised variants of the classical (Legendre) elliptic integrals. These alternative definitions have been suggested by Carlson (1965), Carlson (1977b) and Carlson (1977a) and he also developed the basic algorithms used in this chapter.
The symmetrised elliptic integral of the first kind is represented by
$R_F(x,y,z) = \tfrac{1}{2}\int_0^\infty \frac{dt}{\sqrt{(t+x)(t+y)(t+z)}},$
where $x,y,z\ge 0$ and at most one may be equal to zero.
The normalization factor, $\frac{1}{2}$, is chosen so as to make
$R_F(x,x,x) = 1/\sqrt{x}.$
If any two of the variables are equal, ${R}_{F}$ degenerates into the second function
$R_C(x,y) = R_F(x,y,y) = \tfrac{1}{2}\int_0^\infty \frac{dt}{\sqrt{t+x}\,(t+y)},$
where the argument restrictions are now $x\ge 0$ and $y\ne 0$.
This function is related to the logarithm or inverse hyperbolic functions if $0<y<x$, and to the inverse circular functions if $0\le x\le y$.
The symmetrised elliptic integral of the second kind is defined by
$R_D(x,y,z) = \tfrac{3}{2}\int_0^\infty \frac{dt}{\sqrt{(t+x)(t+y)(t+z)^3}}$
with $z>0$, $x\ge 0$ and $y\ge 0$, but only one of $x$ or $y$ may be zero.
The function is a degenerate special case of the symmetrised elliptic integral of the third kind
$R_J(x,y,z,\rho) = \tfrac{3}{2}\int_0^\infty \frac{dt}{\sqrt{(t+x)(t+y)(t+z)}\,(t+\rho)}$
with $\rho \ne 0$ and $x,y,z\ge 0$ with at most one equality holding. Thus ${R}_{D}\left(x,y,z\right)={R}_{J}\left(x,y,z,z\right)$. The normalization of both these functions is chosen so that
$R_D(x,x,x) = R_J(x,x,x,x) = 1/(x\sqrt{x}).$
The algorithms used for all these functions are based on duplication theorems. These allow a recursion system to be established which constructs a new set of arguments from the old using a combination of arithmetic and geometric means. The value of the function at the original arguments can then be simply related to the value at the new arguments. These recursive reductions are used until the arguments differ from the mean by an amount small enough for a Taylor series about the mean to give sufficient accuracy when retaining terms of order less than six. Each step of the recurrences reduces the difference from the mean by a factor of four, and as the truncation error is of order six, the truncation error goes like ${\left(4096\right)}^{-n}$, where $n$ is the number of iterations.
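The following sketch is illustrative only (it is not NAG Library code): it applies the duplication theorem for ${R}_{F}$ until the arguments agree to within a tolerance and then uses only the leading term $1/\sqrt{\mu }$; library implementations stop earlier and add the higher-order Taylor terms instead.

```python
import math

def rf(x, y, z, tol=1e-8):
    """Carlson symmetrised elliptic integral R_F via the duplication theorem."""
    while True:
        lam = (math.sqrt(x) * math.sqrt(y)
               + math.sqrt(y) * math.sqrt(z)
               + math.sqrt(z) * math.sqrt(x))
        x, y, z = (x + lam) / 4.0, (y + lam) / 4.0, (z + lam) / 4.0
        mu = (x + y + z) / 3.0
        if max(abs(x - mu), abs(y - mu), abs(z - mu)) < tol * mu:
            return 1.0 / math.sqrt(mu)   # R_F(mu, mu, mu) = 1/sqrt(mu)

# Check against the normalization R_F(x, x, x) = 1/sqrt(x).
print(rf(2.0, 2.0, 2.0), 1.0 / math.sqrt(2.0))
```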
The above forms can be related to the more traditional canonical forms (see Section 17.2 of Abramowitz and Stegun (1972)), as follows.
If we write $q={\mathrm{cos}}^{2}\varphi ,r=1-m {\mathrm{sin}}^{2}\varphi ,s=1-n {\mathrm{sin}}^{2}\varphi $, where $0\le \varphi \le \frac{1}{2}\pi $, we have
the classical elliptic integral of the first kind:
$F(\phi \mid m) = \int_0^\phi (1 - m\sin^2\theta)^{-\frac{1}{2}}\,d\theta = \sin\phi\; R_F(q, r, 1);$
the classical elliptic integral of the second kind:
$E(\phi \mid m) = \int_0^\phi (1 - m\sin^2\theta)^{\frac{1}{2}}\,d\theta = \sin\phi\; R_F(q, r, 1) - \tfrac{1}{3} m \sin^3\phi\; R_D(q, r, 1);$
the classical elliptic integral of the third kind:
$\Pi(n;\,\phi \mid m) = \int_0^\phi (1 - n\sin^2\theta)^{-1} (1 - m\sin^2\theta)^{-\frac{1}{2}}\,d\theta = \sin\phi\; R_F(q, r, 1) + \tfrac{1}{3} n \sin^3\phi\; R_J(q, r, 1, s).$
Also the classical complete elliptic integral of the first kind:
$K(m) = \int_0^{\pi/2} (1 - m\sin^2\theta)^{-\frac{1}{2}}\,d\theta = R_F(0, 1-m, 1);$
the classical complete elliptic integral of the second kind:
$E(m) = \int_0^{\pi/2} (1 - m\sin^2\theta)^{\frac{1}{2}}\,d\theta = R_F(0, 1-m, 1) - \tfrac{1}{3} m\, R_D(0, 1-m, 1).$
For convenience, Chapter s contains functions to evaluate classical and symmetrised elliptic integrals.
### 2.3 Bessel and Airy Functions of a Complex Argument
The functions for Bessel and Airy functions of a real argument are based on Chebyshev expansions, as described in Section 2.1. The functions provided for functions of a complex argument, however, use different methods. These functions relate all functions to the modified Bessel functions ${I}_{\nu }\left(z\right)$ and ${K}_{\nu }\left(z\right)$ computed in the right-half complex plane, including their analytic continuations. ${I}_{\nu }$ and ${K}_{\nu }$ are computed by different methods according to the values of $z$ and $\nu $. The methods include power series, asymptotic expansions and Wronskian evaluations. The relations between functions are based on well known formulae (see Abramowitz and Stegun (1972)).
### 2.4 Option Pricing Functions
The option pricing functions evaluate the closed form solutions or approximations to the equations that define mathematical models for the prices of selected financial option contracts. These solutions can be viewed as special functions determined by the underlying equations. The terminology associated with these functions arises from their setting in financial markets and is briefly outlined below. See Joshi (2003) for a comprehensive introduction to this subject. An option is a contract which gives the holder the right, but not the obligation, to buy (if it is a call) or sell (if it is a put) a particular asset, $S$. A European option can be exercised only at the specified expiry time, $T$, while an American option can be exercised at any time up to $T$. For Asian options the average underlying price over a pre-set time period determines the payoff.
The asset is bought (if a call) or sold (if a put) at a pre-specified strike price $X$. Thus, an option contract has a payoff to the holder of $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left\{\left({S}_{T}-X\right),0\right\}$ for a call or $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left\{\left(X-{S}_{T}\right),0\right\}$, for a put, which depends on whether the asset price at the time of exercise is above (call) or below (put) the strike, $X$. If at any moment in time a contract is currently showing a theoretical profit then it is deemed ‘in-the-money’; otherwise it is deemed ‘out-of-the-money’.
The option contract itself therefore has a value and, in many cases, can be traded in markets. Mathematical models, such as the Black–Scholes model, give theoretical prices for particular option contracts using a number of assumptions about the behaviour of financial markets. Typically, the price, ${S}_{t}$, of the underlying asset at time $t$, is modelled as the solution of a stochastic differential equation for the return, $d{S}_{t}/{S}_{t}$, on the asset price over a time interval, $dt$,
$\frac{dS_t}{S_t} = \mu\,dt + \sigma\,dW_t,$
where $d{W}_{t}$ is a Brownian motion. The drift, $\mu $, defines the trend in the movements of $S$, while the volatility, $\sigma $, measures the risk and may be taken to be the standard deviation of the returns on the asset price. In addition the model requires a riskless money market account or bond with value, ${B}_{t}$, at time $t$ and risk-free rate, $r$, such that
$dB_t = r B_t\,dt.$
This leads to the determination of the Black–Scholes option price, $P$, by a martingale method or via the derivation of the Black–Scholes partial differential equation,
$\frac{\partial P}{\partial t} + r S \frac{\partial P}{\partial S} + \tfrac{1}{2}\sigma^2 S^2 \frac{\partial^2 P}{\partial S^2} - rP = 0.$
For this case a closed form solution exists which is evaluated by nag_bsm_price (s30aac).
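For illustration, the textbook closed form for a European call under this model is shown below (this is not the NAG implementation; parameter names are generic):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, X, T, r, sigma):
    """Black-Scholes price of a European call: spot S, strike X, expiry T,
    risk-free rate r, volatility sigma."""
    d1 = (log(S / X) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - X * exp(-r * T) * norm_cdf(d2)

print(bs_call(S=100.0, X=95.0, T=0.5, r=0.02, sigma=0.25))
```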
A number of different option types where the solution exists in closed form or as a closed form approximation are presented in this chapter. See Haug (2007) for an extensive listing of option pricing formulae.
## 3 Recommendations on Choice and Use of Available Functions
### 3.1 Elliptic Integrals
IMPORTANT ADVICE: users who encounter elliptic integrals in the course of their work are strongly recommended to look at transforming their analysis directly to one of the Carlson forms, rather than to the traditional canonical Legendre forms. In general, the extra symmetry of the Carlson forms is likely to simplify the analysis, and these symmetric forms are much more stable to calculate.
The function nag_elliptic_integral_rc (s21bac) for ${R}_{C}$ is largely included as an auxiliary to the other functions for elliptic integrals. This integral essentially calculates elementary functions, e.g.,
$\ln x = (x-1)\, R_C\!\left(\left(\tfrac{1+x}{2}\right)^{2},\, x\right),\; x > 0; \qquad \arcsin x = x\, R_C(1 - x^2,\, 1),\; |x| \le 1; \qquad \operatorname{arcsinh} x = x\, R_C(1 + x^2,\, 1);\;\text{etc.}$
In general this method of calculating these elementary functions is not recommended as there are usually much more efficient specific functions available in the Library. However, nag_elliptic_integral_rc (s21bac) may be used, for example, to compute $\mathrm{ln}x/\left(x-1\right)$ when $x$ is close to $1$, without the loss of significant figures that occurs when $\mathrm{ln}x$ and $x-1$ are computed separately.
### 3.2 Bessel and Airy Functions
For computing the Bessel functions ${J}_{\nu }\left(x\right)$, ${Y}_{\nu }\left(x\right)$, ${I}_{\nu }\left(x\right)$ and ${K}_{\nu }\left(x\right)$ where $x$ is real and $\nu =0\text{ or }1$, special functions are provided, which are much faster than the more general functions that allow a complex argument and arbitrary real $\nu \ge 0$. Similarly, special functions are provided for computing the Airy functions and their derivatives $\mathrm{Ai}\left(x\right)$, $\mathrm{Bi}\left(x\right)$, ${\mathrm{Ai}}^{\prime }\left(x\right)$, ${\mathrm{Bi}}^{\prime }\left(x\right)$ for a real argument which are much faster than the functions for complex arguments.
## 4 Functionality Index
Airy function,
Ai, real argument nag_airy_ai (s17agc)
Ai or Ai ′ , complex argument, optionally scaled nag_complex_airy_ai (s17dgc)
Ai ′ , real argument nag_airy_ai_deriv (s17ajc)
Bi, real argument nag_airy_bi (s17ahc)
Bi or Bi ′ , complex argument, optionally scaled nag_complex_airy_bi (s17dhc)
Bi ′ , real argument nag_airy_bi_deriv (s17akc)
vectorized Ai, real argument nag_airy_ai_vector (s17auc)
vectorized Ai ′ , real argument nag_airy_ai_deriv_vector (s17awc)
vectorized Bi, real argument nag_airy_bi_vector (s17avc)
vectorized Bi ′ , real argument nag_airy_bi_deriv_vector (s17axc)
Arccosh,
inverse hyperbolic cosine nag_arccosh (s11acc)
Arcsinh,
inverse hyperbolic sine nag_arcsinh (s11abc)
Arctanh,
inverse hyperbolic tangent nag_arctanh (s11aac)
Bessel function,
I0, real argument nag_bessel_i0 (s18aec)
I1, real argument nag_bessel_i1 (s18afc)
Iα + n − 1(x) or Iα − n + 1(x), real argument nag_bessel_i_alpha (s18ejc)
Iν, complex argument, optionally scaled nag_complex_bessel_i (s18dec)
Iν / 4(x), real argument nag_bessel_i_nu (s18eec)
J0, real argument nag_bessel_j0 (s17aec)
J1, real argument nag_bessel_j1 (s17afc)
Jα + n − 1(x) or Jα − n + 1(x), real argument nag_bessel_j_alpha (s18ekc)
Jα ± n(z), complex argument nag_complex_bessel_j_seq (s18gkc)
Jν, complex argument, optionally scaled nag_complex_bessel_j (s17dec)
K0, real argument nag_bessel_k0 (s18acc)
K1, real argument nag_bessel_k1 (s18adc)
Kα + n(x), real argument nag_bessel_k_alpha (s18egc)
Kν, complex argument, optionally scaled nag_complex_bessel_k (s18dcc)
Kν / 4(x), real argument nag_bessel_k_nu (s18efc)
vectorized I0, real argument nag_bessel_i0_vector (s18asc)
vectorized I1, real argument nag_bessel_i1_vector (s18atc)
vectorized J0, real argument nag_bessel_j0_vector (s17asc)
vectorized J1, real argument nag_bessel_j1_vector (s17atc)
vectorized K0, real argument nag_bessel_k0_vector (s18aqc)
vectorized K1, real argument nag_bessel_k1_vector (s18arc)
vectorized Y0, real argument nag_bessel_y0_vector (s17aqc)
vectorized Y1, real argument nag_bessel_y1_vector (s17arc)
Y0, real argument nag_bessel_y0 (s17acc)
Y1, real argument nag_bessel_y1 (s17adc)
Yν, complex argument, optionally scaled nag_complex_bessel_y (s17dcc)
Beta function
incomplete nag_incomplete_beta (s14ccc)
Complement of the Cumulative Normal distribution nag_cumul_normal_complem (s15acc)
Complement of the Error function,
complex argument, scaled nag_complex_erfc (s15ddc)
real argument nag_erfc (s15adc)
real argument, scaled nag_erfcx (s15agc)
Cosine,
hyperbolic nag_cosh (s10acc)
Cosine Integral nag_cos_integral (s13acc)
Cumulative Normal distribution function nag_cumul_normal (s15abc)
Dawson's Integral nag_dawson (s15afc)
Digamma function, scaled nag_polygamma_deriv (s14adc)
Elliptic functions, Jacobian, sn, cn, dn
complex argument nag_jacobian_elliptic (s21cbc)
real argument nag_real_jacobian_elliptic (s21cac)
Elliptic integral,
general,
of 2nd kind, F(z , k ′ , a , b) nag_general_elliptic_integral_f (s21dac)
Legendre form,
complete of 1st kind, K(m) nag_elliptic_integral_complete_K (s21bhc)
complete of 2nd kind, E (m) nag_elliptic_integral_complete_E (s21bjc)
of 1st kind, F(ϕ | m) nag_elliptic_integral_F (s21bec)
of 2nd kind, E (ϕ ∣ m) nag_elliptic_integral_E (s21bfc)
of 3rd kind, Π (n ; ϕ ∣ m) nag_elliptic_integral_pi (s21bgc)
symmetrised,
degenerate of 1st kind, RC nag_elliptic_integral_rc (s21bac)
of 1st kind, RF nag_elliptic_integral_rf (s21bbc)
of 2nd kind, RD nag_elliptic_integral_rd (s21bcc)
of 3rd kind, RJ nag_elliptic_integral_rj (s21bdc)
Erf,
real argument nag_erf (s15aec)
Erfc,
complex argument, scaled nag_complex_erfc (s15ddc)
real argument nag_erfc (s15adc)
erfcx,
real argument nag_erfcx (s15agc)
Exponential Integral nag_exp_integral (s13aac)
Fresnel integral,
vectorized C nag_fresnel_c_vector (s20arc)
vectorized S nag_fresnel_s_vector (s20aqc)
Gamma function nag_gamma (s14aac)
Gamma function,
incomplete nag_incomplete_gamma (s14bac)
Generalized factorial function nag_gamma (s14aac)
Hankel function Hν(1) or Hν(2),
complex argument, optionally scaled nag_complex_hankel (s17dlc)
Jacobian theta functions θk(x , q),
real argument nag_jacobian_theta (s21ccc)
Kelvin function,
vectorized bei x nag_kelvin_bei_vector (s19apc)
vectorized ber x nag_kelvin_ber_vector (s19anc)
vectorized kei x nag_kelvin_kei_vector (s19arc)
vectorized ker x nag_kelvin_ker_vector (s19aqc)
bei x nag_kelvin_bei (s19abc)
ber x nag_kelvin_ber (s19aac)
kei x nag_kelvin_kei (s19adc)
ker x nag_kelvin_ker (s19acc)
Legendre functions of 1st kind Pnm(x), Pnm(x) nag_legendre_p (s22aac)
Logarithm of 1 + x nag_shifted_log (s01bac)
Logarithm of beta function,
Logarithm of gamma function,
complex nag_complex_log_gamma (s14agc)
real, scaled nag_scaled_log_gamma (s14ahc)
Option Pricing,
American option: Bjerksund and Stensland option price nag_amer_bs_price (s30qcc)
Asian option: geometric continuous average rate price nag_asian_geom_price (s30sac)
Asian option: geometric continuous average rate price with Greeks nag_asian_geom_greeks (s30sbc)
binary asset-or-nothing option price nag_binary_aon_price (s30ccc)
binary asset-or-nothing option price with Greeks nag_binary_aon_greeks (s30cdc)
binary cash-or-nothing option price nag_binary_con_price (s30cac)
binary cash-or-nothing option price with Greeks nag_binary_con_greeks (s30cbc)
Black–Scholes–Merton option price nag_bsm_price (s30aac)
Black–Scholes–Merton option price with Greeks nag_bsm_greeks (s30abc)
European option, option prices, using Merton jump-diffusion model nag_jumpdiff_merton_price (s30jac)
European option, option price with Greeks, using Merton jump-diffusion model nag_jumpdiff_merton_greeks (s30jbc)
floating-strike lookback option price nag_lookback_fls_price (s30bac)
floating-strike lookback option price with Greeks nag_lookback_fls_greeks (s30bbc)
Heston's model option price nag_heston_price (s30nac)
Heston's model option price with Greeks nag_heston_greeks (s30nbc)
standard barrier option price nag_barrier_std_price (s30fac)
Polygamma function,
ψ(n)(x), real x nag_real_polygamma (s14aec)
ψ(n)(z), complex z nag_complex_polygamma (s14afc)
Psi function nag_polygamma_fun (s14acc)
Psi function derivatives, scaled nag_polygamma_deriv (s14adc)
Scaled modified Bessel function(s),
e^(−x) I0(x), real argument nag_bessel_i0_scaled (s18cec)
e^(−x) I1(x), real argument nag_bessel_i1_scaled (s18cfc)
e^(−x) Iν / 4(x), real argument nag_bessel_i_nu_scaled (s18ecc)
e^x K0(x), real argument nag_bessel_k0_scaled (s18ccc)
e^x K1(x), real argument nag_bessel_k1_scaled (s18cdc)
e^x Kα + n(x), real argument nag_bessel_k_alpha_scaled (s18ehc)
e^x Kν / 4(x), real argument nag_bessel_k_nu_scaled (s18edc)
vectorized e^(−x) I0(x), real argument nag_bessel_i0_scaled_vector (s18csc)
vectorized e^(−x) I1(x), real argument nag_bessel_i1_scaled_vector (s18ctc)
vectorized e^x K0(x), real argument nag_bessel_k0_scaled_vector (s18cqc)
vectorized e^x K1(x), real argument nag_bessel_k1_scaled_vector (s18crc)
Sine,
hyperbolic nag_sinh (s10abc)
Sine Integral nag_sin_integral (s13adc)
Tangent,
hyperbolic nag_tanh (s10aac)
Trigamma function, scaled nag_polygamma_deriv (s14adc)
Zeros of Bessel functions Jα(x), Jα ′ (x), Yα(x), Yα ′ (x) nag_bessel_zeros (s17alc)
None.
## 6 References
Abramowitz M and Stegun I A (1972) Handbook of Mathematical Functions (3rd Edition) Dover Publications
Carlson B C (1965) On computing elliptic integrals and functions J. Math. Phys. 44 36–51
Carlson B C (1977a) Special Functions of Applied Mathematics Academic Press
Carlson B C (1977b) Elliptic integrals of the first kind SIAM J. Math. Anal. 8 231–242
Clenshaw C W (1962) Chebyshev Series for Mathematical Functions Mathematical tables HMSO
Fox L and Parker I B (1968) Chebyshev Polynomials in Numerical Analysis Oxford University Press
Haug E G (2007) The Complete Guide to Option Pricing Formulas (2nd Edition) McGraw-Hill
Joshi M S (2003) The Concepts and Practice of Mathematical Finance Cambridge University Press
Schonfelder J L (1976) The production of special function routines for a multi-machine library Softw. Pract. Exper. 6(1)
http://math.stackexchange.com/questions/194904/calculating-expected-number-of-rounds
# Calculating expected number of rounds
I am trying to find the expected number of rounds for a system to finish a process. Let's say N = 100. One process starts off the program by sending a message to another random process. The choice is made at random so a process can potentially send a message multiple times to another process.
When a process receives a message, it will also send a message to other processes. Collisions can occur (i.e. multiple processes sending a message to the same process), and this is where I am kind of confused.
Say at round R, X processes have the message and Y processes don't. I am trying to find how many processes might get the message at round R + 1, and extend that to the total number of rounds needed. Each process has a 1/100 chance of receiving a given message, so there is a finite possibility, but I am confused about how to calculate this. Any hints?
-
1
I don't get what is random in your problem. Is it the program to whom the message is sent, or is it wether a program sends a message, or is it the amount of messages a program sends, or is it a combination of any of all three above? I think once you have figured that out and what probability you assign to each event, you'll be less confused. – Raskolnikov Sep 12 '12 at 21:04
What is random is the number of rounds required to complete a process. What is missing is the definition of completing a process. Apparently Hagen knows because he gave Paul (the OP) a satisfactory answer. – Michael Chernick Sep 12 '12 at 21:24
There has been recent interest in gossip and how a single piece of information spreads across a network. See, for example, the paper A.D. Sarwate, A.G. Dimakis, The Impact of Mobility on Gossip Algorithms, IEEE Transactions on Information Theory 58(3): pp. 1731--1742, March 2012 (the lead author is my son). – Dilip Sarwate Sep 12 '12 at 22:45
## 1 Answer
A naive estimate would say that on average a share of $\frac Xn$ of the $Y$ processes will learn the message, thus one expects the number $X$ to grow to $X+\frac{XY}n$. Thus as long as $X\ll n$, the number will grow exponentially like $2^R$; if $X\approx \frac n2$, it will grow slower, but $Y$ will decrease by about half each step; and as $Y$ gets smaller, it will decrease even faster. Therefore a rough upper bound is that you probably need at most $2\log_2 \frac n2$ rounds.
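For what it's worth, a quick simulation of this model (under the reading used above: every informed process sends one message to a uniformly random process each round) can be compared against the rough $2\log_2 \frac n2$ bound; the snippet below is just an illustration.

```python
import math
import random

def rounds_until_all_informed(n, rng=random):
    """Rounds until every one of n processes has received the message."""
    informed = {0}
    rounds = 0
    while len(informed) < n:
        targets = {rng.randrange(n) for _ in informed}
        informed |= targets
        rounds += 1
    return rounds

n = 100
trials = [rounds_until_all_informed(n) for _ in range(1000)]
print(sum(trials) / len(trials), 2 * math.log2(n / 2))
```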
-
Ah that makes sense. I was thinking of it as not decreasing by half each step and was getting very confused. Thanks for the explanation. – Paul Sep 12 '12 at 21:15
http://mathhelpforum.com/geometry/200177-circumference-circle.html
# Thread:
1. ## the circumference of a circle
I need help in the below problem
thanks
A circle of diameter 2cm rolls along the circumference of a circle of diameter 12cm, without slipping until it returns to its starting position. Given that the smaller circle has turned x degrees about its centre. Find the value of x
2. ## Re: the circumference of a circle
Edit: My post is incorrect. Please refer to Soroban's explanation. I apologize for the error.
The smaller circle has a circumference of $2\pi$ cm and the larger circle has a circumference of $12\pi$ cm. To roll all the way around the larger circle, the smaller circle would have to make $\frac{12\pi}{2\pi} = 6$ full rotations about its center. 6 complete rotations equals how many degrees?
3. ## Re: the circumference of a circle
Thanks, but why do we sometimes have to add one more revolution when the small circle has completed one full round about the
centre of the bigger circle?
4. ## Re: the circumference of a circle
@kingman no...the small circle starts and ends at its original position; it makes six full revolutions (because the ratio of the circumferences of the larger, smaller circles is 6:1).
5. ## Re: the circumference of a circle
Hello, kingman!
This is a classic trick question . . .
A circle of diameter 2cm rolls along the circumference of a circle of diameter 12cm,
without slipping until it returns to its starting position.
Given that the smaller circle has turned x degrees about its centre, find the value of x
You are correct about that "extra revolution".
Consider a circle with radius $R.$
Roll it along a line.
Code:
* * * * * *
* * * *
* * * *
* * * *
* * * *
* * * * * *
* | * * | *
| |
* |R * * |R *
* | * * | *
* ↓ * * ↓ *
----------*-*-*---------------------------*-*-*----------
: - - - - - - 2πR - - - - - - :
At the start, the "initial radius" (IR) is pointing down (at the line).
After one revolution, the circle has moved $2\pi R$ units
. . and the IR is again pointing at the line.
Now revolve the circle around a circle with twice its radius.
Code:
* *
* *
* * *
|R
* | *
* * *
* *
* *
* *
* *
* * - - - - *
* 2R *
* *
* *
* *
* * *
* *
* | *
|R
* * *
* *
* *
The circle starts with its IR pointing at the circle.
In one revolution, it moves $2\pi R$ units around the large circle
. . and its IR is again pointing at the circle.
But that radius is pointing upward.
. . How did that happen?
While the small circle rolled around half of the large circle,
. . the IR made ${\color{blue}1\tfrac{1}{2}}$ revolutions.
So in making one "orbit", the smaller circle makes three revolutions.
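Following the same reasoning for the original question (small circle of diameter 2 cm, so radius $r = 1$, rolling around the outside of a circle of diameter 12 cm, so radius $R = 6$), the number of revolutions about its own centre in one orbit is $\frac{R}{r} + 1 = 6 + 1 = 7$, giving
$x = 7 \times 360 = 2520$ degrees.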
6. ## Re: the circumference of a circle
Huh, very interesting. I guess it's because the "path" is circular, not a straight line, so there's an extra revolution. My apologies.
Turns out the College Board screwed up on this type of problem as well, in 1982:
Brain teaser: rolling one quarter around another. Rotation vs. revolution.
7. ## Re: the circumference of a circle
Thanks, but can we conclude that in general, for any small circle of radius "r" revolving round a bigger circle of radius "R", one has to add one more revolution after the smaller circle has made one orbit?
8. ## Re: the circumference of a circle
Hello, kingman!
Thanks, but can we conclude that in general for any small circle of radius "r"
revolving round a bigger circle of radius ''R", one has to add one more revolution
after the smaller circle has made one Orbit?
Yes, that is true.
9. ## Re: the circumference of a circle
Sorry, can you please explain from your diagram how you get 1½ revolutions just by noticing that the radius is pointing upward?
I wonder whether it is true to say the half revolution is due to the small circle rotating about its own axis and the remaining half revolution is due to the small circle having revolved halfway about the centre of the big circle. In conclusion, one is due to rotation about its own (the small circle's) axis and another is due to the small circle orbiting about the centre of the big circle.
10. ## Re: the circumference of a circle
Originally Posted by kingman
I wonder whether it is true to say the half revolution is due to the small circle rotating about its own axis and the remaining half revolution is due to the small circle having revolved halfway about the centre of the big circle. In conclusion, one is due to rotation about its own (the small circle's) axis and another is due to the small circle orbiting about the centre of the big circle.
Yes, I think that interpretation makes sense.
If the circle was rolling along a flat surface for the same distance, there would not be an extra revolution. The extra rotation comes from the fact that the surface on which the circle is rolling is itself curved.
http://mathoverflow.net/questions/31143?sort=newest
## unique integer partitions
Let me motivate my general question with an explicit example:
Suppose I am looking for all unique combinations of exactly three non-negative integers that sum to five. The solutions are 005, 014, 023, 113, and 122, which means that there are five unique combinations.
Is there a way to find the $\textit{number}$ of unique combinations of exactly $k$ non-negative integers that sum to $n$? I'd rather not generate all the unique combinations and then count. I am hoping that there is a straightforward combinatoric solution to this.
Please let me know if more clarification is needed.
Thanks!
-
## 4 Answers
For fixed k
$p_k(n) \sim {n^{k-1} \over k!(k-1)!}.$
Maybe this limiting form will be of some use to you.
-
If you need an algorithm to calculate this number, you can use the following. Let $a_{nk}$ be an answer to your question, then it's not hard to prove that $a_{nk} = a_{n,k-1} + a_{n - k, k}$. So you can fill in the table of all $a_{nk}$ using this formulae.
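For illustration, a direct table-filling implementation of this recurrence (where $a_{nk}$ counts partitions of $n$ into at most $k$ parts, i.e. sums of exactly $k$ non-negative integers):

```python
def partitions_at_most_k_parts(n, k):
    """Number of partitions of n into at most k parts, via
    a(n, k) = a(n, k-1) + a(n-k, k)."""
    a = [[0] * (k + 1) for _ in range(n + 1)]
    for kk in range(k + 1):
        a[0][kk] = 1                      # only the all-zero combination
    for nn in range(1, n + 1):
        for kk in range(1, k + 1):
            a[nn][kk] = a[nn][kk - 1]
            if nn >= kk:
                a[nn][kk] += a[nn - kk][kk]
    return a[n][k]

print(partitions_at_most_k_parts(5, 3))   # 5, matching 005 014 023 113 122
```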
-
+1 thanks for the answer. but if, say, $n = 10^{7}$ and $k = 2000$, i fear that i'd be sitting around for quite a while for the recursion to finish or i might reach the recursion depth of my programming language. do you know of any asymptotic results? maybe along the lines of hardy-ramanujan? – B Rivera Jul 10 2010 at 1:50
For those $n$ and $k$ I can suggest the following. Start with the approach suggested by Qiaochu Yuan. Take the polynomial $f(x) = (1-x)(1-x^2)\cdots(1-x^k)$ and calculate it as $f(x) = f_0 + f_1 x + \cdots + f_{k(k+1)/2} x^{k(k+1)/2}$. Now you need the coefficient of $x^n$ in $1/f(x)$. To calculate it, apply the Fast Fourier Transform to compute the inverse of the polynomial $f(x)$. This works in $O(n \log n)$ basic operations (multiplications and additions). So the overall complexity is $O(n \log n)$ – falagar Jul 10 2010 at 5:22
Nothing wrong with Qiaochu Yuan's answer, but here's an orthogonal approach; for fixed $k$, calculate the first 5 or 10 $n$ values and then look up the resulting sequence at the Online Encyclopedia of Integer Sequences.
-
For fixed $k$ and large $n$ this is pretty doable. You want to find solutions to
$$x_1 + x_2 + ... + x_k = n$$
where $x_1 \ge x_2 \ge ... \ge x_k$. Letting $y_i = x_i - x_{i+1}$ and $y_k = x_k$, this is equivalent to finding solutions to
$$y_1 + 2y_2 + ... + ky_k = n$$
where $y_i \ge 0$. If $p_k(n)$ denotes the number of ways to do this, it follows by a standard generating function trick that
$$\sum_{n \ge 0} p_k(n) x^n = \frac{1}{(1 - x)(1 - x^2)...(1 - x^k)}.$$
In principle one can find the partial fraction decomposition of the RHS, allowing us to write $p_k(n)$ as a quasi-polynomial.
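As a small sanity check of the generating function (feasible only for small $n$ and $k$, not for the questioner's ranges), one can expand it symbolically, e.g. with sympy:

```python
import sympy as sp

x = sp.symbols('x')
n, k = 5, 3
gf = sp.Integer(1)
for i in range(1, k + 1):
    gf /= (1 - x**i)                      # 1 / ((1-x)(1-x^2)...(1-x^k))
coeff = sp.series(gf, x, 0, n + 1).removeO().coeff(x, n)
print(coeff)                              # 5
```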
-
+1 thanks for the answer. perhaps i should have given my actual numerical constraints. i was looking for some type of method that would allow me to solve for cases of $n > 10^{6}$ and $k > 1000$ in a reasonable amount of time. perhaps it is not possible? – B Rivera Jul 10 2010 at 1:45
It's always a good idea to give your actual numerical constraints. It is actually quite feasible to read off the leading terms of p_k(n) in terms of n from the generating function; the leading term is something like 1/k! {n+k-1 choose k} and this should give a pretty reasonable approximation. – Qiaochu Yuan Jul 10 2010 at 1:58
http://physics.stackexchange.com/questions/tagged/vectors+mathematics
# Tagged Questions
1 answer
302 views
### Bra space and adjoint vectors
If I'm not wrong, a bra, $\langle \phi_n |$, can be thought as a linear functional that when applied to a ket vector, $| \phi_m \rangle$, returns a complex number; that is, the inner product it's a ...
6 answers
2k views
### How is gradient the maximum rate of change of a function?
Recently I read a book which described about gradient. It says $${\rm d}T~=~ \nabla T \cdot {\rm d}{\bf r},$$ and suddenly they concluded that $\nabla T$ is the maximum rate of change of $f(T)$ ...
2 answers
392 views
### What is the physical meaning of a product of vectors?
My teacher told me that Vectors are quantities that behave like Displacements. Seen this way, the triangle law of vector addition simply means that to reach point C from point A, going from A to B ...
4 answers
3k views
### How can area be a vector?
My professor told me recently that Area is a vector. A Google search gave me the following definition for a vector: Noun: A quantity having direction as well as magnitude, esp. as determining ...
1 answer
1k views
### Uniqueness of Helmholtz decomposition?
Helmholtz theorem states that given a smooth vector field $\pmb{H}$, there are a scalar field $\phi$ and a vector field $\pmb{G}$ such that $$\pmb{H}=\pmb{\nabla} \phi +\pmb{\nabla} \times \pmb{G},$$ ...
3 answers
906 views
### Can vectors in physics be represented by complex numbers and can they be divided? [closed]
Below is attached for reference, but the question is simply about whether vectors used in physics in a vector space can be represented by complex numbers and whether they can be divided. In ...
http://en.wikipedia.org/wiki/Dynamical_systems
# Dynamical system
"Dynamical" redirects here. For other uses, see Dynamical (disambiguation).
The Lorenz attractor arises in the study of the Lorenz Oscillator, a dynamical system.
A dynamical system is a concept in mathematics where a fixed rule describes the time dependence of a point in a geometrical space. Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, and the number of fish each springtime in a lake.
At any given time a dynamical system has a state given by a set of real numbers (a vector) that can be represented by a point in an appropriate state space (a geometrical manifold). Small changes in the state of the system create small changes in the numbers. The evolution rule of the dynamical system is a fixed rule that describes what future states follow from the current state. The rule is deterministic; in other words, for a given time interval only one future state follows from the current state.
## Overview
The concept of a dynamical system has its origins in Newtonian mechanics. There, as in other natural sciences and engineering disciplines, the evolution rule of dynamical systems is given implicitly by a relation that gives the state of the system only a short time into the future. (The relation is either a differential equation, difference equation or other time scale.) To determine the state for all future times requires iterating the relation many times—each advancing time a small step. The iteration procedure is referred to as solving the system or integrating the system. Once the system can be solved, given an initial point it is possible to determine all its future positions, a collection of points known as a trajectory or orbit.
Before the advent of fast computing machines, solving a dynamical system required sophisticated mathematical techniques and could be accomplished only for a small class of dynamical systems. Numerical methods implemented on electronic computing machines have simplified the task of determining the orbits of a dynamical system.
For simple dynamical systems, knowing the trajectory is often sufficient, but most dynamical systems are too complicated to be understood in terms of individual trajectories. The difficulties arise because:
• The systems studied may only be known approximately—the parameters of the system may not be known precisely or terms may be missing from the equations. The approximations used bring into question the validity or relevance of numerical solutions. To address these questions several notions of stability have been introduced in the study of dynamical systems, such as Lyapunov stability or structural stability. The stability of the dynamical system implies that there is a class of models or initial conditions for which the trajectories would be equivalent. The operation for comparing orbits to establish their equivalence changes with the different notions of stability.
• The type of trajectory may be more important than one particular trajectory. Some trajectories may be periodic, whereas others may wander through many different states of the system. Applications often require enumerating these classes or maintaining the system within one class. Classifying all possible trajectories has led to the qualitative study of dynamical systems, that is, properties that do not change under coordinate changes. Linear dynamical systems and systems that have two numbers describing a state are examples of dynamical systems where the possible classes of orbits are understood.
• The behavior of trajectories as a function of a parameter may be what is needed for an application. As a parameter is varied, the dynamical systems may have bifurcation points where the qualitative behavior of the dynamical system changes. For example, it may go from having only periodic motions to apparently erratic behavior, as in the transition to turbulence of a fluid.
• The trajectories of the system may appear erratic, as if random. In these cases it may be necessary to compute averages using one very long trajectory or many different trajectories. The averages are well defined for ergodic systems and a more detailed understanding has been worked out for hyperbolic systems. Understanding the probabilistic aspects of dynamical systems has helped establish the foundations of statistical mechanics and of chaos.
It was in the work of Poincaré that these dynamical systems themes developed.[citation needed]
## Basic definitions
Main article: Dynamical system (definition)
A dynamical system is a manifold M called the phase (or state) space, endowed with a family of smooth evolution functions Φ^t that, for any element t ∈ T (the time), map a point of the phase space back into the phase space. The notion of smoothness changes with applications and the type of manifold. There are several choices for the set T. When T is taken to be the reals, the dynamical system is called a flow; and if T is restricted to the non-negative reals, then the dynamical system is a semi-flow. When T is taken to be the integers, it is a cascade or a map; and the restriction to the non-negative integers is a semi-cascade.
### Examples
The evolution function Φ^t is often the solution of a differential equation of motion
$\dot{x} = v(x). \,$
The equation gives the time derivative, represented by the dot, of a trajectory x(t) on the phase space starting at some point x0. The vector field v(x) is a smooth function that at every point of the phase space M provides the velocity vector of the dynamical system at that point. (These vectors are not vectors in the phase space M, but in the tangent space TxM of the point x.) Given a smooth Φ^t, an autonomous vector field can be derived from it.
There is no need for higher order derivatives in the equation, nor for time dependence in v(x) because these can be eliminated by considering systems of higher dimensions. Other types of differential equations can be used to define the evolution rule:
$G(x, \dot{x}) = 0 \,$
is an example of an equation that arises from the modeling of mechanical systems with complicated constraints.
The differential equations determining the evolution function Φ^t are often ordinary differential equations: in this case the phase space M is a finite dimensional manifold. Many of the concepts in dynamical systems can be extended to infinite-dimensional manifolds—those that are locally Banach spaces—in which case the differential equations are partial differential equations. In the late 20th century the dynamical system perspective to partial differential equations started gaining popularity.
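As a concrete illustration of the iteration procedure described above (the vector field here is the Lorenz system from the figure caption; the step size, initial point and parameter values are illustrative choices, not prescribed by the article), a hand-written fourth-order Runge-Kutta integrator advances the state by many small time steps:

```python
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Vector field v(x) of the Lorenz system (the flow from the figure caption)."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(v, x, dt):
    """Advance the state by one small time step dt with fourth-order Runge-Kutta."""
    k1 = v(x)
    k2 = v(x + 0.5 * dt * k1)
    k3 = v(x + 0.5 * dt * k2)
    k4 = v(x + dt * k3)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

x = np.array([1.0, 1.0, 1.0])     # initial point x0 (arbitrary)
dt, steps = 0.01, 5000
orbit = [x]
for _ in range(steps):            # "integrating the system": many small steps
    x = rk4_step(lorenz, x, dt)
    orbit.append(x)
print(orbit[-1])                  # approximate state Phi^t(x0) at t = steps * dt
```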
## Linear dynamical systems
Main article: Linear dynamical system
Linear dynamical systems can be solved in terms of simple functions and the behavior of all orbits classified. In a linear system the phase space is the N-dimensional Euclidean space, so any point in phase space can be represented by a vector with N numbers. The analysis of linear systems is possible because they satisfy a superposition principle: if u(t) and w(t) satisfy the differential equation for the vector field (but not necessarily the initial condition), then so will u(t) + w(t).
### Flows
For a flow, the vector field φ(x) is an affine function of the position in the phase space, that is,
$\dot{x} = \phi(x) = A x + b,\,$
with A a matrix, b a vector of numbers and x the position vector. The solution to this system can be found by using the superposition principle (linearity). The case b ≠ 0 with A = 0 is just a straight line in the direction of b:
$\Phi^t(x_1) = x_1 + b t. \,$
When b is zero and A ≠ 0 the origin is an equilibrium (or singular) point of the flow, that is, if x0 = 0, then the orbit remains there. For other initial conditions, the equation of motion is given by the exponential of a matrix: for an initial point x0,
$\Phi^t(x_0) = e^{t A} x_0. \,$
When b = 0, the eigenvalues of A determine the structure of the phase space. From the eigenvalues and the eigenvectors of A it is possible to determine if an initial point will converge or diverge to the equilibrium point at the origin.
The distance between two different initial conditions in the case A ≠ 0 will change exponentially in most cases, either converging exponentially fast towards a point, or diverging exponentially fast. Linear systems display sensitive dependence on initial conditions in the case of divergence. For nonlinear systems this is one of the (necessary but not sufficient) conditions for chaotic behavior.
Linear vector fields and a few trajectories.
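As a quick numerical companion to the discussion above (the matrix A, the initial point and the times below are arbitrary illustrative choices), one can evaluate Φ^t(x0) = e^{tA} x0 directly with a matrix exponential and read off the behavior from the eigenvalues of A:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 2.0],
              [ 0.0, -3.0]])          # arbitrary matrix with eigenvalues -1 and -3
x0 = np.array([1.0, 1.0])             # arbitrary initial point, with b = 0 here

print(np.linalg.eigvals(A))           # negative real parts: orbits decay to the origin
for t in [0.0, 1.0, 5.0]:
    print(t, expm(t * A) @ x0)        # Phi^t(x0) = exp(tA) x0 shrinks toward 0
```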
### Maps
A discrete-time, affine dynamical system has the form
$x_{n+1} = A x_n + b, \,$
with A a matrix and b a vector. As in the continuous case, the change of coordinates x → x + (1 − A)^−1 b removes the term b from the equation. In the new coordinate system, the origin is a fixed point of the map and the solutions are of the linear system A^n x0. The solutions for the map are no longer curves, but points that hop in the phase space. The orbits are organized in curves, or fibers, which are collections of points that map into themselves under the action of the map.
As in the continuous case, the eigenvalues and eigenvectors of A determine the structure of phase space. For example, if u1 is an eigenvector of A, with a real eigenvalue smaller than one, then the straight line given by the points along α u1, with α ∈ R, is an invariant curve of the map. Points on this straight line run into the fixed point.
There are also many other discrete dynamical systems.
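A minimal numerical sketch of the preceding paragraphs (the matrix A, the vector b and the starting point are made-up values): iterating the affine map drives the orbit to the fixed point (1 − A)^−1 b when the eigenvalues of A lie inside the unit circle.

```python
import numpy as np

A = np.array([[0.5, 0.1],
              [0.0, 0.3]])                       # eigenvalues 0.5 and 0.3, inside the unit circle
b = np.array([1.0, 2.0])

fixed_point = np.linalg.solve(np.eye(2) - A, b)  # solves (1 - A) x = b

x = np.array([10.0, -7.0])                       # arbitrary starting point
for _ in range(50):
    x = A @ x + b                                # one step of x_{n+1} = A x_n + b
print(x, fixed_point)                            # the orbit has essentially reached the fixed point
```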
## Local dynamics
The qualitative properties of dynamical systems do not change under a smooth change of coordinates (this is sometimes taken as a definition of qualitative): a singular point of the vector field (a point where v(x) = 0) will remain a singular point under smooth transformations; a periodic orbit is a loop in phase space and smooth deformations of the phase space cannot alter it being a loop. It is in the neighborhood of singular points and periodic orbits that the structure of a phase space of a dynamical system can be well understood. In the qualitative study of dynamical systems, the approach is to show that there is a change of coordinates (usually unspecified, but computable) that makes the dynamical system as simple as possible.
### Rectification
A flow in most small patches of the phase space can be made very simple. If y is a point where the vector field v(y) ≠ 0, then there is a change of coordinates for a region around y where the vector field becomes a series of parallel vectors of the same magnitude. This is known as the rectification theorem.
The rectification theorem says that away from singular points the dynamics of a point in a small patch is a straight line. The patch can sometimes be enlarged by stitching several patches together, and when this works out in the whole phase space M the dynamical system is integrable. In most cases the patch cannot be extended to the entire phase space. There may be singular points in the vector field (where v(x) = 0); or the patches may become smaller and smaller as some point is approached. The more subtle reason is a global constraint, where the trajectory starts out in a patch, and after visiting a series of other patches comes back to the original one. If the next time the orbit loops around phase space in a different way, then it is impossible to rectify the vector field in the whole series of patches.
### Near periodic orbits
In general, in the neighborhood of a periodic orbit the rectification theorem cannot be used. Poincaré developed an approach that transforms the analysis near a periodic orbit to the analysis of a map. Pick a point x0 in the orbit γ and consider the points in phase space in that neighborhood that are perpendicular to v(x0). These points are a Poincaré section S(γ, x0), of the orbit. The flow now defines a map, the Poincaré map F : S → S, for points starting in S and returning to S. Not all these points will take the same amount of time to come back, but the times will be close to the time it takes x0.
The intersection of the periodic orbit with the Poincaré section is a fixed point of the Poincaré map F. By a translation, the point can be assumed to be at x = 0. The Taylor series of the map is F(x) = J · x + O(x^2), so a change of coordinates h can only be expected to simplify F to its linear part
$h^{-1} \circ F \circ h(x) = J \cdot x.\,$
This is known as the conjugation equation. Finding conditions for this equation to hold has been one of the major tasks of research in dynamical systems. Poincaré first approached it assuming all functions to be analytic and in the process discovered the non-resonant condition. If λ1, ..., λν are the eigenvalues of J they will be resonant if one eigenvalue is an integer linear combination of two or more of the others. As terms of the form λi – ∑ (multiples of other eigenvalues) occurs in the denominator of the terms for the function h, the non-resonant condition is also known as the small divisor problem.
### Conjugation results
The results on the existence of a solution to the conjugation equation depend on the eigenvalues of J and the degree of smoothness required from h. As J does not need to have any special symmetries, its eigenvalues will typically be complex numbers. When the eigenvalues of J are not in the unit circle, the dynamics near the fixed point x0 of F is called hyperbolic and when the eigenvalues are on the unit circle and complex, the dynamics is called elliptic.
In the hyperbolic case the Hartman–Grobman theorem gives the conditions for the existence of a continuous function that maps the neighborhood of the fixed point of the map to the linear map J · x. The hyperbolic case is also structurally stable. Small changes in the vector field will only produce small changes in the Poincaré map and these small changes will reflect in small changes in the position of the eigenvalues of J in the complex plane, implying that the map is still hyperbolic.
The Kolmogorov–Arnold–Moser (KAM) theorem gives the behavior near an elliptic point.
## Bifurcation theory
Main article: Bifurcation theory
When the evolution map Φ^t (or the vector field it is derived from) depends on a parameter μ, the structure of the phase space will also depend on this parameter. Small changes may produce no qualitative changes in the phase space until a special value μ0 is reached. At this point the phase space changes qualitatively and the dynamical system is said to have gone through a bifurcation.
Bifurcation theory considers a structure in phase space (typically a fixed point, a periodic orbit, or an invariant torus) and studies its behavior as a function of the parameter μ. At the bifurcation point the structure may change its stability, split into new structures, or merge with other structures. By using Taylor series approximations of the maps and an understanding of the differences that may be eliminated by a change of coordinates, it is possible to catalog the bifurcations of dynamical systems.
The bifurcations of a hyperbolic fixed point x0 of a system family Fμ can be characterized by the eigenvalues of the first derivative of the system DFμ(x0) computed at the bifurcation point. For a map, the bifurcation will occur when there are eigenvalues of DFμ on the unit circle. For a flow, it will occur when there are eigenvalues on the imaginary axis. For more information, see the main article on Bifurcation theory.
Some bifurcations can lead to very complicated structures in phase space. For example, the Ruelle–Takens scenario describes how a periodic orbit bifurcates into a torus and the torus into a strange attractor. In another example, Feigenbaum period-doubling describes how a stable periodic orbit goes through a series of period-doubling bifurcations.
## Ergodic systems
Main article: Ergodic theory
In many dynamical systems it is possible to choose the coordinates of the system so that the volume (really a ν-dimensional volume) in phase space is invariant. This happens for mechanical systems derived from Newton's laws as long as the coordinates are the position and the momentum and the volume is measured in units of (position) × (momentum). The flow takes points of a subset A into the points Φ^t(A) and invariance of the phase space means that
$\mathrm{vol} (A) = \mathrm{vol} ( \Phi^t(A) ). \,$
In the Hamiltonian formalism, given a coordinate it is possible to derive the appropriate (generalized) momentum such that the associated volume is preserved by the flow. The volume is said to be computed by the Liouville measure.
In a Hamiltonian system not all possible configurations of position and momentum can be reached from an initial condition. Because of energy conservation, only the states with the same energy as the initial condition are accessible. The states with the same energy form an energy shell Ω, a sub-manifold of the phase space. The volume of the energy shell, computed using the Liouville measure, is preserved under evolution.
For systems where the volume is preserved by the flow, Poincaré discovered the recurrence theorem: Assume the phase space has a finite Liouville volume and let F be a phase space volume-preserving map and A a subset of the phase space. Then almost every point of A returns to A infinitely often. The Poincaré recurrence theorem was used by Zermelo to object to Boltzmann's derivation of the increase in entropy in a dynamical system of colliding atoms.
One of the questions raised by Boltzmann's work was the possible equality between time averages and space averages, what he called the ergodic hypothesis. The hypothesis states that the length of time a typical trajectory spends in a region A is vol(A)/vol(Ω).
The ergodic hypothesis turned out not to be the essential property needed for the development of statistical mechanics and a series of other ergodic-like properties were introduced to capture the relevant aspects of physical systems. Koopman approached the study of ergodic systems by the use of functional analysis. An observable a is a function that to each point of the phase space associates a number (say instantaneous pressure, or average height). The value of an observable can be computed at another time by using the evolution function Φ^t. This introduces an operator U^t, the transfer operator,
$(U^t a)(x) = a(\Phi^{-t}(x)). \,$
By studying the spectral properties of the linear operator U it becomes possible to classify the ergodic properties of Φ^t. In using the Koopman approach of considering the action of the flow on an observable function, the finite-dimensional nonlinear problem involving Φ^t gets mapped into an infinite-dimensional linear problem involving U.
The Liouville measure restricted to the energy surface Ω is the basis for the averages computed in equilibrium statistical mechanics. An average in time along a trajectory is equivalent to an average in space computed with the Boltzmann factor exp(−βH). This idea has been generalized by Sinai, Bowen, and Ruelle (SRB) to a larger class of dynamical systems that includes dissipative systems. SRB measures replace the Boltzmann factor and they are defined on attractors of chaotic systems.
### Nonlinear dynamical systems and chaos
Main article: Chaos theory
Simple nonlinear dynamical systems and even piecewise linear systems can exhibit a completely unpredictable behavior, which might seem to be random, despite the fact that they are fundamentally deterministic. This seemingly unpredictable behavior has been called chaos. Hyperbolic systems are precisely defined dynamical systems that exhibit the properties ascribed to chaotic systems. In hyperbolic systems the tangent space perpendicular to a trajectory can be well separated into two parts: one with the points that converge towards the orbit (the stable manifold) and another of the points that diverge from the orbit (the unstable manifold).
This branch of mathematics deals with the long-term qualitative behavior of dynamical systems. Here, the focus is not on finding precise solutions to the equations defining the dynamical system (which is often hopeless), but rather to answer questions like "Will the system settle down to a steady state in the long term, and if so, what are the possible attractors?" or "Does the long-term behavior of the system depend on its initial condition?"
Note that the chaotic behavior of complex systems is not the issue. Meteorology has been known for years to involve complex—even chaotic—behavior. Chaos theory has been so surprising because chaos can be found within almost trivial systems. The logistic map is only a second-degree polynomial; the horseshoe map is piecewise linear.
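To see how trivial the ingredients can be (the parameter r = 4, the initial conditions and the number of iterations below are arbitrary choices for illustration), iterating the logistic map from two nearby starting points already exhibits the sensitive dependence on initial conditions discussed above:

```python
def logistic(x, r=4.0):
    """The logistic map: a second-degree polynomial."""
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-9          # two initial conditions differing by 10^-9
for n in range(1, 61):
    x, y = logistic(x), logistic(y)
    if n % 10 == 0:
        print(n, abs(x - y))    # the separation grows until it is of order one
```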
### Geometrical definition
A dynamical system is the tuple $\langle \mathcal{M}, f , \mathcal{T}\rangle$, with $\mathcal{M}$ a manifold (locally a Banach space or Euclidean space), $\mathcal{T}$ the domain for time (non-negative reals, the integers, ...) and f an evolution rule t → f^t (with $t\in\mathcal{T}$) such that f^t is a diffeomorphism of the manifold to itself. So, f is a mapping of the time-domain $\mathcal{T}$ into the space of diffeomorphisms of the manifold to itself. In other terms, f(t) is a diffeomorphism, for every time t in the domain $\mathcal{T}$.
### Measure theoretical definition
See main article Measure-preserving dynamical system.
A dynamical system may be defined formally, as a measure-preserving transformation of a sigma-algebra, the quadruplet (X, Σ, μ, τ). Here, X is a set, and Σ is a sigma-algebra on X, so that the pair (X, Σ) is a measurable space. μ is a finite measure on the sigma-algebra, so that the triplet (X, Σ, μ) is a probability space. A map τ: X → X is said to be Σ-measurable if and only if, for every σ ∈ Σ, one has $\tau^{-1}\sigma \in \Sigma$. A map τ is said to preserve the measure if and only if, for every σ ∈ Σ, one has $\mu(\tau^{-1}\sigma ) = \mu(\sigma)$. Combining the above, a map τ is said to be a measure-preserving transformation of X , if it is a map from X to itself, it is Σ-measurable, and is measure-preserving. The quadruple (X, Σ, μ, τ), for such a τ, is then defined to be a dynamical system.
The map τ embodies the time evolution of the dynamical system. Thus, for discrete dynamical systems the iterates $\tau^n=\tau \circ \tau \circ \ldots\circ\tau$ for integer n are studied. For continuous dynamical systems, the map τ is understood to be a finite time evolution map and the construction is more complicated.
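As a small numerical sanity check of this definition (the doubling map and the test interval are illustrative choices, not part of the text): τ preserves Lebesgue measure on [0, 1) exactly when, for x drawn from that measure, the probability that τ(x) falls in a measurable set σ equals μ(σ), since that probability is μ(τ^{-1}σ).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(1_000_000)            # samples from the Lebesgue (uniform) measure on [0, 1)

tau = lambda t: (2.0 * t) % 1.0      # the doubling map, a standard measure-preserving example
a, b = 0.3, 0.55                     # test set sigma = [a, b), with mu(sigma) = 0.25

tx = tau(x)
print(np.mean((tx >= a) & (tx < b)))  # ~ 0.25, i.e. mu(tau^{-1} sigma) = mu(sigma)
```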
## Examples of dynamical systems
• Arnold's cat map
• Baker's map is an example of a chaotic piecewise linear map
• Circle map
• Double pendulum
• Billiards and Outer Billiards
• Hénon map
• Horseshoe map
• Irrational rotation
• List of chaotic maps
• Logistic map
• Lorenz system
• Rossler map
## Multidimensional generalization
Dynamical systems are defined over a single independent variable, usually thought of as time. A more general class of systems are defined over multiple independent variables and are therefore called multidimensional systems. Such systems are useful for modeling, for example, image processing.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 18, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8718698620796204, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/35792/what-results-are-known-if-f-g-are-both-analytic-in-mathbbc-having-infini?answertab=active
|
# What results are known if $f,g$ are both analytic in $\mathbb{C}$, having infinitely many poles or zeros for the same $z$'s?
What results are known if $f,g$ are both analytic in $\mathbb{C}$ and have infinitely many poles (or zeros) that all coincide, where each pole of $f$ has the same order as the pole of $g$ at the same $z$?
-
## 2 Answers
Hint:
What happens if you multiply by an entire function without zeros - such as $h(z)=e^z$?
Do you know of any factorization theorems?
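To flesh the hint out a little (this is a sketch added for concreteness, under the extra assumption that $f$ and $g$ are entire and share exactly the same zeros with the same multiplicities): the quotient $f/g$ then has only removable singularities and extends to an entire function with no zeros, so $$\frac{f}{g}=e^{h}\qquad\text{for some entire function } h,$$ that is, $f=e^{h}g$. Conversely, multiplying $g$ by any zero-free entire function such as $e^z$ produces another function with exactly the same zeros, so this is the general relationship. The Weierstrass factorization theorem (for prescribed zeros) and Mittag-Leffler's theorem (for prescribed poles, via quotients of entire functions) are the factorization results the hint is pointing at.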
-
OK! Anything else is out there besides Weierstrass and Mittag-Leffler's ? +1 for reminding me the obvious – Arjang Apr 29 '11 at 8:21
Yes, can you prove this: If $f$ is meromorphic, then there are entire functions $g$ and $h$ such that $f= g/h$. In particular $g=fh$. – AD. Apr 29 '11 at 9:06
Theorem: Let $U\subseteq \mathbb{C}$ be open and connected, let $f:U\rightarrow \mathbb{C}$ be holomorphic, and let $A\subseteq U$ have an accumulation point in $U$. Then, if $f(A)=\{ 0\}$, then $f$ is identically $0$ on all of $U$.
In other words, the values a holomorphic function takes on an infinite set with an accumulation point uniquely determine the function (on a certain connected component).
As for when $f$ and $g$ both have the same poles, I don't think you can say much. For example, if $h$ is holomorphic, then $f+h$ has exactly the same poles as $f$ and $g$. I guess you could try to apply the above theorem to $1/f$; however, if the set of points where $f$ has a pole has an accumulation point, then $f$ is identically equal to $\infty$ (follows from the above theorem). . .probably isn't a particularly useful fact.
-
While this is an important theorem, it's not directly relevant to the question because those infinitely many points at which $f$ and $g$ coincide need not have an accumulation point. – lhf Apr 29 '11 at 13:20
True, but he did not mention this one way or the other. He just asked what results are known when $f$ and $g$ share infinitely many zeroes, and this theorem is such a result. – Jonathan Gleason Apr 29 '11 at 15:33
you're right. Sorry for the noise. – lhf Apr 29 '11 at 15:58
@lhf No problem =) – Jonathan Gleason Apr 29 '11 at 16:20
@lhf and @GleasSpty : Thank you both for the discussion, lhf what you said was not noise it was a good point, GleasSpty's reply to you clarified how it related back to the question ( I wasn't sure myself when I read the answer ). – Arjang Apr 30 '11 at 0:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9499865174293518, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/72098/a-few-proofs-of-properties-of-a-certain-mapping-on-groups
|
# A few proofs of properties of a certain mapping on groups
This is my first time using Stack Exchange, but it looks like a good resource. I am here to ask a couple questions about my homework. We're working from the latest edition of Abstract Algebra by Herstein. This is problem #3 from p. 73 of that book, and it reads as follows:
Let $G$ be any group and $A(G)$ the set of all 1-1 mappings of $G$, as a set, onto itself. Define $L_{a} \colon G \to G$ by $L_{a}(x)=xa^{-1}$. Prove that:
(a) $L_{a} = A(G)$.
(b) $L_{a}L_{b} = L_{ab}$.
(c) The mapping $\psi:G \rightarrow A(G)$ defined by $\psi(a) = L_{a}$ is a monomorphism of G into A(G).
I believe that I have answered parts b & c correctly, but have some concerns about the rigorousness of my approaches to all three problems. I will start out here by including my proposed solution to part a. Any advice or comments would be greatly appreciated.
(a) $A(G)$, as the set of all 1-1 mappings of $G$ onto itself, can be represented as the set of all operations on an element $x \in G$ such that the result is also in $G$. Since $G$ is a group, we know that this set can be represented as the set of all group multiplications $yx \mid x \in G, y \in G$ for a given element $x$. This is because any $x$ element can be fixed as the first parameter, while the $y$ elements are taken over every element of G. We have then that $yx \in G$, due to G's closure under group multiplication. Furthermore, any element $e \in G$ has an inverse element $e^{-1} \in G$, since $G$ is a group. This means that we can consider any element in $G$ as the inverse of its inverse: $e = (e^{-1})^{-1}$. This, when plugged in for our variable $x \in G$ above, gives us that $\forall x \in G, \forall y \in G, yx^{-1} \in G$. This is exactly our given mapping $L_{a}$ above, with the labels rearranged. This shows that $L_{a}$ is a mapping from $G \rightarrow G$, as required. In order to show why this set contains every such possible mapping, we will assume that there is a mapping $M_{a}(x) : G \rightarrow G$ such that $M_{a}(x) \notin L_{a}$. This mapping, as a mapping from G to G, must take the form of a group multiplication: $M_{a}(x) = x*a, x\in G$. However, since G is a group, we have again that $a = (a^{-1})^{-1}$, or, if we let $m = a^{-1}$, that $a = m^{-1}$, and so our mapping can be rewritten as $M_{a}(x) = x*m^{-1} m \in G$. However, this is exactly the same mapping as our above $L_{a}(x)$, showing that every mapping in $A(G)$ can indeed be written as a multiplication between some element $x \in G$ and another element's inverse $a^{-1} \in G$, and thus that $L_{a} \in A(G)$. $\blacksquare$
Thanks for taking the time to check this out, even if you don't feel you can offer any help. EDIT: Thanks to those who pointed out that I had used =, not $\in$, above by mistake. Also appreciated is the edit to italicize my variables. Now I know how to as well. :)
EDIT: The responses so far have been so helpful, I'd like to put the other two parts of my solution up to solicit feedback on them as well. Hopefully they aren't as muddled as the first part's was.
changed a bit in response to feedback
(b) $L_{a}(x) = xa^{-1}$
$L_{b}(x) = xb^{-1}$
$(L_{a}L_{b})(x) = L_{a}(L_{b}(x))$
$L_{a}(L_{b}(x)) = L_{a}(xb^{-1})$
$L_{a}(xb^{-1}) = xb^{-1}a^{-1}$
$xb^{-1}a^{-1} = x(ab)^{-1} = L_{ab}(x)$
$L_{a}(x)L_{b}(x) = L_{ab}(x) \quad\blacksquare$
(c) $\psi(a) = L_{a}(x) = xa^{-1}$. To show that this mapping is a monomorphism of $G$ into $A(G)$, we will first rely on part (a) to state that $\psi(a) = L_{a}$ is indeed a mapping from $G$ to $A(G)$. Now we must show that $\psi$ is a monomorphism of $G$ into $A(G)$. First we will show that $\psi$ is a homomorphism of $G$ into $A(G)$. To this end, we will appeal to the results of our calculations in part (b) to state that $L_{a}L_{b} = L_{ab}$, which of course implies that $\psi(a)\psi(b) = L_{a}L_{b} = L_{ab} = \psi(ab)$, which proves that $\psi$ is a homomorphism. In order to continue and show that $\psi$ is a monomorphism, we must show that it is an injective (1-1) mapping. Let $\psi(a) = Z = \psi(b)$. This can be written as: $\psi(a) = L_{a}(e) = ea^{-1} = Z$
$\psi(b) = L_{b}(e) = eb^{-1} = Z$
$Z = ea^{-1} = eb^{-1}$
$a^{-1} = b^{-1} \Rightarrow a = b$
The last line of the above follows from the uniqueness of inverse elements in $G$. This shows that the only way for two output values of $\psi$ to be equal is for their inputs to be equal as well, and thus $\psi$ is an injective homomorphism, or a monomorphism from $G$ to $A(G)$. $\blacksquare$
-
+1 for a thought-out, typeset, respectful question. You are a model new user of the site :) – Zev Chonoles♦ Oct 12 '11 at 19:17
Do you want $L_a \in A(G)$, in part (a)? – Dylan Moreland Oct 12 '11 at 19:23
It is still incorrect that $\psi(a) = L_a(e)$. Again: $\psi(a)$ is a function from $G$ to $G$ but $L_a(e)$ is an element of $G$. They cannot be equal. Don't confuse a function with one of its values. – Arturo Magidin Oct 12 '11 at 20:51
OHHH, I see now what you mean. Thank you for your efforts to communicate that distinction! Hmmm... – karmic_mishap Oct 12 '11 at 20:52
## 3 Answers
I'm somewhat concerned with what you write, as I am not sure what it is you are trying to say.
You seem to be trying to show that if $f\in A(G)$, then $f=L_a$ for some $a$; first, nobody is asking you to prove that. And second, you don't seem to be doing that. And third, what you write, $L_a=A(G)$, does not even make sense! $L_a$ is a function from $G$ to itself; $A(G)$ is a set of functions from $G$ to itself. You are trying to show that a single function is equal to a set of functions. That is going to be rather hard to do...
Suppose that $f\colon G\to G$ is a (set-theoretic) 1-1 function.
It is true that for each $y\in G$, since $f(y)\in G$ by hypothesis, and since in a group we can always solve any equation of the form $ay=b$, we can find some $x$, *which depends on $y$*, such that $f(y) = yx$. However, in general, different $y$'s will require different $x$s with the same function $f$.
So I do not see how you can simply state, as you do when you write:
Since G is a group, we know that this set can be represented as the set of all group multiplications $yx | x\in G,y\in G$ for a given element $x$.
In fact, this assertion is wrong: suppose that every bijection $f\colon G\to G$ is indeed of the form $f(y)=yx$ for some $x$. Then $f(e) = ex = x$, so for every $y$ we would have $f(y)=yf(e)$. It is very easy to see that this cannot be true for most groups, because if you have any bijection $G\to G$, you can compose it with a function that transposes $e$ and $x\neq e$ that does not have order $2$, and still get a bijection. This composition will be a bijection, but will map $e$ to $xf(e)$, but $x$ will not be mapped to $x^2f(e)$, but to $f(e)$. So not every function in $A(G)$ can be of the form $L_x$ for some $x$. The rest of the paragraph is, in my opinion, a big muddle.
In any case, you are not being asked to prove that. You are being asked to show that if $a\in G$, then the map $L_a\colon G\to G$ is an element of $A(G)$. That is, you need to show that $L_a$ is a bijection of $G$ onto itself. You are given that $L_a$ is a function from $G$ to $G$, so what you need to show is that $L_a$ is one-to-one and onto.
(As an aside, your attempt to argue about a function $M_a$ which is not one of the $L_a$ also gets off on the wrong foot; if you wanted to show by contradiction that every element of $A(G)$ is some $L_a$, you would need to start by assuming there is a function $M$ in $A(G)$ such that *for every $b\in G$* we have $M\neq L_b$; you only assume that $M\neq L_a$ for a particular $a$, and that's no good).
Added. The penultimate line for part (b) should have $L_{ab}(x)$, not $L_{ab}$. Otherwise, it is correct.
The first line of (c) is incorrect. $L_a(x)$ is an element of $G$ (namely, $xa^{-1}$). But $\psi(a)$ is an element of $A(G)$ (namely, $L_a$). $\psi(a)$ does not equal $L_a(x)$.
You don't need to show that $\psi$ is a map from $G$ to $A(G)$: it is defined to be a map from $G$ to $A(G)$. This follows from (a), since you know, if you do (a) correctly, that $L_a\in A(G)$ for any $a\in G$.
Several times you confuse the function $L_a$, with the value of the function at $x$, $L_a(x)$. Remember that "$x$" is one of the names for elements of $G$: don't use it as if this were calculus and you call the function "$f(x)$"! The name of the function is just $L_a$, not $L_a(x)$.
You correctly show it is a homomorphism.
To prove it is 1-1, you are not really doing a proof by contradiction, you are doing a direct proof: you are showing that if $\psi(a)=\psi(b)$, then $a=b$. This is a direct proof, do it as a direct proof. But the line right after that you again commit the faux pas of confusing the function $\psi(a)$ with the particular value of $\psi(a)$ at the element $x$. $\psi(a)$ is not equal to $xa^{-1}$, and $\psi(b)$ is not equal to $xb^{-1}$.
Instead, you have to assume that $L_a=L_b$ as functions, meaning that for every $x\in G$ you have $L_a(x)=L_b(x)$. At that point you can plug in some values for $x$ and see what you can conclude. Might I suggest using the fact that $L_a(e) = L_b(e)$ to conclude that $a=b$?
-
Take a look at the problem again:
Part a) says that $$L_a\in A(G),$$ where the symbol in the middle denotes "is an element of" (see here), not $$L_a=A(G).$$ The latter expression doesn't make sense; $L_a$ is a function from $G$ to $G$, while $A(G)$ is a collection of functions. We want to prove that $L_a$ is a member of this collection.
-
D'oh! This is a big problem. Thanks for catching that. Does the rest of the proof lead to this as its conclusion? – karmic_mishap Oct 12 '11 at 19:27
No, I'm afraid the argument you have written in your question is incorrect (or does not make sense); not every element of $A(G)$ is of the form $L_a$ for some $a\in G$ (nor is that what the statement "$L_a=A(G)$" means), and not every function from $G$ to $G$ is in $A(G)$ (only the invertible, i.e. bijective, ones are). In order to prove that $L_a\in A(G)$ for all $a\in G$, you must prove that $L_a$ is a bijection. – Zev Chonoles♦ Oct 12 '11 at 19:33
@karmic_mishap: No, the rest of the argument is, unfortunately, a muddled mess. – Arturo Magidin Oct 12 '11 at 19:33
To be clear, $A(G)$ is the set of all bijective set maps $G \to G$. This $A(G)$ is even a group under composition, but an element of $A(G)$ doesn't have to be one of the $L_a$ or pay much attention at all to the group structure of $G$. For example, look at the cyclic group $G$ of order $3$, which I'll write as $\{1, x, x^2\}$. I can define a bijective map of sets $G \to G$ by swapping $x$ and $x^2$, fixing $1$. You can check that this is not of the form $L_a$.
Anyway, what you want to show for (a) is that each $L_a$ is bijective, and hence a member of $A(G)$. Perhaps you can find an inverse map: what operation will undo $x \mapsto xa^{-1}$? You shouldn't have to look very far.
And your solution for (b) looks good! I'm worried about (c) only because we seem to be mixing up functions and the formulas defining them. I would write this as: If $L_a$ and $L_b$ agree as maps of sets then in particular $L_a(1) = L_b(1)$, so $a^{-1} = b^{-1}$ and hence $a = b$, using the fact that $(a^{-1})^{-1} = a$. (Or whatever suits you.)
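If it helps to see all three parts in action on a concrete example (this check is an illustration added here, not part of Herstein's exercise; the group chosen is the integers mod 6 under addition, where $a^{-1}=-a$ and $L_a(x)=x-a$), a few lines of Python verify (a), (b) and the injectivity in (c):

```python
n = 6                                   # the additive group Z_6 (a small, arbitrary example)
G = list(range(n))
op = lambda x, y: (x + y) % n           # group operation
inv = lambda a: (-a) % n                # group inverse, so "x a^{-1}" is op(x, inv(a))

L = {a: {x: op(x, inv(a)) for x in G} for a in G}      # L_a(x) = x a^{-1}

# (a) each L_a is a bijection of G onto itself
assert all(sorted(L[a][x] for x in G) == G for a in G)

# (b) L_a composed with L_b equals L_{ab}
assert all(L[a][L[b][x]] == L[op(a, b)][x] for a in G for b in G for x in G)

# (c) a -> L_a is injective: distinct a give distinct maps
assert len({tuple(L[a][x] for x in G) for a in G}) == n
print("all checks pass for Z_6")
```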
-
Your guess is correct: if you read carefully, you'll see that it is defined as "the set of all 1-1 mappings of $G$, as a set, onto itself." The "onto" signals surjectivity. – Arturo Magidin Oct 12 '11 at 20:11
@Arturo Ah, good point! Then nothing is ambiguous. – Dylan Moreland Oct 12 '11 at 22:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 220, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9713118076324463, "perplexity_flag": "head"}
|
http://mathhelpforum.com/advanced-algebra/175426-linear-algebra-cryptography.html
|
# Thread:
1. ## Linear Algebra in Cryptography
So... each letter from A to Z corresponds to a number 0 to 25. Now I want to find the pairs (x,y) such that in $\mathbb{Z}_{26}$,
$\begin{bmatrix} 8 & 3\\ 1 & 7 \end{bmatrix} \begin{bmatrix} x\\ y \end{bmatrix} = \begin{bmatrix} x\\ y \end{bmatrix}$
And we are only told that for (A, A), which corresponds to (0,0), this holds.
So, how do I find all pairs x, y such that they are unchanged by the multiplication? Going through all possible combinations of x and y is a really daunting task. Is there an easy way of finding all the pairs?
P.S. In $\mathbb{Z}_{26}$,
$K^{-1} = \begin{bmatrix} 7 & 23\\ 25 & 8 \end{bmatrix}$.
2. You need to solve $Av=v$, that is, $(A-I)v = 0$ in $\mathbb{Z}_{26}$. Do you know how to do this?
3. Originally Posted by Defunkt
You need to solve $Av=v$, that is, $(A-I)v = 0$ in $\mathbb{Z}_{26}$. Do you know how to do this?
No, I'm not quite sure how to solve this. Could you please explain a bit more? I just got the two equations
$(K-I)v= \left( \begin{bmatrix} 8 & 3\\ 1 & 7 \end{bmatrix} - \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix} \right) \begin{bmatrix} x\\ y \end{bmatrix} = \begin{bmatrix} 7x+3y\\ x+6y \end{bmatrix} =0$
But I don't know how solving the homogeneous system gives the list of elements that are mapped to themselves. How should I continue?
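One way to see the answer concretely (this brute-force check is an addition for illustration; it is not the intended pencil-and-paper method): there are only $26\times 26$ candidate pairs, so a computer can simply test which of them satisfy $Kv \equiv v \pmod{26}$, equivalently $(K-I)v \equiv 0 \pmod{26}$.

```python
import numpy as np

K = np.array([[8, 3],
              [1, 7]])

fixed = []
for x in range(26):
    for y in range(26):
        v = np.array([x, y])
        if np.array_equal((K @ v) % 26, v):   # K v = v (mod 26)
            fixed.append((x, y))

letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
print(fixed)                                        # pairs (x, y) unchanged by K mod 26
print([letters[x] + letters[y] for x, y in fixed])  # the corresponding letter pairs
```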
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9483881592750549, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/17552/what-are-some-interesting-calculus-of-variation-problems
|
# What are some interesting calculus of variation problems? [closed]
That I could create as a classical mechanics class project? Other than the classical examples that we see in textbooks (catenary, brachistochrone, Fermat, etc.).
-
As this is a list-making question I am converting it to Community wiki consistent with our policy on reference requests and the like. I'll be opening a topic on meta concerning this whole class of questions shortly. – dmckee♦ Nov 29 '11 at 16:09
## 2 Answers
Here is one I just made up, but it has a nice flavor: suppose you have a 2-d bullet going very fast through a 2-d gas. The gas molecules reflect specularly off the bullet, making glancing collisions. What shape of bullet of a fixed area has the least drag?
This problem gives
$$\int {1\over 1+y'^2} + \lambda y dx$$
And the equation for y' you get is
$$y' = \lambda x (1 - 2 y'^2 - y'^4)$$
or
$$y = \int \lambda x ( 1 - 2 y'^2 - y'^4)\,dx$$
Which you can solve in a series by plugging in $y={\lambda x^2\over 2}$ and iterating a few times using the relation above as a recursion.
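For what it's worth, here is a small symbolic sketch of that iteration (the truncation order and the number of iterations are arbitrary choices added here):

```python
import sympy as sp

x, lam = sp.symbols("x lam")

p = lam * x                            # first guess for y', i.e. y = lam*x**2/2
for _ in range(3):
    p = sp.expand(lam * x * (1 - 2 * p**2 - p**4))
    p = p.series(x, 0, 8).removeO()    # keep the expansion to a fixed order in x

y = sp.integrate(p, x)                 # recover y from y'
print(sp.expand(y))
```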
-
While studying classical mechanics I did the following simulation:
1. Consider a motion in Coulomb potential: $U(r) = \frac{\alpha}{r}$
2. Fix starting and final points $p_1$ and $p_2$, and consider trial paths of the form: $$p_1 + (p_2 - p_1)\lambda + \vec{a}\sin(\pi\lambda) + \vec{b}\sin(2\pi\lambda) + \vec{c}\sin(3\pi\lambda)$$ Where $\lambda$ is the parameter along our path and $\vec{a},\vec{b},\vec{c}$ are 2D vectors that parametrize it.
3. Take some initial parameters $(\vec{a},\vec{b},\vec{c})$ and calculate the action along the path by means of Maupertuis' principle.
4. Make a small random change in $\vec{a}' = \vec{a} + \mbox{random },\vec{b}' = \vec{b} + \mbox{random }$ and $\vec{c}' = \vec{c} + \mbox{random}$.
5. Calculate the action for $(\vec{a}',\vec{b}',\vec{c}')$ parameters. If action becomes smaller -- replace the parameters with new values $\vec{a}=\vec{a}',\vec{b}=\vec{b}',\vec{c}=\vec{c}'$.
6. Goto step 4.
Here is what I've got in the end:
Here $\alpha = -200, p_1 = (0,-5)$ and $p_2 = (0.17,-0.17)$.
Numbers on top are: on the left, "Шаг" (step) is the step number in the simulation; on the right, "Действие" (action) is the value of the Maupertuis action.
Red and green lines are real trajectories in the potential and the black line is my "test trajectory". So one can see that the simple random walk in parameter space can find some of the real paths of the body.
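For reference, a stripped-down sketch of this random search (all numerical values here, the energy, mass, step sizes and number of trials, are illustrative choices and not the ones used for the picture): the abbreviated action $\int \sqrt{2m(E-U)}\,ds$ is evaluated on the discretized trial path, and a random perturbation of $(\vec a,\vec b,\vec c)$ is kept only when it lowers the action.

```python
import numpy as np

rng = np.random.default_rng(1)

alpha, m, E = -200.0, 1.0, -20.0              # illustrative values only
p1, p2 = np.array([0.0, -5.0]), np.array([0.17, -0.17])
lam = np.linspace(0.0, 1.0, 400)[:, None]     # parameter along the trial path

def path(a, b, c):
    return (p1 + (p2 - p1) * lam
            + a * np.sin(np.pi * lam)
            + b * np.sin(2 * np.pi * lam)
            + c * np.sin(3 * np.pi * lam))

def action(a, b, c):
    """Abbreviated (Maupertuis) action: integral of sqrt(2 m (E - U)) ds along the path."""
    pts = path(a, b, c)
    r = np.maximum(np.linalg.norm(pts, axis=1), 1e-6)       # avoid dividing by zero at the center
    p = np.sqrt(np.maximum(2.0 * m * (E - alpha / r), 0.0))  # local momentum
    ds = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    return np.sum(0.5 * (p[:-1] + p[1:]) * ds)               # trapezoid-style sum

a, b, c = np.zeros(2), np.zeros(2), np.zeros(2)
S = action(a, b, c)
for _ in range(20000):
    da, db, dc = (0.01 * rng.standard_normal(2) for _ in range(3))
    S_new = action(a + da, b + db, c + dc)
    if S_new < S:                                            # keep only moves that lower the action
        a, b, c, S = a + da, b + db, c + dc, S_new
print(S, a, b, c)
```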
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9046263098716736, "perplexity_flag": "middle"}
|
http://spikedmath.com/forum/viewtopic.php?p=662
|
# Spiked Math Forums
Where math geeks unite to discuss math and more math!
## Piles of Coins
Have an interesting puzzle? Let's hear it!
### Piles of Coins
by DeathRowKitty » Tue Feb 15, 2011 7:46 am
Before getting to the puzzle, there are a few things I would like to mention about it:
1. If at some point in solving this, you come across something that would be implausible or even impossible in reality, that's fine. This problem is not meant to be realistic.
2. This problem is meant to be solved entirely without a calculator (or other computational aids). There's a certain theorem involved in solving this problem that you'll have to look up if you've never seen it before (you'll likely recognize when you'll need to do this), but aside from that, electronic aids are unnecessary.
3. Yes, this problem is very contrived. That's because I wrote it with a certain fact in mind that can be used to simplify part of it.
You are playing a game with 32 rounds, numbered 0 through 31. In each round you are asked a question. If you get the question in round n correct, you receive $2^n$ coins (assume all coins are identical). You continue until you get a question wrong (you don't lose coins for getting a question wrong).
At the end of the game, you are told to put your coins into piles, with each pile containing the same number of coins. Let's call that number n. Having done this, you are then presented with (n-1)! coins and told to put them into n piles, each with the same number of coins. If you have exactly one coin left over upon doing this, you win n-1 coins. If not, you win nothing.
What is the maximum number of coins you can win
1) if you get every question right?
2) by getting any number of questions right? (you may have to look something up for this)
### Re: Piles of Coins
by DeathRowKitty » Thu Feb 17, 2011 11:40 pm
A hint since no one's responded:
Spoiler! :
You'll need Wilson's Theorem
### Re: Piles of Coins
by Ardilla » Fri Feb 18, 2011 5:32 pm
I think the solution is:
Spoiler! :
I'll write the proof later, but I'm sure it's 1
the proof:
Spoiler! :
I'll begin with the second part of the problem, where we have (n-1)! coins and are told to split them in n piles with the same number of coins.
The number of coins that we have left upon doing this is the remainder of dividing (n-1)! by n.
If n is not a prime number $(n-1)!\equiv 0 \ (mod \ n)$, thus if n is not a prime we won't get any coins as a reward.
Even more, if n is a prime number we have that $(n-1)!\equiv -1 \ (mod \ n)$, which means that the only way to have one coin left is when n is 2, and this will give us the great prize of 1 coin.
As whenever we answer a question correctly we get a power of 2 tokens (I'm calling these tokens to avoid confusion with the coins that are used later), we will always be able to distribute the tokens that we get in piles with 2 tokens in each pile.
This means that whenever we answer at least 1 question correctly we will be able to get our maximum reward.
Spoiler! :
The Spanish Inquisition
Clearly every even integer greater than 2 can be expressed as the sum of two primes.
I have discovered a truly wonderful proof of this proposition, but the signature is too small to contain it.
### Re: Piles of Coins
by DeathRowKitty » Fri Feb 18, 2011 6:47 pm
Oops, I put the condition wrong for the second half of the problem. I meant n-1 coins left over. Bleh.
Your answer is correct for the problem I posted though...except that you'd need at least 2 questions right, since you can't split 1 coin into 1 pile and have 1 coin left over.
Edit: Not that it makes a difference, but
Spoiler! :
$(n-1)! \equiv 0 (\mbox{mod }n)$ for composite n is only true for sufficiently large n....where sufficiently large means greater than 4
### Re: Piles of Coins
by Ardilla » Sat Feb 19, 2011 12:12 am
DeathRowKitty wrote:Edit: Not that it makes a difference, but
Spoiler! :
$(n-1)! \equiv 0 (\mbox{mod }n)$ for composite n is only true for sufficiently large n....where sufficiently large means greater than 4
Oh, you are right, I completely forgot that case!!
I'll think the puzzle another time.
### Re: Piles of Coins
by Ardilla » Tue Feb 22, 2011 7:16 am
Ok, I don't have the answer yet, but I have the following:
Spoiler! :
Once you are presented with (n-1)! coins, using Wilson's Theorem, the only way to have n-1 coins left over is if n is a prime number, this means that we have to find the greatest prime number that divides the number of coins that we receive after answering the questions.
We do have the following:
Spoiler! :
after answering the questions we are left with
$\sum_{i=1}^n2^i=2^{n+1}-1$
where n is the number of questions that we answered correctly.
Now the only thing that we have to do to answer question 1) is find the greatest prime divisor of $2^{33}-1$.
For question 2), we have that $2^{31}-1$ is a Mersenne prime number. I guess that this is the greatest prime number that we can get, but I'm not sure.
Edit: I have the answer for 2)
Spoiler! :
We have the identity:
$2^{ab}-1=(2^a-1)(1+2^a+2^{2a}+\ldots +2^{(b-1)a})$
Using this we have that:
$2^{33}-1=(2^{11}-1)(1+2^{11}+2^{22})$
Clearly both divisors are less than $2^{31}-1$
We also have that:
$2^{32}-1=(2^{8}-1)(1+2^{8}+2^{16}+2^{24})$
Again both divisors are less than $2^{31}-1$, and so this is the greatest prime divisor that we will be able to find.
This means that the answer to 2) is that the greatest number of coins that you can get is $2^{31}-2$, and this is attained by answering 31 questions correctly.
To answer 1) I have to factorize $2^{33}-1=(2^{11}-1)(1+2^{11}+2^{22})$, I'll do this later, maybe
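As a quick numerical cross-check of the ingredients used above (added for convenience; sympy's isprime is used only to test primality, and the check stays agnostic about how the rounds are labeled):

```python
from math import factorial
from sympy import isprime

# Wilson's theorem: (n-1)! mod n == n-1 exactly when n is prime (checked for small n)
for n in range(2, 30):
    assert ((factorial(n - 1) % n) == n - 1) == isprime(n)

# 2^31 - 1 is a Mersenne prime, while 2^32 - 1 and 2^33 - 1 are composite
print(isprime(2**31 - 1), isprime(2**32 - 1), isprime(2**33 - 1))
```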
### Re: Piles of Coins
by DeathRowKitty » Tue Feb 22, 2011 12:06 pm
Your answer to part 2 is correct. I think you misread how I labeled the rounds though for your answer to part 1. (I labeled them 0-31, not 1-32.)
Edit: Remember: these are meant to be solved with minimal calculation required. If you find yourself having to do tedious calculations on large numbers, you're either on the wrong track or making things too difficult on yourself.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 15, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9489535093307495, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/36412/what-gives-matter-gravitational-mass?answertab=active
|
# What gives matter Gravitational Mass? [duplicate]
Possible Duplicate:
Does the equivalence between inertial and gravitational mass imply anything about the Higgs mechanism?
In the Higgs mechanism, the Higgs field, which is like syrup, slows down particles when they pass through it. So it seems the Higgs field gives particles inertial mass. But what gives particles gravitational mass? We know particles can attract each other even when they are static.
-
## 1 Answer
Gravitational mass is a bit of a misnomer, because in General Relativity the spacetime curvature is determined (mostly) by the energy density. Mass is simply treated as equivalent to the amount of energy given by $E = mc^2$, or conversely energy is just treated as the equivalent amount of mass.
So the fact the particles are massless above the electroweak symmetry breaking energy and have a mass below it (acquired through the Higgs mechanism) makes no difference to gravity.
-
Do you mean that particles were not affected by the Higgs field above the electroweak symmetry breaking energy? But if a particle's energy is between the electroweak symmetry breaking energy and the GUT energy, then it can also attract others by gravity, i.e. have 'gravitational mass'. – Popopo Sep 15 '12 at 4:55
Yes, a massless particle will exert a gravitational attraction on objects near it because of its energy density (though for an elementary particle gravity is negligible compared to the other three forces). – John Rennie Sep 15 '12 at 5:36
So is the mass given by Higgs field different from mass given by the Mass-Energy Formula? – Popopo Sep 15 '12 at 10:33
This risks getting a bit complicated, but in brief the mass/energy of a particle is the source of the gravitational field of the particle, and it's the total energy density that matters not whether the mass is zero or non-zero. However the way the particle is affected by external gravitational fields does depend on the mass. Zero mass particles are deflected differently from massive particles in an external gravitational field. Zero mass particles always follow null geodesics, and a massive particle cannot follow a null geodesic. – John Rennie Sep 15 '12 at 15:26
Okay, I see. So should the formula $F_{12}=-G\frac{m_1m_2}{r^2}$ be rewritten as $F_{12}=-G\frac{\frac{E_1}{c^2} m_2}{r^2}$? – Popopo Sep 15 '12 at 15:46
http://mathhelpforum.com/math-topics/32626-need-some-help-alevel-maths.html
# Thread:
1. ## need some help in Alevel maths
Hey everyone, I have attached the problems. I will try to see how you guys solve them, and later I will ask questions. Thanks a lot.
Attached Thumbnails
2. Originally Posted by carlasader
Hey everyone, I have attached the problems. I will try to see how you guys solve them, and later I will ask questions. Thanks a lot.
4a) The two reaction forces are equal (you are told so), and as the plank is
static their sum must equal the total load, which is (60+90)g newtons, so
the reaction of the plank at B is 75g N.
RonL
3. Originally Posted by carlasader
Hey everyone, I have attached the problems. I will try to see how you guys solve them, and later I will ask questions. Thanks a lot.
4b) You do this by taking moments of the forces about A.
Let $x$ be the distance of the centre of mass of the plank from A. Then
the sum of the moments of the three forces acting (the weight of the woman,
the weight of the plank and the reaction at B - the reaction at A has zero moment about A so we ignore it) is:
$2 \times 60 \times g + x \times 90 \times g - 6 \times R_B=0$
where $R_B$ is the reaction at B found in the first part of this question.
RonL
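Plugging in the numbers from part (a), a couple of lines of Python do the arithmetic (a rough sketch that simply assumes the 2 m and 6 m distances appearing in the moment equation above; note that $g$ cancels):

```python
from fractions import Fraction

R_B = Fraction(75)                  # reaction at B from part (a), in units of g newtons
# moments about A:  2*60*g + x*90*g - 6*R_B*g = 0   (g cancels throughout)
x = (6 * R_B - 2 * 60) / 90
print(x, float(x))                  # distance of the plank's centre of mass from A, in metres
```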
http://physics.stackexchange.com/questions/38013/cramers-rule-origin-of-quarks-fractional-electric-charge
# Cramer's rule, Origin of Quarks Fractional electric charge? [closed]
In linear algebra, Cramer's rule is an explicit formula for the solution of a system of linear equations with as many equations as unknowns.
$$2u+1d=1 \qquad 1u+2d=0$$
$$a_1u+b_1d=c_1$$ $$a_2u+b_2d=c_2$$ $$u=\frac {c_1b_2-c_2b_1}{a_1b_2-a_2b_1}$$ $$d=\frac {a_1c_2-a_2c_1}{a_1b_2-a_2b_1}$$
$$u=+\frac{2}{3} \qquad d=-\frac{1}{3}$$
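Just to check the arithmetic with exact fractions, here is a small Python sketch (the two rows simply encode the proton and neutron charge equations above):

```python
from fractions import Fraction as F

# system  a1*u + b1*d = c1,  a2*u + b2*d = c2
a1, b1, c1 = F(2), F(1), F(1)   # proton  uud:  2u + d = 1
a2, b2, c2 = F(1), F(2), F(0)   # neutron udd:   u + 2d = 0

det = a1 * b2 - a2 * b1
u = (c1 * b2 - c2 * b1) / det   # Cramer's rule for u
d = (a1 * c2 - a2 * c1) / det   # Cramer's rule for d
print(u, d)                     # 2/3 and -1/3
```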
there ain't no experiment that could be done, nor is there any observation that could be made, that would say, "You guys are wrong." The theory is safe, permanently safe. Is that a theory of physics or a philosophy? I ask you.
-
1
This whole question is based on a false premise. – dmckee♦ Sep 22 '12 at 14:39
## 2 Answers
Deep inelastic scattering experiments at SLAC in the 1970's confirmed that quarks exist and that they have fractional charges. See this from which the following quote is taken:
These properties were so odd that for a number of years it was not clear whether quarks actually existed or were simply a useful mathematical fiction. For example, quarks must have charges of + 2/3e or - 1/3e, which should be very easy to spot in certain kinds of detectors; but intensive searches, both in cosmic rays and using particle accelerators, have never revealed any convincing evidence for fractional charge of this kind. By the mid-1970s, however, 10 years after quarks were first proposed, scientists had compiled a mass of evidence that showed that quarks do exist but are locked within the individual hadrons in such a way that they can never escape as single entities.
This evidence resulted from experiments in which beams of electrons, muons, or neutrinos were fired at the protons and neutrons in such target materials as hydrogen (protons only), deuterium, carbon, and aluminum. The incident particles used were all leptons, particles that do not feel the strong binding force and that were known, even then, to be much smaller than the nuclei they were probing. The scattering of the beam particles caused by interactions within the target clearly demonstrated that protons and neutrons are complex structures that contain structureless, pointlike objects, which were named partons because they are parts of the larger particles. The experiments also showed that the partons can indeed have fractional charges of + 2/3e or - 1/3e and thus confirmed one of the more surprising predictions of the quark model.
While it is true that we cannot seperate a single quark from a proton due to the color confinement property of the strong color force, it turns out that another property of the strong color force, asymptotic freedom, allows very high energy deep inelastic scattering to probe the properties of "free" quarks. These deep inelastic scattering experiments of electrons on protons established that there really were point like constituents of protons that had fractional electric charge and thus validated the quark model of hadrons.
-
Evidence for the quarks comes, not from some trivial argument that it would get the charges of two particles right, but from a host of sources.
Looking just at the "get the numbers to add up" approach we need a mechanism to simultaneously explain
• Mass spectrum of the baryons
• The spins of the baryons
• The charges of the baryons
• The parities of the baryons
and it was found that the so-called "constituent quark" models could do exactly that.
Indeed the constituent quark model predicted the mass, charge, and spin of the $\Omega^-$ baryon which was subsequently found right where it was expected.
-
http://mathoverflow.net/questions/18848/extensional-theorems-mostly-used-intensionally/29775
## Extensional theorems mostly used intensionally
Some theorems are stated and proved extensionally, but in practice are almost always used intensionally. Let me give an example to make this clear -- integration by parts: $$\int_a^b f(x)g'(x)\,dx = \left[f(x)g(x)\right]_a^b - \int_a^b f'(x)g(x) dx$$ for two continuously differentiable functions $f$ and $g$. In practice, this is seldom ever applied to functions but rather to expressions denoting functions. Much more importantly, it is almost always applied by 'pattern matching' on a product term. But note that integration is usually described formally as an operation on functions (i.e. extensional objects), but then in first-year calculus the students are taught to master a series of rewrite rules (i.e. operations on intensional objects).
I have two questions:
1. What other examples have you run into of such mixing of extension and intension?
2. Why is this dichotomy not more widely taught / appreciated?
In the case of algebra (more precisely, equational theories), the answer to #2 is very simple: because this dichotomy does not matter at all, because we have well-behaved adjunctions between the extensional and intensional theories [in fact, we often have isomorphisms]. For example, there is no essential difference between polynomials (over fields of characteristic 0) treated syntactically or semantically. But there is a huge difference between terms in analysis and the corresponding semantic theorems.
-
1
Polynomials... interesting. Map $x \mapsto x^2$ on field $Z/2$ is the identity function, but polynomial $x^2 \in (Z/2)[x]$ isn't considered to coincide with polynomial $x$. This, at least, is covered in basic algebra courses. – Gerald Edgar Mar 20 2010 at 16:44
8
How about... It makes sense to say "$\sum_{n=1}^\infty 2^{-n}$ converges", and we say $\sum_{n=1}^\infty 2^{-n}=1$, but we don't say "1 converges". Is this what you mean? – Gerald Edgar Mar 20 2010 at 16:51
3
@Gerald: Your second comment captures exactly what I meant. Mathematicians do this (correctly) instinctively, but when you try to mechanize mathematics, these issues become extremely important, and when misunderstood lead to bad bugs in software. This is the source of many bugs which are unlikely to ever be fixed in either Mathematica or Maple. – Jacques Carette Mar 20 2010 at 16:58
4
I don't believe in the distinction you draw in your example, and I would be surprised if my viewpoint were an unusual one among mathematicians. The other points discussed seem to center around the fact that mathematical notation is a human and not totally formal activity, which certainly does cause problems for computer algebra software that tries to mimic it. – Reid Barton Mar 20 2010 at 18:03
1
@Reid: That this viewpoint is not unusual amongst mathematicians is a large part of my motivation for posting this question. At least I am in good company in worrying, as many mathematicians, some of great reknown, have written extensively on this point. I could add Church, and Kripke, to that list, and more recently P. Aczel and W. Lawvere. – Jacques Carette Mar 20 2010 at 18:20
## 6 Answers
It seems to me that the mathematical equivalent of the intensional vs. extensional distinction in philosophy would be the distinction between "formal" vs. "functional" objects: formal power series vs. convergent power series, formal integration by parts (with no regard for checking the validity of the operation in a real analysis sense) vs. rigorous integration by parts, formal polynomials vs. functions which happen to be represented by a polynomial, etc. If so, I would say that the formal vs. functional distinction is usually dealt with in more advanced classes, though usually not at the first-year undergraduate level.
For instance, in algebra, the concept of an indeterminate variable (and its distinction from the set-theoretic notion of a variable in a fixed domain) tends to be sufficient for keeping the two concepts distinct in most situations involving set-theoretic functions and the formal expressions giving rise to those functions. In particular, polynomials can be formal by living in some polynomial ring $R[x]$ generated by an indeterminate $x$, rather than having to be set-theoretic functions on some domain. Algebraic geometry also takes particular care in distinguishing an ideal of polynomials from the set-theoretic locus that that ideal cuts out over a given field, or more generally by distinguishing a scheme from a variety.
Similarly, real analysis, with all its cautionary counterexamples as to how various formal operations (e.g. exchanging limits or sums) can lead to disaster if the appropriate functional hypotheses are not verified, also tends to be pretty good about distinguishing a formal computation from a functional one; often the former is used as an initial heuristic motivation only, with the latter then being brought in for the rigorous proof. Although certainly mistakes have been made by treating a formal computation as if it were functionally valid...
Related to this is the ubiquitous "abuse of notation" in which a package of objects, structures, and forms is referred to via its most prominent component (i.e. by synecdoche). Thus, for instance, one often sees a polynomial function $P: {\bf R} \to {\bf R}$ being used to simultaneously represent both the polynomial function and the formal polynomial that represents it, or vice versa (e.g. "the polynomial $x^2$" to refer to the function $x \mapsto x^2$). Another common instance of this is when dealing with spaces (sets with additional structure); one often abuses notation by using the set itself to denote the space, e.g. a group might be denoted by its set $G$ of elements, rather than by the tuple $(G, e, \cdot, ()^{-1})$ of group structures, or a set-theoretic function by just the mapping $f$, rather than than the triplet $(f,X,Y)$ that includes the domain and codomain of that mapping. Such abuses are technically illegal using the strictest interpretations of mathematical notation, but they save a lot of space and, when used correctly, allow readers to focus on the actual content of an argument rather than on its formalism. Still, it is useful and important to point these abuses out explicitly from time to time...
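To make the formal-versus-functional distinction for polynomials completely concrete, here is a small Python sketch (the coefficient-tuple encoding is just one convenient choice of "formal" representation): over $\mathbb{Z}/2$ the formal polynomials $x$ and $x^2$ are distinct, yet they induce the same set-theoretic function.

```python
# a "formal" polynomial over Z/p encoded as a tuple of coefficients (constant term first)
def evaluate(coeffs, x, p=2):
    return sum(c * x ** i for i, c in enumerate(coeffs)) % p

poly_x = (0, 1)       # the formal polynomial x
poly_x2 = (0, 0, 1)   # the formal polynomial x^2

same_formal = (poly_x == poly_x2)
same_function = all(evaluate(poly_x, a) == evaluate(poly_x2, a) for a in (0, 1))
print(same_formal, same_function)   # False, True: distinct formal objects, equal functions on Z/2
```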
-
An example of (1) comes from the proof theory of arithmetic, and the way we view proofs of universal statements about the natural numbers. Gentzen's original consistency proof took an intensional view, reasoning explicitly about eigenvariables and induction. Schütte later gave a much simpler consistency proof based on an extensional view, where mathematical induction was eliminated in favor of the so-called "$\omega$-rule", an infinitary inference rule which asserts $\forall x.A(x)$ given proofs of $A(n)$ for all $n\in \mathbb N$.
Buchholz has connected the two approaches, showing how Gentzen's result can be reconstructed by translation into Schütte's sequent calculus, essentially by reading off the infinite extension of finite, intensionally-defined proofs. (He calls this viewing finite derivations as "notations for infinitary derivations".) There is certainly a much stronger asymmetry here, because once you take an extensional view of primitive recursive functions over the natural numbers, you can't go back to a (finite) symbolic view.
-
let me qualify that last point: there certainly *appears* to be a strong asymmetry here, and as far as I know no one has attempted to translate infinitary extensional proofs back to finitary intensional proofs. Though perhaps it is possible given the right reflection principles in the metalogic. – Noam Zeilberger Mar 20 2010 at 18:21
A recent question about combinatorial interpretations, namely to find an interpretation for the identity
$$\sum_{k=0}^m 2^{-2k} \binom{2k}{k} \binom{2m-k}{m}=4^{-m} \binom{4m+1}{2m}$$
gives another full class of such issues. This equation is much more than a tautology when interpreted intensionally as being about combinatorics: a combinatorial interpretation involves give a natural class $C_l$ of objects for the left-hand-side and $C_r$ for the right-hand side and a bijection between these classes. Furthermore, and this is where things get really interesting (with respect to my original question), such classes $C_l$ and $C_r$ would be considered 'natural' if by using the usual rules of combinatorial counting, we would naturally get that the number of objects of $C_l$ of size $m$ is the left-hand side expression (similarly for $C_r$ and the rhs). The point is that these counting expressions would be derived structurally from the combinatorial classes. This is a much more interesting interplay between 3 extensional objects (a counting function and 2 combinatorial classes) and 2 intensional ones (different-but-equal formulas representing the counting function).
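(As a sanity check, the identity itself is easy to test numerically for small $m$ with exact rational arithmetic; a throwaway Python sketch:)

```python
from fractions import Fraction as F
from math import comb

def lhs(m):
    return sum(F(comb(2 * k, k) * comb(2 * m - k, m), 4 ** k) for k in range(m + 1))

def rhs(m):
    return F(comb(4 * m + 1, 2 * m), 4 ** m)

print([lhs(m) == rhs(m) for m in range(8)])   # compare both sides exactly for small m
```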
-
Georges Gonthier and François Garillot are doing interesting things with phantom types and unification in Coq to allow one to write, for example, `directv (V + W)` to mean the proposition that $V \oplus W$ is a direct sum.
I haven't fully grasped how it works yet, but let me give you a simplified explanation of what I think is going on. What is happening is that `directv X` is really notation for `directv_def _ (Phantom _ X)`.
`Phantom` is a constructor of a very trivial inductive type
````Inductive phantom (A:Type) (a:A) : Type := Phantom : phantom A a.
````
The function `Phantom` is a polymorphic constructor of type `forall (A:Type)(a:A), phantom A a`. The purpose of `Phantom` is to lift values to the type level so that type inference can operate on these values.
`directv_def` doesn't even use the `(Phantom _ X)` argument (because it contains no data). The only purpose of this argument is to drive the type inference engine to fill in the first argument. `directv_def` has type `forall (VW : addv_expr) (_ : phantom _ (Vadd VW)), Prop`. `addv_expr` is a record type.
````Record addv_expr := build_addv_expr {
V1 : VectorSpace;
V2 : VectorSpace;
Vadd : VectorSpace }.
````
The definition of `directv_def` is
````directv_def (VW : addv_expr) _ := dim (V1 VW) + dim (V2 VW) = dim (Vadd VW)
````
The final ingredient is that `fun V1 V2 => (build_addv_expr V1 V2 (V1 + V2))` is declared as a Canoncial Structure.
So what does Coq read when you write `directv (V + W)`? Well it parses this as notation for
````directv_def _ (Phantom _ (V + W))
````
The first parameter to Phantom is the type of `(V + W)` so we can quickly fill that in to get
````directv_def _ (Phantom VectorSpace (V + W))
````
`Phantom VectorSpace (V + W)` has type `phantom VectorSpace (V + W)`, but `directv_def` is expecting something of type `phantom _ (Vadd _)` so it tries to unify `(V + W)` with `(Vadd _)`. Because `Vadd` is a record projection, Coq tries to look up in its list of canonical structures to see if there are any declared whose `Vadd` field is of the form `(V + W)`. It says, "ahha! there is! I can use `build_addv_expr V W (V + W)`" (notice the intensional behaviour of canonical inference here). So Coq successfully unifies `(V + W)` with `(Vadd (build_addv_expr V W (V + W))`, and this forces the first parameter of directv_def:
````directv_def (build_addv_expr V W (V + W)) (Phantom VectorSpace (V + W))
````
And that is it for type inference. Later on this expression might be used, so it will start normalizing:
````dim (V1 (build_addv_expr V W (V + W))) + dim (V2 (build_addv_expr V W (V + W))) = dim (Vadd (build_addv_expr V W (V + W)))
````
and then to
````dim V + dim W = dim (V + W)
````
If you try to write something else like `directv 0` then the canonical structure inference will fail and you will get a (probably obtuse) type error.
This has been as simplified example. In reality, `directv` is much more complicated and allows one to write `directv (\sum_(0 <= i < n) V i)` to mean $\bigoplus_{i=0}^n V_i$ is a direct sum and accepts things like `directv 0` to mean a trivial direct sum.
Matita allows you to write unification hints directly without necessarily building canonical structures. I suspect doing this sort of intensional inference would be easier in such a system.
-
There is definitely a strong connection here, thanks. – Jacques Carette Jun 28 2010 at 21:22
Is there a formal definition of the intension/extension distinction, or even of "intension" and "extension" as separate terms?
In the examples discussed here, the difference is simply that there is a richer type of object A, carrying its own set of allowed operations and relations, that maps (maybe in partially-defined way) to a coarser type of object B. The coarser thing B could be, as in the examples posted, a "forgetting of structure" or "de-categorification" or "numerical evaluation" of A. Some of the allowed operations on A's will not work perfectly or unambiguously on B's per se due to the loss of structure. For example, replacing $(x-1)/(x-1)$ by $1$ is correct for (intensional, formal) rational functions but requires additional input (a domain) to be defined unambiguously for (extensional, numerical) rational functions. The possibly missing additional structure can be seen as a reinstatement of the information lost when passing from A to B, or rigidification data needed to disambiguate the operations on B.
From this point of view, intension seems to be just a specification of context A, and an extensional interpretation of such a context is a specification of a map from A to some other, usually less structured, context B. Is there more to this distinction as it appears in the philosophy or computer science literature?
-
After reading some of the answers, I don't think there exists such a difference.
Let me explain myself. Take Edgar's comment: we don't say $1$ converges, but we do say $\sum_{n=1}^\infty 2^{-n}$ converges. Strictly speaking, $1$ does not converge; we can't talk about the convergence of a number. It becomes a meaningful statement when we understand $1$ as a constant sequence. It could be said that $\sum_{n=1}^\infty 2^{-n}$ is not defined properly either, but we've accepted an infinite sum as a limit.
I think the same problem occurs here. After all, maths is all about formal objects. Once one starts to omit some of the details (probably for simplicity), for example, not specifying whether a polynomial should be thought of as an element of a ring, or as a function, mistakes might occur. I disagree with Tao on this subject: there are no semantic or functional interpretations of an object (or at least, there are no unique such interpretations). A polynomial can't be interpreted as a function. It is defined to be an element of $K[x]$. If we want to talk about a polynomial function, then we must state the domain, the codomain and construct the transformation from the variable to the polynomial interpreted by replacement of the variable. But then we are no longer talking about a polynomial.
Applied to Carette's example, one can think that before applying the theorem as a rewrite rule, we first have to define the function we identify by the expression, replace the expression by the function, and then apply the theorem to the function. And that's what we are doing (or should be doing), but for simplicity we omit these steps. It would be the same as if we instantiated $aa^{-1}=1$ as $00^{-1}=1$. We can't forget the requirements for the application of the rule.
-
2
Sometimes the requirements for intensional application of a theorem are not visible in the extensional proof, though. For example, try to find an explanation in a typical Calculus I text of how to apply the fundamental theorem of calculus to compute $\frac{d}{dx} \int^{x}_0 xy dy$. – Carl Mummert Jun 28 2010 at 14:16
http://math.stackexchange.com/questions/92831/how-can-i-prove-that-the-additive-group-of-rationals-is-not-isomorphic-to-a-dire
# How can I prove that the additive group of rationals is not isomorphic to a direct product of two nontrivial groups?
I am working through Paolo Aluffi's new GSM text on my own (self-study). On page 63, he asks the reader to
Prove that $\Bbb{Q}$ is not the direct product of two nontrivial groups.
For some context, this is an exercise following a section entitled "The category Grp". I am assuming that he means "is not isomorphic to the direct product of two nontrivial groups", and I can see two possible ways to proceed with this proof, but have been unsuccessful with either approach.
Approach 1: Show that the additive group of rationals has a property that is preserved by isomorphism that the direct product of two nontrivial groups does not have or vice-versa. This seems challenging unless I can significantly narrow down the properties that a direct product of two nontrivial groups that was isomorphic to $\Bbb{Q}$ would necessarily have.
Approach 2: Considering the section in which this question occurs, show that if $G$ and $H$ are nontrivial groups and $\Bbb{Q} \cong G \times H$, then there are homomorphisms $\varphi_{G}:\Bbb{Q} \rightarrow G$ and $\varphi_{H}:\Bbb{Q} \rightarrow H$ which do not factor, or do not factor uniquely, through the product $G \times H$. This would be a contradiction, as it would violate the universal property of the product $G \times H$ in the category Grp.
I would greatly appreciate suggestions on how to proceed further with either of these approaches or with alternate approaches.
-
6
For any $p,q\in \mathbb Q\setminus 0$, there are non-zero integers $n$ and $m$ such that $np = mq\neq 0$. That is not true for the product of two non-trivial groups. – Thomas Andrews Dec 19 '11 at 22:02
4
Is there more than one David Pincus, or do I have two of your papers sitting on my desk? – Asaf Karagila Dec 19 '11 at 22:05
IOW Thomas' hint goes together with your Approach #1: A direct product has a pair of non-trivial subgroups that intersects trivially whereas – Jyrki Lahtonen Dec 19 '11 at 22:05
Here is an approach: show that any two nontrivial subgroups of $\mathbb{Q}$ have nontrivial intersection. Seems very localized though, I would like to see a more general argument. – François G. Dorais Dec 19 '11 at 22:07
1
Isomorphisms are, by definition, homomorphism that have an inverse that is also a homomorphism. So, yes, isomorphism is, by definition, assumed to be a homomorphism. – Arturo Magidin Mar 30 '12 at 18:20
## 1 Answer
The endomorphism ring of a direct product is never a domain, yet the endomorphism ring of ℚ is a field.
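To unpack both halves briefly (a quick sketch): if $\mathbb{Q} \cong G \times H$ with both factors nontrivial (necessarily abelian), then the two projection idempotents $e_G(g,h)=(g,0)$ and $e_H(g,h)=(0,h)$ are nonzero endomorphisms with $$e_G \circ e_H = 0,$$ so the endomorphism ring has zero divisors and cannot be a domain. On the other hand, an additive endomorphism $\varphi$ of $\mathbb{Q}$ satisfies $$q\,\varphi(p/q)=\varphi(p)=p\,\varphi(1), \qquad\text{hence}\qquad \varphi(x)=\varphi(1)\,x,$$ and $\varphi\mapsto\varphi(1)$ identifies $\operatorname{End}(\mathbb{Q})$ with the field $\mathbb{Q}$.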
-
1
What is the endomorphism ring isomorphic to? To itself? I thought about the fact that $\mathbb Q$ cannot be isomorphic to a direct product of two rings (because one has non-trivial ideals and $\mathbb Q$ does not), but didn't know how to use that property. – Patrick Da Silva Dec 19 '11 at 22:11
@PatrickDaSilva, indeed, the endomorphism ring of the abelian group $\mathbb Q$ is isomorphic to ring $\mathbb Q$. – Mariano Suárez-Alvarez♦ Dec 19 '11 at 22:13
So my idea wasn't so bad after all. Thanks =) Perhaps I am not used to look at the endomorphism ring of a group to study its structure. =P – Patrick Da Silva Dec 19 '11 at 22:14
1
@Patrick: The simple way to establish this is to show that if you know what $1$ maps to, then you know where everything maps to, and that $1$ can map anywhere. $\mathbb{Q}$ is a "free torsionfree divisible group of rank 1". – Arturo Magidin Mar 30 '12 at 18:22
http://motls.blogspot.com/2012/07/poincare-disk.html?m=0
# The Reference Frame
## Monday, July 16, 2012
### Poincaré disk
Almost everyone knows the sphere. However, the fame of a close cousin of the spherical geometry, the hyperbolic geometry, is much more limited. How many people know what is the Poincaré disk, for example?
To use politically correct language: people are discriminating against geometries of mixed signature. Let's try to fix it.
The ordinary two-dimensional sphere may be defined as the set of all points in the flat three-dimensional Euclidean space whose coordinates obey\[
x_1^2+x_2^2+x_3^2 = 1.
\] We have set the radius to one. One of the three coordinates may be expressed in terms of the other two – up to the sign. The remaining surface – e.g. the surface of the Earth – is two-dimensional which means that it can be parameterized by two real coordinates, e.g. the longitude and the latitude.
On the surface, one may measure distances. The right way to measure the length of a path is to cut the path to many infinitesimal (infinitely short) pieces and to add their lengths. The length of the infinitesimal piece is determined by the metric. The metric of the sphere is invariant under the $SO(3)$ rotations. Locally, this group is isomorphic to $SU(2)$ which is also the same thing as $USp(2)$.
But what if we change a sign or two? Consider the equation\[
x_1^2+x_2^2-x_3^2=-1
\] With $-1$ on the right hand side, we get a two-sheeted hyperboloid. (We would get a one-sheeted one if there were a plus sign.)
Let's take one component of this manifold only. Does it have some symmetries similar to the $SO(3)$ rotational symmetry of the two-sphere we started with? If we only allow rotations that are also symmetries of the three-dimensional "environment" and if we assume this environment to be a flat Euclidean space that uses the Pythagorean theorem to measure distances, the answer is that the two-sheeted hyperboloid only has an $SO(2)$ symmetry: we may rotate it around the axis. That's a one-dimensional group isomorphic to $U(1)$.
But we have only modified one sign. That's not a big change; in some sense, we have only changed a radius to an imaginary value. Doesn't the hyperboloid have a larger, three-dimensional group of symmetries that would be as large as the group $SO(3)$? The answer is Yes. But we must allow transformations that don't preserve the distances in the parent three-dimensional Euclidean spacetime. Even more accurately, we must imagine that the parent three-dimensional spacetime is not Euclidean but Lorentzian, like in general relativity, and its distances are given by\[
ds^2 = dx_1^2+dx_2^2 - dx_3^2.
\] The signs defining the hyperboloid respect the relative signs from the metric above so the symmetries of the actual hyperboloid will include the whole $SO(2,1)$ Lorentz group of the original three-dimensional space – or spacetime, if you want to call it this way.
Such a two-sheeted hyperboloid may be thought of as the space of all allowed energy-momentum vectors of a massive particle in 2+1 dimensions, i.e. all vectors obeying\[
E^2 - p_x^2 - p_y^2 = m^2 \gt 0.
\] The Lorentz transformations, $SO(2,1)$, act on the vectors' coordinates in the usual way. When we talk about the single component of the two-sheeted hyperboloid as about a "geometry", we call it a "hyperbolic geometry". This concept should be viewed as another example of a non-Euclidean geometry besides the spherical geometry. Non-Euclidean geometries are similar to geometries of the flat Euclidean plane/space but they reject Euclid's axiom about the parallel line: it is no longer true that "there is exactly one straight line going through a given point that doesn't intersect another given straight line". For the spherical geometries, there is usually none (pairs of maximal circles such as two meridians always intersect, e.g. at the poles); for the hyperbolic geometries, there are infinitely many (the lines diverge from each other so there are many ways to adjust their directions so that they still don't intersect).
Is there something we should know about the hyperbolic geometry? How can we visualize it? Much like the sphere, the hyperbolic geometry has an intrinsic curvature so it is not isometric to a piece of the flat plane. Much like in the case of maps of the sphere, i.e. the Earth's surface, we have to choose a method to depict it. Some geometric quantities will be inevitably distorted.
One cute "compact" way to visualize the hyperbolic geometry is the Poincaré disk. Here is an animation of the Poincaré disk equipped with a uniform collection of Escher's batmen.
The hyperboloid had an infinite area – even when you adopt the Lorentzian signature for the metric – because one may "Lorentz boost" vectors indefinitely. Another related fact is that the group $SO(2,1)$ of the symmetries of the hyperbolic geometry is noncompact; if we define a group-invariant volume form on the group manifold, the volume of the group is infinite. It follows that there have to be infinitely many batmen living on the hyperbolic geometry.
As the name indicates, the Poincaré disk represents the hyperbolic geometry as a disk – it means the interior of a circle. But because there have to be infinitely many batmen, their density has to diverge in some regions. As you see in the animation above, the density of batmen diverges near the boundaries of the disk.
But much like in the case of angle-preserving maps (e.g. the stereographic projection), you see that all the internal angles of the batmen are preserved. The model of the hyperbolic geometry obviously doesn't preserve the areas (batmen near the boundary look smaller). And the Poincaré disk model doesn't make straight lines (geodesics) on the hyperboloid look straight here, either. (Another model, the Beltrami-Klein model or the Klein disk, does, but it doesn't preserve the angles.)
Can we reconstruct the metric on the original hyperboloid from the coordinates $x_1,x_2$ parameterizing the unit Poincaré disk? Yes, we can.\[
ds^2= 4\, \frac{dx_1^2+dx_2^2}{(1-x_1^2-x_2^2)^2}
\] It would be straightforward to add additional coordinates if you needed to do so.
Note that up to the factor of $4$ (which is a convention, an overall scale of the metric, but it is actually helpful to make the curvature radius equal to one) and up to the denominator (a pure scalar), this is nothing else than the metric on the flat plane. Because the metric on the Poincaré disk only differs from the metric on the underlying paper by a scalar, Weyl rescaling, it preserves the angles. The denominator makes it clear that as $x_1^2+x_2^2\to 1$, the proper distances (and areas) blow up.
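One quick numerical sanity check of this metric (a throwaway Python sketch, standard library only): along a radius it gives $ds = 2\,dt/(1-t^2)$, so the proper distance from the centre out to coordinate radius $r$ should equal $2\,{\rm artanh}\,r$, which indeed blows up as $r\to 1$.

```python
import math

def radial_distance(r, steps=200000):
    # integrate ds = 2 dt / (1 - t^2) from 0 to r with the midpoint rule
    h = r / steps
    return sum(2.0 / (1.0 - (h * (i + 0.5)) ** 2) * h for i in range(steps))

for r in (0.5, 0.9, 0.99, 0.999):
    print(r, radial_distance(r), 2 * math.atanh(r))   # the two numbers agree and diverge as r -> 1
```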
The animation shows some transformations that don't change the internal geometry of the hyperbolic geometry. They are elements of $SO(2,1)$. This group has three generators. The action of one of them is shown by the animation; the action of another one would look the same except that the batmen would be drifting in another, orthogonal direction; the action of the third generator is nothing else than the rotations of the disk which are isometries of the underlying paper, too.
In the case of the sphere, we noticed that $SU(2)\sim SO(3)$; the groups are locally isomorphic. This fact is related to the existence of spinors which have 2 complex (pseudoreal) components if we deal with the three-dimensional Euclidean space. Are there similar groups isomorphic to $SO(2,1)$? Yes, there are. In fact, there are at least two very important additional ways to write $SO(2,1)$.
Because we are talking about transformations preserving the angles, both of these alternative definitions of $SO(2,1)$ may be obtained as subgroups of $SL(2,\CC)$, the group of Möbius transformations. The cute old video below discusses the angle-preserving transformations of the plane.
All one-to-one angle-preserving transformations of the plane may be written down in terms of a simple function of a complex variable $z\in\CC$,\[
z\to z' = \frac{az+b}{cz+d}, \quad \{a,b,c,d\}\subseteq\CC.
\] For the transformation to be nonsingular, we require $ad-bc\neq 0$. In fact, whenever this determinant is nonzero, we may rescale $a,b,c,d$ by the same complex number to achieve $ad-bc=1$ without changing the function. So we may assume $ad-bc=1$ and the group of all transformations of this form is therefore $SL(2,\CC)$. Just to be sure, if you're annoyed by the nonlinear character of the function $z\to z'$, don't be annoyed. The variable $z$ may be represented simply as $u_1/u_2$, the ratio of two coordinates of a complex vector, and when the Möbius transformations are acting on $(u_1,u_2)$ in the ordinary linear way, they will be acting on $z=u_1/u_2$ in the nonlinear way depicted by the formula above.
There are four complex parameters underlying the transformation, $a,b,c,d$, but because we imposed one complex condition $ad-bc=1$, there are effectively three free complex parameters i.e. six real parameters in the Möbius group. But we're interested in the Poincaré disk. It means that we would like to restrict our focus on the Möbius transformations that map the disk onto itself. If we deal with the boundary i.e. $zz^*=1$, then we would like to have $z' z^{\prime *}=1$, too. How does this condition constrain the parameters $a,b,c,d$?
One may prove that this restricts the matrix to be inside a smaller group, $SU(1,1)$. That's a group of matrices $M$ obeying\[
M\cdot \diag (1,-1)\cdot M^\dagger = \diag (1,-1), \quad {\rm det}\,M = 1.
\] Note that up to the insertion of the diagonal matrix with the $\pm 1$ entries, this would be a condition for a unitary group. However, the extra diagonal matrix changes the signature so instead of a unitary group, we obtain a pseudounitary group. This group $SU(1,1)$ is the group of all angle-preserving, one-to-one transformations of the unit disk onto itself, and because we've seen that the unit disk may be viewed as an angle-preserving depiction of the two-sheeted hyperboloid, i.e. the hyperbolic geometry, it follows that this group must be isomorphic to the group of symmetries of the hyperboloid,\[
SU(1,1)\sim SO(2,1).
\] The isomorphism is valid locally. Note that both groups have three real parameters. And I won't spend too much time with it but there's another isomorphism of this kind we may derive from the Poincaré disk model. The disk is conformally equivalent to a half-plane and the group of Möbius transformations that preserve the half-plane (and its boundary, let's say the real axis) is nothing else than the group of Möbius transformations with real parameters $a,b,c,d$. So we also have\[
SU(1,1)\sim SO(2,1)\sim SL(2,\RR).
\] Both $SU(1,1)$ and $SL(2,\RR)$ may be easily visualized as the groups acting on the two-component spinors in 2+1 dimensions.
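A tiny numerical illustration of the disk-preserving half of the statement (a Python sketch; the particular numbers are arbitrary, and I use the standard way of writing an element of $SU(1,1)$, namely the matrix with rows $(a,b)$ and $(\bar b,\bar a)$ and $|a|^2-|b|^2=1$, acting by $z\mapsto (az+b)/(\bar b z+\bar a)$):

```python
import cmath, random

# an element of SU(1,1): rows (a, b) and (conj(b), conj(a)) with |a|^2 - |b|^2 = 1
b = 0.7 + 0.4j
a = cmath.sqrt(1 + abs(b) ** 2) * cmath.exp(0.3j)   # any overall phase works
assert abs(abs(a) ** 2 - abs(b) ** 2 - 1) < 1e-12

def moebius(z):
    return (a * z + b) / (b.conjugate() * z + a.conjugate())

for _ in range(5):
    z_boundary = cmath.exp(2j * cmath.pi * random.random())       # a point on the unit circle
    z_inside = 0.97 * random.random() * z_boundary                # a point strictly inside the disk
    print(abs(moebius(z_boundary)), abs(moebius(z_inside)) < 1)   # stays on the circle / inside
```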
In the case of the spherical geometry, we may construct Platonic polyhedra and various cute discrete subgroups of $SO(3)$, i.e. the group of isometries of an icosahedron (which is the same one as the group of isometries of a dodecahedron, the dual object to the icosahedron). Analogously, there are many interesting "polyhedra" and discrete subgroups of $SO(2,1)$, too. These mathematical facts were essential for Escher to be able to draw his batmen into the Poincaré disk, of course.
There are many things to be said about the Poincaré disk and its higher-dimensional generalizations. And these objects play a very important role in theoretical physics – in some sense, they are as important as the spheres themselves. The importance of the hyperbolic geometry in relativity (on-shell conditions for the momentum vectors) has already been mentioned. But there are many other applications. The world sheet description of string theory depends on conformal transformations which makes the appearance of similar structures omnipresent, too. The geometry of the moduli spaces of Riemann surfaces – starting from the torus – depends on groups such as $SL(2,\RR)$ which are also analyzed by tools similar to the mathematical games above.
Finally, the anti de Sitter space – the key geometric player of the AdS/CFT correspondence – may be considered as a higher-dimensional generalization of the hyperbolic geometry, too. (But in this case, there is a temporal dimension even "inside" the picture with the batmen.) That's why the Poincaré disk and various "cylinders" that generalize it are a faithful portrait of the AdS spaces. The regions near the boundary where the batmen get very dense become the usual "AdS boundary" which is where the conformal field theory, CFT, is defined.
But I didn't want to go beyond the elementary mathematical observations, so if you were intrigued by the previous two paragraphs, you will have to solve the mysteries yourself (or find the answers elsewhere in books or on the Internet).
Off-topic: Bad Universe
Tonight, I turned on my Czech Prima Cool TV half an hour too early, before the S05E17 episode of The Big Bang Theory. Whenever I do it, I can see things like the Simpsons, Futurama, Topgear, and others – in Czech dubbing. And they're often nice programs. But what I got tonight was... Phil Plait's Bad Universe ("Divoký vesmír" in Czech, meaning "Wild Universe"). Holy cow, this is an incredibly crappy would-be scientific program!
I have watched it for ten minutes or so but this period of time has saturated my adrenaline reservoir and depleted all my patience. First of all, it's sort of a crazy explosion of exhibitionism if someone looking like Phil Plait – the blogger behind Bad Astronomy – agrees to turn himself into a "TV star". But the content was much worse than that. He was showing some random combinations of scientific concepts – X-rays from outer space, global cooling, solar eruptions, random oxides etc. – as the culprits that have destroyed the trilobites. The program is meant to be catastrophic and in between the lines, the program clearly wants to fill the viewers' heads with many kinds of hypothetical catastrophes that may occur in a foreseeable future, too.
The unlimited combination of random "scientific ingredients" and contrived lab experiments pretending to emulate conditions in the past combined with a nearly complete absence of any explanation or argument or fair judgement or impartial and careful analysis or anything that actually makes any sense is what creates a program that decent people can't possibly like. I like to listen to scientific explanations about chemistry, biology, history, cosmology, geology, and other things – but this weird mixture of everything is just over the edge.
My rating for the program: pure shit.
Posted by Luboš Motl
Other texts on similar topics: mathematics
#### snail feedback (3)
reader Curious George said...
Formulas are unreadable, both in Chrome and Firefox.
reader Dilaton said...
Hm, at the moment I still think a Poincare disk is some kind of a frisbee ... ;-P
From scrolling through this article looks very nice and accessible to me; so I look forward to read and enjoy it and learn better tomorrow during my lunch break :-)
Are they serious about what they are saying in the "Bad Universe" or is it meant to be a (bad) parody ...? If it is serious it seems really bad as I learn from your description ...
reader Honza said...
Pretty cool, I'm already familiar with quadrics and elementary topological terms from my calculus class but not with groups and symmetries, is there an article where you explain those?
http://mathoverflow.net/questions/22735/analytic-functions-over-fields-other-than-real-or-complex-numbers/22744
## Analytic Functions over Fields other than Real or Complex Numbers
Let K denote either the field of real numbers or the field of complex numbers. An analytic function over $K^n$ is a function that can be represented locally by a convergent power series in n variables with coefficients in K.
My question is that can we take K to be other fields? It seems that such a field K should satisfy some criteria:
1. It is a metric space or at least a topological space.
2. It should be complete, in the sense that Cauchy nets converge.
3. The above 2 points probably force K to have cardinality at least the size of the continuum.
Can there be other K where a reasonable theory of analytic functions can be developed? Say for cardinalities larger than the continuum? Probably this invokes some model theory.
-
3
I believe there is a pretty rich theory of p-adic functions. – Steve Huntsman Apr 27 2010 at 16:11
3
You might also want to look at Serre's book "Lie groups and Lie algebras" (or something like this). He does the basic theory of Lie groups with respect to a complete absolute-valued field using "analytic" in place of "smooth" (this is acceptable by that old theorem of Montgomery-Zippin-???). Included is a correspondence between Lie algebras, formal group laws and "group chunks", which are like a stand-alone neighborhood on which the formal group 'tries its best' to be an actual group law. One of the central ideas is the use of the Baker-Campbell-Hausdorff formula in this general situation. – Sean Rostami Apr 27 2010 at 16:47
Just wanted to say that the continuum might have any regular uncountable cardinality you want. This is a result by Easton. So the size of $K$ isn't the point – Stefan Hoffelner Apr 27 2010 at 17:08
3
See the many answers. But note: locally given by power series is NOT what you want. There are many strange functions on the $p$-adics that are CONSTANT in some neighborhood of every point, but lacking connectedness one cannot conclude the function is constant. – Gerald Edgar Apr 27 2010 at 17:19
1
@Gerald: sometimes it is and sometimes it isn't. E.g., that definition of analytic function is good enough to do Lie theory: see Serre's Lie Algebras and Lie Groups. For geometric applications, yes, it's often better to have a more rigid collection of analytic functions: it depends on what you're trying to do. – Pete L. Clark Jun 1 2010 at 19:04
## 4 Answers
There are several rich theories of analysis over non-Archimedean fields. Neal Koblitz' book on $p$-adic analysis is a good introduction. Non-Archimedean Analysis by Bosch, Güntzer and Remmert is more encyclopedic. Berkovich's Spectral Theory and Analytic Geometry over Non-Archimedean Fields introduces his beautiful theory of analytic spaces allowing for a reasonable algebraic topological theory. In Goss's book Basic Structures of Function Field Arithmetic there is a good introduction to analysis in positive characteristic.
Your suggestion that this subject might have something to do with model theory is apt. As the above references show, the theory may be developed without model theory, but it has been studied intensively via model theory giving interesting results about quantifier elimination, uniformity across the $p$-adics, and establishing a basis for motivic integration. You might want to look at the paper by van den Dries and Denef, $p$-adic and real subanalytic sets. Ann. of Math. (2) 128 (1988), no. 1, 79--138.
-
But p-adics have zero characteristic, so why to call this "analysis in positive characteristic"? (or perhaps i just misread your answer...) – Qfwfq Apr 27 2010 at 17:39
2
My point was that analysis may be developed over (complete -- though even this restriction may be relaxed) valued fields of arbitrary characteristic. In Goss's book, the ground field is a completion of the algebraic closure of a field of formal Laurent series over a field of positive characteristic. Most of my other suggested references focus on p-adic fields, but even they allow for more general fields. – Thomas Scanlon Apr 27 2010 at 17:53
There is a perfectly working theory of analytic functions over the p-adics with lots of theorems. No model theory is needed (Neal Koblitz has a book about that, also Non-Archimedean Analysis by Bosch, Güntzer, Remmert for a dry treatise), but indeed we face (ultra)metric complete fields here, being uncountably infinite, just as you suggest.
Nonetheless, if you just need the "feel" of power series to model something abstractly, formal power series, cf. http://en.wikipedia.org/wiki/Formal_power_series, may be all you need. They behave in many ways like (convergent) power series, for example if you 'formally' wish to invert a differential operator, such computations - at least algebraically - may be given a more-or-less solid foundation in a formal power series ring.
All classical operations, e.g. taking derivatives etc, can be defined termwise, no problem. You can also plug formal power series into each other, but just if the constant coefficient is zero, sadly.
Finally, your two points do not really enforce large cardinality. A finite field can be equipped with the discrete metric; this makes it complete, so you could talk about convergent power series over this field - it just means that only finitely many coefficients can be non-zero, making it effectively a polynomial ring.
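For one tiny concrete taste of convergence in such an (ultra)metric setting, here is a plain-Python sketch (standard library only, no $p$-adic package): in the $p$-adic metric the geometric series $\sum_{n\ge 0}p^n$ converges to $1/(1-p)$, and the $p$-adic valuation of the difference from the $N$-th partial sum grows with $N$.

```python
from fractions import Fraction

def vp(x, p):
    """p-adic valuation of a nonzero rational x."""
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

p = 5
limit = Fraction(1, 1 - p)         # the p-adic sum of the geometric series
partial = Fraction(0)
for N in range(1, 9):
    partial += p ** (N - 1)        # partial sum 1 + p + ... + p^(N-1)
    print(N, vp(limit - partial, p))   # the valuation grows with N, i.e. p-adic convergence
```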
-
I was looking for something analytic. Wasn't really thinking about formal power series. – Colin Tan Apr 28 2010 at 14:26
If you don't mind skew fields, have a look at quaternionic analysis. Interestingly, many of the facts of complex analysis don't hold there at all! (In fact, all you need to do analysis is a Banach algebra or a complete ring with non-trivial unitary group.)
-
2
Unfortunately, the space of quaternionic-analytic functions of one quaternionic variable just happens to coincide with the space of 4-tuples of real-analytic functions of 4 real variables. – Qfwfq Apr 27 2010 at 17:41
This is far from satisfactory, but: Convergent power series with real coefficients make sense when interpreted as functions on the quaternions. This allows one to define a lot of old favorites like the exponential function on the quaternions. It's challenging, however, to determine the appropriate notion of "Riemann surface" for some of these, like the logarithm, since it has uncountably many branches. – Daniel Asimov Jun 1 2010 at 20:11
I think that the "right" generalization is "complete valued field" as used in:
Local Analytic Geometry (Shreeram Shankar Abhyankar), p. 3.
http://math.stackexchange.com/questions/13793/are-the-rationals-minus-a-point-homeomorphic-to-the-rationals/13796
# Are the rationals minus a point homeomorphic to the rationals?
A while ago I was dreaming up point-set topology exam questions, and this one came to mind:
Is $\mathbb Q\setminus \{0\}$ homeomorphic to $\mathbb Q$? (Where both sets have the subspace topology induced from the standard topology on $\mathbb R$.)
However, I couldn't figure this out at the time, and I'm curious to see whether anyone has a nice argument. I'm not even willing to take a guess as to whether they are or aren't homeomorphic.
-
## 3 Answers
A well-known theorem of Cantor says that any two countable dense linear orderings without endpoints are isomorphic as linear orders. So in particular there is an order-preserving bijection between $\mathbb{Q}$ and $\mathbb{Q} - \{0\}$. This bijection will be a homeomorphism if you give each space the order topology, which is the standard topology inherited from $\mathbb{R}$.
-
1
Thanks. Do you know a reference for Cantor's theorem? – Grumpy Parsnip Dec 10 '10 at 12:28
2
– Carl Mummert Dec 10 '10 at 12:37
That's not as hard as I thought it would be. Cool. – Grumpy Parsnip Dec 10 '10 at 12:49
Since the technique proving the homeomorphism is useful in many other situations, it may be worth adding some details to Carl's answer:
Suppose $A,B$ are two countable, dense linear orders without end points. We show that they are isomorphic by building an isomorphism $f:A\to B$. This is done by what we call a back-and-forth argument. Say that $A=\{a_n\mid n\in{\mathbb N}\}$ and $B=\{b_n\mid n\in{\mathbb N}\}$. We build $f$ by stages. At the end of stage $2n$ we have ensured that $a_n\in{\rm dom}(f)$, and at the end of stage $2n+1$, we have ensured that $b_n\in{\rm ran}(f)$.
The construction is simple. Begin by picking any $b\in B$ and letting $f(a_0)=b$. This completes stage 0.
Then we do stage 1: If $b=b_0$ we are done and go to stage 2. If $b<b_0$, we pick an $a\in A$ and set $f(a)=b_0$. Of course, since $f$ is to be an isomorphism, we better ensure that $a_0<a$. But this is trivial to accomplish, since $A$ has no endpoints. Similarly, if $b_0<b$, then we pick $a$ so that $a<a_0$.
In general, at stage $2n$ do the following: If $a_n$ is already in the domain we have built, we are done with this stage. Otherwise, if $a_n$ is larger than all the elements in the domain of $f$ so far, pick an element $c$ of $B$ larger than all the elements in the range of $f$ so far and set $f(a_n)=c$; this is possible since $B$ has no largest element. If $a_n$ is smaller than all elements in the current domain of $f$, pick $c$ in $B$ smaller than all elements in the current range of $f$, and set $f(a_n)=c$. Again, this is possible, since $B$ has no smallest element. Finally, if $a_n$ is between elements of the current domain of $f$, pick $d,e$ in the current domain of $f$ so $d<a_n<e$, $d$ is largest below $a_n$, and $e$ is smallest above $a_n$. Then pick in $B$ some $c$ between $f(d)$ and $f(e)$ and set $f(a_n)=c$. This is possible, since $B$ is dense in itself. This completes this stage.
At stage $2n+1$ we do the same, but now ensuring that $b_n$ is put in the range of $f$.
This construction gives us an isomorphism $f$ at the end: Even stages ensure the domain of $f$ is all of $A$, odd stages that the range is all of $B$. The construction is designed so for $\alpha,\beta$ in the domain of $f$, $\alpha<\beta$ iff $f(\alpha)<f(\beta)$. But this is precisely what it means to be an isomorphism.
The method of back-and-forth is very flexible. For example, it shows that any two countable random graphs are isomorphic. There are plenty of applications of this technique.
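For readers who like to see the bookkeeping, here is a small sketch (my own, not part of the original answer) of finitely many stages of this back-and-forth construction in Python, with A playing the role of $\mathbb{Q}$ and B the role of $\mathbb{Q}\setminus\{0\}$. Exact arithmetic is done with `fractions.Fraction`; the particular enumeration, the number of stages and the helper names are arbitrary choices, and of course a computer can only carry out finitely many stages of the real argument.
````
from fractions import Fraction

def enum_rationals(exclude_zero=False, max_den=10):
    """A finite prefix of an enumeration a_0, a_1, ... of Q (or of Q minus 0)."""
    out, seen = [], set()
    for q in range(1, max_den + 1):
        for p in range(-3 * q, 3 * q + 1):
            r = Fraction(p, q)
            if r not in seen and not (exclude_zero and r == 0):
                seen.add(r)
                out.append(r)
    return out

def pick_between(lo, hi, forbidden=None):
    """Some rational strictly between lo and hi (None means 'no bound'), avoiding one value."""
    if lo is None and hi is None:
        cand = Fraction(0)
    elif lo is None:
        cand = hi - 1          # possible because there is no smallest element
    elif hi is None:
        cand = lo + 1          # possible because there is no largest element
    else:
        cand = (lo + hi) / 2   # possible by density
    if cand == forbidden:      # e.g. 0 is not available in Q \ {0}
        if lo is not None:
            cand = (lo + cand) / 2
        elif hi is not None:
            cand = (cand + hi) / 2
        else:
            cand = Fraction(1)
    return cand

def stage(pairs, new_pt, into_domain, forbidden_partner=None):
    """One stage: extend the order-preserving partial map `pairs` so new_pt is matched."""
    dom = [a for a, b in pairs]
    ran = [b for a, b in pairs]
    here, there = (dom, ran) if into_domain else (ran, dom)
    if new_pt in here:
        return
    lo = max((there[i] for i, a in enumerate(here) if a < new_pt), default=None)
    hi = min((there[i] for i, a in enumerate(here) if a > new_pt), default=None)
    partner = pick_between(lo, hi, forbidden_partner)
    pairs.append((new_pt, partner) if into_domain else (partner, new_pt))

A = enum_rationals()                    # stands in for an enumeration of Q
B = enum_rationals(exclude_zero=True)   # stands in for an enumeration of Q \ {0}
f = []
for n in range(100):
    stage(f, A[n], into_domain=True, forbidden_partner=Fraction(0))   # stage 2n
    stage(f, B[n], into_domain=False)                                 # stage 2n+1

f.sort()
assert all(y1 < y2 for (_, y1), (_, y2) in zip(f, f[1:]))   # still order-preserving
print(len(f), "points matched so far; the partial map is order-preserving")
````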
-
I'm a big fan of this argument in general, but for some reason, I especially like packaging it as the Rasiowa-Sikorski Lemma applied to the set of partially defined order preserving maps from $A$ to $B$. – Jason DeVito Dec 11 '10 at 0:17
Thanks for giving the argument here for easy reference. – Grumpy Parsnip Dec 11 '10 at 12:44
Hi Andres, I hope you don't mind that I added a missing \$. This answer was linked to me to answer one of my own questions. – yunone May 5 '11 at 21:01
The ordered approach is fine. A classical theorem by Sierpinski says that all countable metric spaces without isolated points are homeomorphic. Q is such a space. It also implies $\mathbf{Q} \setminus \{0\}$ is homeomorphic to $\mathbf{Q}$ and $\mathbf{Q} \times \mathbf{Q}$ e.g., or any finite product for that matter. A proof is at the topology atlas, topology explained
-
1
Wow, that really defies intuition. – Grumpy Parsnip Dec 13 '10 at 8:50
Such characterizations are very nice, though. And in a way, it is intuitive, if you think longer about it. Other spaces that have such characterisations are R, the irrationals, the Cantor set, the Cantor set minus a point, to name the most famous ones.. – Henno Brandsma Dec 13 '10 at 21:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 74, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9484780430793762, "perplexity_flag": "head"}
|
http://unapologetic.wordpress.com/2012/01/11/gauss-law/?like=1&source=post_flair&_wpnonce=966197e62c
|
# The Unapologetic Mathematician
## Gauss’ Law
Rather than do any more messy integrals for special cases we will move to a more advanced fact about the electric field. We start with Coulomb’s law:
$\displaystyle E(r)=\frac{1}{4\pi\epsilon_0}\frac{q}{\lvert r\rvert^3}r$
and we replace our point charge $q$ with a charge distribution $\rho$ over some region of $\mathbb{R}^3$. This may be concentrated on some surfaces, or on curves, or at points, or even some combination of these; it doesn’t matter. What does matter is that we can write the amount contributed to the electric field at $r$ by the charge at a point $s$ as
$\displaystyle dE(r)=\frac{1}{4\pi\epsilon_0}\frac{\rho(s)}{\lvert r-s\rvert^3}(r-s)d^3s$
So to get the whole electric field, we integrate over all of space!
$\displaystyle E(r)=\frac{1}{4\pi\epsilon_0}\int\limits_{\mathbb{R}^3}\frac{\rho(s)}{\lvert r-s\rvert^3}(r-s)d^3s$
Now we want to take the divergence of each side with respect to $r$. On the right we can pull the divergence inside the integral, since the integral is over $s$ rather than $r$. But we’ve still got a hangup.
Let’s consider this divergence:
$\displaystyle\nabla\cdot\left(\frac{r}{\lvert r\rvert^3}\right)$
Away from $r=0$ this is pretty straightforward to calculate. In fact, you can do it by hand with partial derivatives, but I know a sneakier way to see it.
If you remember our nontrivial homology classes, this is closely related to the one we built on $\mathbb{R}^3$ — the case where $n=2$. In that case we got a $2$-form, not a vector field, but remember that we’re working in our standard $\mathbb{R}^3$ with the standard metric, which lets us use the Hodge star to flip a $2$-form into a $1$-form, and a $1$-form into a vector field! The result is exactly the field we’re taking the divergence of; and luckily enough the divergence of this vector field is exactly what corresponds to the exterior derivative on the $2$-form, which we spent so much time proving was zero in the first place!
So this divergence is automatically zero for any $r\neq0$, while at zero it’s not really well-defined. Still, in the best tradition of physicists we’ll fail the math and calculate anyway; what if it was well-defined, enough to take the integral inside the unit sphere at least? Then the divergence theorem tells us that the integral of the divergence through the ball is the same as the integral of the vector field itself through the surface of the sphere:
$\displaystyle\int\limits_B\nabla\cdot\left(\frac{r}{\lvert r\rvert^3}\right)=\int\limits_{S^2}\left(\frac{r}{\lvert r\rvert^3}\right)\cdot dS=\int\limits_{S^2}r\cdot dS=4\pi$
since the field is just the unit radial vector field on the sphere, which integrates to give the surface area of the sphere: $4\pi$. Remember that the fact that this is not zero is exactly why we said the $2$-form cannot be exact.
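As a quick sanity check of both claims (zero divergence away from the origin, and total flux $4\pi$ through the unit sphere), here is a short symbolic computation; this is my own addition, and it assumes SymPy is available.
````
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.Matrix([x, y, z])
field = r / (r.dot(r)) ** sp.Rational(3, 2)           # the vector field r / |r|^3

divergence = sum(sp.diff(field[i], v) for i, v in enumerate((x, y, z)))
print(sp.simplify(divergence))                         # 0, valid away from the origin

# On the unit sphere the field is the outward unit normal, so its flux
# is just the surface area, computed here in spherical coordinates.
theta, phi = sp.symbols('theta phi', nonnegative=True)
print(sp.integrate(sp.sin(theta), (theta, 0, sp.pi), (phi, 0, 2 * sp.pi)))   # 4*pi
````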
So what we’re saying is that this divergence doesn’t really work in the way we usually think of it, but we can pretend it’s something that integrates to give us $4\pi$ whenever our region of integration contains the point $r=0$. We’ll call this something $4\pi\delta(r)$, where the $\delta$ is known as the “Dirac delta-function”, despite not actually being a function. Incidentally, it’s actually very closely related to the Kronecker delta
So anyway, that means we can calculate
$\displaystyle\nabla\cdot E(r)=\frac{1}{4\pi\epsilon_0}\int\rho(s)\nabla\cdot\left(\frac{r-s}{\lvert r-s\rvert^3}\right)d^3s=\frac{1}{4\pi\epsilon_0}\int\rho(s)4\pi\delta(r-s)d^3s$
This integrand is zero wherever $r\neq s$, so the only point that can contribute at all is $s=r$, where the charge density is $\rho(r)$. We may as well consider it a constant and pull it outside the integral:
$\displaystyle\nabla\cdot E(r)=\frac{\rho(r)}{\epsilon_0}\int\delta(r-s)d^3s=\frac{\rho(r)}{\epsilon_0}$
where we have integrated away the delta function to get $1$. Notice how this is like we usually use the Kronecker delta to sum over one variable and only get a nonzero term where it equals the set value of the other variable.
The result is known as Gauss’ law:
$\displaystyle\nabla\cdot E(r)=\frac{\rho(r)}{\epsilon_0}$
and, incidentally, shows why we wrote the proportionality constant the way we did when defining Coulomb’s law. The meaning is that the divergence of the electric field at a point is proportional to the amount of charge distributed at that point, and the constant of proportionality is exactly $\frac{1}{\epsilon_0}$.
If we integrate both sides over some region $U\subseteq\mathbb{R}^3$ we can rewrite the law in “integral form”:
$\displaystyle\int_U\frac{\rho(r)}{\epsilon_0}d^3r=\int_U\nabla\cdot Ed^3r=\int_{\partial U}E\cdot dS$
That is: the outward flow of the electric field through a closed surface is equal to the total charge contained within the surface, divided by $\epsilon_0$. The second step here is, of course, the divergence theorem, but this is such a popular application that people often call this “Gauss’ theorem”. Of course, there are two very different statements here: one is the physical identification of electrical divergence with charge distribution, and the other is the geometric special case of Stokes’ theorem. Properly speaking, only the first is named for Gauss.
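And here is a purely numerical check of the integral form, again my own addition (assuming NumPy is available): put a unit point charge anywhere inside the unit sphere, drop the constant $\frac{1}{4\pi\epsilon_0}$, and the flux of $\frac{r-s}{\lvert r-s\rvert^3}$ through the sphere comes out as $4\pi$ no matter where the charge sits.
````
import numpy as np

p = np.array([0.3, -0.2, 0.1])        # charge location, anywhere inside the unit sphere
n_t, n_p = 400, 400                   # midpoint grid in (theta, phi)
theta = (np.arange(n_t) + 0.5) * np.pi / n_t
phi = (np.arange(n_p) + 0.5) * 2 * np.pi / n_p
T, P = np.meshgrid(theta, phi, indexing='ij')

normal = np.stack([np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)], axis=-1)
r = normal                            # points on the unit sphere coincide with the normals
d = r - p
E = d / np.linalg.norm(d, axis=-1, keepdims=True) ** 3   # field with the prefactor set to 1

dA = np.sin(T) * (np.pi / n_t) * (2 * np.pi / n_p)       # area element
flux = np.sum(np.einsum('ijk,ijk->ij', E, normal) * dA)
print(flux, 4 * np.pi)                # the two agree to several decimal places
````
Moving `p` around inside the sphere leaves the flux unchanged, while moving it outside the sphere sends the flux to zero, which is exactly the content of the law.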
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 38, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9376370310783386, "perplexity_flag": "head"}
|
http://quant.stackexchange.com/questions/1027/how-are-correlation-and-cointegration-related/1030
|
How are correlation and cointegration related?
In what ways (and under what circumstances) are correlation and cointegration related, if at all? One difference is that one usually thinks of correlation in terms of returns and cointegration in terms of price. Another issue is the different measures of correlation (Pearson, Spearman, distance/Brownian) and cointegration (Engle/Granger and Phillips/Ouliaris).
-
So, my question is, does anyone know how to generate correlated prices? In order to generate correlated time series you should use copula approach. – user2623 Jun 30 '12 at 8:31
@Dr.Mike If you want to ask a question, you need to click the Ask Question link in the top-right corner and post it there. – chrisaycock♦ Jun 30 '12 at 12:52
6 Answers
This isn't really an answer, but it's too long to add as a comment.
I've always had a real problem with the correlation/covariance of price. To me, it means nothing. I realize that it gets used (abused) in many contexts, but I just don't get anything out of it (over time, price has to generally go up, go down, or go sideways, so aren't all prices "correlated"?).
On the flip side, correlation/covariance of returns makes sense. You're dealing with random series, not integrated random series.
For example, below is the code required to generate two price series that have correlated returns.
A typical plot is shown below. In general, when the red series goes up, the blue series is likely to go up. If you run this code over and over, you'll get a feel for "correlated returns".
```` library(MASS)
#The input data
numpoi <- 1000 #Number of points to generate
meax <- 0.0002 #Mean for x
stax <- 0.010 #Standard deviation for x
meay <- 0.0002 #Mean for y
stay <- 0.005 #Standard deviation for y
corxy <- 0.8 #Correlation coeficient for xy
#Build the covariance matrix and generate the correlated random results
(covmat <- matrix(c(stax^2, corxy*stax*stay, corxy*stax*stay, stay^2), nrow=2))
res <- mvrnorm(numpoi, c(meax, meay), covmat)
plot(res[,1], res[,2])
#Calculate the stats of res[] so they can be checked with the input data
mean(res[,1])
sd(res[,1])
mean(res[,2])
sd(res[,2])
cor(res[,1], res[,2])
#Plot the two price series that have correlated returns
plot(exp(cumsum(res[,1])), main="Two Price Series with Correlated Returns", ylab="Price", type="l", col="red")
lines(exp(cumsum(res[,2])), col="blue")
````
If I try to generate correlated prices (not returns), I'm stumped. The only techniques that I am aware of deal with random normally distributed inputs, not integrated inputs.
So, my question is, does anyone know how to generate correlated prices?
I'm out of time, so I'll have to add my cointegration comments later.
Edit 1 (04/24/2011) ================================================
The above deals with the correlation of returns, but as implied in the original question, in the real world it looks like correlation of prices is a more important issue. After all, even if the returns are correlated, if the two price series drift apart over time, my pairs trade is going to screw me. That's where co-integration comes in.
When I look up "co-integration":
http://en.wikipedia.org/wiki/Cointegration
I get something like:
"....If two or more series are individually integrated (in the time series sense) but some linear combination of them has a lower order of integration, then the series are said to be cointegrated...."
What does that mean?
I need some code so I can screw around with things to make that definition meaningful. Here's my stab at a very simple version of co-integration. I'll use the same input data as in the code above.
````#The input data
numpoi <- 1000 #Number of data points
meax <- 0.0002 #Mean for x
stax <- 0.0100 #Standard deviation for x
meay <- 0.0002 #Mean for y
stay <- 0.0050 #Standard deviation for y
coex <- 0.0200 #Co-integration coefficient for x
coey <- 0.0200 #Co-integration coefficient for y
#Generate the noise terms for x and y
ranx <- rnorm(numpoi, mean=meax, sd=stax) #White noise for x
rany <- rnorm(numpoi, mean=meay, sd=stay) #White noise for y
#Generate the co-integrated series x and y
x <- numeric(numpoi)
y <- numeric(numpoi)
x[1] <- 0
y[1] <- 0
for (i in 2:numpoi) {
x[i] <- x[i-1] + (coex * (y[i-1] - x[i-1])) + ranx[i-1]
y[i] <- y[i-1] + (coey * (x[i-1] - y[i-1])) + rany[i-1]
}
#Plot x and y as prices
ylim <- range(exp(x), exp(y))
plot(exp(x), ylim=ylim, type="l", main=paste("Co-integrated Pair (coex=",coex,", coey=",coey,")", sep=""), ylab="Price", col="red")
lines(exp(y), col="blue")
legend("bottomleft", c("exp(x)", "exp(y)"), lty=c(1, 1), col=c("red", "blue"), bg="white")
#Calculate the correlation of the returns.
#Notice that for reasonable coex and coey values,
#the correlation of dx and dy is dominated by
#the spurious correlation of ranx and rany
dx <- diff(x)
dy <- diff(y)
plot(dx, dy)
cor(dx, dy)
cor(ranx, rany)
````
Notice above, that the "co-integration term" for x and y shows up inside the "for loop":
````x[i] <- x[i-1] + (coex * (y[i-1] - x[i-1])) + ranx[i-1]
y[i] <- y[i-1] + (coey * (x[i-1] - y[i-1])) + rany[i-1]
````
A positive `coex` determines how fast `x` will try to reduce the spread with `y`. Likewise, a positive `coey` determines how fast `y` will try to reduce the spread with `x`. You can tweak these values to generate all sorts of plots to see how those co-integration terms `(y[i-1] - x[i-1])` and `(x[i-1] - y[i-1])` work.
After you've played with this a while, notice that it doesn't really answer the correlation of prices issue. It replaces it. So, am I now off-the-hook for the correlation of prices issue?
=========================================================
Obviously, now it's time to put the two concepts together to get a model that is in the ballpark with pairs trading. Below is the code:
````library(MASS)
#The input data
numpoi <- 1000 #Number of data points
meax <- 0.0002 #Mean for x
stax <- 0.0100 #Standard deviation for x
meay <- 0.0002 #Mean for y
stay <- 0.0050 #Standard deviation for y
coex <- 0.0200 #Co-integration coefficient for x
coey <- 0.0200 #Co-integration coefficient for y
corxy <- 0.800 #Correlation coeficient for xy
#Build the covariance matrix and generate the correlated random results
(covmat <- matrix(c(stax^2, corxy*stax*stay, corxy*stax*stay, stay^2), nrow=2))
res <- mvrnorm(numpoi, c(meax, meay), covmat)
#Generate the co-integrated series x and y
x <- numeric(numpoi)
y <- numeric(numpoi)
x[1] <- 0
y[1] <- 0
for (i in 2:numpoi) {
x[i] <- x[i-1] + (coex * (y[i-1] - x[i-1])) + res[i-1, 1]
y[i] <- y[i-1] + (coey * (x[i-1] - y[i-1])) + res[i-1, 2]
}
#Plot x and y as prices
ylim <- range(exp(x), exp(y))
plot(exp(x), ylim=ylim, type="l", main=paste("Co-integrated Pair with Correlated Returns (coex=",coex,", coey=",coey,")", sep=""), ylab="Price", col="red")
lines(exp(y), col="blue")
legend("bottomleft", c("exp(x)", "exp(y)"), lty=c(1, 1), col=c("red", "blue"), bg="white")
#Calculate the correlation of the returns.
#Notice that for reasonable coex and coey values,
#the correlation of dx and dy is dominated by
#the correlation of res[,1] and res[,2]
dx <- diff(x)
dy <- diff(y)
plot(dx, dy)
cor(dx, dy)
cor(res[, 1], res[, 2])
````
You can play around with the parameters and generate all sorts of combinations. Notice that even though these series consistently reduce the spread, you can't predict how or when the spread will be reduced. That's just one reason why pairs-trading is so much fun. The bottom line is, to get in the ballpark with modeling pairs-trading, it requires both correlated returns and co-integration.
A typical example. Exxon (XOM) versus Chevron (CVX), where the above model applies if some additional terms are added.
http://finance.yahoo.com/q/bc?s=XOM&t=5y&l=on&z=l&q=l&c=cvx
So, to answer your question (as just my opinion), price correlation is typically used/abused as an attempt to deal with the longer term divergence/closeness of the paths of the series, when co-integration is what should be used. It is the co-integration terms that limit the drift between the series. Price correlation has no real meaning. Correlation of the returns of the series determine the short term similarity of the series.
I did this in a hurry, so if anyone sees an error, don't be afraid to point it out.
-
6
answer with code is so good.... – nicolas Jul 9 '11 at 11:54
Correlation is a much more widely used concept and it has many more "informal" meanings. If we have only two random variables $X$ and $Y$ then correlation is simply a measure of linear dependence between the two variables:
$$corr(X,Y)=\frac{cov(X,Y)}{\sqrt{var(X)var(Y)}}=\frac{EXY-EX\cdot EY}{\sqrt{var(X)var(Y)}}$$
If correlation is -1 or 1 then the two variables are perfectly linearly related, i.e. there exists real numbers $a,b,c$ for which
$$P(aX+bY=c)=1$$
The correlation is called a measure of linear dependence since if we standardize $X$ and $Y$ (subtract the means and divide by standard deviations) then the correlation is the solution for the following
$$cor(X,Y)=argmin_a(E(Y-aX)^2)$$
So if $cor(X,Y)=0$ then you can say that there is no way to explain $Y$ using a linear function of $X$. For jointly Gaussian random variables this has a stronger implication: if the correlation is zero, then the variables are independent.
When we have time series we have more than two variables. Each time series is a sequence of random variables $\{X_t,t=1,2,...\}$. Naturally we can calculate the correlation between any two time periods, $corr(X_t,X_s)$. This gives us a lot of correlations for one time series, which are characterised by the correlation function $r(t,s)=corr(X_t,X_s)$. Now if this function depends only on the difference $t-s$, i.e. $r(t,s)=r(t-s)$, then the time series $X_t$ is called stationary (to be precise this is called weak stationarity, which also requires that $EX_t$ be constant; strictly speaking the definition of stationarity involves the covariance, not the correlation).
Now if we introduce another time series $Y_t$ we can again define a lot of correlations $corr(X_t,Y_s)$, and again we can define stationarity. For stationary series $(X_t,Y_t)$ the correlation $corr(X_t,Y_t)$ does not depend on $t$, so as in the simple two-random-variable case we can talk about a linear relationship between $X_t$ and $Y_t$. Note that in the time series case we still have a lot of correlations left: $corr(X_t,Y_{t+h})$, $h=...,-1,0,1,...$, which can be interpreted as measures of linearity between past and future values of the time series $X_t$ and $Y_t$.
And only now we can introduce the concept of cointegration. The stationary time-series are called integrated of order 0. If the difference of the time series is stationary then such time series are called integrated of order 1.
Integrated time-series are non-stationary, so for example $corr(X_t,Y_t)$ for integrated time series depends on $t$, which is not so nice. If, for example, we have highly correlated stationary processes, then knowing that the correlation does not depend on time we can forecast one process with high accuracy knowing the values of the other. This does not hold for integrated time-series. Of course we can difference the integrated time series to get stationary time series, but then we can only investigate so-called short-term dynamics, i.e. what happens now or in the near future (here we measure time in numbers of time periods; short-term usually means 1-10 time periods).
Now finally we can introduce the definition of cointegration. The integrated (of order 1) time-series $X_t$ and $Y_t$ are called cointegrated if some linear combination $aX_t+bY_t$ is stationary. Since stationarity is preserved if we multiply a time series by a constant, we get that $Y_t-\frac{a}{-b}X_t$ is stationary, which in turn gives us
$$Y_t=cX_t+\varepsilon_t$$
where $\varepsilon_t$ is a stationary time-series and $c$ is an appropriate constant. This is again useful for predicting what happens to $Y_t$ if we know $X_t$, provided $\varepsilon_t$ does not vary too much. Note that this relationship holds for all $t$, and $c$ can be estimated from the appropriate data since it does not depend on $t$.
The confusion between correlation and cointegration might arise from the fact that for stationary time series $Y_t$ and $X_t$ the exact same relationship holds:
$$Y_t=cX_t+\varepsilon_t$$
where $\varepsilon_t$ is a stationary process. Furthermore, if we try to estimate $c$ from the data, it can be shown that as we increase the number of data points indefinitely, the estimate for $c$ converges to some meaningful number, which, for example, in the case where $X_t$ and $Y_t$ are zero-mean, unit-variance time series, is exactly the correlation $corr(X_t,Y_t)$ (which for stationary series is a constant, as discussed previously).
Note that this will not hold for cointegrated time series: although $c$ can be estimated, in general it will not be the correlation $corr(X_t,Y_t)$, since $X_t$ and $Y_t$ are integrated (a special case is discussed in the link below). The story becomes worse for integrated time series that are not cointegrated: then the same estimate for $c$, which has nice properties and interpretations under stationarity and cointegration, becomes meaningless. For more mathematical details on what exactly happens in this case, see my post on stats.SE.
I hope this answer is useful. I intentionally sacrificed some mathematical strictness for better clarity, hopefully not too much.
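To make the contrast concrete, here is a small simulation I am adding (it is not part of the original answer, and it assumes `numpy` and `statsmodels` are installed): two random walks with highly correlated increments but no cointegration, versus a genuinely cointegrated pair. The return correlation is computed with `np.corrcoef`, and the cointegration test is the Engle-Granger-style test implemented by `statsmodels.tsa.stattools.coint`.
````
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)
n = 2000

# Case 1: increments are strongly correlated, but the levels drift apart (no cointegration).
cov = [[1.0, 0.8], [0.8, 1.0]]
dx, dy = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
x1, y1 = np.cumsum(dx), np.cumsum(dy)

# Case 2: a cointegrated pair, y = 0.5 * x + stationary noise.
x2 = np.cumsum(rng.normal(size=n))
y2 = 0.5 * x2 + rng.normal(size=n)

for name, x, y in [("correlated returns, no cointegration", x1, y1),
                   ("cointegrated pair", x2, y2)]:
    ret_corr = np.corrcoef(np.diff(x), np.diff(y))[0, 1]
    pvalue = coint(y, x)[1]           # small p-value = reject "no cointegration"
    print(f"{name:38s} return corr = {ret_corr:5.2f}   coint p-value = {pvalue:.3f}")
````
Typically the first pair shows a return correlation near 0.8 together with a large cointegration p-value, and the second pair shows the opposite pattern, which is exactly the distinction drawn above.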
-
Would you happen to know of a Khan-academy-esque video that shows derivation of the above? – Chloe Mar 6 '12 at 18:56
Derivation of exactly what? Nothing is derived in this post, only the definitions are given. As this post is purely my arrangement of known facts, I do not know about videos where somebody is talking about this. – mpiktas Mar 7 '12 at 2:01
From Quantitative Trading by Ernie Chan :
"Correlation between two price series actually refers to the correlations of their returns over some time horizon (for concreteness, let's say a day). If two stocks are positively correlated, there is a good chance that their prices will move in the same direction most days. However, having a positive correlation does not say anything about the long-term behavior of the two stocks. In particular, it doesn't guarantee that the stock prices will not grow farther and farther apart in the long run even if they do move in the same direction most days. However, if two stocks were cointegrated and remain so in the future, their prices (weighted appropriately) will be unlikely to diverge. Yet their daily (or weekly, or any other time horizon) returns may be quite uncorrelated."
-
4
Correlation between two price series does not actually refer to the correlations of their returns. You can calculate the correlation of two price series, but this is not how people tend to think of correlation between instruments and it's what I meant by "a stumbling block". I would agree that correlation between first differences of a series does not tell you anything about cointegration of their levels, but it doesn't help you understand how the two concepts relate to one another. – Joshua Ulrich Apr 22 '11 at 14:01
Before I try to answer your question, we need to establish a difference between the things one might want to analyse. It is true that before modern time-series methodologies were developed, researchers used "correlation" between prices as a means of analysis. However, since a price (at a specific moment in time) is one value, it makes little sense to compare 2 prices with each other using "correlation" (although there are some attempts: Robinson (2006) http://www.cemmap.ac.uk/wps/cwp107.pdf ). And as has already been pointed out, most people above mention correlation in the context of returns and such.
Most of the time we are interested not in the price but its movement! (i.e. it's a relative/dynamic concept which involves TIME).
In this "time-series" context, co-integration is the right tool to measure relationship between MOVEMENTS in prices.
Let me try to answer your question concretely by elaborating a bit on the Johansen co-integration methodology as an illustration. The maximum likelihood estimation is a function of the deterministic term and the stationary effects. In other words, we consider the multivariate linear regressions (1) and (2). With basic mathematical transformations it is easy to prove that the Johansen test statistics are directly linked to the angle (mean over t) between the vectors of residuals in (1) and (2), ut and vt respectively. (If you care to read the full proof, feel free to contact me.)
In other words, the smaller the angle(t) between the two "error-vectors" ut and vt, the more "connected" or cointegrated the price movements of time-series 1 and 2 are. Which is quite intuitive, actually...
Also, with some background knowledge, I think this illustration puts cointegration in contrast to correlation
I hope this was not too confusing (without the whole proof) and helped to answer your question to some degree.
-
Ow, I see as a new user I can not add images... my apologies (will try to add it later when my Reputation score goes up I guess) – Val Jul 6 '11 at 15:47
• Correlation between two financial time series should be calculated as correlation of the returns (or log returns for prices).
• There is absolutely no relationship between correlation of the returns and cointegration. Two correlated time series can be cointegrated or not cointegrated. Two cointegrated time series can be correlated or not correlated.
Everything else is spurious :)
Edit: Sorry for this quick answer to an actually good question. They are definitely 2 different concepts. But it's true that they can be confused, because they seem to capture the same kind of thing. For instance, one can build a portfolio allocation algorithm taking into account the correlation (VaR, CAPM), or the cointegration. In these examples, correlation and cointegration are 2 measures of risk/diversification. But they do not mean the same thing. Taking only one of the 2 may lead to overestimating or underestimating the risk of the portfolio.
I would say that in this example, correlation is more about the shorter term (risk over one day, one week, ...), whereas cointegration is more about the longer term: whether different assets move together in the long run. A good algorithm should use both approaches to estimate the diversification correctly. See http://www.carolalexander.org/publish/download/JournalArticles/PDFs/RIBF_16_65-90.pdf
Another example are markets that close at different time. If you look only at correlation of daily returns, you may find a weak correlation even though the markets move together.
-
1
As I said in my comment to @NYCBrit, "I would agree that correlation between first differences of a series does not tell you anything about cointegration of their levels, but it doesn't help you understand how the two concepts relate to one another." In short, this doesn't help answer the question. – Joshua Ulrich Apr 25 '11 at 13:53
• Correlation is a property of collections of observations.
• Cointegration is a property of time series.
The important difference is that temporal observations have one neighbour to their left and one to their right. Collections are like a set — no implicit "neighbour" relationships.
Moving average is an inappropriate statistic to apply to lab experiments or phone survey data. It is appropriate in the analysis of time series.
-
Interesting point to make. If we are comparing two time series, correlation tell us something about the complete time series as a whole, whereas cointegration tells us something about the individual matching points. – Gravitas Sep 18 '11 at 22:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 53, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9238115549087524, "perplexity_flag": "middle"}
|
http://unapologetic.wordpress.com/2012/08/08/special-linear-lie-algebras/?like=1&_wpnonce=dee27eefe3
|
# The Unapologetic Mathematician
## Special Linear Lie Algebras
More examples of Lie algebras! Today, an important family of linear Lie algebras.
Take a vector space $V$ with dimension $\mathrm{dim}(V)=l+1$ and start with $\mathfrak{gl}(V)$. Inside this, we consider the subalgebra of endomorphisms whose trace is zero, which we write $\mathfrak{sl}(V)$ and call the “special linear Lie algebra”. This is a subspace, since the trace is a linear functional on the space of endomorphisms:
$\displaystyle\mathrm{Tr}(ax+by)=a\mathrm{Tr}(x)+b\mathrm{Tr}(y)$
so if two endomorphisms have trace zero then so do all their linear combinations. It’s a subalgebra by using the “cyclic” property of the trace:
$\displaystyle\mathrm{Tr}(xy)=\mathrm{Tr}(yx)$
Note that this does not mean that endomorphisms can be arbitrarily rearranged inside the trace, which is a common mistake after seeing this formula. Anyway, this implies that
$\displaystyle\begin{aligned}\mathrm{Tr}\left([x,y]\right)&=\mathrm{Tr}(xy-yx)\\&=\mathrm{Tr}(xy)-\mathrm{Tr}(yx)=0\end{aligned}$
so actually not only is the bracket of two endomorphisms in $\mathfrak{sl}(V)$ back in the subspace, the bracket of any two endomorphisms of $\mathfrak{gl}(V)$ lands in $\mathfrak{sl}(V)$. In other words: $\left[\mathfrak{gl}(V),\mathfrak{gl}(V)\right]\subseteq\mathfrak{sl}(V)$, and in fact this containment is an equality, since every element of the basis exhibited below is itself a bracket: $e_{ij}=[e_{ii},e_{ij}]$ for $i\neq j$, and $h_i=[e_{i,i+1},e_{i+1,i}]$.
Choosing a basis, we will write the algebra as $\mathfrak{sl}(l+1,\mathbb{F})$. It should be clear that the dimension is $(l+1)^2-1$, since this is the kernel of a single linear functional on the $(l+1)^2$-dimensional $\mathfrak{gl}(l+1,\mathbb{F})$, but let's exhibit a basis anyway. All the basic matrices $e_{ij}$ with $i\neq j$ are traceless, so they're all in $\mathfrak{sl}(l+1,\mathbb{F})$. Along the diagonal, $\mathrm{Tr}(e_{ii})=1$, so we need linear combinations that cancel each other out. It's particularly convenient to define
$\displaystyle h_i=e_{ii}-e_{i+1,i+1}$
So we’ve got the $(l+1)^2$ basic matrices, but we take away the $l+1$ along the diagonal. Then we add back the $l$ new matrices $h_i$, getting $(l+1)^2-1$ matrices in our standard basis for $\mathfrak{sl}(l+1,\mathbb{F})$, verifying the dimension.
We sometimes refer to the isomorphism class of $\mathfrak{sl}(l+1,\mathbb{F})$ as $A_l$. Because reasons.
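A quick numerical illustration of the two facts above (this is my addition, not part of the post, and it assumes NumPy is available): the trace of any bracket vanishes, and the standard basis just described really does consist of $(l+1)^2-1$ linearly independent matrices.
````
import numpy as np

rng = np.random.default_rng(1)
l = 3
n = l + 1

x, y = rng.normal(size=(n, n)), rng.normal(size=(n, n))
bracket = x @ y - y @ x
print(np.isclose(np.trace(bracket), 0.0))        # True: [x, y] always lands in sl(n)

def e(i, j):
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

basis = [e(i, j) for i in range(n) for j in range(n) if i != j]      # off-diagonal e_ij
basis += [e(i, i) - e(i + 1, i + 1) for i in range(l)]               # the h_i
print(len(basis), (l + 1) ** 2 - 1)              # both are 15 when l = 3

# linear independence: flatten each matrix into a vector and check the rank
rank = np.linalg.matrix_rank(np.stack([b.flatten() for b in basis]))
print(rank == len(basis))                        # True
````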
Posted by John Armstrong | Algebra, Lie Algebras
## 5 Comments »
4. Thanks for mentioning the “common mistake”. I had always thought it was true…
Comment by | August 22, 2012 | Reply
5. Just a comment: When you compute the trace of an arbitrary bracket, what you are actually showing is that the derived algebra of the general Lie algebra is _contained_ in the special one, not that they are equal. To show the equality we may appeal to the dimensionality argument,which you exhibit afterwards.
Comment by Jose Brox | September 2, 2012 | Reply
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 28, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9099796414375305, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/30658/does-light-photons-have-jerk
|
# Do photons of light have jerk?
While searching the web about whether a rate of change of acceleration is possible, I came across the concept of jerk. I want to know whether light, which can be accelerated, can also have jerk or not?
-
-1: This is not a good question. The answer is "sort of". – Ron Maimon Jun 23 '12 at 11:06
Hey Ron, I am not a physicist; I asked this question just out of curiosity, and I believe that no questions are stupid and every question has an answer – t3st Jun 24 '12 at 11:17
## 1 Answer
Strictly speaking light can't be accelerated. Viewed from a local frame it always travels at a speed of $c$ and in a straight line. Since the acceleration is always zero the jerk is also always zero.
Light can be bent by gravitational fields, i.e. in curved space-time, and therefore it is accelerated in the sense that its velocity changes direction, so I suppose the jerk is non-zero. However the bending of light we see is just the result of the curvature of spacetime. Viewed locally the light travels in a straight line at constant velocity, so it's not clear to me that jerk is an especially useful concept in calculating the trajectory of a light beam.
-
It is not a coordinate transformation. – Ron Maimon Jun 23 '12 at 11:03
Yes, that was a clumsy answer. I've edited it to try and improve it. – John Rennie Jun 24 '12 at 7:54
+1: Thanks, John. – Ron Maimon Jun 24 '12 at 7:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9490915536880493, "perplexity_flag": "middle"}
|
http://www.citizendia.org/Correlation_function
|
Correlation functions contain information about the distribution of points, events, or other objects across some region of space and/or time.
A very simple example of a correlation function is the following: given the existence of a point at a position X in some space, what is the probability of there being another point at a second position Y?
For stochastic processes, including those that arise in statistical mechanics and Euclidean quantum field theory, a correlation function is the correlation between random variables at two different points in space or time. If one considers the correlation function between random variables at the same point but at two different times then one refers to this as the autocorrelation function. If there are multiple random variables in the problem then correlation functions of the same random variable are also sometimes called autocorrelations. The autocorrelation can be intuitively understood as an indicator of how the random variable at a given point changes with time. Correlation functions of different random variables are sometimes called cross-correlations. Cross-correlations are a useful indicator of the dependencies among different random variables as a function of time.
Correlation functions used in astronomy, financial analysis, quantum field theory and statistical mechanics differ only in the particular stochastic processes they are applied to, with the caveat that in QFT we are dealing with "quantum distributions". (For details, see Correlation function (quantum field theory).)
## Definition
For random variables X(s) and X(t) at different points s and t of some space, the correlation function is
$C(s,t) = \operatorname{corr}( X(s), X(t) ).$
In this definition, it has been assumed that the stochastic variable is scalar-valued. If it is not, then one can define more complicated correlation functions. For example, if one has a vector Xi(s), then one can define the matrix of correlation functions
$C_{ij}(s,s') = \operatorname{corr}( X_i(s), X_j(s') )$
or a scalar, which is the trace of this matrix. If the probability distribution has any target space symmetries, i.e. symmetries in the space of the stochastic variable (also called internal symmetries), then the correlation matrix will have induced symmetries. If there are symmetries of the space (or time) in which the random variables exist (also called spacetime symmetries), then the correlation matrix will have special properties. Examples of important spacetime symmetries are:
• translational symmetry yields C(s,s') = C(s − s') where s and s' are to be interpreted as vectors giving coordinates of the points
• rotational symmetry in addition to the above gives C(s, s') = C(|s − s'|) where |x| denotes the norm of the vector x (for actual rotations this is the Euclidean or 2-norm).
Higher-order correlation functions are also often defined. A typical correlation function of order n is
$C_{i_1i_2\cdots i_n}(s_1,s_2,\cdots,s_n) = \langle X_{i_1}(s_1) X_{i_2}(s_2) \cdots X_{i_n}(s_n)\rangle.$
If the random variable has only one component, then the indices ij are redundant. If there are symmetries, then the correlation function can be broken up into irreducible representations of the symmetries — both internal and spacetime.
The case of correlations of a single random variable can be thought of as a special case of autocorrelation of a stochastic process on a space which contains a single point.
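As a small concrete illustration (my own addition, assuming NumPy is available), one can estimate the autocorrelation function of a stationary AR(1) process and check that it depends only on the lag, matching the theoretical value $\phi^h$ for lag $h$.
````
import numpy as np

rng = np.random.default_rng(42)
phi, n = 0.7, 200_000

# simulate a stationary AR(1) process X_t = phi * X_{t-1} + eps_t
x = np.zeros(n)
eps = rng.normal(size=n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

def acf(series, h):
    """Sample correlation between X_t and X_{t+h}."""
    return np.corrcoef(series[:-h], series[h:])[0, 1]

for h in (1, 2, 5, 10):
    print(h, round(acf(x, h), 3), round(phi ** h, 3))   # estimate vs. the theoretical phi**h
````
Because the process is stationary, the estimated correlation between values at two times depends only on their separation, which is the translational-symmetry property C(s, s') = C(s − s') mentioned above.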
## Properties of probability distributions
With these definitions, the study of correlation functions is equivalent to the study of probability distributions. Probability distributions defined on a finite number of points can always be normalized, but when they are defined over continuous spaces extra care is called for. The study of such distributions started with the study of random walks and led to the notion of the Itō calculus.
The Feynman path integral in Euclidean space generalizes this to other problems of interest to statistical mechanics. Any probability distribution which obeys a condition on correlation functions called reflection positivity leads to a local quantum field theory after Wick rotation to Minkowski spacetime. The operation of renormalization is a specified set of mappings from the space of probability distributions to itself. A quantum field theory is called renormalizable if this mapping has a fixed point which gives a quantum field theory.
## See also
• Correlation
• Spearman's rank correlation coefficient
• Pearson product-moment correlation coefficient
• Correlation function (astronomy)
• Correlation function (statistical mechanics)
• Correlation function (quantum field theory)
• Rate-distortion theory
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9073500633239746, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/19834/find-f-a-explaining-in-english?answertab=active
|
# “Find $f '(a)$” explaining in english
Find $f '(a)$, $f(x) = \frac{x^2 + 1}{x - 5}$
$f'(a)=$ [answer box here]
I understand that $f'(a)$ is the derivative, but I don't understand what the derivative is, or how to solve this. I am used to finding a tangent slope with two points. This one doesn't have that. So I don't understand how I can use the definition to solve this. Thanks,
-
Are you used to finding the tangent slope when the two points get really close? The quotient rule is easier than doing it from scratch (by calculating the limit), but if you don't have the quotient rule handy you can go through all the algebra with the limit. Do you know which method to find the derivative you're expected to use (quotient rule or limit or something else)? – Mitch Feb 1 '11 at 17:45
## 3 Answers
The derivative is a limit of the slopes between two points that you're used to finding. As the points get closer together, these slopes between two points approach the slope of the line tangent to the graph of the function at a point. For an excellent explanation of the general ideas here, see this answer of Arturo Magidin.
If you were finding the slope of the line connecting $(a,f(a))$ and a nearby point on the graph, say $(a+\Delta x, f(a+\Delta x))$, where $\Delta x$ is thought of as being small, then you would compute change in $y$ over change in $x$ as $\frac{f(a+\Delta x) - f(a)}{\Delta x}$. The derivative $f'(a)$ is the limit of these quotients as $\Delta x$ goes to $0$. One way to find $f'(a)$ in this case is to first do some algebra with the general expression $\frac{f(a+\Delta x) - f(a)}{\Delta x}$, until you get it in a form where it is clear what happens when $\Delta x$ goes to zero. As indicated in other answers, there are general rules for calculating derivatives of algebraic expressions like this without appeal to limits, but if you haven't learned those yet then the direct method may be the way to go.
My approach to directly computing the limit would be to first break $f$ into simpler functions using polynomial division. This gives $f(x)=26\cdot\frac{1}{x-5}+x+5$. The heaviest algebra will be simplifying the combination of $\frac{1}{a+\Delta x - 5}$ and $\frac{1}{a-5}$ in the quotient, but you'll know you're on the right track when you can cancel the $\Delta x$ in the denominator of the whole thing. At this point you should be able to "set $\Delta x = 0$" to compute the limit.
In practice, after first becoming acquainted with the limit definition, everyone learns to use rules for finding derivatives of polynomials and quotients, as indicated in other answers. Those methods are more efficient, but might seem mysterious if you haven't seen derivatives before. An example of using the limit definition is given on the Wikipedia page here.
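If you want to check your algebra, here is a short SymPy computation (my addition, assuming SymPy is installed) that carries out exactly this plan: build the difference quotient, let $\Delta x$ go to $0$, and compare with the built-in derivative.
````
import sympy as sp

x, a, dx = sp.symbols('x a dx')
f = (x**2 + 1) / (x - 5)

quotient = (f.subs(x, a + dx) - f.subs(x, a)) / dx     # slope between two nearby points
from_limit = sp.simplify(sp.limit(quotient, dx, 0))    # let the points come together
from_diff = sp.simplify(sp.diff(f, x).subs(x, a))      # SymPy's own derivative at x = a

print(from_limit)                            # equivalent to (a**2 - 10*a - 1)/(a - 5)**2
print(sp.simplify(from_limit - from_diff))   # 0, so the two answers agree
````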
-
In this case I would use the 'quotient rule': $\left( \frac{g}{h} \right)' = \frac{g'h - gh'}{h^2}$.
This means that you have to find the derivative of $g(x) = x^2 +1$ and $h(x) = x -5$
To find $g'(x)$ you can use the power rule, the sum rule, and the constant rule.
First split $g(x)=x^2 + 1$ into $f(x) =x^2$ and $f(x)=1$.
The power rule is if $f(x) = x^a$ where a is a constant, then $f'(x)=ax^{a-1}$. So looking at $x^2$ you can see that $a = 2$ so $f'(x) = 2x^{2-1} = 2x$, right?
Then you take the constant rule and look at $f(x) = 1$. The constant rule is if $f(x) = a$ where $a$ is some constant, then $f'(x) = 0$. In this case $a=1$ is constant, so $f'(x) = 0$.
Now use the sum rule to put these together. The sum rule is: $(f + g)' = f' + g'$ basically saying that you can take the derivatives separately first and then add them together. So now we take the derivatives that we already calculated and add them: $2x + 0 = 2x = g'(x)$.
If you now take another look at that original quotient rule, you have solved for $g'$... Since you already know $g$ and $h$ (without the $'$), all you need to do is find $h'$ and $h^2$ and plug them in...
For a good reference, it might be worth it to check out http://en.wikipedia.org/wiki/Derivative.
-
HINT: Use the $\frac{f}{g}$ rule which states that $$\Bigl(\frac{f}{g}\Bigr)' = \frac{ g \cdot f' - f \cdot g'}{g^{2}}$$
So here your $f(x)=x^{2}+1$ and $g(x) = x-5$.
So your $$f'(x) = \frac{(x-5) \cdot \frac{d}{dx}\Bigl[x^{2}+1\Bigr] - (x^{2}+1) \cdot \frac{d}{dx}(x-5)}{(x-5)^{2}}$$
After doing this plugin $x=a$.
Okay, if you are not comfortable with this, we can do it by the product rule. I hope you are familiar with the product rule, which states: $$(fg)' = f \cdot g' + g \cdot f'$$
So your $f(x) = x^{2}+1$ and $g(x) = (x-5)^{-1}$. Then by the product rule you have $$(fg)'= (x^{2}+1) \cdot\frac{d}{dx}\Bigl[(x-5)^{-1}\Bigr] + \frac{1}{x-5} \cdot\frac{d}{dx}\Bigl[x^{2}+1\Bigr] = -(x^{2}+1)(x-5)^{-2} + \frac{2x}{x-5}$$
-
I dont understand any of that. We just started this section. I never seen or heard of a f/g rule. but does that mean.. ((x−5)*(x^2+1)-(x^2+1)*(x−5))/((x−5)^2) – John Carbonator Feb 1 '11 at 7:26
@john: See the edited answer. – anonymous Feb 1 '11 at 7:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 55, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.959514319896698, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/4833/does-noncompact-manifold-or-orbifold-have-the-homotopy-type-of-cw-complex
|
# Does a noncompact manifold or orbifold have the homotopy type of a CW complex?
I forget for a while, we don't need the compactness condition here right?
-
– yasmar Apr 14 '12 at 21:14
## 2 Answers
According to The Topology of CW-Complexes by Lundell and Weingram (Van Nostrand Reinhold, 1969) the answer is yes for (separable) manifolds.
-
1
And the proof for smooth manifolds is relatively nice -- on a smooth manifold there is a proper non-negative smooth function $f : M \to \mathbb R$ so $f^{-1}([0,a])$ is a smooth submanifold of $M$ for $a$ a regular value of $f$, and these manifolds exhaust $M$. – Ryan Budney Sep 17 '10 at 13:34
For smooth manifolds the following holds. By the existence of Morse functions one can deduce a handle-body decomposition of a manifold. This decomposition then yields a CW structure on a space homotopy equivalent to the manifold. I don't think that compactness is needed in any of the above arguments.
-
The Morse function does not give a CW structure on the manifold. Moreover, the results you're quoting (without reference) do require compactness. – Ryan Budney Apr 14 '12 at 20:29
– mland Apr 14 '12 at 20:55
The .pdf file you link to does not support your claims, at least not as stated. The CW structure is not on the manifold, there is a homotopy-equivalence to an induced CW-complex. In particular, the Morse function they use on a non-compact manifold is a proper Morse function, meaning the level-sets are compact. This is not the same thing as a Morse function. – Ryan Budney Apr 14 '12 at 21:27
Yes, the argument was too brief. But I still think that for smooth manifolds the existence of such a "proper" Morse function suffices to see that the manifold has the homotopy type of a CW complex. Of course the CW structure is only given on a space homotopy equivalent to the manifold we started with. – mland Apr 14 '12 at 21:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8982923030853271, "perplexity_flag": "head"}
|
http://www.physicsforums.com/showthread.php?p=4183143
|
## intuition about definition of laplace transform
Instead of getting tied in knots about how the transform works, it might be better to learn what the s-plane represents - for example what the locations of poles and zeros mean for the dynamic behaviour of a system.
The Laplace transform is one of the simpler ones that has a calculus representation already. In the real world, quite a few results can only be represented by numerical analysis, which is some form of power series. Try to make sense out of that!!! People might be plotting an input-to-output response and literally doing a curve fit to develop an equation for it. Like Ratch gave, log is one of those. Why do log? From my understanding, one reason is that human ears respond logarithmically to sound power; there must be a lot of other reasons, but that's good enough for me!!!

I remember the first lecture from the instructor of my Physical Chemistry class that really stuck in my mind: invention or discovery of a theory starts with an idea or postulation. Then you confirm or disprove it using observation and experiment. If the postulation holds up after scrutiny, then it is considered true and becomes a law. At the same time, mathematical formulas are developed to fulfill the observation. Sometimes the postulation is proven by mathematical derivation. It is not the other way around, where you have a theory or equation first. If you get stuck with Laplace, try quantum physics!!!

I quit chemistry after I scored first in the class in Physical Chemistry, and I was like 15 points above the second student. I worked in the chemistry dept. at the time, and the professor came to the stockroom window every day answering my questions. Finally he said to me, "Alan, you are not going to understand this, keep at it; when you get your PhD, you'll start to get the feel of it." This might not be exact word for word, but it's very close. I quit chemistry after that; I just finished my degree and never even looked for a job in that field. Between the snake biting its tail and this, I quit. I did not know at the time that it's like this in all science. Now I can really appreciate his first lecture and what he said.
Quote by Runei Learn that FUNCTIONS ARE VECTORS!!! This is an important part
Functions are vectors and therefore they can be represented in any coordinate system you want. Just like when you apply a transformation on a spatial vector to transform from xyz coordinates to polar, the Laplace transform transforms the function by projecting it onto a different space.
I tried to think of an analogy that would make this idea a little more tangible and I came up with a technical drawing. A technical drawing uses three projections to make a 3D object seem simpler to the eye. You could choose to represent a 3D object with an animated model that rotates in time, instead. If you did that, it would be very easy for your eye to interpret the picture and get a good impression of the object's shape, but measuring precise lengths and angles would be very hard, and you would have to have some kind of magic paper to display it. It becomes simpler to represent it with still images in three different locations on a piece of paper. In these two different representations, the viewing angle is either a function of time or location, and the relative length of a line may or may not depend on relative distance. We transform the coordinates to an equivalent but more easily understood system. We also choose to align the axes of the drawing in a way that is convenient for design and construction; the same object could be represented with a drawing from three other arbitrarily chosen orthogonal directions, but that would be confusing to your eye.
I think this is similar to the situation of the Laplace transform because it's useful for transforming time-varying functions into inanimate pictures in which time dependence is represented by spatial coordinates. In S-space, periodicity becomes a length along one axis, and the slope of exponential decay becomes a length along the other axis; any functions of S-space that are multiplied together represent the convolution of their time-varying counterparts (which is why it's useful for filter analysis), and phase angles look like... angles. Like in the technical drawing example, drawing things in different locations now represents something totally different and makes it easier to express certain ideas.
S-space is a nice place to work if your job is doing a lot of convolution, differential equations, and phase analysis. The laplace transform is like the train that you take to commute from regular space where you live to S-space where you work.
Once you've mastered the Laplace Transform, try the fractional Fourier Transform... I still don't get that one
Quote by learner07 i don't need math definitions. what i need is physical understanding/interpretations of what laplace transform actually is?
Quote by yungman Read my last post, it is a definition. It is like I am making up my transform called the Alan transform and it is defined as A(f(t))= f(t)+1 It is absolutely useless, BUT it's a transform!!!! The difference is my transform doesn't do anything and the Laplace transform works!!! There might not be any rhyme and reason at all.
there is a rhyme and reason. but the story or poem is long, not quite epic, but it's long. maybe as long as a semester-long class in Linear System Theory (now they call it Signals and Systems or differential equations).
so 07, you understand what the concept of a "transform" is, right? maybe a good example is the logarithm. this transform turns a multiplication problem into an addition problem (this is because when you multiply two exponentials with the same base, you add their exponents). it also will turn a power problem into a multiplication problem. normally the latter (the transformed problem) is easier to deal with.
the Laplace Transform will turn a certain class of linear differential equations into polynomials. supposedly the latter is easier to deal with.
the real thing about Laplace is that it's a generalization of the Fourier Transform which itself is a generalization of Fourier series.
Fourier conceptualizes functions or signals as being a sum of sinusoids, but because of Euler's formula, a sinusoidal function is actually an exponential function (with complex or imaginary exponent). and Laplace conceptualizes functions or signals as being a sum of exponential functions.
why are exponential (or sinusoidal) functions so important that you would want to use them as the basis for creating general functions? it's because exponential functions are eigenfunctions to these operations that we call Linear, Time-Invariant systems. LTI systems are very important and fundamental. if a sinusoid or an exponential function goes into an LTI system, what comes out is a sinusoid of the same frequency, or if an exponential goes into an LTI system, what comes out is the same exponential except it will be scaled by some constant.
so that is why we want to understand a signal or a function in terms of these exponential building blocks, because LTI systems will deal with each block simply instead of dealing with the signal as a whole.
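As a concrete illustration of the eigenfunction point above (my addition, not part of the thread): feed $e^{st}$ through an LTI system with an assumed impulse response $h(t)=e^{-t}$ for $t\ge 0$ via the convolution integral, and the same exponential comes out, scaled by a constant $H(s)$. A minimal sympy sketch, assuming sympy is available:

```python
import sympy as sp

t, tau = sp.symbols('t tau', real=True)
s = sp.symbols('s', positive=True)        # any Re(s) > -1 works; "positive" keeps sympy's answer simple

h = sp.exp(-tau)                          # assumed impulse response h(tau) = e^{-tau}, tau >= 0
y = sp.integrate(h * sp.exp(s * (t - tau)), (tau, 0, sp.oo))   # convolution with input e^{s t}

print(sp.simplify(y / sp.exp(s * t)))     # expected 1/(s + 1): the output is H(s) * e^{s t}
```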
Quote by rbj there is a rhyme and reason. but the story or poem is long, not quite epic, but it's long. maybe as long as a semester long class in Linear System Theory (now they call it Signals and Systems or differential equations.
I am no math expert, I just learn math to survive electronics. I just went back and looked at 5 books I used to study ODE and PDE:
1) Introduction to Laplace Transform by W.D DAY
2) PDE with Fourier Series and BVP by Nakhle H. Asmar.
3) Differential Eq. by Zill and Cullen.
4) Elementary Applied PDE by Richard Haberman.
5) Elementary Differential Eq. and BVP by William E. Boyce.
NONE present where this equation came from. Yes, they all talked about linear transformations and all, talked about how they justify the usefulness. It is understood here that the LT is used to transform a differential equation into a much simpler form; that's the application side of it. The advantage is very obvious. But nothing on how the Laplace transform equation came about.
In fact on Page 479 of Asmar, the first page on the chapter of LT, the first line is:
"Should I refuse a good dinner simply because I do not understand the process of digestion?"
By OLIVER HEAVISIDE.
[Criticized for using formal mathematical manipulations, without understanding how they worked.]
That's what I was talking about: the dream of a snake biting its own tail for discovering the most important organic compound, the benzene ring. And Einstein had relativity in mind for years before he could come out with the observation and math to prove it. Of course, now they have books on the benzene ring, why it is a ring, relativity and all; they can characterize everything and all the theory, but that comes later. That's the interpretation of the original equation!!! BUT where they originally come from can be very funky!!! For that, there may be no explanation other than just a stroke of genius and creativity.
The books do talk about the particular usefulness for solving second-order ODEs with constant coefficients, as the solution is in exponential form and is particularly suitable for the LT. Maybe I have not gone deep enough; I have not seen eigenfunctions used in the LT.
My professor said that the Laplace transform transforms an ODE into an algebraic expression. Then on a test he gave us the following: http://www.physicsforums.com/showthread.php?t=654318. I got that the Laplace transform changed a second-order ODE into a first-order ODE and didn't proceed further, since in the lecture he said that "we should get an algebraic expression", so I thought I had made some mistake. Now at home I realized that I should have proceeded further. Sigh.
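(Editorial aside, not part of the thread: here is roughly what "the ODE becomes algebra" looks like in sympy for the hypothetical example $y''+y=0$, $y(0)=1$, $y'(0)=0$; sympy availability is assumed.)

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
Y = sp.symbols('Y')                      # stands for L{y}(s)
y0, yp0 = 1, 0                           # assumed initial conditions y(0), y'(0)

# L{y''} = s^2 Y - s*y(0) - y'(0), so y'' + y = 0 becomes an algebraic equation in Y:
transformed = sp.Eq(s**2 * Y - s * y0 - yp0 + Y, 0)
Y_sol = sp.solve(transformed, Y)[0]
print(Y_sol)                             # -> s/(s**2 + 1)

# Inverting the transform recovers the time-domain solution.
print(sp.inverse_laplace_transform(Y_sol, s, t))   # -> cos(t)*Heaviside(t), i.e. y = cos t for t >= 0
```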
Quote by yungman I am no math expert, I just learn math to survive electronics. I just went back and look at 5 books I used to study ODE and PDE: 1) Introduction to Laplace Transform by W.D DAY 2) PDE with Fourier Series and BVP by Nakhle H. Asmar. 3) Differential Eq. by Zill and Cullen. 4) Elementary Applied PDE by Richard Haberman. 5) Elementary Differential Eq. and BVP by William E. Boyce. NONE present where this equation came by. Yes, they all talked about linear transformation and all, talked about how they justify the usefulness. It is understand here that the use of LT to transform a differential equation into a much simpler form, that's the application side of it. The advantage is very obvious. But nothing on how the Laplace transform equation came by. ... The books do talk about the particular usefulness for solving second order ODE with constant coef as the solution is in exponential form and is particular suitable for using LT. Maybe I have not gone deep enough, I have not seen using Eigenfunction in LT.
well, yungman, it's after midnight now, so i am a little tired. maybe tomorrow i can go through the step-by-step (nowadays this should be in a Signals and Systems course) that starts with
1. Linear Time-Invariant systems (LTI), just the definitions.
2. show how the convolution summation (for discrete LTI) or convolution integral (continuous-time LTI) are derived directly from the definitions of LTI.
3. show how exponentials are eigenfunctions of LTI systems ($e^{\alpha t}$ goes in -> $A e^{\alpha t}$ comes out)
4. with Euler, show how sinusoids are also a sort of eigenfunction of LTI systems ($e^{j \omega t}$ goes in -> $A e^{j \omega t}$ comes out)
5. then generalize a little more from sinusoidal to periodic input (Fourier series). note that in deriving the Fourier coefficients, this is where the integral that ultimately becomes the Laplace Transform will first emerge.
6. then generalize a little more and let the period go out to infinity, so the periodic input becomes non-periodic. then look at that integral for the Fourier coefficients. it becomes the Fourier Integral.
7. that Fourier Integral looks just like the double-sided Laplace Integral except the Fourier has $j \omega$ in it where as, if you generalize further, the double-sided Laplace Integral has $s = \sigma + j \omega$ replacing $j \omega$.
so biologists and physiologists, if they drill down a little, do understand, for the most part, how digestion works. they don't simply say "it works, let's eat."
I don't think we are talking about the same thing. I guess another way of looking at this is: is there a derivation of $$L(f(t))=\int_0^{\infty} e^{-st} f(t) dt$$ If this is derived from existing theory, then I would hope at least one of my 5 books would have mentioned where the equation comes from. In other words, the history of the Laplace transform. We all know the application and the indefinite integral, the kernel, etc. I am not particularly familiar with the Laplace transform; all I can base this on is the 5 books I have.

When you talk about LTI, is the Laplace transform derived based on that? If yes, that's good enough for me; you've proved your point. If the Laplace transform merely fits into LTI theory, that does not prove anything. You don't need to repeat all the theory to characterize the Laplace transform, just the original history of the Laplace transform. I think that's what the OP was asking, since he is not interested in the definition and all. I assume he knows enough about the application of it.
Quote by fluidistic My professor said that the Laplace transform transforms an ODE into an algebraic expression. Then on a test he gave us the following: http://www.physicsforums.com/showthread.php?t=654318. I got that the Laplace transform changed a second order ODE into a first order ODE and didn't proceed further since in the lecture he said that "we should get an algebraic expression" so I thought I had made some mistake. Now at home I realized that I should have proceeded further. Sigh.
That's the application side, that it transforms
$$L(y')=-y_0+sL(y) \;\hbox { and } \; L(y'')= -y_0' -sy_0 + s^2L(y)$$
Or
$$L( \cos(at))= \frac s {s^2+a^2}$$
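(For anyone who wants to spot-check these pairs: a small sketch with sympy, my addition and not part of the thread; sympy availability is assumed.)

```python
import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)

# L{cos(a t)}: laplace_transform returns (result, convergence abscissa, conditions).
print(sp.laplace_transform(sp.cos(a*t), t, s)[0])   # -> s/(a**2 + s**2)

# Spot-check L{y'} = s L{y} - y(0) on a concrete y, e.g. y = e^{-2t}.
y = sp.exp(-2*t)
Ly  = sp.laplace_transform(y, t, s)[0]
Lyp = sp.laplace_transform(sp.diff(y, t), t, s)[0]
print(sp.simplify(Lyp - (s*Ly - y.subs(t, 0))))     # -> 0
```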
This is the history I found: http://en.wikipedia.org/wiki/Laplace_transform

The Laplace transform is named after mathematician and astronomer Pierre-Simon Laplace, who used a similar transform (now called the z-transform) in his work on probability theory. The current widespread use of the transform came about soon after World War II, although it had been used in the 19th century by Abel, Lerch, Heaviside and Bromwich.

The older history of similar transforms is as follows. From 1744, Leonhard Euler investigated integrals of the form $$z = \int X(x) e^{ax} dx \hbox { and } z = \int X(x) x^A dx$$ as solutions of differential equations but did not pursue the matter very far.[2] Joseph Louis Lagrange was an admirer of Euler and, in his work on integrating probability density functions, investigated expressions of the form $$\int X(x) e^{- a x } a^x dx$$ which some modern historians have interpreted within modern Laplace transform theory.[3][4]

These types of integrals seem first to have attracted Laplace's attention in 1782, where he was following in the spirit of Euler in using the integrals themselves as solutions of equations.[5] However, in 1785, Laplace took the critical step forward when, rather than just looking for a solution in the form of an integral, he started to apply the transforms in the sense that was later to become popular. He used an integral of the form: $$\int x^s \phi (x) dx$$ akin to a Mellin transform, to transform the whole of a difference equation, in order to look for solutions of the transformed equation. He then went on to apply the Laplace transform in the same way and started to derive some of its properties, beginning to appreciate its potential power.[6]

Laplace also recognised that Joseph Fourier's method of Fourier series for solving the diffusion equation could only apply to a limited region of space, as the solutions were periodic. In 1809, Laplace applied his transform to find solutions that diffused indefinitely in space.[7] It is a continuation of Euler and Lagrange's development.

Now the question is: what is the history of Euler's?
Fantastic thread. I want to take some time to digest this during break. My encounter with laplace was unpleasant.
Indeed a neat thread. I am grateful for the pointers to early mathematicians....

My experience with Laplace transforms was in automatic controls, which would be impossible without them. I read someplace that the math of feedback systems was developed by Descartes and shelved. Of course it was shelved, for there was no automation at the time. During WW2 the Germans revived Descartes' old math for their rocketry programs, and their textbooks were among the war prizes we brought back. Along with the rocket scientists to explain them.

I passed my controls courses by using Laplace as a tool whose workings I did not comprehend. I'd suggest that repetition might be your quickest way to conquer this. At least it'll make you familiar with the patterns, and for me that usually leads to insight. But I never mastered Laplace's transform.

old jim
Quote by jim hardy I passed my controls courses by using Laplace as a tool whose workings i did not comprehend. I'd suggest that repetition might be your quickest way to conquer this. At least it'll make you familiar with the patterns, and for me that usually leads to insight. But I never mastered Laplace's transform. old jim
Being able to understand the theory and being able to design are not necessarily related. There are people that are very good in theory and there are people that can design!! To me, there is an artistic component in electronic design; I made use of quite a bit of my experience as a musician when I was young.
I know people that are very strong in theory but cannot design squat!! As an engineer, it's the result that is important; people would rather hire one that can design good electronics using fortune telling than someone that can talk theory and can't produce. AND it is more irritating to have someone that can't get the job done and then argue with you in theory about why it can't be done!!! Of course, best would be good in both, but I'll take someone that can design any time of the day.
I am not good with the Laplace transform and Fourier transform. I studied these for signals and systems, modulation. But after I studied these, I found out that I still needed statistics and probability to really understand the books. I kind of dropped it altogether and chose RF and EM instead; I'd rather get into transmission lines, RF amplifiers and antenna design. Those are a totally different animal altogether. I spent the most time on that instead. I still use Bode plots for closed-loop feedback designs!!! Simple, but works for me.
I found the first lecture and what the professor said to me described in post #19 so important; that really opened my eyes in the field of science. There is a lot more "guessing", "opinion", "ego" and "politics" in science than people realize. There are a lot of theories and math that come after the fact..........It's like Monday morning quarterbacking; people analyze to death why the team lost the game, what the reason was and all. Or even why a team won a game.
I have always made use of Laplace transforms to integrate differential equations and solve them with less hassle. In terms of what it physically means, the relationship before the Laplace transform is where the real intuition would lie, but then if we take this intuition - take the infinite sum of the function from zero to infinity - the intuition about the initial problem still exists. If we take the inverse Laplace, the simplifications we make in that domain are perfectly valid. I agree with yungman (not to put words in his mouth, but) it's best described as a "reality" in terms of mathematics.
Quote by yungman I don't think we are talking about the same thing. I guess another way of looking at this is: Is there a derivation of $$L(f(t))=\int_0^{\infty} e^{-st} f(t) dt$$ If this is derived from the existing theory, then I hope at least one of my 5 books would have mentioned where the equation comes from. In another word, the history of Laplace transform. We all know the application and the indefinite integral, the kernel etc. I am not particular familiar with Laplace transform, all I can based on is the 5 books I have. When you talk about LTI, is Laplace transform derive base on that? If yes, that's good enough for me, you prove your point. If Laplace transform fit into the LTI, that does not prove anything. You don't need to repeat all the theory to characterize Laplace transform, just the original history of Laplace transform, I think that's what the OP was asking since he is not interested in the definition and all. I assume he know enough about the application of it.
the OP was asking how this laplace transform actually works?
$$\mathcal{L}\{f(t)\}=\int_0^{\infty} e^{-st} f(t) dt$$
is derived, and i am saying it is an extension of the Fourier Transform which is an extension of Fourier series and you'll see the first integral that evolves to be the F.T. and L.T. from the derived Fourier coefficients in the F.T.
and the reason why sinusoidal and exponential functions are used as the basis functions for F.T. and L.T. is because they are eigenfunctions for Linear Time-Invariant systems.
it's not magic, and the definition of L.T. did not appear by magic. there is a rhyme and reason to it and a decent modern course in Signals and Systems (what we used to call Linear System Theory) would spell this out.
I'm not mathematically inclined, let alone gifted. ODE was as far as I went. Might Euler's equation be part of the intuitive explanation the OP inquired about?

When we multiply some arbitrary function by e^st, where s includes a real term σ and a jω and has the correct dimension (t^-1), it's not difficult for me to imagine that the operation multiplies our function by sine/cosine and exponential functions, analogous to a frequency sweep of a circuit plus ringing it with pulses, and integrating over 0 to ∞ collects the results and makes it somehow represent what a previous poster called a transform into a new plane, frequency-dependent behavior being its ordinate and exponential behavior its abscissa?

Please excuse this musing of a math ignoramus; it's just that I woke up in the middle of the night with that question. My alleged brain chews on concepts like this for months and usually discards them; maybe somebody can accelerate that rejection process for me, or say it's worthy of more thought. Right now it's the best lead I've run across. It ties the transform to something I can conceive of doing with hardware.

Thanks guys, old jim (a child of the lesser gods)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9580385088920593, "perplexity_flag": "middle"}
|
http://deltaepsilons.wordpress.com/tag/riemannian-metrics/
|
# Delta Epsilons
Mathematical research and problem solving
## The test case: flat Riemannian manifoldsNovember 12, 2009
Posted by Akhil Mathew in differential geometry, MaBloWriMo.
Tags: curvature tensor, flat manifolds, Riemannian metrics, test case
Recall that two Riemannian manifolds ${M,N}$ are isometric if there exists a diffeomorphism ${f: M \rightarrow N}$ that preserves the metric on the tangent spaces. The curvature tensor (associated to the Levi-Civita connection) measures the deviation from flatness, where a manifold is flat if it is locally isometric to a neighborhood of ${\mathbb{R}^n}$.
Theorem 1 (The Test Case) The Riemannian manifold ${M}$ is locally isometric to ${\mathbb{R}^n}$ if and only if the curvature tensor vanishes. (more…)
## Identities for the curvature tensorNovember 11, 2009
Posted by Akhil Mathew in differential geometry, MaBloWriMo.
Tags: Bianchi identity, connections, curvature tensor, eponymy, Riemannian metrics
It turns out that the curvature tensor associated to the connection from a Riemannian pseudo-metric ${g}$ has to satisfy certain conditions. (As usual, we denote by $\nabla$ the Levi-Civita connection associated to $g$, and we assume the ground manifold is smooth.)
First of all, we have skew-symmetry
$\displaystyle R(X,Y)Z = -R(Y,X)Z.$
This is immediate from the definition.
Next, we have another variant of skew-symmetry:
Proposition 1 $\displaystyle g( R(X,Y) Z, W) = -g( R(X,Y) W, Z)$ (more…)
## The fundamental theorem of Riemannian geometry and the Levi-Civita connectionNovember 10, 2009
Posted by Akhil Mathew in differential geometry, MaBloWriMo.
Tags: connections, Levi-Civita connection, Riemannian metrics
Ok, now onto the Levi-Civita connection. Fix a manifold ${M}$ with the pseudo-metric ${g}$. This means essentially a metric, except that ${g}$ as a bilinear form on the tangent spaces is still symmetric and nondegenerate but not necessarily positive definite. It is still possible to say that a pseudo-metric is compatible with a given connection.
This is the fundamental theorem of Riemannian geometry:
Theorem 1 There is a unique symmetric connection ${\nabla}$ on ${M}$ compatible with ${g}$. (more…)
## Covariant derivatives and parallelism for tensorsNovember 3, 2009
Posted by Akhil Mathew in differential geometry, MaBloWriMo.
Tags: connections, covariant derivatives, parallelism, Riemannian metrics, tensors
Time to continue the story for covariant derivatives and parallelism, and do what I promised yesterday on tensors.
Fix a smooth manifold ${M}$ with a connection ${\nabla}$. Then parallel translation along a curve ${c}$ beginning at ${p}$ and ending at ${q}$ leads to an isomorphism ${\tau_{pq}: T_p(M) \rightarrow T_q(M)}$, which depends smoothly on ${p,q}$. For any ${r,s}$, we get isomorphisms ${\tau^{r,s}_{pq} :T_p(M)^{\otimes r} \otimes T_p(M)^{\vee \otimes s} \rightarrow T_q(M)^{\otimes r} \otimes T_q(M)^{\vee \otimes s} }$ depending smoothly on ${p,q}$. (Of course, given an isomorphism ${f: M \rightarrow N}$ of vector spaces, there is an isomorphism ${M^* \rightarrow N^*}$ sending ${g \rightarrow g \circ f^{-1}}$—the important thing is the inverse.) (more…)
## Riemannian metrics and connectionsOctober 27, 2009
Posted by Akhil Mathew in differential geometry.
Tags: Christoffel symbols, connections, Riemannian metrics
Wow. Blogging is definitely way harder during the academic year.
Ok, so I’m aiming to change things around a bit here and take a break from algebraic number theory to do some differential geometry. I’ll assume basic familiarity with what manifolds are, the tangent bundle and its variants, but generally no more. I eventually want to get to some real theorems, but this post will focus primarily on definitions.
Riemannian Metrics
A Riemannian metric on a smooth manifold ${M}$ is defined as a covariant symmetric 2-tensor ${(\cdot, \cdot)_p, p \in M}$ such that ${(v,v)_p > 0}$ for all nonzero ${v \in T_p(M)}$. For convenience, I will write ${(v,w)}$ for ${(v,w)_p}$. In other words, a Riemannian metric is a collection of (positive) inner products on each of the tangent spaces ${T_p(M)}$ such that if ${X,Y}$ are (smooth) vector fields, the function ${(X,Y): M \rightarrow \mathbb{R}}$ defined by taking the inner product at each point, is smooth. There are several ways to get Riemannian metrics: (more…)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 38, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8868368268013, "perplexity_flag": "head"}
|
http://mathoverflow.net/revisions/40268/list
|
## Return to Question
3 edited tags
2 added 3 characters in body
This is a heuristic question that I think was once asked by Serge Lang. The gaussian: $e^{-x^2}$ appears as the fixed point to the Fourier transform, in the punchline to the central limit theorem, as the solution to the heat equation, in a very nice proof of the Atiyah-Singer index theorem etc. Is this an artifact of the techniques (such as the Fourier transform) that people like to use to deal with certain problems or is this the tip of some deeper platonic iceberg?
1
# Why is the Gaussian so pervasive in mathematics?
This is a heuristic question that I think was once asked by Serge Lang. The gaussian: $e^{-x^2}$ appears as the fixed point to the Fourier Transform, the punchline to the central limit theorem, as the solution to the heat equation, in a very nice proof of the Atiyah-Singer index theorem etc. Is this an artifact of the techniques (such as the Fourier Transform) that people like use to deal with certain problems or is this the tip of some deeper platonic iceberg?
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9528073072433472, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/29276/why-do-4-circles-cover-the-surface-of-a-sphere?answertab=active
|
Why do 4 circles cover the surface of a sphere?
Is there a geometric explanation for why a sphere has surface area $4 \pi r^2$ ?
I.e. equal to 4 times the area of its cross-section (a circle of radius r).
-
– please delete me Mar 27 '11 at 13:16
I would add to the comment of Eivind: the map from the cylinder to the sphere given by orthogonal projection from the axis is area-preserving. It's a nice exercise to show that it shrinks horizontal infinitesimal distances by the same factor as it expands vertical infinitesimal distances. – user8268 Mar 27 '11 at 13:36
What does cross-section mean here? – Rasmus Mar 27 '11 at 13:38
– Rahul Narain Mar 27 '11 at 16:38
3 Answers
Let $Z$ be a cylinder of height $2r$ touching the sphere $S_r$ along the equator $\theta=0$. Consider now a thin plate orthogonal to the $z$-axis having a thickness $\Delta z\ll r$. It intersects $S_r$ at a certain geographical latitude $\theta$ in a nonplanar annulus of radius $\rho= r\cos\theta$ and width $\Delta s=\Delta z/\cos\theta$, and it intersects $Z$ in a cylinder of height $\Delta z$. Both these "annuli" have the same area $2\pi r \Delta z$. As this is true for any such plate it follows that the total area of the sphere $S_r$ is the same as the total area of $Z$, namely $4\pi r^2$.
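(A numerical sanity check of this zone-area claim — my addition, not part of the answer — written as a plain surface-of-revolution integral with a midpoint rule; the radius and the interval $[z_1,z_2]$ are arbitrary test values.)

```python
import math

def zone_area(r, z1, z2, n=100_000):
    """Midpoint-rule approximation of the area of the spherical zone z1 <= z <= z2."""
    h = (z2 - z1) / n
    total = 0.0
    for i in range(n):
        z = z1 + (i + 0.5) * h
        rho = math.sqrt(r * r - z * z)           # radius of the horizontal slice
        drho = -z / rho                          # d(rho)/dz
        total += 2 * math.pi * rho * math.sqrt(1 + drho * drho) * h
    return total

r, z1, z2 = 2.0, -0.7, 1.3                       # arbitrary values with |z| < r
print(zone_area(r, z1, z2))                      # ~ 25.1327
print(2 * math.pi * r * (z2 - z1))               # cylinder band area, also ~ 25.1327
```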
-
-
One geometric explanation is that $4\pi r^2$ is the derivative of $\frac{4}{3}\pi r^3$, the volume of the ball with radius $r$, with respect to $r$. This is because if you enlarge $r$ a little bit, the volume of the ball will change by its surface times the small enlargement of $r$.
So why is the volume of the full ball $\frac{4}{3}\pi r^3$? By slicing the ball into disks, using Pythagoras, you get that its volume is $$\int_{-r}^r \pi (r^2-x^2)\mathrm{d}x$$ which is indeed $\frac{4}{3}\pi r^3$.
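(A quick symbolic check of this slicing integral, and of the derivative remark above — my addition, assuming sympy is available.)

```python
import sympy as sp

r = sp.symbols('r', positive=True)
x = sp.symbols('x', real=True)

volume = sp.integrate(sp.pi * (r**2 - x**2), (x, -r, r))
print(sp.simplify(volume))                # -> 4*pi*r**3/3
print(sp.simplify(sp.diff(volume, r)))    # -> 4*pi*r**2, i.e. the surface area
```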
-
Is this true for all manifolds, dV/dr=S, where V is the n-volume, S is the (n-1)-surface and r is the distance from a point in the interior to the surface? – Holowitz Mar 27 '11 at 13:58
@solomoan: I think you will have trouble defining "the distance from a point in the interior to the surface" for a general manifold with boundary. I don't see how to generalise the result in a meaningful way to general manifolds. – Rasmus Mar 27 '11 at 14:02
@solomoan: If $$f: B\to{\mathbb R}^3, \quad (u,v)\mapsto f(u,v)$$ produces a surface $S$ with unit normal $n(u,v)$ then $$x: \ B\times[0,\epsilon]\ \to\ {\mathbb R}^3, \quad (u,v,t)\mapsto f(u,v)+ t n(u,v)$$ produces a plate of thickness $\epsilon$. You can compute the volume $V(\epsilon)$ of this plate by means of the Jacobian of $x$, and calculating the limit $$\lim_{\epsilon\to0}{V(\epsilon)\over\epsilon}$$ you get the formula for the surface area $\omega(S)$. – Christian Blatter Mar 27 '11 at 14:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.901874840259552, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/tagged/potential?sort=unanswered&pagesize=15
|
# Tagged Questions
The potential tag has no wiki summary.
1answer
151 views
### Electric field of a flat metal plate and a point particle
I'm currently studying electric potential, and I'm having trouble with one of the problems on my homework: A) A point particle with charge $+q$ is on the x-axis at a distance $d$ from the origin, ...
1answer
32 views
### The potentiality of the electric field
Could you please explain to me, using just words, why the electric field is a potential field? I know the proof using the integral: $A = \int_{12}q\overrightarrow{E}\overrightarrow{dr} = ...
1answer
31 views
### Potential of a Body
I have a doubt about the electric potential of a body. Well, I know that given a continuous distribution of charge we can find the potential at a point $a$ using the following relation: ...
1answer
26 views
### Potential of a conductor depends on the size and shape of the conductor. How?
1) When a charge 'q' is given to an isolated conductor, its potential will change. 2) The change in potential depends on the size and shape of the conductor. I could understand the point no. 1. ...
1answer
101 views
### Spring Constant
Is it possible to determine the spring constant of a spring in a situation in which it is being compressed when such certain length of compression is not known? If so, how can such calculation be ...
1answer
340 views
### Bound states for sech-squared potential
I'm working on an introductory qm project, hope somebody has the time to help me (despite the length of this post), it will be highly appreciated. My goal is to determine the bound states and their ...
0answers
115 views
### Symmetries of separable potential
For a separable potential, say $x^4+y^4$, its symmetries are degenerate. Is that the generic case for every separable potential? I will explain my question: the potential $x^4+y^4$ has $A_1, B_1, A_2, B_2, ...
0answers
39 views
### What is (or where can I discover) the Burke Potential?
I have very much enjoyed William L. Burke's Applied Differential Geometry. Reading around on the web it seems that he discovered something which is called the (retarded) Burke Potential, but I have ...
0answers
56 views
### The particle mesh ewald method in two dimensions
I am attempting to implement the particle mesh ewald method (http://dx.doi.org/10.1063/1.470117) in two dimensions. I am wondering what needs to be changed in the method from three dimensions to two ...
0answers
340 views
### Scattering on delta function potential
Suppose a particle has energy $E>V(+/-\infty)=0$, then the solutions to the Schrodinger equation outside of the potential will be $\psi(x)=Ae^{i k x}+Be^{-i k x}$. How can one show or explain that ...
0answers
93 views
### An electron is subjected to an electromagnetic field using the canonical equations solve
So I was given the following vector field: $\vec{A}(t)=\{A_{0x}cos(\omega t + \phi_x), A_{0y}cos(\omega t + \phi_y), A_{0z}cos(\omega t + \phi_z)\}$ Where the amplitudes $A_{0i}$ and phase shifts ...
0answers
40 views
### Wave equations for two intervals at Potential step
Let's say we have a potential step as in the picture: in region I there is a free particle with a wavefunction $\psi_I$, while in region II the wave function will be $\psi_{II}$. Let me ...
0answers
33 views
### Is it easier to determine the number of states with raising/lowering operators or using scattering?
A particle is bound by $$V(x) = \begin{cases}\infty,& x <0 \\ \frac{-32\hbar^2}{ma}, & 0 \le x\le a \\ 0, & x > a\end{cases}$$ a) How many states are there? I'm attempting ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9215160012245178, "perplexity_flag": "middle"}
|
http://www.physicsforums.com/showthread.php?p=4020243
|
integration techniques and Cauchy principal value
Is there a good reference that summarizes what common integration techniques (e.g. change of variables, integration by parts, interchange of the order of integration) can be used on integrands when one is calculating the Cauchy principal value ( http://en.wikipedia.org/wiki/Cauchy_principal_value) and the ordinary integral does not exist?
It looks like a complicated subject, especially if we ask if the ordinary integration methods (like change of variables) are permitted to introduce more Cauchy principal value integrals.
I believe the Cauchy principal value is determined by taking a symmetric limit on either side of the singular point, so those techniques will not by themselves help you determine this value, although they may help you get the problem into a form where you can take these limits. Take for example: $$P.V. \int_{-1}^{1} \frac{1}{z}dz=\lim_{\epsilon\to 0}\left(\int_{-1}^{-\epsilon} \frac{1}{z}dz+\int_{\epsilon}^1 \frac{1}{z}dz\right)$$
Now in this case, I can find the antiderivative and write: $$\begin{align} P.V. \int_{-1}^{1} \frac{1}{z}dz&=\lim_{\epsilon\to 0} \left(\log(-\epsilon)-\log(-1)+\log(1)-\log(\epsilon)\right) \\ &=\lim_{\epsilon\to 0}\ln(\epsilon)+\pi i-\ln(1)-\pi i+0-\ln(\epsilon) \\ &=\lim_{\epsilon\to 0} \left(\ln(\epsilon)-\ln(\epsilon) \right)\\ &=0 \end{align}$$
And in general you could do this (taking the limit of the antiderivative) for more complicated integrals. Now sometimes the Residue Theorem can be, and often is, used to find principal values by letting the radius of an indentation around a singular point go to zero. But even in that case, we are implicitly taking the limit as $\epsilon\to 0$.
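(The same symmetric limit can be reproduced mechanically — my addition, not part of the post. To stay on safe ground the piece over $[-1,-\epsilon]$ is rewritten by the substitution $z\mapsto -u$ first, so everything lives on a positive interval; sympy is assumed to be available.)

```python
import sympy as sp

u, eps = sp.symbols('u epsilon', positive=True)

# The integral of 1/z over [-1, -eps] becomes, after z -> -u, the integral of -1/u over [eps, 1].
left  = sp.integrate(-1 / u, (u, eps, 1))   # -> log(epsilon)
right = sp.integrate(1 / u, (u, eps, 1))    # the piece over [eps, 1] -> -log(epsilon)

print(sp.limit(left + right, eps, 0, '+'))  # -> 0, the principal value
```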
I agree that the Cauchy principal value is defined as a limit. In a given problem, if it happens to be $\lim_{A \rightarrow A_0} F(A)$ then I presume we can use any standard integration technique to find an ordinary integral $F(A) =$ some definite integral (in the ordinary sense of the word) of a function $f(x)$ with the constant $A$ involved in the limit of integration.

However, suppose the evaluation of $F(A)$ requires doing several other Cauchy integrations. How safe are the common integration methods then? For example, consider a change in the order of integration. If your problem involves several Cauchy principal part integrations, you could end up with an expression involving nested limits: $\lim_{A \rightarrow A_0} ( \lim_{B \rightarrow B_0} ( \lim_{C \rightarrow C_0} G(A,B,C) ) )$ And if you changed the order of Cauchy integration, you might get the nested limits in a different order: $\lim_{A \rightarrow A_0} ( \lim_{C \rightarrow C_0} ( \lim_{B \rightarrow B_0} G(A,B,C) ) )$ Changing the order of limits in an expression involving nested limits can change the value of the expression. So is there something about the $G(A,B,C)$ involved in Cauchy integration that avoids this problem?

I am wondering if someone has worked out a theory of Cauchy principal value integration that is a nice system of rules and checks like we have with ordinary integration. Or are the possibilities so complicated that each problem has to be analyzed on its own merits.
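(A tiny illustration of the nested-limit worry — my addition: for the hypothetical $G(A,B)=A^2/(A^2+B^2)$ the two orders of limits disagree, so reordering Cauchy-type limits does need justification.)

```python
import sympy as sp

a, b = sp.symbols('a b', real=True)
G = a**2 / (a**2 + b**2)

print(sp.limit(sp.limit(G, b, 0), a, 0))   # b -> 0 first, then a -> 0: gives 1
print(sp.limit(sp.limit(G, a, 0), b, 0))   # a -> 0 first, then b -> 0: gives 0
```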
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8977605104446411, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/46505/how-to-solve-a-quadratic-equation-in-characteristic-2/46506
|
How to solve a quadratic equation in characteristic 2 ?
What do I do if I have to solve the usual quadratic equation $X^2+bX+c=0$ where $b,c$ are in a field of characteristic 2? As pointed out in the comments, it can be reduced to $X^2+X+c=0$ with $c\neq 0$.
The usual completion of the square breaks down. For a finite field there is the Chen formula that roughly looks like $X=\sum_{m} c^{4^m}$. I am more interested in the local field $F((z))$ or actually an arbitrary field of characteristic 2.
-
What's your method that works for finite fields? – André Henriques Nov 18 2010 at 17:03
I think you should add "in characteristic 2" to the title – Ewan Delanoy Nov 18 2010 at 17:05
Actually, over any field, either $b=0$ or the quadratic reduces to an Artin-Schreier equation $x^2+x+c=0$. If the field is finite, it is soluble if and only if the trace of $c$ to ${\mathbb F}_2$ is 0, if I remember correctly. – Tim Dokchitser Nov 18 2010 at 17:44
@ Tim Sure, let $Y=X/b$ then you get $Y^2+Y+c/b^2=0$. How do I solve this one now? – Bugs Bunny Nov 18 2010 at 20:58
Dear Bugs: Is your aim a criterion (for interesting $K$ of characteristic $p > 0$, at least for $p = 2$) which detects when $x^p - x - a \in K[x]$ is irreducible or not? For local function fields there's a criterion using residues of 1-forms (in Artin-Tate). For global function fields there's an adelic analogue (in Artin-Tate). If $Y$ is a normal affine variety (geom. conn'd) over a finite field $k$ and $a \in k[Y]$, then $x^p - x - a$ is reducible over $k(Y)$ iff has solution in $k(y)$ for closed pts $y$ off a proper closed subset (prove using Lang-Weil). Is it useful for you? – BCnrd Nov 19 2010 at 1:00
2 Answers
I think this solves $X^2+X+c=0$ over $F((t))$:
I want to assume that $c\in F[[t]]$. If not, say $c=at^{-m}+...$, then the quadratic has no solutions when $m$ is odd or $a$ is not a square, and otherwise the substitution $X\mapsto X+\sqrt{a}t^{-m/2}$ gives a new equation with smaller $m$. So, after finitely many steps $c=c_0+c_1t+...$ is integral.
Because $X^2+X+c$ has derivative $1$, by Hensel's lemma the equation has a solution if and only if the constant term $c_0$ is of the form $d^2+d$ for some $d$ in $F$. And if it is, Hensel's approximations are obtained by starting with an approximate solution $x_0=d$ and recursively computing $x_{m+1}=x_m-f(x_m)/f'(x_m)=x_m^2+c$. This gives $$x = d + \sum_{n=0}^\infty (c-c_0)^{2^n}$$ as the solution (the partial sums are the $x_m$). Actually, the approach seems to work over any complete field, reducing the problem to the residue field. Hope this helps.
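(Not part of the original answer, but the recursion is easy to play with numerically. A minimal sketch over the residue field $\mathbb{F}_2$, with power series in $t$ truncated to a fixed precision; the choice $c=t+t^3$ and the precision are arbitrary.)

```python
N = 16  # working precision: coefficients of t^0 .. t^{N-1}

def mul(a, b):
    """Truncated product of two power series over F_2 (coefficients are 0/1)."""
    out = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj and i + j < N:
                    out[i + j] ^= 1
    return out

def add(a, b):
    """Addition over F_2 is coefficient-wise xor."""
    return [ai ^ bi for ai, bi in zip(a, b)]

# Example with zero constant term, c = t + t^3, so d = 0 works in the residue field.
c = [0] * N
c[1], c[3] = 1, 1

x = [0] * N
for _ in range(N):            # Hensel iteration x <- x^2 + c; t-adic accuracy at least doubles per step
    x = add(mul(x, x), c)

print(add(add(mul(x, x), x), c))   # x^2 + x + c: all zeros to this precision
print(x)                           # coefficients of the solution d + sum_n c^{2^n}
```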
-
Here is a paper that might help
http://www.raco.cat/index.php/PublicacionsMatematiques/article/viewFile/37927/40412
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 41, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9125272631645203, "perplexity_flag": "head"}
|
http://www.reference.com/browse/Constraint+store
|
Definitions
# Constraint satisfaction
In artificial intelligence and operations research, constraint satisfaction is the process of finding a solution to a set of constraints that impose conditions that the variables must satisfy. A solution is therefore an assignment of values to the variables that satisfies all constraints.
The techniques used in constraint satisfaction depend on the kind of constraints being considered. Often used are constraints on a finite domain, to the point that constraint satisfaction problems are typically identified with problems based on constraints on a finite domain. Such problems are usually solved via search, in particular a form of backtracking or local search. Constraint propagation is another method used on such problems; most forms of it are incomplete in general, that is, they may solve the problem or prove it unsatisfiable, but not always. Constraint propagation methods are also used in conjunction with search to make a given problem simpler to solve. Other kinds of constraints that are considered are constraints on real or rational numbers; solving problems on these constraints is done via variable elimination or the simplex algorithm.
Constraint satisfaction originated in the field of artificial intelligence in the 1970s. During the 1980s and 1990s, embeddings of constraints into programming languages were developed. Languages often used for constraint programming are Prolog and C++.
## Constraint satisfaction problem
As originally defined in artificial intelligence, constraints enumerate the possible values a set of variables may take. Informally, a finite domain is a finite set of arbitrary elements. A constraint satisfaction problem on such a domain contains a set of variables whose values can only be taken from the domain, and a set of constraints, each constraint specifying the allowed values for a group of variables. A solution to this problem is an evaluation of the variables that satisfies all constraints. In other words, a solution is a way of assigning a value to each variable in such a way that all constraints are satisfied by these values.
Formally, a constraint satisfaction problem is defined as a triple $\langle X,D,C \rangle$, where $X$ is a set of variables, $D$ is a domain of values, and $C$ is a set of constraints. Every constraint is in turn a pair $\langle t,R \rangle$, where $t$ is a tuple of variables and $R$ is a set of tuples of values, all these tuples having the same number of elements; as a result $R$ is a relation. An evaluation of the variables is a function from variables to values, $v:X \rightarrow D$. Such an evaluation satisfies a constraint $\langle (x_1,\ldots,x_n),R \rangle$ if $(v(x_1),\ldots,v(x_n)) \in R$. A solution is an evaluation that satisfies all constraints.
In practice, constraints are often expressed in compact form, rather than enumerating all values of the variables that would satisfy the constraint. The constraint expressing that the values of some variables are all different is one of the most used such constraints.
Problems that can be expressed as constraint satisfaction problems are the Eight queens puzzle, the Sudoku solving problem, the Boolean satisfiability problem, scheduling problems and various problems on graphs such as the graph coloring problem.
While usually not included in the above definition of a constraint satisfaction problem, arithmetic equations and inequalities bound the values of the variables they contain and can therefore be considered a form of constraints. Their domain is the set of numbers (either integer, rational, or real), which is infinite: therefore, the relations of these constraints may be infinite as well; for example, $X=Y+1$ has an infinite number of pairs of satisfying values. Arithmetic equations and inequalities are often not considered within the definition of a "constraint satisfaction problem", which is limited to finite domains. They are however used often in constraint programming.
## Solving
Constraint satisfaction problems (on finite domains) are typically solved using a form of search. The most used techniques are variants of backtracking, constraint propagation, and local search.
Backtracking is a recursive algorithm. It maintains a partial assignment of the variables. Initially, all variables are unassigned. At each step, a variable is chosen, and all possible values are assigned to it in turn. For each value, the consistency of the partial assignment with the constraints is checked; in case of consistency, a recursive call is performed. When all values have been tried, the algorithm backtracks. In this basic backtracking algorithm, consistency is defined as the satisfaction of all constraints whose variables are all assigned. Several variants of backtracking exist. Backmarking improves the efficiency of checking consistency. Backjumping allows saving part of the search by backtracking "more than one variable" in some cases. Constraint learning infers and saves new constraints that can be later used to avoid part of the search. Look-ahead is also often used in backtracking to attempt to foresee the effects of choosing a variable or a value, thus sometimes determining in advance when a subproblem is satisfiable or unsatisfiable.
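To make the description above concrete, here is a minimal backtracking solver in Python (an illustrative sketch, not tied to any toolkit mentioned in this article; the variable ordering and the example constraint network are arbitrary choices):

```python
from typing import Callable, Dict, List, Optional, Sequence, Tuple

Assignment = Dict[str, int]
Constraint = Tuple[Sequence[str], Callable[..., bool]]   # (scope, predicate)

def consistent(assignment: Assignment, constraints: List[Constraint]) -> bool:
    """Check every constraint whose variables are all assigned."""
    for scope, pred in constraints:
        if all(v in assignment for v in scope):
            if not pred(*(assignment[v] for v in scope)):
                return False
    return True

def backtrack(variables: List[str],
              domains: Dict[str, List[int]],
              constraints: List[Constraint],
              assignment: Optional[Assignment] = None) -> Optional[Assignment]:
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)   # pick an unassigned variable
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment, constraints):
            result = backtrack(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]          # undo the choice and try the next value (backtrack)
    return None

# Tiny example: 3-colour a triangle A-B-C plus a pendant vertex D attached to C.
variables = ['A', 'B', 'C', 'D']
domains = {v: [0, 1, 2] for v in variables}
neq = lambda x, y: x != y
constraints = [(('A', 'B'), neq), (('B', 'C'), neq), (('A', 'C'), neq), (('C', 'D'), neq)]
print(backtrack(variables, domains, constraints))    # e.g. {'A': 0, 'B': 1, 'C': 2, 'D': 0}
```

Real solvers differ mainly in the variable- and value-ordering heuristics and in the look-ahead they perform at each step.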
Constraint propagation techniques are methods used to modify a constraint satisfaction problem. More precisely, they are methods that enforce a form of local consistency, which are conditions related to the consistency of a group of variables and/or constraints. Constraint propagation has various uses. First, it turns a problem into one that is equivalent but is usually simpler to solve. Second, it may prove satisfiability or unsatisfiability of a problem. This is not guaranteed to happen in general; however, it always happens for some forms of constraint propagation and/or for certain kinds of problems. The best known and most used forms of local consistency are arc consistency, hyper-arc consistency, and path consistency. The most popular constraint propagation method is the AC-3 algorithm, which enforces arc consistency.
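A compact sketch of AC-3 itself (again illustrative only), restricted to binary constraints given as predicates on ordered pairs of variables:

```python
from collections import deque

def revise(domains, constraints, x, y):
    """Remove values of x with no support in y; return True if anything was removed."""
    pred = constraints[(x, y)]
    removed = False
    for vx in list(domains[x]):
        if not any(pred(vx, vy) for vy in domains[y]):
            domains[x].remove(vx)
            removed = True
    return removed

def ac3(domains, constraints):
    """Enforce arc consistency; return False if some domain is wiped out."""
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        if revise(domains, constraints, x, y):
            if not domains[x]:
                return False
            # Re-examine every arc pointing into x (except the one from y).
            queue.extend((z, x) for (z, w) in constraints if w == x and z != y)
    return True

# Example: X < Y and Y < Z over {1,2,3} prunes every domain down to a singleton.
domains = {'X': [1, 2, 3], 'Y': [1, 2, 3], 'Z': [1, 2, 3]}
lt, gt = (lambda a, b: a < b), (lambda a, b: a > b)
constraints = {('X', 'Y'): lt, ('Y', 'X'): gt,
               ('Y', 'Z'): lt, ('Z', 'Y'): gt}
print(ac3(domains, constraints), domains)   # -> True {'X': [1], 'Y': [2], 'Z': [3]}
```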
Local search methods are incomplete satisfiability algorithms. They may find a solution of a problem, but they may fail even if the problem is satisfiable. They work by iteratively improving a complete assignment over the variables. At each step, a small number of variables are changed in value, with the overall aim of increasing the number of constraints satisfied by this assignment. In practice, local search appears to work well when these changes are also affected by random choices. Integrations of search with local search have been developed, leading to hybrid algorithms.
Variable elimination and the simplex algorithm are used for solving linear and polynomial equations and inequalities.
## Complexity
Solving a constraint satisfaction problem on a finite domain is an NP-complete problem. Research has shown a number of tractable subcases, some limiting the allowed constraint relations, some requiring the scopes of constraints to form a tree, possibly in a reformulated version of the problem. Research has also established relationships between the constraint satisfaction problem and problems in other areas such as finite model theory.
## Constraint programming
Constraint programming is the use of constraints as a programming language to encode and solve problems. This is often done by embedding constraints into a programming language, which is called the host language. Constraint programming originated from a formalization of equalities of terms in Prolog II, leading to a general framework for embedding constraints into a logic programming language. The most common host languages are Prolog, C++, and Java, but other languages have been used as well.
### Constraint logic programming
A constraint logic program is a logic program that contains constraints in the bodies of clauses. As an example, the clause `A(X):-X>0,B(X)` is a clause containing the constraint `X>0` in the body. Constraints can also be present in the goal. The constraints in the goal and in the clauses used to prove the goal are accumulated into a set called constraint store. This set contains the constraints the interpreter has assumed satisfiable in order to proceed in the evaluation. As a result, if this set is detected unsatisfiable, the interpreter backtracks. Equations of terms, as used in logic programming, are considered a particular form of constraints which can be simplified using unification. As a result, the constraint store can be considered an extension of the concept of substitution that is used in regular logic programming. The most common kinds of constraints used in constraint logic programming are constraints over integers/rational/real numbers and constraints over finite domains.
Concurrent constraint logic programming languages have also been developed. They significantly differ from non-concurrent constraint logic programming in that they are aimed at programming concurrent processes that may not terminate. Constraint handling rules can be seen as a form of concurrent constraint logic programming, but are also sometimes used within a non-concurrent constraint logic programming language. They allow for rewriting constraints or inferring new ones based on the truth of conditions.
### Constraint satisfaction toolkits
Constraint satisfaction toolkits are software libraries for imperative programming languages that are used to encode and solve a constraint satisfaction problem. The most common host language for these toolkits is C++, but implementations exist for Java and other programming languages. ZDC is an open source freeware developed in the Computer-Aided Constraint Satisfaction Project for modelling and solving constraint satisfaction problems.
### Other constraint programming languages
Constraint toolkits are a way for embedding constraints into an imperative programming language. However, they are only used as external libraries for encoding and solving problems. An approach in which constraints are integrated into an imperative programming language is taken in the Kaleidoscope programming language.
Constraints have also been embedded into functional programming languages.
## References
• Rossi, Francesca; Peter van Beek, Toby Walsh (eds.) (2006). Handbook of Constraint Programming. Elsevier. ISBN 978-0-444-52726-4
• Dechter, Rina (2003). Constraint processing. Morgan Kaufmann. ISBN 1-55860-890-7
• Apt, Krzysztof (2003). Principles of constraint programming. Cambridge University Press. ISBN 0-521-82583-0
• Frühwirth, Thom; Slim Abdennadher (2003). Essentials of constraint programming. Springer. ISBN 3-540-67623-6
• Marriot, Kim; Peter J. Stuckey (1998). Programming with constraints: An introduction. MIT Press. ISBN 0-262-13341-5
• Jaffar, Joxan; Michael J. Maher (1994). "Constraint logic programming: a survey". Journal of Logic Programming 19/20: 503–581.
• Freuder, Eugene; Alan Mackworth (ed.) (1994). Constraint-based reasoning. MIT Press. ISBN
• Tsang, Edward (1993). Foundations of Constraint Satisfaction. Academic Press. ISBN 0-12-701610-4
• Van Hentenryck, Pascal (1989). Constraint Satisfaction in Logic Programming. MIT Press. ISBN 0-262-08181-4
|
http://www.ams.org/bookstore?fn=20&arg1=survseries&ikey=SURV-79
|
The Backward Shift on the Hardy Space
Joseph A. Cima, University of North Carolina, Chapel Hill, NC, and William T. Ross, University of Richmond, VA
Mathematical Surveys and Monographs
2000; 199 pp; hardcover
Volume: 79
ISBN-10: 0-8218-2083-4
ISBN-13: 978-0-8218-2083-4
List Price: US\$57
Member Price: US\$45.60
Order Code: SURV/79
Shift operators on Hilbert spaces of analytic functions play an important role in the study of bounded linear operators on Hilbert spaces since they often serve as models for various classes of linear operators. For example, "parts" of direct sums of the backward shift operator on the classical Hardy space $$H^2$$ model certain types of contraction operators and potentially have connections to understanding the invariant subspaces of a general linear operator.
This book is a thorough treatment of the characterization of the backward shift invariant subspaces of the well-known Hardy spaces $$H^{p}$$. The characterization of the backward shift invariant subspaces of $$H^{p}$$ for $$1 < p < \infty$$ was done in a 1970 paper of R. Douglas, H. S. Shapiro, and A. Shields, and the case $$0 < p \le 1$$ was done in a 1979 paper of A. B. Aleksandrov which is not well known in the West. This material is pulled together in this single volume and includes all the necessary background material needed to understand (especially for the $$0 < p < 1$$ case) the proofs of these results.
Several proofs of the Douglas-Shapiro-Shields result are provided so readers can get acquainted with different operator theory and function theory techniques: applications of these proofs are also provided for understanding the backward shift operator on various other spaces of analytic functions. The results are thoroughly examined. Other features of the volume include a description of applications to the spectral properties of the backward shift operator and a treatment of some general real-variable techniques that are not taught in standard graduate seminars. The book includes references to works by Duren, Garnett, and Stein for proofs and a bibliography for further exploration in the areas of operator theory and functional analysis.
Readership
Advanced graduate students with a background in basic functional analysis, complex analysis and the basics of the theory of Hardy spaces; professional mathematicians interested in operator theory and functional analysis.
Reviews
"The book has been carefully written and contains a wealth of information ... It will probably appeal most to those with an interest in the interplay between operator theory and modern function theory."
-- Bulletin of the LMS
• The backward shift on $$H^p$$ for $$p \in [1,\infty)$$
• The backward shift on $$H^p$$ for $$p \in (0,1)$$
|
http://mathoverflow.net/questions/87567?sort=votes
|
## A special ribbon graph presents a cylinder.
I am reading "Quantum Invariants of Knots and 3-Manifolds" by Turaev. I have a dificulty to understand the proof of Lemma 2.6 on page 172.
The lemma says that a special ribbon graph drawn on page 167 presents a cylinder. I am sorry that I don't know how to show that ribbon graph here.
I especially don't understand the statement starting "One may check that the cylindrical structures are compatible on the boundary..."
Could you show me the detail and/or an intuitive(geometric) proof?
-
## 1 Answer
Turaev's book assumes familiarity with basic 3-dimensional geometric topology and especially Dehn surgery presentations of 3-manifolds. If you want to understand all the details in Tureav's book, then I strongly recommend first reading Rolfsen's "Knots and Links", or some similar text.
It's hard to explain without pictures, but briefly: Start with $S^2\times I$. Remove a regular neighborhood of the arcs (not the loops) of the tangle in Figure 2.4. Do Dehn surgery along the (framed) loops. The boundary of the resulting 3-manifold is the union of a "vertical" annulus for each straight arc of the tangle and "upper" and "lower" surface. The upper surface contains `$S^2\times \{1\}$` (minus some disks) and an annulus for each curved arc of the tangle. Call this surface $Y$. Then the 3-manifold, after Dehn surgery, is homeomorphic to $Y\times I$. Turaev's Figure 2.5 shows a 3-punctured disk which, after Dehn surgery, becomes an instance of curve$\times I$ inside $Y\times I$.
If the above explanation makes no sense to you then you should read Rolfsen.
-
Thank you for the answer. I took a look at Rolfsen and I think I understand the result of the surgery is the cylinder $\Sigma \times I$. But I don't see the parametrization of the upper boundary is the identity. By the construction, the top parametrization is the compositon of mir and $f^+: -R_{t^+} \rightarrow \Sigma^+$ (by the book's notation on p.159 & 168). How does the surgery affect the parametrization? I guess we need one more mir to have the identity parametrization. – knot Feb 10 2012 at 5:48
I don't have a detailed answer for your question about parameterizations, but in general you want to identify annuli of the form (curve)$\times I$ in $\Sigma\times I$ (such as the one in Turaev's Figure 2.5) and then use these annuli to construct an identification between pants decompositions of the two boundary components of $\Sigma\times I$. – Kevin Walker Feb 10 2012 at 17:19
I reduced this question to the new question mathoverflow.net/questions/88334. Could you take a look at it and if you have time to write answer, please do so. Thanks. – knot Feb 13 2012 at 8:11
|
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aop/1176996559
|
### The Optimal Reward Operator in Special Classes of Dynamic Programming Problems
David A. Freedman
Source: Ann. Probab. Volume 2, Number 5 (1974), 942-949.
#### Abstract
Consider a dynamic programming problem with separable metric state space $S$, constraint set $A$, and reward function $r(x, P, y)$ for $(x, P)\in A$ and $y\in S$. Let $Tf$ be the optimal reward in one move, for the reward function $r(x, P, y) + f(y)$. Three results are proved. First, suppose $S$ is compact, $A$ closed, and $r$ upper semi-continuous; then $T^n0$ is upper semi-continuous, and there is an optimal Borel strategy for the $n$-move game. Second, suppose $S$ is compact, $A$ is an $F_\sigma$, and $\{r > a\}$ is an $F_\sigma$ for all $a$; then $\{T^n0 > a\}$ is an $F_\sigma$ for all $a$, and there is an $\varepsilon$-optimal Borel strategy for the $n$-move game. Third, suppose $A$ is open and $r$ is lower semi-continuous; then $T^n0$ is lower semi-continuous, and there is an $\varepsilon$-optimal Borel measurable strategy for the $n$-move game.
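As a purely illustrative finite-state analogue of the one-move operator $T$ (the states, admissible distributions and rewards below are invented toy data, not taken from the paper), one application of the optimal reward operator can be computed as follows.

```python
# Toy finite-state analogue of the optimal reward operator T from the abstract:
# (Tf)(x) = max over admissible (x, P) of E_P[ r(x, P, y) + f(y) ].
# States, admissible distributions and rewards are invented for illustration.

states = [0, 1]
# A[x] lists the admissible probability distributions P (as dicts y -> P(y)) at state x.
A = {0: [{0: 1.0}, {0: 0.5, 1: 0.5}],
     1: [{1: 1.0}]}

def r(x, P, y):
    return 1.0 if y != x else 0.0        # toy reward: moving to a new state pays 1

def T(f):
    """One application of the optimal reward operator to f: states -> reals."""
    return {x: max(sum(P[y] * (r(x, P, y) + f[y]) for y in P) for P in A[x])
            for x in states}

f = {x: 0.0 for x in states}
for n in range(3):                       # iterate: T^1 0, T^2 0, T^3 0
    f = T(f)
    print(n + 1, f)
```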
Primary Subjects: 49C99
Secondary Subjects: 60K99, 90C99, 28A05
Full-text: Open access
Permanent link to this document: http://projecteuclid.org/euclid.aop/1176996559
Digital Object Identifier: doi:10.1214/aop/1176996559
Mathematical Reviews number (MathSciNet): MR359819
Zentralblatt MATH identifier: 0318.49022
|
http://physics.stackexchange.com/questions/45188/what-is-curie-weiss-temperature?answertab=active
|
# What is Curie-Weiss temperature?
What is Curie-Weiss temperature? What is the difference between Curie-Weiss temperature and Curie temperature?
-
## 2 Answers
The Curie temperature or Curie point is the temperature at which a ferromagnetic or a ferrimagnetic material becomes paramagnetic when heated. The effect is reversible.
On the other hand, the Curie-Weiss temperature is the temperature at which a plot of the reciprocal molar magnetic susceptibility against the absolute temperature T intersects the T-axis. The Curie-Weiss temperature can adopt positive as well as negative values.
I hope now you will get the difference.
-
Naively, both temperatures are equal and they're the constant temperature $T_c$ entering the Curie-Weiss Law: $$\chi = \frac{C}{T-T_c}.$$ However, the behavior is often more complicated and the formula above doesn't describe the susceptibility $\chi$ well for all temperatures. When it's so, the Curie temperature $T_c$ is the temperature at which the susceptibility actually blows up, so $\chi=C/(T-T_c)$ holds for $T\sim T_c$ while the Curie-Weiss temperature is the temperature for which the law $\chi=C/(T-T_0)$ holds for $T\gg T_0$, i.e. one reconstructed from the "shape of the hyperbola far away".
The temperatures are close $T_0\sim T_c$ for materials for which the transition is first-order; the temperatures are very different if the transition is second-order.
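A toy numerical illustration of the distinction (all numbers invented): generate susceptibility data whose divergence sits at $T_c$ but whose high-temperature tail is governed by a different $T_0$, then read the Curie-Weiss temperature off a straight-line fit of $1/\chi$ against $T$.

```python
# Toy illustration (invented numbers): recover the Curie-Weiss temperature T_0
# from the high-temperature tail of 1/chi, and compare with the divergence
# temperature T_c built into the synthetic data.
import numpy as np

C, T_c, T_0 = 1.0, 100.0, 120.0
T_high = np.linspace(300.0, 600.0, 50)          # far above the transition
chi_high = C / (T_high - T_0)                   # tail governed by T_0
T_near = np.linspace(101.0, 110.0, 50)          # just above the transition
chi_near = C / (T_near - T_c)                   # divergence governed by T_c

# Curie-Weiss temperature: T-axis intercept of a linear fit of 1/chi vs. T.
slope, intercept = np.polyfit(T_high, 1.0 / chi_high, 1)
print("fitted T_0 ~", -intercept / slope)                      # ~ 120
print("chi blows up near T ~", T_near[np.argmax(chi_near)])    # ~ 101, i.e. near T_c
```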
-
|
http://mathoverflow.net/revisions/12319/list
|
Tung, S. H. Bernstein's theorem for the polydisc. Proc. Amer. Math. Soc. 85 (1982), no. 1, 73--76. MR0647901 (83h:32017)
(from MR review): Let $P(z)$ be a polynomial of degree $N$ in $z=(z_1,\cdots,z_m)$; suppose that $|P(z)|\leq 1$ for $z\in U^m$; then $\|DP(z)\|\leq N$ for $z\in U^m$ where $\|DP(z)\|^2=\sum_{i=1}^m|\partial P/\partial z_i|^2$.
Here $U^m$ is the polydisc. The same author proved a Bernstein-type inequality for the ball: Tung, S. H. Extension of Bernšteĭn's theorem. Proc. Amer. Math. Soc. 83 (1981), no. 1, 103--106. MR0619992 (82k:32013)
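A quick numerical sanity check of the quoted polydisc inequality for one concrete polynomial (an illustration with sampled points, not a proof):

```python
# Check ||DP|| <= N on sampled points of the closed unit polydisc for
# P(z1, z2) = z1 * z2**2, which has degree N = 3 and |P| <= 1 there.
import numpy as np

rng = np.random.default_rng(0)
N = 3
z = rng.uniform(0, 1, (2, 100_000)) * np.exp(2j * np.pi * rng.uniform(0, 1, (2, 100_000)))
z1, z2 = z
# Gradient of P is (z2**2, 2*z1*z2), so ||DP||^2 = |z2|^4 + 4|z1*z2|^2.
grad_norm = np.sqrt(np.abs(z2**2) ** 2 + np.abs(2 * z1 * z2) ** 2)
print(grad_norm.max() <= N)   # True on all samples
```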
|
http://unapologetic.wordpress.com/2011/07/19/the-exterior-derivative-is-nilpotent/?like=1&_wpnonce=67418a1937
|
# The Unapologetic Mathematician
## The Exterior Derivative is Nilpotent
One extremely important property of the exterior derivative is that $d(d\omega)=0$ for all exterior forms $\omega$. This is only slightly less messy to prove than the fact that $d$ is a derivation. But since it’s so extremely important, we soldier onward! If $\omega$ is a $p$-form we calculate
$\displaystyle\begin{aligned}\left[d(d\omega)\right](X_0,\dots,X_{p+1})=&\sum\limits_{i=0}^{p+1}(-1)^iX_i\left(d\omega(X_0,\dots,\hat{X}_i,\dots,X_{p+1})\right)\\&+\sum\limits_{0\leq i<j\leq p+1}(-1)^{i+j}d\omega\left([X_i,X_j],X_0,\dots,\hat{X}_i,\dots,\hat{X}_j,\dots,X_{p+1}\right)\end{aligned}$
We now expand out the $d\omega$ on the first line. First we extract an $X_j$ from the list of vector fields. If $j<i$, then we get a term like
$\displaystyle(-1)^{i+j}X_iX_j\omega(X_0,\dots,\hat{X}_j,\dots,\hat{X}_i,\dots,X_{p+1})$
while if $j>i$ then we get a term like
$\displaystyle(-1)^{i+j-1}X_iX_j\omega(X_0,\dots,\hat{X}_i,\dots,\hat{X}_j,\dots,X_{p+1})=(-1)^{i+j-1}X_jX_i\omega(X_0,\dots,\hat{X}_j,\dots,\hat{X}_i,\dots,X_{p+1})$
If we put these together, we get the sum over all $i<j$ of
$\displaystyle(-1)^{i+j}[X_j,X_i]\omega(X_0,\dots,\hat{X}_i,\dots,\hat{X}_j,\dots,X_{p+1})$
We continue expanding the first line by picking out two vector fields. There are three ways of doing this, which give us terms like
$\displaystyle\begin{aligned}(-1)^{i+j+k}&X_i\omega([X_j,X_k],X_0,\dots,\hat{X}_j,\dots,\hat{X}_k,\dots,\hat{X}_i,\dots,X_{p+1})\\(-1)^{i+j+k-1}&X_i\omega([X_j,X_k],X_0,\dots,\hat{X}_j,\dots,\hat{X}_i,\dots,\hat{X}_k,\dots,X_{p+1})\\(-1)^{i+j+k-2}&X_i\omega([X_j,X_k],X_0,\dots,\hat{X}_i,\dots,\hat{X}_j,\dots,\hat{X}_k,\dots,X_{p+1})\end{aligned}$
Next we can start expanding the second line. First we pull out the first vector field to get
$\displaystyle(-1)^{i+j}[X_i,X_j]\omega(X_0,\dots,\hat{X}_i,\dots,\hat{X}_j,\dots,X_{p+1})$
which cancels out against the first group of terms from the expansion of the first line! Progress!
We continue by pulling out an extra vector field from the second line, getting three collections of terms:
$\displaystyle\begin{aligned}(-1)^{i+j+k+1}&X_k\omega([X_i,X_j],X_0,\dots,\hat{X}_k,\dots,\hat{X}_i,\dots,\hat{X}_j,\dots,X_{p+1})\\(-1)^{i+j+k}&X_k\omega([X_i,X_j],X_0,\dots,\hat{X}_i,\dots,\hat{X}_k,\dots,\hat{X}_j,\dots,X_{p+1})\\(-1)^{i+j+k-1}&X_k\omega([X_i,X_j],X_0,\dots,\hat{X}_i,\dots,\hat{X}_j,\dots,\hat{X}_k,\dots,X_{p+1})\end{aligned}$
It’s less obvious, but each of these groups of terms cancels out one of the groups from the second half of the expansion of the first line! Our sum has reached zero!
Unfortunately, we’re not quite done. We have to finish expanding the second line, and this is where things will get really ugly. We have to pull two more vector fields out; first we’ll handle the easy case where we avoid the $[X_i,X_j]$ term, and we get a whopping six cases:
$\displaystyle\begin{aligned}(-1)^{i+j+k+l+2}&\omega([X_k,X_l],[X_i,X_j],X_0,\dots,\hat{X}_k,\dots,\hat{X}_l,\dots,\hat{X}_i,\dots,\hat{X}_j,\dots,X_{p+1})\\(-1)^{i+j+k+l+1}&\omega([X_k,X_l],[X_i,X_j],X_0,\dots,\hat{X}_k,\dots,\hat{X}_i\dots,\hat{X}_l,\dots,\hat{X}_j,\dots,X_{p+1})\\(-1)^{i+j+k+l}&\omega([X_k,X_l],[X_i,X_j],X_0,\dots,\hat{X}_k,\dots,\hat{X}_i,\dots,\hat{X}_j,\dots,\hat{X}_l,\dots,X_{p+1})\\(-1)^{i+j+k+l}&\omega([X_k,X_l],[X_i,X_j],X_0,\dots,\hat{X}_i,\dots,\hat{X}_k,\dots,\hat{X}_l,\dots,\hat{X}_j,\dots,X_{p+1})\\(-1)^{i+j+k+l-1}&\omega([X_k,X_l],[X_i,X_j],X_0,\dots,\hat{X}_i,\dots,\hat{X}_k,\dots,\hat{X}_j,\dots,\hat{X}_l,\dots,X_{p+1})\\(-1)^{i+j+k+l-2}&\omega([X_k,X_l],[X_i,X_j],X_0,\dots,\hat{X}_i,\dots,\hat{X}_j,\dots,\hat{X}_k,\dots,\hat{X}_l,\dots,X_{p+1})\end{aligned}$
In each group, we can swap the $[X_k,X_l]$ term with the $[X_i,X_j]$ term to get a different group. These two groups always have the same leading sign, but the antisymmetry of $\omega$ means that this swap brings another negative sign with it, and thus all these terms cancel out with each other!
Finally, we have the dreaded case where we pull the $[X_i,X_j]$ term and one other vector field. Here we mercifully have only three cases:
$\displaystyle\begin{aligned}(-1)^{i+j+k+1}&\omega([[X_i,X_j],X_k],X_0,\dots,\hat{X}_k,\dots,\hat{X}_i,\dots,\hat{X}_j,\dots,X_{p+1})\\(-1)^{i+j+k}&\omega([[X_i,X_j],X_k],X_0,\dots,\hat{X}_i,\dots,\hat{X}_k,\dots,\hat{X}_j,\dots,X_{p+1})\\(-1)^{i+j+k-1}&\omega([[X_i,X_j],X_k],X_0,\dots,\hat{X}_i,\dots,\hat{X}_j,\dots,\hat{X}_k,\dots,X_{p+1})\end{aligned}$
Here we can choose to re-index the three vector fields so we always have $0\leq i<j<k\leq p+1$. Adding all three terms up we get
$\displaystyle(-1)^{i+j+k}\omega(-[[X_i,X_j],X_k]+[[X_i,X_k],X_j]-[[X_j,X_k],X_i],X_0,\dots,\hat{X}_i,\dots,\hat{X}_j,\dots,\hat{X}_k,\dots,X_{p+1})$
Taking the linear combination of double brackets out to examine it on its own we find
$\displaystyle-[[X_i,X_j],X_k]+[[X_i,X_k],X_j]-[[X_j,X_k],X_i]=[X_k,[X_i,X_j]]-\left([[X_k,X_i],X_j]+[X_i,[X_k,X_j]]\right)$
Which is zero because of the Jacobi identity!
And so it all comes together: some of the terms from the second row work to cancel out the terms from the first row; the antisymmetry of the exterior form $\omega$ takes care some remaining terms from the second row; the Jacobi identity mops up the rest.
Now I say again that the reason we’re doing all this messy juggling is that nowhere in here have we had to pick any local coordinates on our manifold. The identity $d(d\omega)=0$ is purely geometric, even though we will see later that it actually boils down to something that looks a lot simpler — but more analytic — when we write it out in local coordinates.
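For comparison, the simplest instance of that analytic statement is the case of a $0$-form, that is a smooth function $f$, where in local coordinates $x^1,\dots,x^n$ the identity reduces to the symmetry of mixed partial derivatives:

$\displaystyle d(df)=\sum\limits_{i,j}\frac{\partial^2 f}{\partial x^j\partial x^i}dx^j\wedge dx^i=\sum\limits_{i<j}\left(\frac{\partial^2 f}{\partial x^j\partial x^i}-\frac{\partial^2 f}{\partial x^i\partial x^j}\right)dx^j\wedge dx^i=0$

since the coefficients are symmetric in $i$ and $j$ while the wedge products are antisymmetric.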
Posted by John Armstrong | Differential Topology, Topology
|
http://mathoverflow.net/questions/60441/the-multiplicative-order-of-2-modulo-primes/114114
|
## The multiplicative order of 2 modulo primes
Artin's Conjecture says that any positive integer, which is not a square, is a primitive root modulo infinitely many primes. Christopher Hooley gave in
• Hooley, Christopher (1967). "On Artin's conjecture." J. Reine Angew. Math. 225, 209-220.
a proof of this conjecture assuming the Generalized Riemann Hypothesis.
Roger Heath-Brown showed (not using the GRH) in
• Heath-Brown, D.R. (1986). "Artin's conjecture for primitive roots." Quart. J. Math. Oxford Ser. 37(1), 27-38.
that there are at most two primes for which Artin's Conjecture fails. Nevertheless, it seems to be unknown whether any single specific prime number satisfies the conjecture. In particular, it is unknown if 2 is a primitive root modulo infinitely many primes.
Question: What is known about the multiplicative order of 2 modulo primes?
More specifically, can one prove interesting statements of the form: For infinitely many primes $p$, the multiplicative order of 2 is larger than some expression in terms of $p$ (which goes to infinity as $p \to \infty$)?
I have to say, that I am not an expert on these kind of questions at all. Given the enormous amount of literature on these questions, I tag this as a reference-request.
-
The first link--"Artin's Conjecture"--seems to be broken! – drbobmeister Apr 3 2011 at 15:56
Thanks. I fixed the link. – Andreas Thom Apr 3 2011 at 16:12
Simple but amusing application: that multiplicative order is the minimal number of perfect shuffles required to restore a deck of $p \pm 1$ cards (the $\pm$ depending on which of the two ways of perfectly shuffling we're talking about). – Greg Marks Apr 4 2011 at 1:41
## 4 Answers
The answer is "yes" - the order mod p of 2 is almost always as large as the square root of p (actually you get epsilon less than this in the exponent). If you take r multiplicatively independent numbers and ask for the group they generate mod p, the exponent is r/(r + 1). This is a paper of mine, and then in a paper of the Murtys, and I think is referenced in some form by Heath-Brown (it is the less deep part of his technique - to get something serious out of it you need something like Chen's method for Goldbach).
-
It would be useful if you gave the journal data for the papers you mentioned. – GH Apr 3 2011 at 16:21
Thanks a lot, this answers the question very nicely. I add the link blms.oxfordjournals.org/content/14/2/149 to your paper from 1982. – Andreas Thom Apr 3 2011 at 16:22
Here is a link with no access restrictions: citeseerx.ist.psu.edu/viewdoc/… – Andreas Thom Apr 3 2011 at 16:27
@Andreas: Thanks - as usual I was doing something else in another tab. – Charles Matthews Apr 3 2011 at 16:39
Mike Rosen, Ram Murty, and I wrote a paper on a series that can be used to estimate the average (in some sense) order of t modulo p, and more generally for finitely generated subgroups as in Charles' paper. We also covered finitely generated subgroups of abelian varieties, where the exponent turns out instead to be r/(r+2), due to the quadratic nature of the height. Here's the reference: Variations on a theme of Romanoff, Inter. J. Math. 7 1996, 373-391. – Joe Silverman Apr 3 2011 at 22:55
Just an easy low tech answer: the multiplicative order of 2 modulo $p$ is at least $\log_2 p$, hence tends to infinity with $p$. Indeed, if $r$ is the order, then $2^r-1$ is divisible by $p$, hence $2^r\geq p+1$.
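For what it's worth, a small numerical check of this bound (the helper functions below are written from scratch just for this illustration):

```python
# Small numerical check of the bound ord_p(2) >= log2(p) for odd primes p.
from math import log2

def order_of_2(p):
    """Multiplicative order of 2 modulo an odd prime p (naive loop)."""
    k, x = 1, 2 % p
    while x != 1:
        x = (2 * x) % p
        k += 1
    return k

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for p in [3, 5, 7, 11, 13, 89, 127, 8191]:
    assert is_prime(p)
    print(p, order_of_2(p), ">=", round(log2(p), 2))

# For Mersenne primes such as 127 = 2**7 - 1 and 8191 = 2**13 - 1 the order is
# exactly 7 and 13, so the log2(p) bound is essentially sharp.
```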
-
Thanks a lot. I was aware of this lower bound. – Andreas Thom Apr 3 2011 at 16:23
A small correction regarding Artin's conjecture is in order: it doesn't just exclude squares. You also need to exclude $-1$.
-
Thanks for the correction. – Andreas Thom Apr 4 2011 at 3:58
What about cubes? – Dror Speiser Apr 4 2011 at 15:07
I guess, cubes are not a problem, since 3 does not divide p−1 for too many primes, whereas 2 does. – Andreas Thom Apr 5 2011 at 2:35
I am not sure if this was in Charles' answer. I couldn't really follow what was being said in the link. If it is I apologise. Here is what I found in the most exciting 2 weeks of my undergraduate so far. Hence why I am eager to share :-)
So we are looking for the minimal $x$ such that $2^x \equiv 1 \mod p, \quad p \quad \text{prime}$. I only managed to get a few cases depending on the nature of $p$.
$$(1) \quad p = 2^k-1 \Rightarrow x = k$$
$$(2) \quad p = 2^k+1 \Rightarrow x = 2k$$
$$(3) \quad p = 2q+1 \quad \text{and $\quad q \equiv 3 \mod 4,\quad q$ prime } \Rightarrow x = q$$
$$(4) \quad p = 4k+3, \quad p \not\equiv \pm 1 \mod 8 \Rightarrow x = p-1$$
$(1)$ is trivial.
$(2)$ follows from the fact that $2^k \equiv -1 \mod p$ then just squaring.
$(3)$ is basically just the statement of a theorem Mersenne Primes Theorem number 7.
$(4)$ follows from the fact that $c^2 \equiv 2 \mod p$ is solvable iff $p \equiv \pm 1 \mod 8$. Clearly $2^{p-1} \equiv 1$ so the question is what is $2^{\frac{p-1}{2}}$ congruent to. If $p = 4k+3$ then $\frac{p-1}{2} = 2k+1$. So if we assume that:
$$2^{\frac{p-1}{2}} \equiv 1 \mod p$$ then for some $d$ $$2^{\frac{p-1}{2}+1}= (2^d)^2 \equiv 2 \mod p$$ Therefore $2^d$ is a solution to $c^2 \equiv 2 \mod p$ but by assumption $p \not\equiv \pm 1 \mod 8$ so we have a contradiction and thus $2^{\frac{p-1}{2}} \equiv -1 \mod p$ and so $x = p-1$
-
Sorry, but (4) is false : for example $p=43$ gives $x=14$. – François Brunault Nov 21 at 23:06
Proving that $2^{(p-1)/2} \not\equiv1\pmod p$ is not enough to show that the order of $2$ equals $p-1$; you would have to prove that $2^{(p-1)/q} \not\equiv 1\pmod p$ for every $q$ dividing $p-1$. That's the error in the argument of (4). – Greg Martin Nov 22 at 8:29
|
http://en.wikipedia.org/wiki/Coriolis_effect
|
# Coriolis effect
For the psychophysical perception effect, see Coriolis effect (perception).
In the inertial frame of reference (upper part of the picture), the black object moves in a straight line. However, the observer (red dot) who is standing in the rotating/non-inertial frame of reference (lower part of the picture) sees the object as following a curved path due to the Coriolis and centrifugal forces present in this frame.
In physics, the Coriolis effect is a deflection of moving objects when they are viewed in a rotating reference frame. In a reference frame with clockwise rotation, the deflection is to the left of the motion of the object; in one with counter-clockwise rotation, the deflection is to the right. Although recognized previously by others, the mathematical expression for the Coriolis force appeared in an 1835 paper by French scientist Gaspard-Gustave Coriolis, in connection with the theory of water wheels. Early in the 20th century, the term Coriolis force began to be used in connection with meteorology.
Newton's laws of motion govern the motion of an object in a (non-accelerating) inertial frame of reference. When Newton's laws are transformed to a uniformly rotating frame of reference, the Coriolis and centrifugal forces appear. Both forces are proportional to the mass of the object. The Coriolis force is proportional to the rotation rate and the centrifugal force is proportional to its square. The Coriolis force acts in a direction perpendicular to the rotation axis and to the velocity of the body in the rotating frame and is proportional to the object's speed in the rotating frame. The centrifugal force acts outwards in the radial direction and is proportional to the distance of the body from the axis of the rotating frame. These additional forces are termed inertial forces, fictitious forces or pseudo forces.[1] They allow the application of Newton's laws to a rotating system. They are correction factors that do not exist in a non-accelerating or inertial reference frame.
Perhaps the most commonly encountered rotating reference frame is the Earth. The Coriolis effect is caused by the rotation of the Earth and the inertia of the mass experiencing the effect. Because the Earth completes only one rotation per day, the Coriolis force is quite small, and its effects generally become noticeable only for motions occurring over large distances and long periods of time, such as large-scale movement of air in the atmosphere or water in the ocean. Such motions are constrained by the surface of the earth, so only the horizontal component of the Coriolis force is generally important. This force causes moving objects on the surface of the Earth to be deflected in a clockwise sense (with respect to the direction of travel) in the Northern Hemisphere and in a counter-clockwise sense in the Southern Hemisphere. Rather than flowing directly from areas of high pressure to low pressure, as they would in a non-rotating system, winds and currents tend to flow to the right of this direction north of the equator and to the left of this direction south of it. This effect is responsible for the rotation of large cyclones (see Coriolis effects in meteorology).
## History
Italian scientists Giovanni Battista Riccioli and his assistant Francesco Maria Grimaldi described the effect in connection with artillery in the 1651 Almagestum Novum, writing that rotation of the Earth should cause a cannon ball fired to the north to deflect to the east.[2] The effect was described in the tidal equations of Pierre-Simon Laplace in 1778.
Gaspard-Gustave Coriolis published a paper in 1835 on the energy yield of machines with rotating parts, such as waterwheels.[3] That paper considered the supplementary forces that are detected in a rotating frame of reference. Coriolis divided these supplementary forces into two categories. The second category contained a force that arises from the cross product of the angular velocity of a coordinate system and the projection of a particle's velocity into a plane perpendicular to the system's axis of rotation. Coriolis referred to this force as the "compound centrifugal force" due to its analogies with the centrifugal force already considered in category one.[4][5] The effect was known in the early 20th century as the "acceleration of Coriolis",[6] and by 1920 as "Coriolis force".[7]
In 1856, William Ferrel proposed the existence of a circulation cell in the mid-latitudes with air being deflected by the Coriolis force to create the prevailing westerly winds.[8]
Understanding the kinematics of how exactly the rotation of the Earth affects airflow was partial at first.[9] Late in the 19th century, the full extent of the large scale interaction of pressure gradient force and deflecting force that in the end causes air masses to move 'along' isobars was understood.[citation needed]
## Formula
See also: Fictitious force
In non-vector terms: at a given rate of rotation of the observer, the magnitude of the Coriolis acceleration of the object is proportional to the velocity of the object and also to the sine of the angle between the direction of movement of the object and the axis of rotation.
The vector formula for the magnitude and direction of the Coriolis acceleration [10] is
$\boldsymbol{ a}_C = -2 \, \boldsymbol{ \Omega \times v}$
where (here and below) $\boldsymbol{ a}_C$ is the acceleration of the particle in the rotating system, $\boldsymbol{v}\,$ is the velocity of the particle in the rotating system, and Ω is the angular velocity vector which has magnitude equal to the rotation rate ω and is directed along the axis of rotation of the rotating reference frame, and the × symbol represents the cross product operator.
The equation may be multiplied by the mass of the relevant object to produce the Coriolis force:
$\boldsymbol{ F}_C = -2 \, m \, \boldsymbol{\Omega \times v}$.
See fictitious force for a derivation.
The Coriolis effect is the behavior added by the Coriolis acceleration. The formula implies that the Coriolis acceleration is perpendicular both to the direction of the velocity of the moving mass and to the frame's rotation axis. So in particular:
• if the velocity is parallel to the rotation axis, the Coriolis acceleration is zero.
• if the velocity is straight inward to the axis, the acceleration is in the direction of local rotation.
• if the velocity is straight outward from the axis, the acceleration is against the direction of local rotation.
• if the velocity is in the direction of local rotation, the acceleration is outward from the axis.
• if the velocity is against the direction of local rotation, the acceleration is inward to the axis.
The vector cross product can be evaluated as the determinant of a matrix:
$\boldsymbol{\Omega \times v} = \begin{vmatrix} \boldsymbol{i}&\boldsymbol{j}&\boldsymbol{k} \\ \Omega_x & \Omega_y & \Omega_z \\ v_x & v_y & v_z \end{vmatrix}\ = \begin{pmatrix} \Omega_y v_z - \Omega_z v_y \\ \Omega_z v_x - \Omega_x v_z \\ \Omega_x v_y - \Omega_y v_x \end{pmatrix}\ ,$
where the vectors i, j, k are unit vectors in the x, y and z directions.
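A minimal numerical sketch of the formula (all numbers invented, with the rotation axis taken along z) that also confirms two of the cases listed above:

```python
# Minimal numerical sketch of a_C = -2 * (Omega x v); the rotating frame's
# axis is taken along z and all numbers are chosen for illustration.
import numpy as np

def coriolis_acceleration(omega_vec, v):
    """Coriolis acceleration of a particle with velocity v in the rotating frame."""
    return -2.0 * np.cross(omega_vec, v)

omega_vec = np.array([0.0, 0.0, 7.292e-5])     # rotation about +z (Earth's rate, rad/s)

# For a particle on the +x axis, "outward from the axis" is +x and the
# direction of local rotation is +y.
v_outward = np.array([100.0, 0.0, 0.0])
v_along   = np.array([0.0, 100.0, 0.0])

print(coriolis_acceleration(omega_vec, v_outward))  # along -y: against the local rotation
print(coriolis_acceleration(omega_vec, v_along))    # along +x: outward from the axis
```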
## Causes
The Coriolis effect exists only when one uses a rotating reference frame. In the rotating frame it behaves exactly like a real force (that is to say, it causes acceleration and has real effects). However, Coriolis force is a consequence of inertia, and is not attributable to an identifiable originating body, as is the case for electromagnetic or nuclear forces, for example. From an analytical viewpoint, to use Newton's second law in a rotating system, Coriolis force is mathematically necessary, but it disappears in a non-accelerating, inertial frame of reference. For example, consider two children on opposite sides of a spinning roundabout (carousel), who are throwing a ball to each other. From the children's point of view, this ball's path is curved sideways by the Coriolis effect. Suppose the roundabout spins counter-clockwise when viewed from above. From the thrower's perspective, the deflection is to the right.[11] From the non-thrower's perspective, the deflection is to the left. For a mathematical formulation see Mathematical derivation of fictitious forces.
An observer in a rotating frame, such as an astronaut in a rotating space station, very probably will find the interpretation of everyday life in terms of the Coriolis force accords more simply with intuition and experience than a cerebral reinterpretation of events from an inertial standpoint. For example, nausea due to an experienced push[clarification needed] may be more instinctively explained by Coriolis force than by the law of inertia.[12][13] See also Coriolis effect (perception). In meteorology, a rotating frame (the Earth) with its Coriolis force proves a more natural framework for explanation of air movements than a non-rotating, inertial frame without Coriolis forces.[14] In long-range gunnery, sight corrections for the Earth's rotation are based upon Coriolis force.[15] These examples are described in more detail below.
The acceleration entering the Coriolis force arises from two sources of change in velocity that result from rotation: the first is the change of the velocity of an object in time. The same velocity (in an inertial frame of reference where the normal laws of physics apply) will be seen as different velocities at different times in a rotating frame of reference. The apparent acceleration is proportional to the angular velocity of the reference frame (the rate at which the coordinate axes change direction), and to the component of velocity of the object in a plane perpendicular to the axis of rotation. This gives a term $-\boldsymbol\Omega\times\boldsymbol v$. The minus sign arises from the traditional definition of the cross product (right hand rule), and from the sign convention for angular velocity vectors.
The second is the change of velocity in space. Different positions in a rotating frame of reference have different velocities (as seen from an inertial frame of reference). In order for an object to move in a straight line it must therefore be accelerated so that its velocity changes from point to point by the same amount as the velocities of the frame of reference. The effect is proportional to the angular velocity (which determines the relative speed of two different points in the rotating frame of reference), and to the component of the velocity of the object in a plane perpendicular to the axis of rotation (which determines how quickly it moves between those points). This also gives a term $-\boldsymbol\Omega\times\boldsymbol v$.
## Length scales and the Rossby number
Further information: Rossby number
The time, space and velocity scales are important in determining the importance of the Coriolis effect. Whether rotation is important in a system can be determined by its Rossby number, which is the ratio of the velocity, U, of a system to the product of the Coriolis parameter, $f = 2 \omega \sin \varphi \,$, and the length scale, L, of the motion:
$Ro = \frac{U}{fL}.$
The Rossby number is the ratio of inertial to Coriolis forces. A small Rossby number signifies a system which is strongly affected by Coriolis forces, and a large Rossby number signifies a system in which inertial forces dominate. For example, in tornadoes, the Rossby number is large, in low-pressure systems it is low and in oceanic systems it is around 1. As a result, in tornadoes the Coriolis force is negligible, and balance is between pressure and centrifugal forces. In low-pressure systems, centrifugal force is negligible and balance is between Coriolis and pressure forces. In the oceans all three forces are comparable.[16]
An atmospheric system moving at U = 10 m/s (22 mph) occupying a spatial distance of L = 1,000 km (621 mi), has a Rossby number of approximately 0.1. A baseball pitcher may throw the ball at U = 45 m/s (100 mph) for a distance of L = 18.3 m (60 ft). The Rossby number in this case would be 32,000. Needless to say, one does not worry about which hemisphere one is in when playing baseball. However, an unguided missile obeys exactly the same physics as a baseball, but may travel far enough and be in the air long enough to notice the effect of Coriolis. Long-range shells in the Northern Hemisphere landed close to, but to the right of, where they were aimed until this was noted. (Those fired in the Southern Hemisphere landed to the left.) In fact, it was this effect that first got the attention of Coriolis himself.[17][18][19]
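The two estimates quoted above can be reproduced in a few lines (mid-latitude values assumed; the exact baseball figure depends on the latitude used for the Coriolis parameter):

```python
# Reproducing the two Rossby-number estimates quoted above.
import math

omega = 7.292e-5                              # Earth's rotation rate, rad/s
f = 2 * omega * math.sin(math.radians(45))    # Coriolis parameter at 45 degrees latitude

def rossby(U, L):
    return U / (f * L)

print(round(rossby(10.0, 1_000e3), 2))  # weather system: ~0.1
print(round(rossby(45.0, 18.3)))        # baseball pitch: ~24,000, same order of magnitude
                                        # as the quoted 32,000 (depends on latitude)
```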
## Applied to Earth
An important case where the Coriolis force is observed is the rotating Earth.
### Intuitive explanation
As the Earth turns around its axis, everything attached to it turns with it (imperceptibly to our senses). An object that is moving without being dragged along with this rotation travels in a straight path over the turning Earth. From our rotating perspective on the planet, its direction of motion changes as it moves, bending in the opposite direction to our actual motion. When viewed from a stationary point in space above, any land feature in the Northern Hemisphere turns counter-clockwise, and, fixing our gaze on that location, any other location in that hemisphere will rotate around it the same way. The traced ground-path of a freely moving body traveling from one point to another therefore bends the opposite way, clockwise; this is conventionally labeled "right," taking the direction of motion as "ahead" and "down" as naturally defined.
### Rotating sphere
Coordinate system at latitude φ with x-axis east, y-axis north and z-axis upward (that is, radially outward from center of sphere).
Consider a location with latitude φ on a sphere that is rotating around the north-south axis.[20] A local coordinate system is set up with the x axis horizontally due east, the y axis horizontally due north and the z axis vertically upwards. The rotation vector, velocity of movement and Coriolis acceleration expressed in this local coordinate system (listing components in the order east (e), north (n) and upward (u)) are:
$\boldsymbol{ \Omega} = \omega \begin{pmatrix} 0 \\ \cos \varphi \\ \sin \varphi \end{pmatrix}\ ,$ $\boldsymbol{ v} = \begin{pmatrix} v_e \\ v_n \\ v_u \end{pmatrix}\ ,$
$\boldsymbol{ a}_C =-2\boldsymbol{\Omega \times v}= 2\,\omega\, \begin{pmatrix} v_n \sin \varphi-v_u \cos \varphi \\ -v_e \sin \varphi \\ v_e \cos\varphi\end{pmatrix}\ .$
When considering atmospheric or oceanic dynamics, the vertical velocity is small, and the vertical component of the Coriolis acceleration is small compared to gravity. For such cases, only the horizontal (east and north) components matter. The restriction of the above to the horizontal plane is (setting vu = 0):
$\boldsymbol{ v} = \begin{pmatrix} v_e \\ v_n\end{pmatrix}\ ,$ $\boldsymbol{ a}_c = \begin{pmatrix} v_n \\ -v_e\end{pmatrix}\ f\ ,$
where $f = 2 \omega \sin \varphi \,$ is called the Coriolis parameter.
By setting vn = 0, it can be seen immediately that (for positive φ and ω) a movement due east results in an acceleration due south. Similarly, setting ve = 0, it is seen that a movement due north results in an acceleration due east. In general, observed horizontally, looking along the direction of the movement causing the acceleration, the acceleration always is turned 90° to the right and of the same size regardless of the horizontal orientation. That is:[21][22]
On a merry-go-round in the night
Coriolis was shaken with fright
Despite how he walked
'Twas like he was stalked
By some fiend always pushing him right
— David Morin, Eric Zaslow, E'beth Haley, John Golden, and Nathan Salwen
As a different case, consider equatorial motion setting φ = 0°. In this case, Ω is parallel to the north or n-axis, and:
$\boldsymbol{ \Omega} = \omega \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}\ ,$ $\boldsymbol{ v} = \begin{pmatrix} v_e \\ v_n \\ v_u \end{pmatrix}\ ,$ $\boldsymbol{ a}_C =-2\boldsymbol{\Omega \times v}= 2\,\omega\, \begin{pmatrix}-v_u \\0 \\ v_e \end{pmatrix}\ .$
Accordingly, an eastward motion (that is, in the same direction as the rotation of the sphere) provides an upward acceleration known as the Eötvös effect, and an upward motion produces an acceleration due west.
For additional examples in this article see cannon on turntable and tossed ball. In other articles, see rotating spheres, apparent motion of stationary objects, and carousel.
### Distant stars
The apparent motion of a distant star as seen from Earth is dominated by the Coriolis and centrifugal forces. Consider such a star (with mass m) located at position r, with declination δ, so Ω · r = |r| Ω sin(δ), where Ω is the Earth's rotation vector. The star is observed to rotate about the Earth's axis with a period of one sidereal day in the opposite direction to that of the Earth's rotation, making its velocity v = –Ω × r. The fictitious force, consisting of Coriolis and centrifugal forces, is:
$\begin{align} \boldsymbol {F_f} & = -2 \, m \, \boldsymbol{\Omega \times v} - m \, \boldsymbol{\Omega \times { (\Omega \times r)}} \\[8pt] & = +2 \, m \, \boldsymbol{\Omega \times (\Omega \times r)} - m \, \boldsymbol{\Omega \times {(\Omega \times r)}} \\[8pt] & = m \, \boldsymbol{\Omega \times (\Omega \times r)} \\[8pt] & = m \, \boldsymbol{(\Omega (\Omega \cdot r)} - \boldsymbol{r (\Omega \cdot \Omega))} \\[8pt] & = - m \, \Omega^2 \, \boldsymbol{ ( r} - \mid\boldsymbol{r}\mid \sin(\delta)\boldsymbol{u_\Omega)}, \end{align}$
where $\boldsymbol{u}_\Omega = \Omega^{-1}\boldsymbol{\Omega}$ is a unit vector in the direction of Ω. The fictitious force $\boldsymbol{F}_f$ is thus a vector of magnitude $m \, \Omega^2 |\boldsymbol{r}| \cos(\delta)$, perpendicular to Ω, and directed towards the center of the star's rotation on the Earth's axis, and therefore recognisable as the centripetal force that will keep the star in a circular movement around that axis.
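The vector algebra above is easy to verify numerically for arbitrary (made-up) values of Ω and r:

```python
# Numerical check of the identity above: with v = -Omega x r, the sum of the
# Coriolis and centrifugal forces equals -m * Omega^2 * (r - |r| sin(delta) u_Omega).
import numpy as np

m = 2.0
Omega = np.array([0.1, -0.3, 0.7])                    # arbitrary rotation vector
r = np.array([1.5, 0.2, -0.8])                        # arbitrary star position
v = -np.cross(Omega, r)                               # apparent stellar velocity

F_f = -2 * m * np.cross(Omega, v) - m * np.cross(Omega, np.cross(Omega, r))

w = np.linalg.norm(Omega)
u = Omega / w                                         # unit vector along Omega
sin_delta = np.dot(Omega, r) / (np.linalg.norm(r) * w)
rhs = -m * w**2 * (r - np.linalg.norm(r) * sin_delta * u)

print(np.allclose(F_f, rhs))                          # True
```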
### Meteorology
This low pressure system over Iceland spins counter-clockwise due to balance between the Coriolis force and the pressure gradient force.
Schematic representation of flow around a low-pressure area in the Northern Hemisphere. The Rossby number is low, so the centrifugal force is virtually negligible. The pressure-gradient force is represented by blue arrows, the Coriolis acceleration (always perpendicular to the velocity) by red arrows
Schematic representation of inertial circles of air masses in the absence of other forces, calculated for a wind speed of approximately 50 to 70 m/s (110 to 160 mph).
Perhaps the most important impact of the Coriolis effect is in the large-scale dynamics of the oceans and the atmosphere. In meteorology and oceanography, it is convenient to postulate a rotating frame of reference wherein the Earth is stationary. In accommodation of that provisional postulation, the centrifugal and Coriolis forces are introduced. Their relative importance is determined by the applicable Rossby numbers. Tornadoes have high Rossby numbers, so, while tornado-associated centrifugal forces are quite substantial, Coriolis forces associated with tornados are for practical purposes negligible.[23]
High pressure systems rotate in a direction such that the Coriolis force will be directed radially inwards, and nearly balanced by the outwardly radial pressure gradient. This direction is clockwise in the Northern Hemisphere and counter-clockwise in the Southern Hemisphere. Low pressure systems rotate in the opposite direction, so that the Coriolis force is directed radially outward and nearly balances an inwardly radial pressure gradient. In each case a slight imbalance between the Coriolis force and the pressure gradient accounts for the radially inward acceleration of the system's circular motion.
#### Flow around a low-pressure area
Main article: Low-pressure area
If a low-pressure area forms in the atmosphere, air will tend to flow in towards it, but will be deflected perpendicular to its velocity by the Coriolis force. A system of equilibrium can then establish itself creating circular movement, or a cyclonic flow. Because the Rossby number is low, the force balance is largely between the pressure gradient force acting towards the low-pressure area and the Coriolis force acting away from the center of the low pressure.
Instead of flowing down the gradient, large scale motions in the atmosphere and ocean tend to occur perpendicular to the pressure gradient. This is known as geostrophic flow.[24] On a non-rotating planet, fluid would flow along the straightest possible line, quickly eliminating pressure gradients. Note that the geostrophic balance is thus very different from the case of "inertial motions" (see below) which explains why mid-latitude cyclones are larger by an order of magnitude than inertial circle flow would be.
This pattern of deflection, and the direction of movement, is called Buys-Ballot's law. In the atmosphere, the pattern of flow is called a cyclone. In the Northern Hemisphere the direction of movement around a low-pressure area is counter-clockwise. In the Southern Hemisphere, the direction of movement is clockwise because the rotational dynamics is a mirror image there. At high altitudes, outward-spreading air rotates in the opposite direction.[25] Cyclones rarely form along the equator due to the weak Coriolis effect present in this region.
#### Inertial circles
An air or water mass moving with speed $v\,$ subject only to the Coriolis force travels in a circular trajectory called an 'inertial circle'. Since the force is directed at right angles to the motion of the particle, it will move with a constant speed around a circle whose radius $R$ is given by:
$R= \frac {v}{f}\,$
where $f$ is the Coriolis parameter $2 \Omega \sin \varphi$, introduced above (where $\varphi$ is the latitude). The time taken for the mass to complete a full circle is therefore $2\pi/f$. The Coriolis parameter typically has a mid-latitude value of about $10^{-4}\ \mathrm{s}^{-1}$; hence for a typical atmospheric speed of 10 m/s (22 mph) the radius is 100 km (62 mi), with a period of about 17 hours. For an ocean current with a typical speed of 10 cm/s (0.22 mph), the radius of an inertial circle is 1 km (0.6 mi). These inertial circles are clockwise in the Northern Hemisphere (where trajectories are bent to the right) and counter-clockwise in the Southern Hemisphere.
If the rotating system is a parabolic turntable, then $f$ is constant and the trajectories are exact circles. On a rotating planet, $f$ varies with latitude and the paths of particles do not form exact circles. Since the parameter $f$ varies as the sine of the latitude, the radius of the oscillations associated with a given speed are smallest at the poles (latitude = ±90°), and increase toward the equator.[26]
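The figures quoted above follow directly from $R = v/f$ and the period $2\pi/f$; a short check with approximate mid-latitude values:

```python
# Inertial-circle radius and period for the two quoted speeds (mid-latitude values).
import math

omega = 7.292e-5                                  # Earth's rotation rate, rad/s
f = 2 * omega * math.sin(math.radians(45))        # ~1e-4 s^-1 at mid-latitudes

for v in (10.0, 0.10):                            # 10 m/s of wind, 10 cm/s ocean current
    R = v / f                                     # inertial-circle radius
    period_h = 2 * math.pi / f / 3600             # period in hours
    print(f"v = {v} m/s: radius ~ {R / 1000:.0f} km, period ~ {period_h:.0f} h")
```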
#### Other terrestrial effects
The Coriolis effect strongly affects the large-scale oceanic and atmospheric circulation, leading to the formation of robust features like jet streams and western boundary currents. Such features are in geostrophic balance, meaning that the Coriolis and pressure gradient forces balance each other. Coriolis acceleration is also responsible for the propagation of many types of waves in the ocean and atmosphere, including Rossby waves and Kelvin waves. It is also instrumental in the so-called Ekman dynamics in the ocean, and in the establishment of the large-scale ocean flow pattern called the Sverdrup balance.
### Eötvös effect
See also: Eötvös effect
The practical impact of the "Coriolis effect" is mostly caused by the horizontal acceleration component produced by horizontal motion.
There are other components of the Coriolis effect. Eastward-traveling objects will be deflected upwards (feel lighter), while westward-traveling objects will be deflected downwards (feel heavier). This is known as the Eötvös effect. This aspect of the Coriolis effect is greatest near the equator. The force produced by this effect is similar to the horizontal component, but the much larger vertical forces due to gravity and pressure mean that it is generally unimportant dynamically.
In addition, objects traveling upwards or downwards will be deflected to the west or east respectively. This effect is also the greatest near the equator. Since vertical movement is usually of limited extent and duration, the size of the effect is smaller and requires precise instruments to detect.
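For a rough sense of scale (an added illustration with assumed values, not from the original text), the leading vertical term of the Eötvös effect for eastward motion is $2 \Omega v \cos \varphi$, which can be compared with $g$:

```python
import math

OMEGA = 7.2921e-5   # Earth's rotation rate, rad/s
G = 9.81            # gravitational acceleration, m/s^2

def eotvos_acceleration(eastward_speed_m_s, latitude_deg):
    """Leading-order vertical (Eotvos) acceleration in m/s^2."""
    return 2 * OMEGA * eastward_speed_m_s * math.cos(math.radians(latitude_deg))

a = eotvos_acceleration(250.0, 0.0)   # e.g. an airliner flying east along the equator
print(a, a / G)                       # about 0.036 m/s^2, i.e. roughly 0.4% of g
```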
### Draining in bathtubs and toilets
Water rotation in home bathrooms under normal circumstances is not related to the Coriolis effect or to the rotation of the earth, and no consistent difference in rotation direction between toilets in the Northern and Southern Hemispheres can be observed. The formation of a vortex over the plug hole may be explained by the conservation of angular momentum: the radius of rotation decreases as water approaches the plug hole, so the rate of rotation increases, for the same reason that an ice skater's rate of spin increases as they pull their arms in. Any rotation around the plug hole that is initially present accelerates as water moves inward. Only if the water is so still that the effective rotation rate of the earth is faster than that of the water relative to its container, and if externally applied torques (such as might be caused by flow over an uneven bottom surface) are small enough, can the Coriolis effect determine the direction of the vortex. Without such careful preparation, the Coriolis effect may be much smaller than various other influences on drain direction[27] such as any residual rotation of the water[28] and the geometry of the container.[29]

Despite this, the idea that toilets and bathtubs drain differently in the Northern and Southern Hemispheres has been popularized by several television programs, including Wedding Crashers, The Simpsons episode "Bart vs. Australia," and The X-Files episode "Die Hand Die Verletzt".[30] Several science broadcasts and publications, including at least one college-level physics textbook, have also stated this.[31][32]
In 1908, the Austrian physicist Ottokar Tumlirz described careful and effective experiments which demonstrated the effect of the rotation of the Earth on the outflow of water through a central aperture.[33] The subject was later popularized in a famous article in the journal Nature, which described an experiment in which all other forces to the system were removed by filling a 6 ft (1.8 m) tank with 300 US gal (1,100 L) of water and allowing it to settle for 24 hours (to allow any movement due to filling the tank to die away), in a room where the temperature had stabilized. The drain plug was then very slowly removed, and tiny pieces of floating wood were used to observe rotation. During the first 12 to 15 minutes, no rotation was observed. Then, a vortex appeared and consistently began to rotate in a counter-clockwise direction (the experiment was performed in Boston, Massachusetts, in the Northern Hemisphere). This was repeated and the results averaged to make sure the effect was real. The report noted that the vortex rotated, "about 30,000 times faster than the effective rotation of the earth in 42° North (the experiment's location)". This shows that the small initial rotation due to the earth is amplified by gravitational draining and conservation of angular momentum to become a rapid vortex and may be observed under carefully controlled laboratory conditions.[34][35]
### Ballistic missiles and satellites
Ballistic missiles and satellites appear to follow curved paths when plotted on common world maps mainly because the Earth is spherical and the shortest distance between two points on the Earth's surface (called a great circle) is usually not a straight line on those maps. Every two-dimensional (flat) map necessarily distorts the Earth's curved (three-dimensional) surface. Typically (as in the commonly used Mercator projection, for example), this distortion increases with proximity to the poles. In the Northern Hemisphere for example, a ballistic missile fired toward a distant target using the shortest possible route (a great circle) will appear on such maps to follow a path north of the straight line from target to destination, and then curve back toward the equator. This occurs because the latitudes, which are projected as straight horizontal lines on most world maps, are in fact circles on the surface of a sphere, which get smaller as they get closer to the pole. Being simply a consequence of the sphericity of the Earth, this would be true even if the Earth didn't rotate. The Coriolis effect is of course also present, but its effect on the plotted path is much smaller.
The Coriolis effects became important in external ballistics for calculating the trajectories of very long-range artillery shells. The most famous historical example was the Paris gun, used by the Germans during World War I to bombard Paris from a range of about 120 km (75 mi).
## Special cases
### Cannon on turntable
See also: Fictitious force#Crossing a carousel
Cannon at the centre of a rotating turntable. To hit the target located at position 1 on the perimeter at time t = 0s, the cannon must be aimed ahead of the target at angle θ. That way, by the time the cannonball reaches position 3 on the periphery, the target also will be at that position. In an inertial frame of reference, the cannonball travels a straight radial path to the target (curve yA). However, in the frame of the turntable, the path is arched (curve yB), as also shown in the figure.
Successful trajectory of cannonball as seen from the turntable for three angles of launch θ. Plotted points are for the same equally spaced times steps on each curve. Cannonball speed v is held constant and angular rate of rotation ω is varied to achieve a successful "hit" for selected θ. For example, for a radius of 1 m and a cannonball speed of 1 m/s, the time of flight tf = 1 s, and ωtf = θ → ω and θ have the same numerical value if θ is expressed in radians. The wider spacing of the plotted points as the target is approached show the speed of the cannonball is accelerating as seen on the turntable, due to fictitious Coriolis and centrifugal forces.
Acceleration components at an earlier time (top) and at arrival time at the target (bottom)
Coriolis acceleration, centrifugal acceleration and net acceleration vectors at three selected points on the trajectory as seen on the turntable.
Given the radius R of the turntable in that animation, the rate of angular rotation ω, and the speed of the cannonball (assumed constant) v, the correct angle θ to aim so as to hit the target at the edge of the turntable can be calculated.
The inertial frame of reference provides one way to handle the question: calculate the time to interception, which is tf = R / v. Then, the turntable revolves through an angle ω tf in this time. If the cannon is pointed at an angle θ = ω tf = ω R / v, then the cannonball arrives at the periphery at position number 3 at the same time as the target.
No discussion of Coriolis force can arrive at this solution as simply, so the reason to treat this problem is to demonstrate Coriolis formalism in an easily visualized situation.
The trajectory in the inertial frame (denoted A) is a straight line radial path at angle θ. The position of the cannonball in (x, y) coordinates at time t is:
$\mathbf{r}_A (t) = vt\ \left( \cos (\theta ), \ \sin (\theta )\right) \ .$
In the turntable frame (denoted B), the x- y axes rotate at angular rate ω, so the trajectory becomes:
$\mathbf{r}_B (t) = vt\ \left( \cos ( \theta - \omega t), \ \sin ( \theta - \omega t)\right) \ ,$
and three examples of this result are plotted in the figure.
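The aiming rule and the two trajectories can also be tabulated numerically; the following sketch (added here, with arbitrary example values for R, v and ω) prints the cannonball position in both frames:

```python
import math

def turntable_shot(R=1.0, v=1.0, omega=0.5, steps=5):
    """Print sample points of the cannonball path in the inertial and turntable frames."""
    t_f = R / v                  # time of flight to the rim
    theta = omega * t_f          # aim-ahead angle theta = omega * R / v
    for k in range(steps + 1):
        t = k * t_f / steps
        # inertial frame: straight radial line at angle theta
        xA, yA = v * t * math.cos(theta), v * t * math.sin(theta)
        # turntable frame: the same point expressed in rotating coordinates
        xB, yB = v * t * math.cos(theta - omega * t), v * t * math.sin(theta - omega * t)
        print(f"t={t:.2f}  inertial=({xA:+.3f}, {yA:+.3f})  turntable=({xB:+.3f}, {yB:+.3f})")
    return theta

print("aim-ahead angle (rad):", turntable_shot())
```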
To determine the components of acceleration, a general expression is used from the article fictitious force:
$\mathbf{a}_{B} = \mathbf{a}_A - 2 \boldsymbol\Omega \times \mathbf{v}_{B} - \boldsymbol\Omega \times (\boldsymbol\Omega \times \mathbf{r}_B ) - \frac{d \boldsymbol\Omega}{dt} \times \mathbf{r}_B \ ,$
in which the term in Ω × vB is the Coriolis acceleration and the term in Ω × ( Ω × rB) is the centrifugal acceleration. The results are (let α = θ − ωt):
$\boldsymbol{\Omega} \times \mathbf{r}_B = \begin{vmatrix} \boldsymbol{i}&\boldsymbol{j}&\boldsymbol{k} \\ 0 & 0 & \omega \\ vt \cos \alpha & vt \sin \alpha & 0 \end{vmatrix} = \omega t v \left(-\sin\alpha, \cos\alpha\right)\ ,$
$\boldsymbol{\Omega} \times \left( \boldsymbol{\Omega} \times \mathbf{r}_B\right) = \begin{vmatrix} \boldsymbol{i}&\boldsymbol{j}&\boldsymbol{k} \\ 0 & 0 & \omega \\ -\omega t v \sin\alpha & \omega t v \cos\alpha & 0 \end{vmatrix}\ ,$
producing a centrifugal acceleration:
$\mathbf{a}_{\mathrm{Cfgl}} = \omega^2 v t \left(\cos\alpha, \sin\alpha\right) = \omega^2 \mathbf{r}_B(t) \ .$
Also:
$\mathbf{v}_B = \frac{d\mathbf{r}_B(t)}{dt} = (v \cos \alpha + \omega t\, v \sin \alpha,\ v \sin \alpha - \omega t\, v \cos \alpha,\ 0)\ ,$
$\boldsymbol{\Omega} \times \mathbf{v}_B = \begin{vmatrix} \boldsymbol{i}&\boldsymbol{j}&\boldsymbol{k} \\ 0 & 0 & \omega \\ v \cos \alpha + \omega t\, v \sin \alpha & v \sin \alpha - \omega t\, v \cos \alpha & 0 \end{vmatrix}\ ,$
producing a Coriolis acceleration:
$\mathbf{a}_{\mathrm{Cor}} = -2\left[ -\omega v \left( \sin\alpha - \omega t \cos\alpha\right),\ \omega v \left(\cos\alpha + \omega t \sin \alpha \right) \right] = 2\omega v \left(\sin\alpha,\ -\cos\alpha \right) - 2\omega^2 \mathbf{r}_B(t) \ .$
These accelerations are shown in the diagrams for a particular example.
It is seen that the Coriolis acceleration not only cancels the centrifugal acceleration, but together they provide a net "centripetal", radially inward component of acceleration (that is, directed toward the centre of rotation):[36]
$\mathbf{a_{\mathrm{Cptl}}} = -\omega^2 \mathbf{r_B}(t) \ ,$
and an additional component of acceleration perpendicular to rB (t):
$\mathbf{a}_{C\perp} = 2\omega v \left(\sin\alpha,\ -\cos\alpha \right) \ .$
The "centripetal" component of acceleration resembles that for circular motion at radius rB, while the perpendicular component is velocity dependent, increasing with the radial velocity v and directed to the right of the velocity. The situation could be described as a circular motion combined with an "apparent Coriolis acceleration" of 2ωv. However, this is a rough labelling: a careful designation of the true centripetal force refers to a local reference frame that employs the directions normal and tangential to the path, not coordinates referred to the axis of rotation.
These results also can be obtained directly by two time differentiations of rB (t). Agreement of the two approaches demonstrates that one could start from the general expression for fictitious acceleration above and derive the trajectories shown here. However, working from the acceleration to the trajectory is more complicated than the reverse procedure used here, which, of course, is made possible in this example by knowing the answer in advance.
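That claim is easy to verify symbolically. The sketch below (an added check using SymPy, not part of the original text) differentiates $\mathbf{r}_B(t)$ twice and confirms that the result equals the sum of the Coriolis and centrifugal terms quoted above (with $\mathbf{a}_A = 0$ and constant $\boldsymbol\Omega$):

```python
import sympy as sp

t, v, w, theta = sp.symbols('t v omega theta', positive=True)
alpha = theta - w * t

# Trajectory as seen on the turntable
rB = sp.Matrix([v * t * sp.cos(alpha), v * t * sp.sin(alpha)])
aB = sp.diff(rB, t, 2)          # acceleration by direct double differentiation

# Fictitious-force decomposition quoted in the text
a_cor = 2 * w * v * sp.Matrix([sp.sin(alpha), -sp.cos(alpha)]) - 2 * w**2 * rB
a_cfgl = w**2 * rB

print(sp.simplify(aB - (a_cor + a_cfgl)))   # -> Matrix([[0], [0]])
```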
As a result of this analysis an important point appears: all the fictitious accelerations must be included to obtain the correct trajectory. In particular, besides the Coriolis acceleration, the centrifugal force plays an essential role. It is easy to get the impression from verbal discussions of the cannonball problem, which are focussed on displaying the Coriolis effect particularly, that the Coriolis force is the only factor that must be considered;[37] emphatically, that is not so.[38] A turntable for which the Coriolis force is the only factor is the parabolic turntable. A somewhat more complex situation is the idealized example of flight routes over long distances, where the centrifugal force of the path and aeronautical lift are countered by gravitational attraction.[39][40]
### Tossed ball on a rotating carousel
A carousel is rotating counter-clockwise. Left panel: a ball is tossed by a thrower at 12:00 o'clock and travels in a straight line to the centre of the carousel. While it travels, the thrower circles in a counter-clockwise direction. Right panel: The ball's motion as seen by the thrower, who now remains at 12:00 o'clock, because there is no rotation from their viewpoint.
The figure illustrates a ball tossed from 12:00 o'clock toward the centre of a counter-clockwise rotating carousel. On the left, the ball is seen by a stationary observer above the carousel, and the ball travels in a straight line to the centre, while the ball-thrower rotates counter-clockwise with the carousel. On the right the ball is seen by an observer rotating with the carousel, so the ball-thrower appears to stay at 12:00 o'clock. The figure shows how the trajectory of the ball as seen by the rotating observer can be constructed.
On the left, two arrows locate the ball relative to the ball-thrower. One of these arrows is from the thrower to the centre of the carousel (providing the ball-thrower's line of sight), and the other points from the centre of the carousel to the ball.(This arrow gets shorter as the ball approaches the centre.) A shifted version of the two arrows is shown dotted.
On the right is shown this same dotted pair of arrows, but now the pair are rigidly rotated so the arrow corresponding to the line of sight of the ball-thrower toward the centre of the carousel is aligned with 12:00 o'clock. The other arrow of the pair locates the ball relative to the centre of the carousel, providing the position of the ball as seen by the rotating observer. By following this procedure for several positions, the trajectory in the rotating frame of reference is established as shown by the curved path in the right-hand panel.
The ball travels in the air, and there is no net force upon it. To the stationary observer the ball follows a straight-line path, so there is no problem squaring this trajectory with zero net force. However, the rotating observer sees a curved path. Kinematics insists that a force (pushing to the right of the instantaneous direction of travel for a counter-clockwise rotation) must be present to cause this curvature, so the rotating observer is forced to invoke a combination of centrifugal and Coriolis forces to provide the net force required to cause the curved trajectory.
### Bounced ball
Bird's-eye view of carousel. The carousel rotates clockwise. Two viewpoints are illustrated: that of the camera at the center of rotation rotating with the carousel (left panel) and that of the inertial (stationary) observer (right panel). Both observers agree at any given time just how far the ball is from the center of the carousel, but not on its orientation. Time intervals are 1/10 of time from launch to bounce.
The figure describes a more complex situation where the tossed ball on a turntable bounces off the edge of the carousel and then returns to the tosser, who catches the ball. The effect of Coriolis force on its trajectory is shown again as seen by two observers: an observer (referred to as the "camera") that rotates with the carousel, and an inertial observer. The figure shows a bird's-eye view based upon the same ball speed on forward and return paths. Within each circle, plotted dots show the same time points. In the left panel, from the camera's viewpoint at the center of rotation, the tosser (smiley face) and the rail both are at fixed locations, and the ball makes a very considerable arc on its travel toward the rail, and takes a more direct route on the way back. From the ball tosser's viewpoint, the ball seems to return more quickly than it went (because the tosser is rotating toward the ball on the return flight).
On the carousel, instead of tossing the ball straight at a rail to bounce back, the tosser must throw the ball toward the right of the target, and the ball then seems to the camera to bear continuously to the left of its direction of travel to hit the rail (left because the carousel is turning clockwise). The ball appears to bear to the left of its direction of travel on both inward and return trajectories. The curved path requires this observer to recognize a leftward net force on the ball. (This force is "fictitious" because it disappears for a stationary observer, as is discussed shortly.) For some angles of launch, a path has portions where the trajectory is approximately radial, and Coriolis force is primarily responsible for the apparent deflection of the ball (centrifugal force is radial from the center of rotation, and causes little deflection on these segments). When a path curves away from radial, however, centrifugal force contributes significantly to deflection.
The ball's path through the air is straight when viewed by observers standing on the ground (right panel). In the right panel (stationary observer), the ball tosser (smiley face) is at 12 o'clock and the rail the ball bounces from is at position one (1). From the inertial viewer's standpoint, positions one (1), two (2), three (3) are occupied in sequence. At position 2 the ball strikes the rail, and at position 3 the ball returns to the tosser. Straight-line paths are followed because the ball is in free flight, so this observer requires that no net force is applied.
### Bullets at high velocity through the atmosphere
Because the Earth rotates beneath a projectile in flight, a bullet does not travel in a perfectly straight path relative to the ground, even though it may appear to from the shooter's perspective. The Coriolis effect deflects the trajectory slightly, giving the projectile's path a subtly curved shape. The deflection becomes significant only at extremely long ranges, where trained snipers account for it when calculating a precise shot.[15]
## Visualization of the Coriolis effect
A fluid assuming a parabolic shape as it rotates. The forces at play in the case of a curved surface: gravity (red), the normal force (green), and the resultant centripetal force (blue).
To demonstrate the Coriolis effect, a parabolic turntable can be used. On a flat turntable, the inertia of a co-rotating object would force it off the edge. But if the surface of the turntable has the correct paraboloid (parabolic bowl) shape (see the figure) and is rotated at the corresponding rate, the force components shown in the figure are such that the component of gravity tangential to the bowl surface will exactly equal the centripetal force necessary to keep the object rotating at its velocity and radius of curvature (assuming no friction). (See banked turn.) This carefully contoured surface allows the Coriolis force to be displayed in isolation.[41][42]
Discs cut from cylinders of dry ice can be used as pucks, moving around almost frictionlessly over the surface of the parabolic turntable, allowing effects of Coriolis on dynamic phenomena to show themselves. To get a view of the motions as seen from the reference frame rotating with the turntable, a video camera is attached to the turntable so as to co-rotate with the turntable, with results as shown in the figure. In the left panel of the figure, which is the viewpoint of a stationary observer, the gravitational force in the inertial frame pulling the object toward the center (bottom ) of the dish is proportional to the distance of the object from the center. A centripetal force of this form causes the elliptical motion. In the right panel, which shows the viewpoint of the rotating frame, the inward gravitational force in the rotating frame (the same force as in the inertial frame) is balanced by the outward centrifugal force (present only in the rotating frame). With these two forces balanced, in the rotating frame the only unbalanced force is Coriolis (also present only in the rotating frame), and the motion is an inertial circle. Analysis and observation of circular motion in the rotating frame is a simplification compared to analysis or observation of elliptical motion in the inertial frame.
Because this reference frame rotates several times a minute rather than only once a day like the Earth, the Coriolis acceleration produced is many times larger and so easier to observe on small time and spatial scales than is the Coriolis acceleration caused by the rotation of the Earth.
In a manner of speaking, the Earth is analogous to such a turntable.[43] The rotation has caused the planet to settle on a spheroid shape, such that the normal force, the gravitational force and the centrifugal force exactly balance each other on a "horizontal" surface. (See equatorial bulge.)
The Coriolis effect caused by the rotation of the Earth can be seen indirectly through the motion of a Foucault pendulum.
## Coriolis effects in other areas
### Coriolis flow meter
Object moving frictionlessly over the surface of a very shallow parabolic dish, released in such a way that it follows an elliptical trajectory. Left: the inertial point of view. Right: the co-rotating point of view.
A practical application of the Coriolis effect is the mass flow meter, an instrument that measures the mass flow rate and density of a fluid flowing through a tube. The operating principle involves inducing a vibration of the tube through which the fluid passes. The vibration, though it is not completely circular, provides the rotating reference frame which gives rise to the Coriolis effect. While specific methods vary according to the design of the flow meter, sensors monitor and analyze changes in frequency, phase shift, and amplitude of the vibrating flow tubes. The changes observed represent the mass flow rate and density of the fluid.[44]
### Molecular physics
In polyatomic molecules, the molecule motion can be described by a rigid body rotation and internal vibration of atoms about their equilibrium position. As a result of the vibrations of the atoms, the atoms are in motion relative to the rotating coordinate system of the molecule. Coriolis effects will therefore be present and will cause the atoms to move in a direction perpendicular to the original oscillations. This leads to a mixing in molecular spectra between the rotational and vibrational levels from which Coriolis coupling constants can be determined.[45]
### Insect flight
Flies (Diptera) and moths (Lepidoptera) utilize the Coriolis effect when flying: their halteres, or antennae in the case of moths, oscillate rapidly and are used as vibrational gyroscopes.[46] See Coriolis effect in insect stability.[47] In this context, the Coriolis effect has nothing to do with the rotation of the Earth.
## References
1. Bhatia, V.B. (1997). Classical Mechanics: With introduction to Nonlinear Oscillations and Chaos. Narosa Publishing House. p. 201. ISBN 81-7319-105-0.
2. Graney, Christopher M. (2011). "Coriolis effect, two centuries before Coriolis". Physics Today 64: 8. Bibcode:2011PhT....64h...8G. doi:10.1063/PT.3.1195.
3. G-G Coriolis (1835). "Sur les équations du mouvement relatif des systèmes de corps". J. de l'Ecole royale polytechnique 15: 144–154.
4.
5. Arthur Gordon Webster (1912). The Dynamics of Particles and of Rigid, Elastic, and Fluid Bodies. B. G. Teubner. p. 320. ISBN 1-113-14861-6.
6. Edwin B. Wilson (1920). "Space, Time, and Gravitation". In James McKeen Cattell. The Scientific Monthly (American Association for the Advancement of Science) 10: 226.
7. William Ferrel (November 1856). "An Essay on the Winds and the Currents of the Ocean". Nashville Journal of Medicine and Surgery xi (4): 7–19. Retrieved on 1 January 2009.
8.
9. Hestenes, David (1990). New Foundations for Classical Mechanics. The Netherlands: Kluwer Academic Publishers. p. 312. ISBN 90-277-2526-8 (pbk).
10. John M. Wallace and Peter V. Hobbs (1977). Atmospheric Science: An Introductory Survey. Academic Press, Inc. pp. 368–371. ISBN 0-12-732950-1.
11. Sheldon M. Ebenholtz (2001). Oculomotor Systems and Perception. Cambridge University Press. ISBN 0-521-80459-0.
12. George Mather (2006). Foundations of perception. Taylor & Francis. ISBN 0-86377-835-6.
13. Roger Graham Barry, Richard J. Chorley (2003). Atmosphere, Weather and Climate. Routledge. p. 113. ISBN 0-415-27171-1.
14. The claim is made that in the Falklands in WW I, the British failed to correct their sights for the southern hemisphere, and so missed their targets. John Robert Taylor (2005). Classical Mechanics. University Science Books. p. 364; Problem 9.28. ISBN 1-891389-22-X. For the setup of the calculations, see Donald E. Carlucci, Sidney S. Jacobson (2007). Ballistics. CRC Press. p. 225. ISBN 1-4200-6618-8.
15. Lakshmi H. Kantha & Carol Anne Clayson (2000). Numerical Models of Oceans and Oceanic Processes. Academic Press. p. 103. ISBN 0-12-434068-7.
16. Stephen D. Butz (2002). Science of Earth Systems. Thomson Delmar Learning. p. 305. ISBN 0-7668-3391-7.
17. James R. Holton (2004). An Introduction to Dynamic Meteorology. Academic Press. p. 18. ISBN 0-12-354015-1.
18. Donald E. Carlucci & Sidney S. Jacobson (2007). Ballistics: Theory and Design of Guns and Ammunition. CRC Press. pp. 224–226. ISBN 1-4200-6618-8.
19. William Menke & Dallas Abbott (1990). Geophysical Theory. Columbia University Press. pp. 124–126. ISBN 0-231-06792-5.
20. David Morin, Eric Zaslow, Elizabeth Haley, John Goldne, and Natan Salwen (2 December 2005). "Limerick – May the Force Be With You". Weekly Newsletter Volume 22, No 47. Department of Physics and Astronomy, University of Canterbury. Retrieved 1 January 2009.
21. David Morin (2008). Introduction to classical mechanics: with problems and solutions. Cambridge University Press. p. 466. ISBN 0-521-87622-2.
22. James R. Holton (2004). An Introduction to Dynamic Meteorology. Burlington, MA: Elsevier Academic Press. p. 64. ISBN 0-12-354015-1.
23. Roger Graham Barry & Richard J. Chorley (2003). Atmosphere, Weather and Climate. Routledge. p. 115. ISBN 0-415-27171-1.
24. John Marshall & R. Alan Plumb (2007). Atmosphere, Ocean, and Climate Dynamics: An Introductory Text. Amsterdam: Elsevier Academic Press. p. 98. ISBN 0-12-558691-4.
25. Larry D. Kirkpatrick and Gregory E. Francis (2006). Physics: A World View. Cengage Learning. pp. 168–9. ISBN 978-0-495-01088-3. Retrieved 1 April 2011.
26. Y. A. Stepanyants and G. H. Yeoh (2008). "Stationary bathtub vortices and a critical regime of liquid discharge". Journal of Fluid Mechanics 604 (1): 77–98. Bibcode:2008JFM...604...77S. doi:10.1017/S0022112008001080.
27. Creative Media Applications (2004). A Student's Guide to Earth Science: Words and terms. Greenwood Publishing Group. p. 22. ISBN 978-0-313-32902-9.
28. Fraser, Alistair. "Bad Coriolis". Bad Meteorology. Pennsylvania State College of Earth and Mineral Science. Retrieved 17 January 2011.
29. Tipler, Paul (1998). Physics for Engineers and Scientists (4th ed.). W.H.Freeman, Worth Publishers. p. 128. ISBN 978-1-57259-616-0. "...on a smaller scale, the coriolis effect causes water draining out a bathtub to rotate counter clockwise in the northern hemisphere..."
30. Tumlirz, Ottokar (1908). "Ein neuer physikalischer Beweis für die Achsendrehung der Erde". Sitzungsberichte der math.-nat. Klasse der kaiserlichen Akademie der Wissenschaften IIa 117: 819–841.
31. Shapiro, Ascher H. (1962). "Bath-Tub Vortex". Nature 196 (4859): 1080. Bibcode:1962Natur.196.1080S. doi:10.1038/1961080b0.
32. Here the description "radially inward" means "toward the axis of rotation". That direction is not toward the center of curvature of the path, however, which is the direction of the true centripetal force. Hence, the quotation marks on "centripetal".
33. George E. Owen (2003). Fundamentals of Scientific Mathematics (original edition published by Harper & Row, New York, 1964 ed.). Courier Dover Publications. p. 23. ISBN 0-486-42808-7.
34. Morton Tavel (2002). Contemporary Physics and the Limits of Knowledge. Rutgers University Press. p. 88. ISBN 0-8135-3077-6.
35. James R Ogden & M Fogiel (1995). High School Earth Science Tutor. Research & Education Assoc. p. 167. ISBN 0-87891-975-9.
36. James Greig McCully (2006). Beyond the moon: A Conversational, Common Sense Guide to Understanding the Tides. World Scientific. pp. 74–76. ISBN 981-256-643-0.
37. When a container of fluid is rotating on a turntable, the surface of the fluid naturally assumes the correct parabolic shape. This fact may be exploited to make a parabolic turntable by using a fluid that sets after several hours, such as a synthetic resin. For a video of the Coriolis effect on such a parabolic surface, see Geophysical fluid dynamics lab demonstration John Marshall, Massachusetts Institute of Technology.
38. John Marshall & R. Alan Plumb (2007). Atmosphere, Ocean, and Climate Dynamics: An Introductory Text. Academic Press. p. 101. ISBN 0-12-558691-4.
39.
40. Califano, S. (1976). Vibrational States. Wiley. pp. 226–227. ISBN 0471129968.
41. "Antennae as Gyroscopes", Science, Vol. 315, 9 February 2007, p. 771
42. Wu, W.C.; Wood, R.J.; Fearing, R.S. (2002). "Halteres for the micromechanical flying insect". IEEE International Conference on Robotics and Automation, 2002. Proceedings. ICRA '02. 1: 60–65. doi:10.1109/ROBOT.2002.1013339. ISBN 0-7803-7272-7.
### Further reading: physics and meteorology
• Riccioli, G.B., 1651: Almagestum Novum, Bologna, pp. 425–427
(Original book [in Latin], scanned images of complete pages.)
• Coriolis, G.G., 1832: Mémoire sur le principe des forces vives dans les mouvements relatifs des machines. Journal de l'école Polytechnique, Vol 13, 268–302.
(Original article [in French], PDF-file, 1.6 MB, scanned images of complete pages.)
• Coriolis, G.G., 1835: Mémoire sur les équations du mouvement relatif des systèmes de corps. Journal de l'école Polytechnique, Vol 15, 142–154
(Original article [in French] PDF-file, 400 KB, scanned images of complete pages.)
• Gill, AE Atmosphere-Ocean dynamics, Academic Press, 1982.
• Robert Ehrlich (1990). Turning the World Inside Out and 174 Other Simple Physics Demonstrations. Princeton University Press. "Rolling a ball on a rotating turntable", p. 80 ff. ISBN 0-691-02395-6.
• Durran, D. R., 1993: Is the Coriolis force really responsible for the inertial oscillation?, Bull. Amer. Meteor. Soc., 74, 2179–2184; Corrigenda. Bulletin of the American Meteorological Society, 75, 261
• Durran, D. R., and S. K. Domonkos, 1996: An apparatus for demonstrating the inertial oscillation, Bulletin of the American Meteorological Society, 77, 557–559.
• Marion, Jerry B. 1970, Classical Dynamics of Particles and Systems, Academic Press.
• Persson, A., 1998: How do we Understand the Coriolis Force? Bulletin of the American Meteorological Society 79, 1373–1385.
• Symon, Keith. 1971, Mechanics, Addison–Wesley
• Akira Kageyama & Mamoru Hyodo: Eulerian derivation of the Coriolis force
• James F. Price: A Coriolis tutorial Woods Hole Oceanographic Institute (2003)
### Further reading: historical
• Grattan-Guinness, I., Ed., 1994: Companion Encyclopedia of the History and Philosophy of the Mathematical Sciences. Vols. I and II. Routledge, 1840 pp.
1997: The Fontana History of the Mathematical Sciences. Fontana, 817 pp. 710 pp.
• Khrgian, A., 1970: Meteorology — A Historical Survey. Vol. 1. Keter Press, 387 pp.
• Kuhn, T. S., 1977: Energy conservation as an example of simultaneous discovery. The Essential Tension, Selected Studies in Scientific Tradition and Change, University of Chicago Press, 66–104.
• Kutzbach, G., 1979: The Thermal Theory of Cyclones. A History of Meteorological Thought in the Nineteenth Century. Amer. Meteor. Soc., 254 pp.
http://mathoverflow.net/revisions/55490/list
Minkowski's lower bound for density of sphere packings in $\mathbb{R}^n$: take any sphere packing where you can't cram in any more spheres. Then doubling the size of the spheres must cover all space, which gives a lower bound of $\frac{1}{2^n}$.
http://crypto.stackexchange.com/questions/3321/additive-elgamal-cryptosystem-using-a-finite-field?answertab=votes
# Additive ElGamal cryptosystem using a finite field
I'm trying to implement a modified version of the ElGamal cryptosystem as specified by Cramer et al. in "A secure and optimally efficient multi-authority election scheme", which possesses additive homomorphism between ciphertexts, as opposed to the original version, which presents multiplicative homomorphism.
The BIG problem (as always) is that the paper is annoyingly scarce on the "small" details. Here's what I have so far:
1. Configuration parameters
• size of $p$
• size of $q$
• message space
2. Key generation
• pick a prime number $p$ of a given "key size", ensuring that $p - 1$ has a large prime factor $q$ (of a given size)
• pick $g$ as a generator of the cyclic group $\mathbb{Z}_{q}^*$
• generate a random number, $s \in \mathbb{Z}_{q}$, and compute $h = g^s \pmod q$
• The public key is $(g, h)$ and the private key is $(s)$
3. Precomputations
• Since the scheme requires computing discrete logarithms in order to perform the decryption (see below), the messages must be small so we precompute each $g^m \pmod q$ and we store them in a lookup table
4. Encryption
• pick a random value $\alpha \in \mathbb{Z}_{q}$
• compute $(x, y) = (g^{\alpha} \pmod q, h^{\alpha} g^m \pmod q)$
5. Decryption
• $y x^{-s} \pmod q \equiv h^{\alpha} g^m g^{-s \alpha} \pmod q \equiv g^{s \alpha} g^m g^{-s \alpha} \pmod q \equiv g^m \pmod q$
• we use the precomputed lookup table to find the corresponding message, $m$, for the computed $g^m \pmod q$
Questions:
1. I'm not sure if the implied modulus of each operation is $q$, as I added myself to the above formulas. Could someone please clarify this? The paper omits it...
2. If, indeed, the modulus of the operations is $q$, then I need to add it to the public key, right? If I do this, doesn't the scheme become insecure?
3. Regarding the private key, the paper does not specify the set from which to select it, and I assumed it to be $\mathbb{Z}_{q}$. Is this correct?
4. what should be the size of $p$ (in bits) in order to have similar security as provided by RSA 1024?
5. what should be the size (in bits) of the large prime factor of $p - 1$?
-
1
An additional note to augment great answers below: $g$ generates a group of size $q$ but the group is not $\mathbb{Z}_q$. In other words, it is not $1,2,3,\ldots,q$. Rather it is a random looking subset of numbers in $\mathbb{Z}_p^*$; i.e., numbers between $1$ and $p-1$. This is why everything is done $\mod p$. – PulpSpy Jul 23 '12 at 21:08
@PulpSpy Yes, that makes sense now. Thanks! – Mihai Todor Jul 23 '12 at 22:27
## 2 Answers
1. As far as I can tell from your description, the modulus is p. To multiply two group elements, you compute x*y (mod p); because the generator g you choose has period q it'll all work out fine.
2. No, p, q, and g can (and must) all be public. This is ElGamal, not RSA we're talking about - the security comes from the (presumed) hardness of taking discrete logarithms rather than factoring.
3. Yes.
4. and 5. The Ecrypt report on algorithms and key sizes http://www.ecrypt.eu.org/documents.html is a good place to look. It's not an easy question because it depends on how hard you think it is (and will be in the future) to solve discrete logarithms.
There's an existing, open-source implementation of additive ElGamal (in Python) used in the Helios voting system ( http://heliosvoting.org ) that you might want to look at.
-
Thanks for taking the time to provide clear answers to all my questions and thank you very much for the python implementation link. – Mihai Todor Jul 23 '12 at 16:03
While I haven't read the paper, I believe I can answer these questions:
1. I'm not sure if the implied modulus of each operation is $q$, as I added myself to the above formulas. Could someone please clarify this? The paper omits it...
No, the arithmetic is done modulo $p$. Remember, you're working in a subgroup of size $q$ of $\mathbb{Z}^*_{p}$; the operation between two members of that subgroup is the operation in the supergroup $\mathbb{Z}^*_{p}$, which is multiplication modulo $p$. Now, if you did arithmetic of exponents, that would be done modulo $q$. However, the above protocol never does.
1. If, indeed, the modulus of the operations is $q$, then I need to add it to the public key, right? If I do this, doesn't the scheme become insecure?
Well, $g$, $p$ and $q$ are group parameters; they need to be shared between people using the key. They could be included in the key, or just be implicitly understood by both sides.
1. Regarding the private key, the paper does not specify the set from which to select it, and I assumed it to be $\mathbb{Z}_{q}$. Is this correct?
That is correct; $g^x \bmod p$ can take up exactly $q$ distinct values.
1. what should be the size of $p$ (in bits) in order to have similar security as provided by RSA 1024?
Well, the best general factorization method (NFS) can be applied to the discrete log problem with roughly the same complexity; this implies that a $p$ of 1024 bits would give you roughly the same level of security.
1. what should be the size (in bits) of the large prime factor of $p−1$?
Well, discrete logs in the subgroup can also be solved in time $O(\sqrt q)$; this implies to make this attack (say) take $O(2^{80})$ time, we need to have $q > 2^{160}$. Now, it is unclear exactly how difficult the discrete log problem is in the supergroup; this implies that it might be a good idea to have a larger $p$ and $q$ than what seems immediately necessary.
-
Now this is an even nicer answer than Bristol's. Thank you very much @poncho. Too bad I can't give a +2. I'll leave his answer as accepted though, since he also provided a link to the python implementation. – Mihai Todor Jul 23 '12 at 16:20
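Pulling the corrections from both answers together, here is a toy sketch (my own illustration, with deliberately tiny parameters chosen only for readability; a real deployment would use primes of the sizes discussed above) showing that group arithmetic goes modulo $p$, exponents live in $\mathbb{Z}_q$, and multiplying ciphertexts adds the plaintexts:

```python
import random

# Toy parameters (illustrative only): p prime, q a large prime factor of p - 1,
# and g an element of order q in Z_p^*.
p, q = 2579, 1289            # 2579 = 2 * 1289 + 1
g = pow(2, (p - 1) // q, p)  # 2^((p-1)/q) mod p has order q (here g = 4)

def keygen():
    s = random.randrange(1, q)          # private key in Z_q
    return s, pow(g, s, p)              # (s, h = g^s mod p)

def encrypt(m, h):
    a = random.randrange(1, q)          # ephemeral randomness
    return pow(g, a, p), (pow(h, a, p) * pow(g, m, p)) % p    # (x, y)

def add(c1, c2):
    return (c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p           # homomorphic addition

def decrypt(c, s, max_m=1000):
    x, y = c
    gm = (y * pow(x, q - s, p)) % p     # y * x^(-s); x has order q, so x^(q-s) = x^(-s)
    table = {pow(g, m, p): m for m in range(max_m + 1)}       # small-message lookup
    return table[gm]

s, h = keygen()
c = add(encrypt(3, h), encrypt(4, h))
print(decrypt(c, s))    # 7
```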
http://www.reference.com/browse/Bayes%2C+Thomas
# Bayes, Thomas
Bayes, Thomas, 1702-61, English clergyman and mathematician. The son of a Nonconformist minister, he was privately educated and earned his livelihood as a minister to the Nonconformist community at Tunbridge Wells. Although he wrote on theology, e.g., Divine Benevolence (1731), Bayes is best known for his two mathematical works, Introduction to the Doctrine of Fluxions (1736), a defense of the logical foundations of Newton's calculus against the attack of Bishop Berkeley, and "Essay Towards Solving a Problem in the Doctrine of Chances" (1763). The latter, a pioneering work, attempts to establish that the rule for determining the probability of an event is the same whether or not anything is known antecedently to any trials or observations concerning the event.
The Columbia Electronic Encyclopedia Copyright © 2004.
Licensed from Columbia University Press
In probability theory, Bayes' theorem (often called Bayes' law after Thomas Bayes) relates the conditional and marginal probabilities of two random events. It is often used to compute posterior probabilities given observations. For example, a patient may be observed to have certain symptoms. Bayes' theorem can be used to compute the probability that a proposed diagnosis is correct, given that observation. (See example 2)
As a formal theorem, Bayes' theorem is valid in all common interpretations of probability. However, it plays a central role in the debate around the foundations of statistics: frequentist and Bayesian interpretations disagree about the ways in which probabilities should be assigned in applications. Frequentists assign probabilities to random events according to their frequencies of occurrence or to subsets of populations as proportions of the whole, while Bayesians describe probabilities in terms of beliefs and degrees of uncertainty. The articles on Bayesian probability and frequentist probability discuss these debates at greater length.
## Statement of Bayes' theorem
Bayes' theorem relates the conditional and marginal probabilities of events A and B, where B has a non-vanishing probability:
$P(A|B) = \frac{P(B|A)\, P(A)}{P(B)}.$
Each term in Bayes' theorem has a conventional name:
• P(A) is the prior probability or marginal probability of A. It is "prior" in the sense that it does not take into account any information about B.
• P(A|B) is the conditional probability of A, given B. It is also called the posterior probability because it is derived from or depends upon the specified value of B.
• P(B|A) is the conditional probability of B given A.
• P(B) is the prior or marginal probability of B, and acts as a normalizing constant.
Intuitively, Bayes' theorem in this form describes the way in which one's beliefs about observing 'A' are updated by having observed 'B'.
## An example
Suppose there is a co-ed school having 60% boys and 40% girls as students. The girl students wear trousers or skirts in equal numbers; the boys all wear trousers. An observer sees a (random) student from a distance; all they can see is that this student is wearing trousers. What is the probability this student is a girl?
It is clear that the probability is less than 40%, but by how much? Is it half that, since only half the girls are wearing trousers? The correct answer can be computed using Bayes' theorem.
The event A is that the student observed is a girl, and the event B is that the student observed is wearing trousers. To compute P(A|B), we first need to know:
• P(A), or the probability that the student is a girl regardless of any other information. Since the observer sees a random student, meaning that all students have the same probability of being observed, and the fraction of girls among the students is 40%, this probability equals 0.4.
• P(A'), or the probability that the student is a boy regardless of any other information (A' is the complementary event to A). This is 60%, or 0.6.
• P(B|A), or the probability of the student wearing trousers given that the student is a girl. As they are as likely to wear skirts as trousers, this is 0.5.
• P(B|A'), or the probability of the student wearing trousers given that the student is a boy. This is given as 1.
• P(B), or the probability of a (randomly selected) student wearing trousers regardless of any other information. Since P(B) = P(B|A)P(A) + P(B|A')P(A'), this is 0.5 × 0.4 + 1 × 0.6 = 0.8.
Given all this information, the probability of the observer having spotted a girl given that the observed student is wearing trousers can be computed by substituting these values in the formula:
$P(A|B) = \frac{P(B|A)\, P(A)}{P(B)} = \frac{0.5 \times 0.4}{0.8} = 0.25.$
As expected, it is less than 40%, but more than half that.
Another, essentially equivalent way of obtaining the same result is as follows. Assume, for concreteness, that there are 100 students, 60 boys and 40 girls. Among these, 60 boys and 20 girls wear trousers. All together there are 80 trouser-wearers, of which 20 are girls. Therefore the chance that a random trouser-wearer is a girl equals 20/80 = 0.25.
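The same computation can be written out explicitly (an added illustration restating the numbers above):

```python
# Bayes' theorem for the example: A = "student is a girl", B = "student wears trousers"
p_A = 0.4             # prior: fraction of girls
p_B_given_A = 0.5     # girls wear trousers half the time
p_B_given_notA = 1.0  # boys always wear trousers
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)   # total probability = 0.8
print(p_B_given_A * p_A / p_B)                          # posterior P(A|B) = 0.25
```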
It is often helpful when calculating conditional probabilities to create a simple table containing the number of occurrences of each outcome, or the relative frequencies of each outcome, for each of the independent variables. The table below illustrates the use of this method for the above girl-or-boy example.
| | Girls | Boys | Total |
|----------|-------|------|-------|
| Trousers | 20 | 60 | 80 |
| Skirts | 20 | 0 | 20 |
| Total | 40 | 60 | 100 |
## Bayes' theorem in terms of likelihood
Bayes' theorem can also be interpreted in terms of likelihood:
$P(A|B) \propto L(A|B)\, P(A).$
Here $L(A|B)$ is the likelihood of A given fixed B. The rule is then an immediate consequence of the relationship $P(B|A) \propto L(A|B)$.
With this terminology, the theorem may be paraphrased as
$\text{posterior} = \frac{\text{likelihood} \times \text{prior}}{\alpha}$
(where $\alpha$ is a normalising constant equal to $P(B)$).
In words: the posterior probability is proportional to the product of the prior probability and the likelihood.
## Derivation from conditional probabilities
To derive the theorem, we start from the definition of conditional probability. The probability of event A given event B is
$P(A|B) = \frac{P(A \cap B)}{P(B)}.$
Equivalently, the probability of event B given event A is
$P(B|A) = \frac{P(A \cap B)}{P(A)}.$
Rearranging and combining these two equations, we find
$P(A|B)\, P(B) = P(A \cap B) = P(B|A)\, P(A).$
This lemma is sometimes called the product rule for probabilities. Dividing both sides by P(B), providing that it is non-zero, we obtain Bayes' theorem:
$P(A|B) = \frac{P(A \cap B)}{P(B)} = \frac{P(B|A)\, P(A)}{P(B)}.$
## Alternative forms of Bayes' theorem
Bayes' theorem is often embellished by noting that
$P(B) = P(A \cap B) + P(A^C \cap B) = P(B|A)\, P(A) + P(B|A^C)\, P(A^C),$
where AC is the complementary event of A (often called "not A"). So the theorem can be restated as the following formula
$P(A|B) = \frac{P(B|A)\, P(A)}{P(B|A)\, P(A) + P(B|A^C)\, P(A^C)}.$
More generally, where {Ai} forms a partition of the event space,
$P(A_i|B) = \frac{P(B|A_i)\, P(A_i)}{\sum_j P(B|A_j)\, P(A_j)},$
for any Ai in the partition.
See also the law of total probability.
### Bayes' theorem in terms of odds and likelihood ratio
Bayes' theorem can also be written neatly in terms of a likelihood ratio Λ and odds O as
$O(A|B) = O(A) \cdot \Lambda(A|B)$
where $O(A|B) = \frac{P(A|B)}{P(A^C|B)}$ are the odds of A given B,
and $O(A) = \frac{P(A)}{P(A^C)}$ are the odds of A by itself,
while $\Lambda(A|B) = \frac{L(A|B)}{L(A^C|B)} = \frac{P(B|A)}{P(B|A^C)}$ is the likelihood ratio.
### Bayes' theorem for probability densities
There is also a version of Bayes' theorem for continuous distributions. It is somewhat harder to derive, since probability densities, strictly speaking, are not probabilities, so Bayes' theorem has to be established by a limit process; see Papoulis (citation below), Section 7.3 for an elementary derivation. Bayes' theorem for probability densities is formally similar to the theorem for probabilities:
$f_X(x|Y=y) = \frac{f_{X,Y}(x,y)}{f_Y(y)} = \frac{f_Y(y|X=x)\, f_X(x)}{f_Y(y)} = \frac{f_Y(y|X=x)\, f_X(x)}{\int_{-\infty}^{\infty} f_Y(y|X=\xi)\, f_X(\xi)\, d\xi}.$
There is an analogous statement of the law of total probability, which is used in the denominator:
$f_Y(y) = \int_{-\infty}^{\infty} f_Y(y|X=x)\, f_X(x)\, dx.$
As in the discrete case, the terms have standard names.
$f_{X,Y}(x,y)$ is the joint distribution of X and Y,
$f_X(x|Y=y)$ is the posterior distribution of X given Y = y,
$f_Y(y|X=x) = L(x|y)$ is (as a function of x) the likelihood function of X given Y = y,
and $f_X(x)$ and $f_Y(y)$ are the marginal distributions of X and Y respectively, with $f_X(x)$ being the prior distribution of X.
### Abstract Bayes' theorem
Given two absolutely continuous probability measures $P \sim Q$ on the probability space $(\Omega, \mathcal{F})$ and a sigma-algebra $\mathcal{G} \subset \mathcal{F}$, the abstract Bayes theorem for a $\mathcal{F}$-measurable random variable $X$ becomes
$E_P[X\,|\,\mathcal{G}] = \frac{E_Q\!\left[\frac{dP}{dQ}\, X \,\big|\, \mathcal{G}\right]}{E_Q\!\left[\frac{dP}{dQ} \,\big|\, \mathcal{G}\right]}.$
Proof: by definition of conditional probability,
$E_P[X\,|\,\mathcal{G}] = \frac{E_P[X\, 1_\mathcal{G}]}{E_P[1_\mathcal{G}]},$
and we also have
$E_Q\!\left[\frac{dP}{dQ}\, X \,\big|\, \mathcal{G}\right] = \frac{E_Q\!\left[\frac{dP}{dQ}\, X\, 1_\mathcal{G}\right]}{E_Q[1_\mathcal{G}]} = \frac{E_P[X\, 1_\mathcal{G}]}{E_Q[1_\mathcal{G}]},$
$E_Q\!\left[\frac{dP}{dQ} \,\big|\, \mathcal{G}\right] = \frac{E_Q\!\left[\frac{dP}{dQ}\, 1_\mathcal{G}\right]}{E_Q[1_\mathcal{G}]} = \frac{E_P[1_\mathcal{G}]}{E_Q[1_\mathcal{G}]},$
and dividing the last two identities gives the stated formula.
This formulation is used in Kalman filtering to find Zakai equations. It is also used in financial mathematics for change of numeraire techniques.
### Extensions of Bayes' theorem
Theorems analogous to Bayes' theorem hold in problems with more than two variables. For example:
$P(A|B \cap C) = \frac{P(A)\, P(B|A)\, P(C|A \cap B)}{P(B)\, P(C|B)}.$
This can be derived in a few steps from Bayes' theorem and the definition of conditional probability:
$P(A|B \cap C) = \frac{P(A \cap B \cap C)}{P(B \cap C)} = \frac{P(C|A \cap B)\, P(A \cap B)}{P(B)\, P(C|B)} = \frac{P(A)\, P(B|A)\, P(C|A \cap B)}{P(B)\, P(C|B)}.$
Similarly,
$P(A|B \cap C) = \frac{P(B|A \cap C)\, P(A|C)}{P(B|C)},$
which can be regarded as a conditional Bayes' Theorem, and can be derived as follows:
$P(A|B \cap C) = \frac{P(A \cap B \cap C)}{P(B \cap C)} = \frac{P(B|A \cap C)\, P(A|C)\, P(C)}{P(C)\, P(B|C)} = \frac{P(B|A \cap C)\, P(A|C)}{P(B|C)}.$
A general strategy is to work with a decomposition of the joint probability, and to marginalize (integrate) over the variables that are not of interest. Depending on the form of the decomposition, it may be possible to prove that some integrals must be 1, and thus they fall out of the decomposition; exploiting this property can reduce the computations very substantially. A Bayesian network, for example, specifies a factorization of a joint distribution of several variables in which the conditional probability of any one variable given the remaining ones takes a particularly simple form (see Markov blanket).
## Further examples
### Example 1: Drug testing
Bayes' theorem is useful in evaluating the result of drug tests. Suppose a certain drug test is 99% sensitive and 99% specific, that is, the test will correctly identify a drug user as testing positive 99% of the time, and will correctly identify a non-user as testing negative 99% of the time. This would seem to be a relatively accurate test, but Bayes' theorem will reveal a potential flaw. Let's assume a corporation decides to test its employees for opium use, and 0.5% of the employees use the drug. We want to know the probability that, given a positive drug test, an employee is actually a drug user. Let "D" be the event of being a drug user and "N" indicate being a non-user. Let "+" be the event of a positive drug test. We need to know the following:
• P(D), or the probability that the employee is a drug user, regardless of any other information. This is 0.005, since 0.5% of the employees are drug users. This is the prior probability of D.
• P(N), or the probability that the employee is not a drug user. This is 1 − 0.005, or 0.995.
• P(+|D), or the probability that the test is positive, given that the employee is a drug user. This is 0.99, since the test is 99% accurate.
• P(+|N), or the probability that the test is positive, given that the employee is not a drug user. This is 0.01, since the test will produce a false positive for 1% of non-users.
• P(+), or the probability of a positive test event, regardless of other information. This is 0.0149 or 1.49%, which is found by adding the probability that a true positive result will appear (= 99% x 0.5% = 0.495%) plus the probability that a false positive will appear (= 1% x 99.5% = 0.995%). This is the prior probability of +.
Given this information, we can compute the posterior probability P(D|+) of an employee who tested positive actually being a drug user:
$\begin{align} P(D|+) & = \frac{P(+|D)\, P(D)}{P(+)} \\ & = \frac{P(+|D)\, P(D)}{P(+|D)\, P(D) + P(+|N)\, P(N)} \\ & = \frac{0.99 \times 0.005}{0.99 \times 0.005 + 0.01 \times 0.995} \\ & = 0.3322. \end{align}$
Despite the apparently high accuracy of the test, the probability that an employee who tested positive actually did use drugs is only about 33%, so it is actually more likely that the employee is not a drug user. The rarer the condition for which we are testing, the greater the percentage of positive tests that will be false positives.
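How strongly the answer depends on the base rate can be explored with a few lines of code (an added illustration using the same test characteristics):

```python
def posterior_user(prevalence, sensitivity=0.99, specificity=0.99):
    """P(drug user | positive test) by Bayes' theorem."""
    p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    return sensitivity * prevalence / p_positive

for prevalence in (0.005, 0.02, 0.10):
    print(prevalence, round(posterior_user(prevalence), 4))
# 0.005 -> 0.3322, 0.02 -> 0.6689, 0.1 -> 0.9167
```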
### Example 2: Bayesian inference
Applications of Bayes' theorem often assume the philosophy underlying Bayesian probability that uncertainty and degrees of belief can be measured as probabilities. One such example follows. For additional worked out examples, including simpler examples, please see the article on the examples of Bayesian inference.
We describe the marginal probability distribution of a variable A as the prior probability distribution or simply the 'prior'. The conditional distribution of A given the "data" B is the posterior probability distribution or just the 'posterior'.
Suppose we wish to know about the proportion r of voters in a large population who will vote "yes" in a referendum. Let n be the number of voters in a random sample (chosen with replacement, so that we have statistical independence) and let m be the number of voters in that random sample who will vote "yes". Suppose that we observe n = 10 voters and m = 7 say they will vote yes. From Bayes' theorem we can calculate the probability distribution function for r using
$$f(r \mid n=10, m=7) = \frac{f(m=7 \mid r, n=10)\, f(r)}{\int_0^1 f(m=7 \mid r, n=10)\, f(r)\, dr}.$$
From this we see that from the prior probability density function f(r) and the likelihood function L(r) = f(m = 7|r, n = 10), we can compute the posterior probability density function f(r|n = 10, m = 7).
The prior probability density function f(r) summarizes what we know about the distribution of r in the absence of any observation. We provisionally assume in this case that the prior distribution of r is uniform over the interval [0, 1]. That is, f(r) = 1. If some additional background information is found, we should modify the prior accordingly. However before we have any observations, all outcomes are equally likely.
Under the assumption of random sampling, choosing voters is just like choosing balls from an urn. The likelihood function L(r) = P(m = 7|r, n = 10) for such a problem is just the probability of 7 successes in 10 trials for a binomial distribution.
$$P(m=7 \mid r, n=10) = \binom{10}{7}\, r^7\, (1-r)^3.$$
As with the prior, the likelihood is open to revision -- more complex assumptions will yield more complex likelihood functions. Maintaining the current assumptions, we compute the normalizing factor,
$$\int_0^1 P(m=7 \mid r, n=10)\, f(r)\, dr = \int_0^1 \binom{10}{7}\, r^7\, (1-r)^3 \cdot 1\, dr = \binom{10}{7}\, \frac{1}{1320}$$
and the posterior distribution for r is then
$$f(r \mid n=10, m=7) = \frac{\binom{10}{7}\, r^7\, (1-r)^3 \cdot 1}{\binom{10}{7}\, \frac{1}{1320}} = 1320\, r^7\, (1-r)^3$$
for r between 0 and 1, inclusive.
One may be interested in the probability that more than half the voters will vote "yes". The prior probability that more than half the voters will vote "yes" is 1/2, by the symmetry of the uniform distribution. In comparison, the posterior probability that more than half the voters will vote "yes", i.e., the conditional probability given the outcome of the opinion poll – that seven of the 10 voters questioned will vote "yes" – is
$$1320 \int_{1/2}^{1} r^7 (1-r)^3\, dr \approx 0.887,$$
which is about an "89% chance".
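A short numerical check of this figure (an illustrative sketch, integrating the posterior density $1320\,r^7(1-r)^3$ over $[1/2, 1]$ on a grid):

```python
import numpy as np

r = np.linspace(0.0, 1.0, 200_001)
posterior = 1320 * r**7 * (1 - r)**3          # f(r | n=10, m=7)
dr = r[1] - r[0]

mask = r >= 0.5
p_majority = float(np.sum(posterior[mask]) * dr)   # crude Riemann sum over [1/2, 1]
print(round(p_majority, 3))                        # approximately 0.887
```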
### Example 3: The Monty Hall problem
We are presented with three doors - red, green, and blue - one of which has a prize. We choose the red door, which is not opened until the presenter performs an action. The presenter who knows what door the prize is behind, and who must open a door, but is not permitted to open the door we have picked or the door with the prize, opens the blue door and reveals that there is no prize behind it and subsequently asks if we wish to change our mind about our initial selection of red. What is the probability that the prize is behind each of the green and red doors?
Let us call the situation that the prize is behind a given door Ar, Ag, and Ab.
To start with, $P(A_r) = P(A_g) = P(A_b) = \frac{1}{3}$, and to make things simpler we shall assume that we have already picked the red door.
Let us call B "the presenter opens the blue door". Without any prior knowledge, we would assign this a probability of 50%.
• In the situation where the prize is behind the red door, the host is free to pick between the green or the blue door at random. Thus, $P(B|A_r) = 1/2$
• In the situation where the prize is behind the green door, the host must pick the blue door. Thus, $P(B|A_g) = 1$
• In the situation where the prize is behind the blue door, the host must pick the green door. Thus, $P(B|A_b) = 0$
Thus,
$$\begin{align*} P(A_r|B) &= \frac{P(B|A_r)\,P(A_r)}{P(B)} = \frac{\frac{1}{2} \cdot \frac{1}{3}}{\frac{1}{2}} = \frac{1}{3} \\ P(A_g|B) &= \frac{P(B|A_g)\,P(A_g)}{P(B)} = \frac{1 \cdot \frac{1}{3}}{\frac{1}{2}} = \frac{2}{3} \\ P(A_b|B) &= \frac{P(B|A_b)\,P(A_b)}{P(B)} = \frac{0 \cdot \frac{1}{3}}{\frac{1}{2}} = 0. \end{align*}$$
So, we should always choose the green door.
Note how this depends on the value of P(B).
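The 1/3 versus 2/3 split is also easy to confirm with a quick Monte Carlo sketch (an illustration of the same setup, always picking the red door first):

```python
import random

def monty_hall(trials=100_000):
    doors = ["red", "green", "blue"]
    stay_wins = switch_wins = 0
    for _ in range(trials):
        prize = random.choice(doors)
        pick = "red"   # as in the example, we always pick the red door first
        # the host opens a door that is neither our pick nor the prize
        opened = random.choice([d for d in doors if d not in (pick, prize)])
        other = next(d for d in doors if d not in (pick, opened))
        stay_wins += (prize == pick)
        switch_wins += (prize == other)
    return stay_wins / trials, switch_wins / trials

print(monty_hall())   # roughly (0.333, 0.667)
```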
## Historical remarks
An investigation by a statistics professor (Stigler 1983) suggests that Bayes' theorem was discovered by Nicholas Saunderson some time before Bayes.
Bayes' theorem is named after the Reverend Thomas Bayes (1702–1761), who studied how to compute a distribution for the parameter of a binomial distribution (to use modern terminology). His friend, Richard Price, edited and presented the work in 1763, after Bayes' death, as An Essay towards solving a Problem in the Doctrine of Chances. Pierre-Simon Laplace replicated and extended these results in an essay of 1774, apparently unaware of Bayes' work.
One of Bayes' results (Proposition 5) gives a simple description of conditional probability, and shows that it can be expressed independently of the order in which things occur:
If there be two subsequent events, the probability of the second b/N and the probability of both together P/N, and it being first discovered that the second event has also happened, from hence I guess that the first event has also happened, the probability I am right [i.e., the conditional probability of the first event being true given that the second has also happened] is P/b.
Note that the expression says nothing about the order in which the events occurred; it measures correlation, not causation. His preliminary results, in particular Propositions 3, 4, and 5, imply the result now called Bayes' Theorem (as described above), but it does not appear that Bayes himself emphasized or focused on that result.
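In modern notation, writing $A$ for the first event and $B$ for the second, Proposition 5 is the familiar identity

$$P(A \mid B) = \frac{P(A \cap B)}{P(B)} = \frac{P/N}{b/N} = \frac{P}{b}.$$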
Bayes' main result (Proposition 9 in the essay) is the following: assuming a uniform distribution for the prior distribution of the binomial parameter p, the probability that p is between two values a and b is
$$\frac{\int_a^b \binom{n+m}{m}\, p^m (1-p)^n\, dp}{\int_0^1 \binom{n+m}{m}\, p^m (1-p)^n\, dp}$$
where m is the number of observed successes and n the number of observed failures.
What is "Bayesian" about Proposition 9 is that Bayes presented it as a probability for the parameter p. So, one can compute probability for an experimental outcome, but also for the parameter which governs it, and the same algebra is used to make inferences of either kind.
Bayes states his question in a way that might make the idea of assigning a probability distribution to a parameter palatable to a frequentist. He supposes that a billiard ball is thrown at random onto a billiard table, and that the probabilities p and q are the probabilities that subsequent billiard balls will fall above or below the first ball.
Stephen Fienberg describes the evolution of the field from "inverse probability" at the time of Bayes and Laplace, and even of Harold Jeffreys (1939) to "Bayesian" in the 1950's. The irony is that this label was introduced by R.A. Fisher in a derogatory sense. So, historically, Bayes was not a "Bayesian". It is actually unclear whether or not he was a Bayesian in the modern sense of the term, i.e. whether or not he was interested in inference or merely in probability: the 1763 essay is more of a probability paper.
## References
### Versions of the essay
• Thomas Bayes (1763), "An Essay towards solving a Problem in the Doctrine of Chances. By the late Rev. Mr. Bayes, F. R. S. communicated by Mr. Price, in a letter to John Canton, A. M. F. R. S.", 53:370–418.
• Thomas Bayes (1763/1958) "Studies in the History of Probability and Statistics: IX. Thomas Bayes' Essay Towards Solving a Problem in the Doctrine of Chances", 45:296–315. (Bayes' essay in modernized notation)
• Thomas Bayes "An essay towards solving a Problem in the Doctrine of Chances" (Bayes' essay in the original notation)
### Commentaries
• G. A. Barnard (1958) "Studies in the History of Probability and Statistics: IX. Thomas Bayes' Essay Towards Solving a Problem in the Doctrine of Chances", Biometrika 45:293–295. (biographical remarks)
• Daniel Covarrubias. "An Essay Towards Solving a Problem in the Doctrine of Chances" (an outline and exposition of Bayes' essay)
• Stephen M. Stigler (1982). "Thomas Bayes' Bayesian Inference," Journal of the Royal Statistical Society, Series A, 145:250–258. (Stigler argues for a revised interpretation of the essay; recommended)
• Isaac Todhunter (1865). A History of the Mathematical Theory of Probability from the time of Pascal to that of Laplace, Macmillan. Reprinted 1949, 1956 by Chelsea and 2001 by Thoemmes.
• An Intuitive Explanation of Bayesian Reasoning (includes biography)
### Additional material
• Pierre-Simon Laplace (1774/1986), "Memoir on the Probability of the Causes of Events", Statistical Science 1(3):364–378.
• Stephen M. Stigler (1986), "Laplace's 1774 memoir on inverse probability", Statistical Science 1(3):359–378.
• Stephen M. Stigler (1983), "Who Discovered Bayes' Theorem?" The American Statistician 37(4):290–296.
• Jeff Miller, et al., Earliest Known Uses of Some of the Words of Mathematics (B) (very informative; recommended)
• Athanasios Papoulis (1984), Probability, Random Variables, and Stochastic Processes, second edition. New York: McGraw-Hill.
• The on-line textbook: Information Theory, Inference, and Learning Algorithms, by David J. C. MacKay provides an up to date overview of the use of Bayes' theorem in information theory and machine learning.
• Provides a comprehensive introduction to Bayes' theorem.
• Stanford Encyclopedia of Philosophy: Inductive Logic provides a comprehensive Bayesian treatment of Inductive Logic and Confirmation Theory.
• Eliezer S. Yudkowsky (2003), " An Intuitive Explanation of Bayesian Reasoning"
• A tutorial on probability and Bayes’ theorem devised for Oxford University psychology students
• Confirmation Theory An extensive presentation of Bayesian Confirmation Theory
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 50, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9260848760604858, "perplexity_flag": "middle"}
|
http://psychology.wikia.com/wiki/Graph_theory
|
Graph theory
Further information: Graph (mathematics)
In mathematics and computer science, graph theory is the study of graphs, which are mathematical structures used to model pairwise relations between objects from a certain collection. A "graph" in this context is a collection of "vertices" or "nodes" and a collection of edges that connect pairs of vertices. A graph may be undirected, meaning that there is no distinction between the two vertices associated with each edge, or its edges may be directed from one vertex to another; see graph (mathematics) for more detailed definitions and for other variations in the types of graph that are commonly considered. Graphs are one of the prime objects of study in discrete mathematics.
The graphs studied in graph theory should not be confused with the graphs of functions or other kinds of graphs.
Refer to the glossary of graph theory for basic definitions in graph theory.
Applications
Graphs are among the most ubiquitous models of both natural and human-made structures. They can be used to model many types of relations and process dynamics in physical, biological[1] and social systems. Many problems of practical interest can be represented by graphs.
In computer science, graphs are used to represent networks of communication, data organization, computational devices, the flow of computation, etc. One practical example: The link structure of a website could be represented by a directed graph. The vertices are the web pages available at the website and a directed edge from page A to page B exists if and only if A contains a link to B. A similar approach can be taken to problems in travel, biology, computer chip design, and many other fields. The development of algorithms to handle graphs is therefore of major interest in computer science. There, the transformation of graphs is often formalized and represented by graph rewrite systems. They are either directly used or properties of the rewrite systems (e.g. confluence) are studied. Complementary to graph transformation systems focussing on rule-based in-memory manipulation of graphs are graph databases geared towards transaction-safe, persistent storing and querying of graph-structured data.
Graph-theoretic methods, in various forms, have proven particularly useful in linguistics, since natural language often lends itself well to discrete structure. Traditionally, syntax and compositional semantics follow tree-based structures, whose expressive power lies in the Principle of Compositionality, modeled in a hierarchical graph. More contemporary approaches such as Head-driven phrase structure grammar (HPSG) model syntactic constructions via the unification of typed feature structures, which are directed acyclic graphs. Within lexical semantics, especially as applied to computers, modeling word meaning is easier when a given word is understood in terms of related words; semantic networks are therefore important in computational linguistics. Still other methods in phonology (e.g. Optimality Theory, which uses lattice graphs) and morphology (e.g. finite-state morphology, using finite-state transducers) are common in the analysis of language as a graph. Indeed, the usefulness of this area of mathematics to linguistics has borne organizations such as TextGraphs, as well as various 'Net' projects, such as WordNet, VerbNet, and others.
Graph theory is also used to study molecules in chemistry and physics. In condensed matter physics, the three dimensional structure of complicated simulated atomic structures can be studied quantitatively by gathering statistics on graph-theoretic properties related to the topology of the atoms. For example, Franzblau's shortest-path (SP) rings. In chemistry a graph makes a natural model for a molecule, where vertices represent atoms and edges bonds. This approach is especially used in computer processing of molecular structures, ranging from chemical editors to database searching. In statistical physics, graphs can represent local connections between interacting parts of a system, as well as the dynamics of a physical process on such systems.
Graph theory is also widely used in sociology as a way, for example, to measure actors' prestige or to explore diffusion mechanisms, notably through the use of social network analysis software.
Likewise, graph theory is useful in biology and conservation efforts where a vertex can represent regions where certain species exist (or habitats) and the edges represent migration paths, or movement between the regions. This information is important when looking at breeding patterns or tracking the spread of disease, parasites or how changes to the movement can affect other species.
In mathematics, graphs are useful in geometry and certain parts of topology, e.g. Knot Theory. Algebraic graph theory has close links with group theory.
A graph structure can be extended by assigning a weight to each edge of the graph. Graphs with weights, or weighted graphs, are used to represent structures in which pairwise connections have some numerical values. For example if a graph represents a road network, the weights could represent the length of each road.
A digraph with weighted edges in the context of graph theory is called a network. Network analysis has many practical applications, for example, to model and analyze traffic networks. Applications of network analysis split broadly into three categories:
1. First, analysis to determine structural properties of a network, such as the distribution of vertex degrees and the diameter of the graph. A vast number of graph measures exist, and the production of useful ones for various domains remains an active area of research.
2. Second, analysis to find a measurable quantity within the network, for example, for a transportation network, the level of vehicular flow within any portion of it.
3. Third, analysis of dynamical properties of networks.
History
The paper written by Leonhard Euler on the Seven Bridges of Königsberg and published in 1736 is regarded as the first paper in the history of graph theory.[2] This paper, as well as the one written by Vandermonde on the knight problem, carried on with the analysis situs initiated by Leibniz. Euler's formula relating the number of edges, vertices, and faces of a convex polyhedron was studied and generalized by Cauchy[3] and L'Huillier,[4] and is at the origin of topology.
More than one century after Euler's paper on the bridges of Königsberg and while Listing introduced topology, Cayley was led by the study of particular analytical forms arising from differential calculus to study a particular class of graphs, the trees. This study had many implications in theoretical chemistry. The involved techniques mainly concerned the enumeration of graphs having particular properties. Enumerative graph theory then rose from the results of Cayley and the fundamental results published by Pólya between 1935 and 1937 and the generalization of these by De Bruijn in 1959. Cayley linked his results on trees with the contemporary studies of chemical composition.[5] The fusion of the ideas coming from mathematics with those coming from chemistry is at the origin of a part of the standard terminology of graph theory.
In particular, the term "graph" was introduced by Sylvester in a paper published in 1878 in Nature, where he draws an analogy between "quantic invariants" and "co-variants" of algebra and molecular diagrams:[6]
"[...] Every invariant and co-variant thus becomes expressible by a graph precisely identical with a Kekuléan diagram or chemicograph. [...] I give a rule for the geometrical multiplication of graphs, i.e. for constructing a graph to the product of in- or co-variants whose separate graphs are given. [...]" (italics as in the original).
The first textbook on graph theory was written by Dénes Kőnig, and published in 1936.[7] A later textbook by Frank Harary, published in 1969, was enormously popular,[citation needed] and enabled mathematicians, chemists, electrical engineers and social scientists to talk to each other. Harary donated all of the royalties to fund the Pólya Prize.[8]
One of the most famous and productive problems of graph theory is the four color problem: "Is it true that any map drawn in the plane may have its regions colored with four colors, in such a way that any two regions having a common border have different colors?" This problem was first posed by Francis Guthrie in 1852 and its first written record is in a letter of De Morgan addressed to Hamilton the same year. Many incorrect proofs have been proposed, including those by Cayley, Kempe, and others. The study and the generalization of this problem by Tait, Heawood, Ramsey and Hadwiger led to the study of the colorings of the graphs embedded on surfaces with arbitrary genus. Tait's reformulation generated a new class of problems, the factorization problems, particularly studied by Petersen and Kőnig. The works of Ramsey on colorations and more specially the results obtained by Turán in 1941 was at the origin of another branch of graph theory, extremal graph theory.
The four color problem remained unsolved for more than a century. In 1969 Heinrich Heesch published a method for solving the problem using computers.[9] A computer-aided proof produced in 1976 by Kenneth Appel and Wolfgang Haken makes fundamental use of the notion of "discharging" developed by Heesch.[10][11] The proof involved checking the properties of 1,936 configurations by computer, and was not fully accepted at the time due to its complexity. A simpler proof considering only 633 configurations was given twenty years later by Robertson, Seymour, Sanders and Thomas.[12]
The autonomous development of topology from 1860 and 1930 fertilized graph theory back through the works of Jordan, Kuratowski and Whitney. Another important factor of common development of graph theory and topology came from the use of the techniques of modern algebra. The first example of such a use comes from the work of the physicist Gustav Kirchhoff, who published in 1845 his Kirchhoff's circuit laws for calculating the voltage and current in electric circuits.
The introduction of probabilistic methods in graph theory, especially in the study of Erdős and Rényi of the asymptotic probability of graph connectivity, gave rise to yet another branch, known as random graph theory, which has been a fruitful source of graph-theoretic results.
Drawing graphs
Main article: Graph drawing
Graphs are represented graphically by drawing a dot or circle for every vertex, and drawing an arc between two vertices if they are connected by an edge. If the graph is directed, the direction is indicated by drawing an arrow.
A graph drawing should not be confused with the graph itself (the abstract, non-visual structure) as there are several ways to structure the graph drawing. All that matters is which vertices are connected to which others by how many edges and not the exact layout. In practice it is often difficult to decide if two drawings represent the same graph. Depending on the problem domain some layouts may be better suited and easier to understand than others.
The pioneering work of W. T. Tutte was very influential in the subject of graph drawing. Among other achievements, he introduced the use of linear algebraic methods to obtain graph drawings.
Graph drawing also can be said to encompass problems that deal with the crossing number and its various generalizations. The crossing number of a graph is the minimum number of intersections between edges that a drawing of the graph in the plane must contain. For a planar graph, the crossing number is zero by definition.
Drawings on surfaces other than the plane are also studied.
Graph-theoretic data structures
Main article: Graph (data structure)
There are different ways to store graphs in a computer system. The data structure used depends on both the graph structure and the algorithm used for manipulating the graph. Theoretically one can distinguish between list and matrix structures but in concrete applications the best structure is often a combination of both. List structures are often preferred for sparse graphs as they have smaller memory requirements. Matrix structures on the other hand provide faster access for some applications but can consume huge amounts of memory.
List structures
Incidence list
The edges are represented by an array containing pairs (tuples if directed) of vertices (that the edge connects) and possibly weight and other data. Vertices connected by an edge are said to be adjacent.
Adjacency list
Much like the incidence list, each vertex has a list of which vertices it is adjacent to. This causes redundancy in an undirected graph: for example, if vertices A and B are adjacent, A's adjacency list contains B, while B's list contains A. Adjacency queries are faster, at the cost of extra storage space.
Matrix structures
Incidence matrix
The graph is represented by a matrix of size |V | (number of vertices) by |E| (number of edges) where the entry [vertex, edge] contains the edge's endpoint data (simplest case: 1 - incident, 0 - not incident).
Adjacency matrix
This is an n by n matrix A, where n is the number of vertices in the graph. If there is an edge from a vertex x to a vertex y, then the element $a_{x, y}$ is 1 (or in general the number of xy edges), otherwise it is 0. In computing, this matrix makes it easy to find subgraphs, and to reverse a directed graph.
Laplacian matrix or "Kirchhoff matrix" or "Admittance matrix"
This is defined as D − A, where D is the diagonal degree matrix. It explicitly contains both adjacency information and degree information. (However, there are other, similar matrices that are also called "Laplacian matrices" of a graph.)
Distance matrix
A symmetric n by n matrix D, where n is the number of vertices in the graph. The element $d_{x, y}$ is the length of a shortest path between x and y; if there is no such path $d_{x, y}$ = infinity. It can be derived from powers of A
$d_{x,y}=\min\{n\mid A^n[x,y]\ne 0\}. \,$
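As a small illustration of the list and matrix structures described above (a sketch using a made-up four-vertex undirected graph), the following Python code builds an adjacency list, an adjacency matrix, and a distance matrix via breadth-first search:

```python
from collections import deque

# a small undirected graph on vertices 0..3
n = 4
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

# adjacency list: vertex -> list of neighbours
adj = {v: [] for v in range(n)}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

# adjacency matrix A
A = [[0] * n for _ in range(n)]
for u, v in edges:
    A[u][v] = A[v][u] = 1

# distance matrix D from a breadth-first search out of each vertex
def bfs_distances(src):
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return [dist.get(v, float("inf")) for v in range(n)]

D = [bfs_distances(v) for v in range(n)]
print(A[0], D[0])   # [0, 1, 1, 0] [0, 1, 1, 2]
```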
Problems in graph theory
Enumeration
There is a large literature on graphical enumeration: the problem of counting graphs meeting specified conditions. Some of this work is found in Harary and Palmer (1973).
Subgraphs, induced subgraphs, and minors
A common problem, called the subgraph isomorphism problem, is finding a fixed graph as a subgraph in a given graph. One reason to be interested in such a question is that many graph properties are hereditary for subgraphs, which means that a graph has the property if and only if all subgraphs have it too. Unfortunately, finding maximal subgraphs of a certain kind is often an NP-complete problem.
• Finding the largest complete graph is called the clique problem (NP-complete).
A similar problem is finding induced subgraphs in a given graph. Again, some important graph properties are hereditary with respect to induced subgraphs, which means that a graph has a property if and only if all induced subgraphs also have it. Finding maximal induced subgraphs of a certain kind is also often NP-complete. For example,
• Finding the largest edgeless induced subgraph, or independent set, called the independent set problem (NP-complete).
Still another such problem, the minor containment problem, is to find a fixed graph as a minor of a given graph. A minor or subcontraction of a graph is any graph obtained by taking a subgraph and contracting some (or no) edges. Many graph properties are hereditary for minors, which means that a graph has a property if and only if all minors have it too. A famous example:
• A graph is planar if it contains as a minor neither the complete bipartite graph $K_{3,3}$ (See the Three-cottage problem) nor the complete graph $K_{5}$.
Another class of problems has to do with the extent to which various species and generalizations of graphs are determined by their point-deleted subgraphs, for example:
• The reconstruction conjecture
Graph coloring
Many problems have to do with various ways of coloring graphs, for example:
• The four-color theorem
• The strong perfect graph theorem
• The Erdős–Faber–Lovász conjecture (unsolved)
• The total coloring conjecture (unsolved)
• The list coloring conjecture (unsolved)
• The Hadwiger conjecture (graph theory) (unsolved).
Subsumption and unification
Constraint modeling theories concern families of directed graphs related by a partial order. In these applications, graphs are ordered by specificity, meaning that more constrained graphs—which are more specific and thus contain a greater amount of information—are subsumed by those that are more general. Operations between graphs include evaluating the direction of a subsumption relationship between two graphs, if any, and computing graph unification. The unification of two argument graphs is defined as the most general graph (or the computation thereof) that is consistent with (i.e. contains all of the information in) the inputs, if such a graph exists; efficient unification algorithms are known.
For constraint frameworks which are strictly compositional, graph unification is the sufficient satisfiability and combination function. Well-known applications include automatic theorem proving and modeling the elaboration of linguistic structure.
Route problems
• Hamiltonian path and cycle problems
• Minimum spanning tree
• Route inspection problem (also called the "Chinese Postman Problem")
• Seven Bridges of Königsberg
• Shortest path problem
• Steiner tree
• Three-cottage problem
• Traveling salesman problem (NP-hard)
Network flow
There are numerous problems arising especially from applications that have to do with various notions of flows in networks, for example:
• The max-flow min-cut theorem
Covering problems
Covering problems are specific instances of subgraph-finding problems, and they tend to be closely related to the clique problem or the independent set problem.
Graph classes
Many problems involve characterizing the members of various classes of graphs. Overlapping significantly with other types in this list, this type of problem includes, for instance:
• Enumerating the members of a class
• Characterizing a class in terms of forbidden substructures
• Ascertaining relationships among classes (e.g., does one property of graphs imply another)
• Finding efficient algorithms to decide membership in a class
• Finding representations for members of a class.
Notes
1. ↑ Mashaghi, A. (2004). Investigation of a protein complex network. European Physical Journal B 41 (1): 113–121.
2. ↑ Biggs, N.; Lloyd, E. and Wilson, R. (1986), Graph Theory, 1736-1936, Oxford University Press
3. ↑ Cauchy, A.L. (1813), "Recherche sur les polyèdres - premier mémoire", 9 (Cahier 16): 66–86.
4. ↑ L'Huillier, S.-A.-J. (1861), "Mémoire sur la polyèdrométrie", Annales de Mathématiques 3: 169–189.
5. ↑ Cayley, A. (1875), "Ueber die Analytischen Figuren, welche in der Mathematik Bäume genannt werden und ihre Anwendung auf die Theorie chemischer Verbindungen", Berichte der deutschen Chemischen Gesellschaft 8 (2): 1056–1059, doi:.
6. ↑ John Joseph Sylvester (1878), Chemistry and Algebra. Nature, volume 17, page 284. DOI:10.1038/017284a0 . Online version. Retrieved 2009-12-30.
7. ↑ Tutte, W.T. (2001), , Cambridge University Press, p. 30, ISBN 978-0-521-79489-3 .
8. ↑ Society for Industrial and Applied Mathematics (2002), "The George Polya Prize", Looking Back, Looking Ahead: A SIAM History, p. 26 .
9. ↑ Heinrich Heesch: Untersuchungen zum Vierfarbenproblem. Mannheim: Bibliographisches Institut 1969.
10. ↑ Appel, K. and Haken, W. (1977), "Every planar map is four colorable. Part I. Discharging", Illinois J. Math. 21: 429–490.
11. ↑ Appel, K. and Haken, W. (1977), "Every planar map is four colorable. Part II. Reducibility", Illinois J. Math. 21: 491–567.
12. ↑ Robertson, N.; Sanders, D.; Seymour, P. and Thomas, R. (1997), "The four color theorem", Journal of Combinatorial Theory Series B 70: 2–44, doi:.
References
• Berge, Claude (1958), Théorie des graphes et ses applications, Collection Universitaire de Mathématiques, II, Paris: Dunod . English edition, Wiley 1961; Methuen & Co, New York 1962; Russian, Moscow 1961; Spanish, Mexico 1962; Roumanian, Bucharest 1969; Chinese, Shanghai 1963; Second printing of the 1962 first English edition, Dover, New York 2001.
• Biggs, N.; Lloyd, E.; Wilson, R. (1986), Graph Theory, 1736–1936, Oxford University Press .
• Bondy, J.A.; Murty, U.S.R. (2008), Graph Theory, Springer, ISBN 978-1-84628-969-9 .
• Bondy, Riordan, O.M (2003), Mathematical results on scale-free random graphs in "Handbook of Graphs and Networks" (S. Bornholdt and H.G. Schuster (eds)), Wiley VCH, Weinheim, 1st ed. .
• Chartrand, Gary (1985), Introductory Graph Theory, Dover, ISBN 0-486-24775-9 .
• Gibbons, Alan (1985), Algorithmic Graph Theory, Cambridge University Press .
• Reuven Cohen, Shlomo Havlin (2010), Complex Networks: Structure, Robustness and Function, Cambridge University Press
• Golumbic, Martin (1980), Algorithmic Graph Theory and Perfect Graphs, Academic Press .
• Harary, Frank (1969), Graph Theory, Reading, MA: Addison-Wesley .
• Harary, Frank; Palmer, Edgar M. (1973), Graphical Enumeration, New York, NY: Academic Press .
• Mahadev, N.V.R.; Peled, Uri N. (1995), Threshold Graphs and Related Topics, North-Holland .
• Mark Newman (2010), Networks: An Introduction, Oxford University Press .
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9113192558288574, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/pre-calculus/187454-few-questions-involving-square-roots-inequalities.html
|
# Thread: A few questions involving square roots and inequalities
2. ## re: A few questions involving square roots and inequalities
For the first one, $\displaystyle \sqrt{x - 1}$ is not equal to $\displaystyle \sqrt{x} - 1$.
You should start by squaring both sides.
3. ## re: A few questions involving square roots and inequalities
Originally Posted by juliak
For the last one, you first need to note that $\displaystyle x \neq 3$ (Why?)
Then
$\displaystyle \begin{align*} \frac{\sqrt{x^3 - 6x^2 + 9x}}{x-3} &= \frac{\sqrt{x(x^2 - 6x+ 9)}}{x - 3} \\ &= \frac{\sqrt{x}\sqrt{x^2 - 6x + 9}}{x - 3} \\ &= \frac{\sqrt{x}\sqrt{(x - 3)^2}}{x-3} \textrm{ which means }x \geq 0 \\ &= \frac{\sqrt{x}|x-3|}{x-3} \\ &= \begin{cases}\frac{\sqrt{x}(x - 3)}{x-3}\textrm{ if }x - 3 > 0 \\ \frac{\sqrt{x}\,\left[-(x-3)\right]}{x-3}\textrm{ if }x - 3 < 0\end{cases} \\ &= \begin{cases}\sqrt{x}\textrm{ if }x > 3 \\ -\sqrt{x} \textrm{ if }0 \leq x < 3 \textrm{ since we have already established that }x \geq 0\end{cases}\end{align*}$
You should be able to sketch this from here.
4. ## re: A few questions involving square roots and inequalities
Originally Posted by juliak
b) Your working has been marked incorrect because you have failed to realise that multiplying or dividing both sides of an inequality by a negative quantity reverses the inequality sign. It's quite possible that your denominators could be negative. The easiest way to do this is first to note that $\displaystyle x \neq -1$ and $\displaystyle x \neq 1$, then...
$\displaystyle \begin{align*} \frac{1}{1 + x} &< \frac{x}{x - 1} \\ \frac{1}{1 + x} &< \frac{x - 1 + 1}{x - 1} \\ \frac{1}{1 + x} &< \frac{x - 1}{x - 1} + \frac{1}{x - 1} \\ \frac{1}{1 + x} &< 1 + \frac{1}{x - 1} \\ \frac{1}{1 + x} - \frac{1}{x - 1} &< 1 \\ \frac{(x - 1) - (1 + x)}{(1 + x)(x - 1)} &< 1 \\ \frac{-2}{(1 + x)(x - 1)} &< 1 \end{align*}$
Now to solve this, you need to consider two cases, the first where $\displaystyle (1 + x)(x - 1) < 0$ and the second where $\displaystyle (1 + x)(x - 1) > 0$.
Case 1:
$\displaystyle (1 + x)(x - 1) < 0 \implies x^2 - 1 < 0 \implies |x| < 1 \implies -1 < x < 1$
which gives
$\displaystyle \begin{align*} \frac{-2}{(1 + x)(x - 1)} &< 1 \\ -2 &> (1 + x)(x - 1) \\ -2 &> x^2 - 1 \\ x^2 - 1 &< -2 \\ x^2 &< -1 \textrm{ which is not true for any }x \end{align*}$
Case 2:
$\displaystyle (1 + x)(x - 1) > 0 \implies x^2 - 1 > 0 \implies |x| > 1 \implies x < -1 \textrm{ or }x > 1$
which gives
$\displaystyle \begin{align*} \frac{-2}{(1 + x)(x - 1)} &< 1 \\ -2 &< (1 + x)(x - 1) \\ -2 &< x^2 - 1 \\ x^2 - 1 &> -2 \\ x^2 &> -1\textrm{ which is true for all possible }x \end{align*}$
So the solution is $\displaystyle x < -1 \textrm{ or }x > 1$.
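A quick numerical spot-check of this solution set (an illustrative sketch, sampling the inequality on a grid):

```python
import numpy as np

x = np.linspace(-3, 3, 6001)
x = x[(np.abs(x + 1) > 1e-6) & (np.abs(x - 1) > 1e-6)]   # exclude x = -1 and x = 1
holds = 1 / (1 + x) < x / (x - 1)

# the inequality should hold exactly where |x| > 1
print(bool(np.array_equal(holds, np.abs(x) > 1)))   # True
```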
5. ## re: A few questions involving square roots and inequalities
Originally Posted by juliak
$\displaystyle \begin{align*} |4x - x^3| &= \begin{cases}4x - x^3 \textrm{ if }4x - x^3 \geq 0 \\ x^3 - 4x \textrm{ if }4x - x^3 < 0\end{cases} \end{align*}$
So to work out the possible $\displaystyle x$ values for each case, we need to solve the equation $\displaystyle f(x) = 4x - x^3 = 0$, since the function changes sign at these solutions.
$\displaystyle \begin{align*} 4x - x^3 &= 0 \\ x(4 - x^2) &= 0 \\ x(2 - x)(2 + x) &= 0 \\ x = -2 \textrm{ or }x &= 0 \textrm{ or }x = 2 \end{align*}$
Now testing values for the function...
$\displaystyle \begin{align*} f(-3) &= 4(-3) - (-3)^3 \\ &= -12 - (-27) \\ &= 27 - 12 \\ &= 15 \end{align*}$
so $\displaystyle 4x - x^3 > 0\textrm{ if } x < -2$.
$\displaystyle \begin{align*}f(-1) &= 4(-1) - (-1)^3 \\ &= -4 - (-1) \\ &= 1 - 4 \\ &= -3\end{align*}$
so $\displaystyle 4x - x^3 < 0 \textrm{ if }-2 < x < 0$.
$\displaystyle \begin{align*} f(1) &= 4(1) - 1^3 \\ &= 4 - 1 \\ &= 3 \end{align*}$
so $\displaystyle 4x - x^3 > 0 \textrm{ if } 0 < x < 2$.
$\displaystyle \begin{align*} f(3) &= 4(3) - 3^3 \\ &= 12 - 27 \\ &= -15 \end{align*}$
so $\displaystyle 4x - x^3 < 0 \textrm{ if } x > 2$.
Therefore
$\displaystyle |4x - x^3| = \begin{cases} 4x - x^3 \textrm{ if }x \leq -2 \textrm{ or }0 \leq x \leq 2 \\ x^3 - 4x \textrm{ if }-2 < x < 0 \textrm{ or }x > 2 \end{cases}$
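A quick numerical check of this piecewise form (illustration only):

```python
import numpy as np

x = np.linspace(-4, 4, 8001)
lhs = np.abs(4 * x - x**3)
piecewise = np.where((x <= -2) | ((x >= 0) & (x <= 2)), 4 * x - x**3, x**3 - 4 * x)
print(bool(np.allclose(lhs, piecewise)))   # True
```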
6. ## Re: A few questions involving square roots and inequalities
$\sqrt{x - 1} = \sqrt{1-x}$ is true only if x = 1, which is the solution.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 28, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9377956390380859, "perplexity_flag": "middle"}
|
http://slawekk.wordpress.com/tag/haskell/
|
# Formalized Mathematics
Just another WordPress.com weblog
## Posts Tagged ‘Haskell’
### Gowers and Ganesalingam
May 12, 2013
There is a series of posts at the Timothy Gowers’s blog about software that can “solve mathematical problems” which he has been working on with Mohan Ganesalingam for the last three years. The word “problems” is used here in the way a mathematician would use it – it’s really about software that can support a mathematician trying to write a proof of a theorem. The goal is for the program to write proofs that look like ones produced by a human. The experiment Gowers sets up on his blog presents several theorems with proofs written by two humans (an undergraduate and a graduate student) and one created by the program and asks the readers to guess which proofs are written by whom. The results are truly remarkable – while I, and most of the audience, could have a good guess which proof is written by the software, I believe I could guess it only because I had known one of the three proofs was such. If a student submitted such proof as homework to me, I would have no suspicion at all.
### IsarMathLib 1.7.2
July 20, 2011
I have released a new version of IsarMathLib. It adds about 50 new lemmas, mostly in group theory and topology, leading to the following characterization of closure in topological groups:
$\overline{A} = \bigcap_{H\in \mathcal{N}_0} A+H$
Here, $\mathcal{N}_0$ is the collection of neighborhoods of zero (sets whose interior contains the neutral element of the group), and for two sets $A,B\subseteq G$ we define $A+B$ as $A+B=\{a+b | a\in A, b\in B\}$.
### Markov chains
June 5, 2009
The approach taken in Haskell's probability library makes it especially convenient for modeling Markov chains.
Suppose $\{X_n\}_{n \in \mathbb{N}}$ is a Markov chain on a state space $S$. The distribution of the process $\{X_n\}_{n \in \mathbb{N}}$ is determined by the distribution of $X_0$ and the numbers $Pr(X_{n+1}=y | X_n = x)$, $x,y \in S, n \in \mathbb{N}$, called transition probabilities. For simplicity let's consider only time-homogeneous Markov chains where these numbers don't depend on $n$ and let's assume that the initial distribution is a point mass, i.e. $X_0 = x_0$ for some $x_0\in S$. Then we can define a function $f: S \rightarrow Dist(S)$, where for each $x \in S$ the value $f(x)$ is a distribution on $S$ defined by $(f(x))(y) = Pr(X_{n+1}=y | X_n = x), x,y \in S, n \in \mathbb{N}$. Here $Dist(S)$ denotes the set of finitely supported nonnegative functions on $S$ that sum up to 1. Conceptually, $f(x)$ is the distribution of the next value of the chain given the current value is $x$. This is exactly the kind of distribution-valued function that is the right hand side operand of the $\leadsto$ operation from the previous post, which corresponds to Haskell's ` >>= ` operation in the probability monad.
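A rough Python transcription of this idea (my own sketch, with a made-up two-state kernel; the original works in Haskell's probability monad): a distribution is a finitely supported map from states to weights summing to 1, and the bind operation pushes a distribution through the kernel $f$.

```python
def bind(dist, f):
    """Analogue of the probability monad's >>= : combine a distribution over
    states with a kernel f mapping each state to a distribution."""
    out = {}
    for x, p in dist.items():
        for y, q in f(x).items():
            out[y] = out.get(y, 0.0) + p * q
    return out

# hypothetical time-homogeneous chain on S = {"sunny", "rainy"}
kernel = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}
f = lambda state: kernel[state]

dist = {"sunny": 1.0}          # X_0 = sunny, a point mass
for _ in range(3):             # distribution of X_3
    dist = bind(dist, f)
print(dist)                    # roughly {'sunny': 0.688, 'rainy': 0.312}
```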
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 24, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9454548358917236, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/1544/nonlinear-optics-as-gauge-theory
|
Nonlinear optics as gauge theory
The widely used approach to nonlinear optics is a Taylor expansion of the dielectric displacement field $\mathbf{D} = \epsilon_0\cdot\mathbf{E} + \mathbf{P}$ in a Fourier representation of the polarization $\mathbf{P}$ in terms of the dielectric susceptibility $\mathcal{X}$:
$\mathbf{P} = \epsilon_0\cdot(\mathcal{X}^{(1)}(\mathbf{E}) + \mathcal{X}^{(2)}(\mathbf{E},\mathbf{E}) + \dots)$ .
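As a quick illustration of what the quadratic term does (a toy numerical sketch with made-up susceptibility values, $\epsilon_0$ set to 1): feeding a monochromatic field through $\mathcal{X}^{(2)}$ produces polarization components at zero frequency and at twice the driving frequency.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 4096, endpoint=False)
omega = 2 * np.pi * 50                 # 50 cycles over the window, arbitrary units
E = np.cos(omega * t)

chi1, chi2 = 1.0, 0.1                  # made-up susceptibilities
P = chi1 * E + chi2 * E**2             # keep only the first two terms of the expansion

spectrum = np.abs(np.fft.rfft(P)) / len(t)
freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
print(freqs[spectrum > 1e-3])          # [0., 50., 100.]: the 100 line is the chi^(2) signature
```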
This expansion does not work anymore if the excitation field has components close to the resonance of the medium. Then, one has to take the whole quantum mechanical situation into account by e.g. describing light/matter interaction by a two-level Hamiltonian.
But this approach is certainly not the most general one.
Intrinsically nonlinear formulations of electrodynamics
So, what kind of nonlinear formulations of electrodynamics given in a Lagrangian formulation are there?
One known ansatz is the Born-Infeld model as pointed out by Raskolnikov. There, the Lagrangian density is given by
$\mathcal{L} = b^2\cdot \left[ \sqrt{-\det (g_{\mu \nu})} - \sqrt{-\det(g_{\mu \nu} + F_{\mu \nu}/b)} \right]$
and the theory has some nice features, for example a maximum energy density and its relation to gauge fields in string theory. But as I see it, this model is an intrinsically nonlinear model for the free-space field itself and not useful for describing nonlinear matter interaction.
The same holds for an ansatz of the form
$\mathcal{L} = -\frac{1}{4}F^{\mu\nu}F_{\mu\nu} + \lambda\cdot\left( F^{\mu\nu}F_{\mu\nu} \right)^2$
proposed by Mahzoon and Riazi. Of course, describing the system in Quantum Electrodynamics is intrinsically nonlinear and ... to my mind way too complicated for a macroscopic description of nonlinear optics. The question is: Can we still get a nice formulation of the theory, say, as a mean field theory via an effective Lagrangian?
I think a suitable ansatz could be
$\mathcal{L} = -\frac{1}{4}M^{\mu\nu}F_{\mu\nu}$
where $M$ now accounts for the matter reaction and depends in a nonlinear way on $\mathbf{E}$ and $\mathbf{B}$, say
$M^{\mu\nu} = T^{\mu\nu\alpha\beta}F_{\alpha\beta}$
where now $T$ is a nonlinear function of the field strength and might obey certain symmetries. The equation $T = T\left( F \right)$ remains unknown and depends on the material.
Metric vs. $T$ approach
As pointed out by space_cadet, one might ask the question why the nonlinearity is not better suited in the metric itself. I think this is a matter of taste. My point is that explicitly changing the metric might imply a non-stationary spacetime in which a Fourier transformation might not be well defined. It might be totally sufficient to treat spacetime as Lorentzian manifold.
Also, we might need a simple spacetime structure later on to explain the material interaction since the polarization $\mathbf{P}$ depends on the matter response generally in terms of an integration over the past, say
$\mathbf{P}(t) = \int_{-\infty}^{t}R\left[\mathbf{E}\right](\tau )d\tau$
with $R$ being some nonlinear response function(al) related to $T^{\mu\nu\alpha\beta}$.
Examples for $T$
To illustrate the idea of $T$, here are some examples.
For free space, $T$ is given by $T^{\mu\nu\alpha\beta} = g^{\mu\alpha}g^{\nu\beta}$, resulting in the free-space Lagrangian $\mathcal{L} = -\frac{1}{4}T^{\mu\nu\alpha\beta}F_{\alpha\beta}F_{\mu\nu} = -\frac{1}{4}F^{\mu\nu}F_{\mu\nu}$. The Lagrangian of Mahzoon and Riazi can be reconstructed by
$T^{\mu\nu\alpha\beta} = \left( 1 + \lambda F^{\gamma\delta}F_{\gamma\delta} \right)\cdot g^{\mu\alpha}g^{\nu\beta}$.
One might be able to derive a Kerr nonlinearity using this Lagrangian.
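A quick check of this claim (my own contraction, not taken from the cited paper): inserting this $T$ into the Lagrangian above gives

$$-\frac{1}{4}T^{\mu\nu\alpha\beta}F_{\alpha\beta}F_{\mu\nu} = -\frac{1}{4}\left(1+\lambda F^{\gamma\delta}F_{\gamma\delta}\right)F^{\mu\nu}F_{\mu\nu} = -\frac{1}{4}F^{\mu\nu}F_{\mu\nu} - \frac{\lambda}{4}\left(F^{\mu\nu}F_{\mu\nu}\right)^2,$$

which reproduces the quartic Lagrangian quoted above up to a rescaling of the coupling constant ($\lambda$ there corresponds to $-\lambda/4$ here).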
So, is anyone familiar in a description of nonlinear optics/electrodynamics in terms of a gauge field theory or something similar to the thoughts outlined here?
Thank you in advance.
Sincerely,
Robert
Comments on the first Bounty
I want to thank everyone actively participating in the discussion, especially Greg Graviton, Marek, Raskolnikov, space_cadet and Willie Wong. I am enjoying the discussion relating to this question and thankful for all the nice leads you gave. I decided to give the bounty to Willie since he gave the thread a new direction by introducing the material manifold to us.
For now, I have to reconsider all the ideas and I hope I can come up with a new revision of the question that is formulated in a clearer way than it is at the moment.
So, thank you again for your contributions and feel welcome to share new insights.
-
I am not sure what you want. QED is a gauge theory and tells you almost everything you might want to know about interaction of light with matter. But I guess this level of approach is rarely useful. Usually you would want to work with scattering of photons on some lattice and that is just condensed matter physics. To say the least, some of my friends are working in the field of quantum optics and they don't even need to know field theory (not to say gauge theory). Usually they deal just with material science. – Marek Dec 2 '10 at 10:49
Hey guys, I might notice that this is getting a little off-topic here... yes, I am a German guy and it seems perfectly fine to me to use the english words (with German descent) ansatz, bremsstrahlung, zitterbewegung, eigen* etc. in my posts :) Why shouldn't I? Noldorin, I suppose you use much more words from your mother language in your posts than I do ;) – Robert Filter Dec 11 '10 at 8:42
@Noldorin: Thank you for your concern. But I guess that people who are used to scientific literature are familiar with those words. The irony is that the english language is not the mother tongue of a lot of people here; And the first moment a word gets used which is not instantaniously part of the vocabulary of a native speaker, it gets replaced. I really appreciate you helping to clearify things but this one was actually ... quite funny :) – Robert Filter Dec 12 '10 at 17:08
@Noldorin: I am sorry if I offended you. I delivered the message with a smile hoping you would see the funny part of the whole discussion. Really, noone doubts your second point but I hope you will excuse yourself later for the part in brackets. – Robert Filter Dec 12 '10 at 18:38
4 Answers
Just a few random thoughts.
There is something important in your observation that the Born-Infeld model is essentially a free-space model. It is known to Boillat and Plebanski (separately in 1970) that the Born-Infeld model is the only model of electromagnetism (as a connection on a $U(1)$ vector bundle) that satisfies the following conditions
1. Covariance under Lorentz transformations
2. Reduces to Maxwell's equation in the small-field strength limit
3. $U(1)$ gauge symmetry
4. Integrable energy density for a point-charge
5. No birefringence (speed of light independent of polarization).
(the linear Maxwell system fails condition 4.) (See Michael Kiessling, "Electromagnetic field theory without divergence problems", J. Stat. Phys. (2004) doi:10.1023/B:JOSS.0000037250.72634.2a for an exposition on this and related issues.)
Now, since you are interested in nonlinear optics inside a material, instead of in vacuum, I think conditions 1 and 5 can safely be dropped. (Though you may want to keep 5 as a matter of course.) Condition 4 is intuitively pleasing, but maybe not too important, at least not until you have some candidate theories in mind that you want to distinguish. Condition 3 you must keep. Condition 2, on the other hand, really depends on what kind of material you have in mind.
In any case, a small suggestion: personally I think it is better to, from the get-go, write your proposed Lagrangian as
$$L = T^{abcd} F_{ab}F_{cd}$$
instead of $M^{ab}F_{cd}$. I think it is generally preferable to consider Lagrangian field theories of at least quadratic dependence on the field variables. A pure linear term suggests to me an external potential which I don't think should be built into the theory.
If you want something like condition 2, but with a dielectric constant or such, then you must have that $T^{abcd}$ admit a Taylor expansion looking something like
$$T^{abcd} = \tilde{g}^{ac}\tilde{g}^{bd} + O(|F|)$$
where $\tilde{g}$ is some effective metric for the material. Birefringence, however, you don't have to insert in explicitly: most likely a generic (linear or nonlinear) $T^{abcd}$ you write down will have birefringence; it is only when you try to rule it out that you will bring in some constraints.
An interesting thing is to consider what it means to have an analogous notion to condition 1. In the free-space case, condition 1 implies that the Lagrangian should only be a function of the Lorentz invariant $B^2 - E^2$ (in natural units) and of the pseudo-scalar invariant $B\cdot E$. In terms of the Faraday tensor these two invariants are $F^{ab}F_{ab}$ and $F^{ab}{}^*F_{ab}$ respectively, where ${}^*$ denote the Hodge dual. The determination of the linear part of your theory (of electromagnetic waves in a material) is essentially by what you will use to replace condition 1. If you assume your material is isotropic and homogeneous, then some similar sort of scalar + pseudo-scalar invariants is probably a good bet.
-
@Willie Wong: Thank you very much for your substantial answer. If I understand your argumentation correctly, one might not be able to find another intrinsically nonlinear formulation that is Lorentz-invariant. Having some relativistic background I would not dare to drop this condition. Do you think having something like $g\times{U(1)}$ with a new group coming from the matter interaction (maybe in some spirit of the electroweak interaction) would be a much better approach? Sincerely – Robert Filter Dec 16 '10 at 9:37
@Robert: Well, matter interaction will give all sorts of new things, but I do suspect that back-reaction can be in some-sense approximated by pure nonlinearities. One thing to note is the following: you can actually use some sort of Aether-theory idea to break local Lorentz invariance when keeping general covariance for the over-all theory. That is: if you take your optical medium to be some sort of fluid or elastic body evolving (possibly independently of the EM field in the linear approximation except through gravity) in space-time, you can define your "optical metric" $T$ through... – Willie Wong Dec 16 '10 at 12:25
...properties of the optical medium. For example, in the linear case, say with a relativistic elastic body as the material, you can construct $\tilde{g}$ from the pull-back of the Riemannian metric on the material manifold, plus a factor coming from the particle world-lines. The overall theory will be generally covariant, but after "fixing" the optical medium you get a local background that breaks Lorentz invariance. So I wouldn't worry too much about breaking Condition 1. Relaxing Conditions 4 and 5 also gives many, many other admissible Lagrangians. – Willie Wong Dec 16 '10 at 12:30
(I should say that the above is inspired by some recent work of Ted Jacobson's on Einstein-Aether theory, which I think is somewhat related to Horava gravity.) – Willie Wong Dec 16 '10 at 12:32
@Willie: Thank you very much for your further explanations. I have to admit that the notion of a material manifold is new to me. I always thought of entities defined on a background spacetime but with $\tilde{g}$ as the metric of that manifold, one could basically drop the $T$ approach in favour of this curved material manifold. I suppose the most simple example would be transformation optics with permittivity $\epsilon$ as $\tilde{g}$ if I am not entirely wrong. I am not sure what you mean by particle world lines in this sense; (timelike -> light) geodesics maybe? – Robert Filter Dec 16 '10 at 13:04
Nonlinear is a buzzword used to cover anything that is not linear. Depending on what kind of nonlinearity is involved, and thus what kind of material, there could be one symmetry or another, or there could be no symmetry at all. For instance, in superconductors, gauge symmetry is broken and photons behave as if they have acquired a mass. The result is that magnetic fields have limited penetration in the superconductor. And I think this is still described by linear equations.
I know of one gauge-invariant theory that is non-linear, this model is called the Born-Infeld model.
-
Thank you very much for your answer. I was not aware of the Born-Infeld theory of electrodynamics so far but it looks very interesting. You are also pointing to one important thing: different materials will have different symmetries. This is exactly what should cause different materials to obey different kinds of nonlinearities if they can be described by a gauge theory. For the moment we might not focus on further complicated things as symmetry breaking, if this is convenient for you. – Robert Filter Dec 2 '10 at 16:29
You have been asking some seriously interesting questions! Here's my take on this one ...
You say this about the Born-Infeld action:
But as I see it, this model is an intrinsically nonlinear model for the free-space field itself and not useful for describing nonlinear matter interaction.
I'm not sure exactly what you mean by "free-space" field. I take it that you're referring to $F_{\mu\nu}$. Well there is no reason why one cannot define an $F_{\mu\nu}$ for waves propagating non-linearly, within a medium or in a vacuum.
The matter-light interaction can be specified (at least in part if not wholly) by the form of $g_{\mu\nu}$. Now bear with me for a minute. I'm not referring to the metric generated by some kind of matter. The metric in question does not, a priori, satisfy the Einstein equations. It is instead the effective metric experienced by the light-rays propagating within the given material. See these excellent papers by Ulf Leonhardt and Thomas Philbin [1],[2] for more details on this notion. In brief, the off-diagonal components $g_{ij}$ (where $i,j \in \{1,2,3\},\ i \neq j$) encode the susceptibility tensor and the time–space components $g_{0i}$ determine the mixing between the electric and magnetic components of the wave.
As for the lagrangian density for the matter-light interaction you posit:
$$\mathcal{L}_{int} \propto M^{\mu\nu} F_{\mu\nu} = T^{\mu\nu\alpha\beta} F_{\alpha\beta} F_{\mu\nu}$$
for flat space (or no medium) $T^{\mu\nu\alpha\beta} = g^{\mu\alpha}g^{\nu\beta}$, this term reduces to $F^{\mu\nu} F_{\mu\nu}$, which is nothing more than the Maxwell term! On the face of it this gives us nothing new, unless we adopt the route outlined above and use the metric $g_{\mu\nu}$ to encode the optical properties of the medium.
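As a quick check of that index placement (my addition, not part of the original answer): for an antisymmetric $F_{\mu\nu}$, contracting with $g^{\mu\nu}g^{\alpha\beta}$ only picks up the vanishing trace of $F$, whereas $g^{\mu\alpha}g^{\nu\beta}$ reproduces the usual Maxwell invariant. A minimal numerical sketch:

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])   # flat (inverse) metric, mostly-minus signature
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
F = M - M.T                            # an arbitrary antisymmetric F_{mu nu}

# T^{mu nu alpha beta} = g^{mu alpha} g^{nu beta}: the usual invariant F^{mu nu} F_{mu nu}
maxwell = np.einsum('ma,nb,ab,mn->', g, g, F, F)

# T^{mu nu alpha beta} = g^{mu nu} g^{alpha beta}: square of the trace of F, which vanishes
trace_sq = np.einsum('mn,ab,ab,mn->', g, g, F, F)

print(maxwell)    # generically nonzero
print(trace_sq)   # 0.0 up to rounding
```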
Another line of thought which exploits this notion of the metric to allow one to speak of an analogy between optical processes and the big-bang is the phenomenal work of Igor Smolyaninov [3]. This paper was accepted by PRL btw, so it's nothing to sneeze at.
Assuming that the above line of reasoning is not fatally flawed, and that one can encode the effects of the medium in the metric, it seems that either the Maxwell or the Born-Infeld action is a perfectly good candidate for a gauge-invariant action for your purposes.
Cheers,
Edit: Non-linearity redux
As @Raskolnikov pointed out, the identification of the components $g_{ab}$ with the optical susceptibilities of a material does not give us a nonlinear material. For that, you have to have a dependence of the susceptibilities on the field strengths themselves. So you have a feedback mechanism $\mathbf{g} \rightarrow \mathbf{F} \rightarrow \mathbf{g}$ and therefore the non-linearity! Therefore, as @Robert has been trying to convey to me without success, $\mathbf{g}$ should in general be a function of $\mathbf{F}$.
But then you start treading dangerously close to the speculation that somehow the eventual picture (for the fully non-linear case) might be somehow general relativistic. That is a very tempting idea, but I leave that for another time.
@Robert - timelike k.v.f ... ?? You're making it too complicated. You have a default timelike k.v.f. The medium that the light propagates through presumably sits in some background geometry which is close to being flat unless your lab is in orbit around a really massive object. The t.v.k.f in in this case is the usual clock-time. In looking at these problems it is tempting to get caught up in the jargon. One should avoid such habits if possible. – user346 Dec 12 '10 at 8:17
@space_cadet: Just to add: Not I am, we are :) I asked some time ago if a community page can write a paper. If there is something in this question, it might be worth the try. – Robert Filter Dec 15 '10 at 9:42
In a condensed matter field theory course, I learned the following: microscopically, the Lagrangian for the electromagnetic field looks like it is supposed to, coupling minimally to the particle coordinates.
$$L = \sum_i\left( \frac m2 (p_i-\frac ec \mathbf A(r_i))^2 - e\Phi(r_i) + \dots \right) .$$
On a macroscopic level, however, after getting rid of all the individual particle degrees of freedom via the grand canonical ensemble, new behavior may emerge. Namely, the effective Lagrangian for the electromagnetic field in the body may look very different from a linear one. For example, the effective action for the e.m. field in a superconductor is
$$S_{\text{eff}}[\mathbf A] = \frac\beta2 \int d^3r \mathbf A^\perp(r) \left(-\frac 1{\mu_0}\nabla^2 + \frac {n_s}m \right)\mathbf A^\perp(r)$$
where $\mu_0$ is the vacuum permeability, $n_s$ the superfluid density, $m$ the electron mass and $\mathbf A^\perp$ is the perpendicular component of the gauge field, defined in Fourier space as $\mathbf A^\perp(q) = \mathbf A(q) - q(q\cdot \mathbf A(q))/q^2$. The difference to the vacuum action is the additional "mass term" $n_s/m$, which causes the Meissner effect.
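As a small illustration (my addition, and only a sketch that takes the quadratic form above at face value, with any charge factors absorbed into $n_s$): the Euler–Lagrange equation for a static, one-dimensional $\mathbf A^\perp(x)$ is $A'' = \mu_0 (n_s/m) A$, whose bounded solution decays exponentially with penetration depth $\lambda = \sqrt{m/(\mu_0 n_s)}$ — the Meissner screening.

```python
import sympy as sp

x, lam = sp.symbols('x lambda', positive=True)
A = sp.Function('A')

# Massive (screened) field equation implied by the quadratic action: A'' = A / lambda^2
sol = sp.dsolve(sp.Eq(A(x).diff(x, 2), A(x) / lam**2), A(x))
print(sol)   # A(x) = C1*exp(-x/lambda) + C2*exp(x/lambda)

# Keeping only the bounded branch gives the exponentially screened field
# A(x) = A(0) * exp(-x/lambda), with lambda = sqrt(m / (mu_0 * n_s)).
```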
I suppose that you are asking for the most general form that such effective actions may have? I don't have an answer, but I don't see why a most general form should actually exist in the first place.
Hi @Greg, that is not what the question is asking. There is no requirement for the non-linear theory in question to be an effective theory. Also there are "general forms" for effective actions. Many different microscopic model Hamiltonians could yield the same effective theory macroscopically - as long as the Hamiltonians share the same symmetries. This property is known as universality. Also any action, effective or not, has to satisfy the basic requirements for gauge invariance (or invariance under canonical transformations). This limits the possible form of the action very effectively. – user346 Dec 15 '10 at 9:30
@Greg Graviton: Thank you for your answer. May I ask you if there exists a script of your course on the internet? I think that by getting my hands on the derivation of this macroscopic effective action, I may give the question a new direction. – Robert Filter Dec 15 '10 at 9:34
@space_cadet: I think the answer is just fine. The problem is that my question is not very specific and just relies on a vague idea. I will actually have to spend more time on it which is atm unfortunately not so easy :) – Robert Filter Dec 15 '10 at 9:38
@space_cadet: I thought that Robert was asking about nonlinear optics in matter. The most general form is of course $L = f(\mathbf A,\Phi)$ with some arbitrary $f$ that is gauge invariant, but that's kinda pointless. But as you note, a more specialized classification of common effective theories according to the microscopic Hamiltionians and their symmetries would be a very good answer. Unfortunately, I'm not knowledgeable about that. – Greg Graviton Dec 15 '10 at 20:51
http://mathhelpforum.com/trigonometry/2586-just-another-trivial-problem.html
# Thread:
1. ## Just another trivial problem
Hello, I hope you can help me with this trivial problem (I hope).
Determine the equation for the tangent to the curve y = tan(x)/x
at the given point (pi/4, 4/pi).
Is it just the ordinary y - y1 = k(x - x1) formula we need to use or is it different this time?
Thank you, Jones
2. Originally Posted by Jones
Hello, I hope you can help me with this trivial problem (I hope).
Determine the equation for the tangent to the curve y = tan(x)/x
at the given point (pi/4, 4/pi).
Is it just the ordinary y - y1 = k(x - x1) formula we need to use or is it different this time?
Thank you, Jones
The equation $y-y_1=k(x-x_1)$ is the equation of the line through
the point $(x_1,y_1)$ with slope $k$. For it to be the equation
of the tangent to the curve at the given point $k$ must be the slope
of the curve at the point. That is:
$k=\left.\frac{d}{dx} \left\{\frac{\tan(x)}{x}\right\}\right\vert_{x=\pi /4}$,
and $x_1=\pi/4,\ y_1=4/\pi$.
RonL
3. Don't we need to differentiate the function first?
4. Originally Posted by Jones
Don't we need to differentiate the function first?
That appears to be what the 5th line of my earlier post says.
RonL
5. Originally Posted by CaptainBlack
For it to be the equation
of the tangent to the curve at the given point $k$ must be the slope
of the curve at the point. That is:
$k=\left.\frac{d}{dx} \left\{\frac{\tan(x)}{x}\right\}\right\vert_{x=\pi /4}$,
$\frac{d}{dx} \left\{\frac{\tan(x)}{x}\right\}= \frac{1}{x (\cos(x))^2}-\frac{\tan(x)}{x^2}$.
Therefore
$k=\frac{8}{\pi}-\frac{16}{\pi^2}\approx 0.92534$
RonL
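For anyone who wants to double-check the numbers (my addition, not part of the original thread), a short sympy sketch that reproduces the slope and assembles the tangent line through $(\pi/4, 4/\pi)$:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.tan(x) / x

# Slope of the tangent: derivative evaluated at x = pi/4
k = sp.diff(f, x).subs(x, sp.pi / 4)
print(sp.simplify(k), float(k))        # 8/pi - 16/pi**2 ≈ 0.92534

# Tangent line y - y1 = k (x - x1) through (pi/4, 4/pi)
x1, y1 = sp.pi / 4, 4 / sp.pi
tangent = sp.expand(y1 + k * (x - x1))
print(tangent)
```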
6. Originally Posted by CaptainBlack
$-\frac{\tan(x)}{x^2}$.
where did this come from?
I'm a little slow you see, almost dumb. Therefore you have to be very legible.
7. Originally Posted by Jones
where did this come from?
I'm a little slow you see, almost dumb. Therefore you have to be very legible.
From the product rule for differentiation.
$\frac{d}{dx} f(x)g(x)= f'(x)g(x)+f(x)g'(x)$
and here $g(x)=1/x$ and $f(x)=\tan(x)$ so:
$\frac{d}{dx} \left\{\frac{\tan(x)}{x}\right\}= \frac{1}{x (\cos(x))^2}-\frac{\tan(x)}{x^2}$
I could have used the quotient rule, but since it is equivalent to this
use of the product rule I can't be bothered to remember it (as it is
redundant).
RonL
8. Originally Posted by CaptainBlack
I could have used the quotient rule, but since it is equivalent to this use of the product rule I can't be bothered to remember it (as it is redundant).
RonL
I'm glad I'm not the only one with that "hang-up."
-Dan
9. Originally Posted by topsquark
I'm glad I'm not the only one with that "hang-up."
-Dan
I'm not sure it is a "hang-up". I find it difficult to see why time and
limited human memory resources are wasted on teaching the quotient
rule.
It seems to me that the teaching of the q-rule is an anomaly -
one of those disturbing features of this reality which makes one suspect
that one has slipped into a parallel universe without having noticed.
RonL
10. Originally Posted by CaptainBlack
I'm not sure it is a "hang-up". I find it difficult to see why time and
limited human memory resources are wasted on teaching the quotient
rule.
It seems to me that the teaching of the q-rule is an anomaly -
one of those disturbing features of this reality which makes one suspect
that one has slipped into a parallel universe without having noticed.
RonL
I think I live in a parallel universe. (If I only had a "brane!" sings the scarecrow.)
-Dan
http://johncarlosbaez.wordpress.com/2013/02/12/nash-equilibria/
# Azimuth
## Nash Equilibria
As you know if you’ve been reading this blog lately, I’m teaching game theory. I could use some help.
Is there a nice elementary proof of the existence of Nash equilibria for 2-person games?
Here’s the theorem I have in mind. Suppose $A$ and $B$ are $m \times n$ matrices of real numbers. Say a mixed strategy for player A is a vector $p \in \mathbb{R}^m$ with
$\displaystyle{ p_i \ge 0 , \quad \sum_i p_i = 1 }$
and a mixed strategy for player B is a vector $q \in \mathbb{R}^n$ with
$\displaystyle{ q_j \ge 0 , \quad \sum_j q_j = 1 }$
A Nash equilibrium is a pair consisting of a mixed strategy $p$ for A and a mixed strategy $q$ for B such that:
1) For every mixed strategy $p'$ for A, $p' \cdot A q \le p \cdot A q.$
2) For every mixed strategy $q'$ for B, $p \cdot B q' \le p \cdot B q.$
(The idea is that $p \cdot A q$ is the expected payoff to player A when A chooses mixed strategy $p$ and B chooses $q.$ Condition 1 says A can’t improve their payoff by unilaterally switching to some mixed strategy $p'.$ Similarly, condition 2 says B can’t improve their expected payoff by unilaterally switching to some mixed strategy $q'.$)
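A small computational aside (my addition, not part of the original post): since $p \cdot A q$ is linear in $p$, condition 1 only needs to be checked against the pure strategies $p' = e_i$, and likewise condition 2 against pure $q'$. The sketch below uses this to test a candidate pair, here for the matching-pennies game, whose unique equilibrium is uniform mixing.

```python
import numpy as np

def is_nash(A, B, p, q, tol=1e-9):
    """Check conditions 1 and 2 against pure-strategy deviations only
    (enough, by linearity of the expected payoff in p and in q)."""
    payoff_A = p @ A @ q
    payoff_B = p @ B @ q
    return (np.max(A @ q) <= payoff_A + tol) and (np.max(p @ B) <= payoff_B + tol)

# Matching pennies: a zero-sum game, B = -A
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
B = -A

print(is_nash(A, B, np.array([0.5, 0.5]), np.array([0.5, 0.5])))  # True
print(is_nash(A, B, np.array([1.0, 0.0]), np.array([0.5, 0.5])))  # False: B can deviate
```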
### Some history
The history behind my question is sort of amusing, so let me tell you about that—though I probably don’t know it all.
Nash won the Nobel Prize for a one-page proof of a more general theorem for n-person games, but his proof uses Kakutani’s fixed-point theorem, which seems like overkill, at least for the 2-person case. There is also a proof using Brouwer’s fixed-point theorem; see here for the n-person case and here for the 2-person case. But again, this seems like overkill.
Earlier, von Neumann had proved a result which implies the one I’m interested in, but only in the special case where $B = -A:$ the so-called minimax theorem. Von Neumann wrote:
As far as I can see, there could be no theory of games … without that theorem … I thought there was nothing worth publishing until the Minimax Theorem was proved.
I believe von Neumann used Brouwer’s fixed point theorem, and I get the impression Kakutani proved his fixed point theorem in order to give a different proof of this result! Apparently when Nash explained his generalization to von Neumann, the latter said:
That’s trivial, you know. That’s just a fixed point theorem.
But you don’t need a fixed point theorem to prove von Neumann’s minimax theorem! There’s a more elementary proof in an appendix to Andrew Colman’s 1982 book Game Theory and its Applications in the Social and Biological Sciences. He writes:
In common with many people, I first encountered game theory in non-mathematical books, and I soon became intrigued by the minimax theorem but frustrated by the way the books tiptoed around it without proving it. It seems reasonable to suppose that I am not the only person who has encountered this problem, but I have not found any source to which mathematically unsophisticated readers can turn for a proper understanding of the theorem, so I have attempted in the pages that follow to provide a simple, self-contained proof with each step spelt out as clearly as possible both in symbols and words.
This proof is indeed very elementary. The deepest fact used is merely that a continuous function assumes a maximum on a compact set—and actually just a very special case of this. So, this is very nice.
Unfortunately, the proof is spelt out in such enormous elementary detail that I keep falling asleep halfway through! And worse, it only covers the case $B = -A.$
Is there a good reference to an elementary but terse proof of the existence of Nash equilibria for 2-person games? If I don't find one, I'll have to work through Colman's proof and then generalize it. But I feel sure someone must have done this already.
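As a computational aside (my addition): for the zero-sum case $B = -A$, the minimax theorem is closely tied to linear programming duality, and an optimal mixed strategy can simply be computed as a linear program. A sketch using scipy, with rock–paper–scissors as the test case (value $0$, optimal strategy $(1/3,1/3,1/3)$):

```python
import numpy as np
from scipy.optimize import linprog

def zero_sum_value(A):
    """Maximize v subject to (p^T A)_j >= v for all j, sum_i p_i = 1, p >= 0.
    Variables are x = (p_1, ..., p_m, v); linprog minimizes, so the objective is -v."""
    m, n = A.shape
    c = np.zeros(m + 1); c[-1] = -1.0
    A_ub = np.hstack([-A.T, np.ones((n, 1))])   # v - (p^T A)_j <= 0 for each column j
    b_ub = np.zeros(n)
    A_eq = np.zeros((1, m + 1)); A_eq[0, :m] = 1.0
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]            # assumes the LP solved successfully

A = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)  # rock-paper-scissors
p_opt, value = zero_sum_value(A)
print(p_opt, value)   # ≈ [1/3, 1/3, 1/3], 0
```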
This entry was posted on Tuesday, February 12th, 2013 at 12:37 am and is filed under game theory, mathematics.
### 31 Responses to Nash Equilibria
1. Richard Brown says:
Equation 2) has a typo (should be B on both left and right)
• John Baez says:
Thanks – fixed!
2. Gregory Benford says:
Nash equil. are best conveyed by clear descriptions of interests vs tradeoffs, ie, stories from the point of view of actual players in a game.
I asked Nash this once and he nodded and laughed at the “overproof” of the basic point.
3. drvinceknight says:
Hi John,
In Webb’s book (which is a nice book for a mathematics course on game theory BTW) there is an elementary algebraic proof of the existence of NE in a 2 player 2 strategy game. I know this isn’t what you asked and also I’m sure you wouldn’t actually need the proof in the book but its an accessible starting point perhaps for students.
I look forward to reading future posts!
• John Baez says:
Thanks! I have a big stack of game theory books on my desk, including a bunch you recommended, but it seems not that one. I’ll check out that proof. Maybe I should present it in class.
The existence of this elementary proof, together with Colman’s fairly elementary proof that Nash equilibria exist for 2-person zero-sum games with an arbitrary number of strategies, seems to hint that there’s an elementary proof of the result I actually want.
But this may not be true!
Over on MathOverflow, Rabee Tourky wrote:
Any proof of the existence of a Nash equilibrium for two person finite games is a proof of Kakutani's fixed point theorem. This is the simplest proof that I know of. Though I can conceive of simpler proofs.
• Andrew McLennan and Rabee Tourky, From imitation games to Kakutani.
and Michael Greinecker wrote:
Here is what seems to amount to a simplified version of the argument that shows how one can prove the Brouwer fixed point theorem from the existence of Nash equilibria in two player games (modulo a limiting compactness argument):
• Brouwer implies Nash implies Brouwer, The Leisure of the Theory Class.
However, the blog article ‘Brouwer implies Nash implies Brouwer’, which is very nice, derives Brouwer’s fixed-point theorem from the existence of Nash equilibria where the set of pure strategies is not finite, but an arbitrary compact convex subset of $\mathbb{R}^n$. This is seemingly a much more heavy-duty fact than what I’m after! But, the post has something to say about that….
Anyway, I seem to be closing in on the truth here.
4. Arrow says:
It is interesting that for mixed strategies there are always NE. But aren’t all those cases of NEs in games where they only exist for mixed strategies stalemate situations?
Let’s take rock, paper scissors as an example. For pure strategies there is no NE, for mixed strategies there is picking at random, but that isn’t terribly interesting. Are there games where allowing mixed strategies leads to NEs (that were not there before) where one player gains an advantage?
Another thing about rock, paper scissors is that it seems rather easy to modify the game to break even the mixed strategies NEs. For example let’s say the payoff is also dependent on the consistency of choices in previous games, +1 for regular win and +0.5 for picking the same choice as last time, that should motivate players to prefer a particular choice.
• John Baez says:
Arrow wrote:
Are there games where allowing mixed strategies leads to NEs (that were not there before) where one player gains an advantage?
Yes; wait ’til the next post… or maybe someone here can make up an example. Most games are of this form, so examples aren’t hard to find.
5. Aaron F. says:
Nash equil. are best conveyed by clear descriptions of interests vs tradeoffs, ie, stories from the point of view of actual players in a game.
If this is actually true, it should be possible to write a proof along these lines. ;) Can you give examples of the “clear descriptions of interests vs. tradeoffs” you’re talking about?
• Gregory Benford says:
My point is, instead of fielding proofs, try to tell a story. That’s how people learn.
• John Baez says:
I tell plenty of stories in the course I’m teaching, and in class we play lots of games. But it’s a course for math majors. I’ll make sure they get the idea of a Nash equilibrium, but they need to learn to understand proofs, too!
What I’m looking for here is a proof of the existence of Nash equilibria for 2-person games that doesn’t require the Kakutani or Brouwer fixed-point theorems. It’s the use of these rather heavy-duty tools—which most kids learn only in grad school—that makes undergraduate courses on game theory present the existence of Nash equilibria as an article of faith, skipping the proof. But I suspect that for 2-person games these fixed-point theorems are overkill. It’s only linear algebra, for god’s sake!
• Robert says:
What about just proving a special case of the fixed point theorems, just strong enough to apply to 2-person games? Since you don’t need the full strength of the theorem, you may be able to do this at undergraduate level.
• John Baez says:
I could try that. I’ll get to work soon and examine various options!
6. I agree with Prof. Benford. The easiest way I learned of Game Theory was on a bike ride from Tustin High School to Ed Thomas’ Book Carnival in Orange back in 2001. My girlfriend at the time asked about The Fermi Paradox or the Great Silence. (Why haven’t we heard them yet?)
As you all know the points Mike Hart and Enrico Fermi brought up were:
-The Sun is a young star.
-There are billions of stars in the galaxy that are billions of years older;
-Some of these stars likely have Earth-like planets which, if the Earth is typical, may develop intelligent life;
-Presumably some of these civilizations will develop interstellar travel, as Earth seems likely to do;
-At any practical pace of interstellar travel, the galaxy can be completely colonized in just a few tens of millions of years.)
Where is everybody?! (No evidence of Von Neumann probes either?!!)
The movie ‘A Beautiful Mind’ just came out (2000) and I read up on Game Theory and used the simple Two prisoner Dilemma to answer her query. (Her parents were peaceniks, as are mine even though both our parents have had military/para-military experience)
Suggesting we reach a Kardashev level 1 society (a true global society, as it would be the only logical like option) one world governing body, one world telephone system – internet, one world language – English; Terran hegemony you name it. Just because we can get along with each other relatively well in that world would not immediately mean we have inhibition to kill other species that exercise a high degree on intelligence.
We kill elephants and dolphins all the time and they are not a threat to us.
Extra-terrestrial species may not necessarily play nice.
Why?
Because, “they” are not part of our species or civilization.
This brings us to where the prisoner dilemma kicks in.
We cannot afford to be proven wrong to think they would be peaceful.
A Kardashev level 1 would harness $10^{16}$ to $10^{18}$ W (watts)
Using relativistic weapons would push that number even higher.
When you get to that level then we can make starships that travel at relativistic speeds. (Thank you Prof. Benford!)
What that is essentially is a kardashev level 1 or 2 doomsday weapon. (A weapon that can take out level 1 and 2 civilizations)
To make it simple suggest we only had 2 intelligent level 1 or 2 societies in this galaxy.
You then have two civilizations either both level 1 or 2 sharing 1 galaxy having this whacko scenario
*(Me and my girl at the time had the thought matrix below. I am sure someone else online has replicated it, the internet is a big place)
You can substitute any Type 1 or 2 civilization killer
Be it nanites, relativistic kill vehicles, sun destroyers (weapons that make a home star go nova) etc. etc………..
|                | Race 2 Ignores | Race 2 Attacks |
|----------------|----------------|----------------|
| Race 1 Ignores | Both live in constant fear | Race 1 exterminated; Race 2 lives free of fear |
| Race 1 Attacks | Race 1 lives free of fear; Race 2 exterminated | Both are devastated but not destroyed |
After the date we both wanted to write a paper titled
“Game theory / Nash Equilibria multiple player scenarios as a possible response to Fermi Paradox”
We were both silly…..but whatcha gonna do? We were 17 year olds.
We did eventually write it down up to about 5 players (as in 5 intelligent level 1 or 2 societies in our Galaxy.
It was pretty funny.
but just for a 2 player response below is a less clean cut outcomes/more detailed matrix:
-Key(outcomes)
2= alive, free of fear of retaliatory strike
4= alive, but fear of retaliatory/first strike from opponent civ or federation.
1= civilization intact, but devastated, probably downgraded from level 1 or 2
3= total annihilation
|           | Cooperate | Defect |
|-----------|-----------|--------|
| Cooperate | 4,4 | 3,2 |
| Defect    | 2,3 | 1,1 |
*(again I am sure someone else has this matrix somewhere online as I have been talking about this for some 13 years now)
In jest it can be applied to kids with high capacity guns on playgrounds, Kardashev level 1 or 2 civilizations, and hegemonic empires pre-empting first strikes.
Or whatever else you can think of.
Hehehe.
P.S. Prof. Baez did you catch Jeff Buckley’s “Grace” yet?
7. Darn my coding skills suck….It looked like matrix form when I typed it and it did not when I pressed the “Post Comment” button.
8. Phillip Helbig says:
“That’s trivial, you know.”
Trivial is in the eye of the beholder. Be especially careful if it is von Neumann’s eye. (I’m reminded of the fly between the trains.) There is a story about a famous mathematician, maybe Hardy, saying “this follows trivially from…” while writing on the blackboard. Then he paused. Then he stepped back from the blackboard and thought a bit. Then he sat down and thought some more. Then he went into another room. After half an hour, he came back, continuing his lecture where he left off, saying “Yes, it really is trivial”.
• John Baez says:
I think von Neumann claimed Nash’s theorem was trivial because he was jealous!
Nash used the same technique von Neumann had—a fixed-point theorem—to prove the existence of equilibria for a much wider and more interesting class of games! Von Neumann had considered 2-player zero-sum games. Nash considered n-player games that could be zero-sum or nonzero-sum games. Von Neumann (being a genius) should have tried to generalize his result. Nash showed the generalization could be done in five easy paragraphs.
So, I think von Neumann was jealous and trying to cut Nash down to size.
9. davetweed says:
There’s a 15 minute UK BBC radio programme that touches on game theory and poker here. It’s a bit… energetically breathless and seems, to me, more enthusiastic about big ideas than seeing if they’re actually what happens in the real world. But it might be interesting to some following these posts.
10. Luís says:
Maybe this is a solution.
$e$, abusing notation a little, is the vector of ones. Set
$p_2= \mathrm{max}_{p,q} p\cdot Bq$
s.t. $p\cdot e=1$ and $q \cdot e=1$ with $p \geq 0, q \geq 0$. A continuous function $p \cdot Bq$ on a compact space,
$\{(p,q) \in \mathbb{R}^{n+m}: p\cdot e=1, q\cdot e=1, p \geq \bar{0}, q \geq \bar{0} \}$
has a maximum, no prob there. Now find the maxima of player 1's payoff on the set of points that maximized player 2, and choose one of them, $(p^*,q^*) \in \mathrm{arg max}_{p,q} p\cdot Aq$ s.t. $p\cdot Bq \geq p_2$. The function $p\cdot Aq$ is again continuous, the set $p\cdot Bq \geq p_2$ is compact and non-empty. And if I did not miss anything, $(p^*,q^*)$ is your Nash equilibrium.
• Luís says:
I did miss something. Player 1 is not necessarily maximizing is payoff. Bah. So dumb!
• Luís says:
Along the same lines. For each $p$ maximize player 2′s payoff:
$g_2(p)= \mathrm{max}_{\{q: q\cdot e=1\}} p\cdot Bq$
(continuous function on compact set).
A sequential argument easily proves $g_2(p)$ is continuous. The set
$\Delta=\{(p,q): p\cdot Bq \geq g_2(p), p\cdot e=p\cdot q=1, p \geq \bar{0}, q \geq \bar{0}\}$
is closed by $g_2(p)$‘s continuity, and bounded, so compact.
Then any $(p^*, q^*) \in \mathrm{argmax}_{ \{(p,q) \in \Delta\}} p \cdot Aq$
is a Nash equilibrium.
• John Baez says:
Thanks for the comments. I fixed your TeX a bit. I’ve never seen this $\mathrm{argmax}$ notation before! I’m guessing that $\mathrm{argmax}_{x \in X} f(x)$ stands for the set of $x \in X$ that maximize $f(x).$ Is that right?
• Graham says:
I am sure that’s what Luis means. I was initially surprised that you didn’t know the notation, but I think it is more commonly used by programmers than mathematicians.
• John Baez says:
That’s for sure. I’ve been doing math for decades and have never seen that notation. It looks useful, but I’ll need to explain it whenever I use it in front of mathematicians.
• Graham Jones says:
I noticed it is used in this paper you pointed to:
Andrew McLennan and Rabee Tourky, From imitation games to Kakutani
… but they’re economists.
• Graham says:
I don’t understand Luis’ proof. Let
$G_2=\{(p,q): p\cdot Bq = g_2(p), p\cdot e=p\cdot q=1, p \geq \bar{0}, q \geq \bar{0}\}$
The set $G_2$ is like $\Delta$ with an inequality replaced by an equality. It is the set of points $(p,q)$ in which $q$ is an optimal choice for player 2 for some choice $p$ by player 1. Similarly we can define a function $g_1(q)$ and a set $G_1.$ What we want to show is that the intersection of $G_1$ and $G_2$ is non-empty.
I don’t see how it helps to extend $G_2$ to $\Delta$ and look for points in that.
• John Baez says:
Okay, I didn’t understand the proof either.
11. Graham Jones says:
I think I have an elementary proof for the $A=-B$ case. (It’s certainly elementary: I’m not capable of other kinds.) It uses induction on $n+m$, assuming (for example) the case $n=m=2$ is true, and uses the following lemma.
Lemma. Let $v_1, \dots, v_r$ be vectors in a finite dimensional real vector space. Then either (1) there is a vector $u$ such that $v_i \cdot u > 0$ for all $i$ or (2) there are nonnegative reals $x_1, \dots, x_r$, not all zero, such that $\sum_1^r x_i v_i = 0.$
Proof idea for lemma: consider the convex hull of the $v_i$. If the origin is outside we can find a $u$ for (1), otherwise we can show (2).
Sketch proof of existence of Nash equilibrium. Denote the i’th row of $A$ by $A_i,$ regarded as a vector in $\mathbb{R}^n.$ Player A’s payoff can be written as
$V = (\sum_1^m p_i A_i) \cdot q$
Now $q$ belongs to a hyperplane in $\mathbb{R}^n,$ so we can project $q$ and the $A_i$ onto it, with $q$ becoming $y$ and $A_i$ becoming $a_i$ in $\mathbb{R}^{n-1}.$ Note $y$ is constrained to a hypertetrahedron $T.$ Then
$V = (\sum_1^m p_i a_i) \cdot y + f(p,A)$
where $f(p,A)$ does not depend on $q$.
Now apply the lemma. If there is a $u$ such that $a_i \cdot u > 0$ for all i, then player B’s payoff can be improved by moving in direction $-u$ from any point in $T$, so player B will choose a point on the boundary of $T$, which means at least one strategy is never chosen. So the game is equivalent to a smaller game and induction provides a Nash equilibrium. Otherwise there is some (not necessarily optimal) choice $\hat{p}$ for player A such that $\sum_1^m \hat{p}_i a_i = 0$ so player B cannot change the payoff.
The same argument with A and B reversed either gives an equilibrium by induction, or a choice $\hat{q}$ for player B such that player A cannot change the payoff. So if induction fails in both cases, we have $(\hat{p}, \hat{q}).$
• John Baez says:
This looks interesting—sorry to take so long to reply!
Now $q$ belongs to a hyperplane in $\mathbb{R}^n$ [...]
Which hyperplane? Any old random hyperplane? Yes, $q$ belongs to infinitely many different hyperplanes, but this makes me nervous, mainly because I’m trying to understand your proof and it’s not making sense yet.
Now $q$ belongs to a hyperplane in $\mathbb{R}^n,$ so we can project $q$ and the $A_i$ onto it, with $q$ becoming $y$ [...]
If $q$ belongs to a hyperplane, projecting $q$ onto this hyperplane won’t change $q$ at all, so we get $y = q.$ Or am I not understanding you?
• Graham Jones says:
I mean the hyperplane $\sum_1^n q_i = 1.$ I want to regard the unit simplex in $\mathbb{R}^n$ as a hypertetrahedron $T$ in $\mathbb{R}^{n-1}.$
A bit later, where I say “…B’s payoff can be improved by moving in direction $-u$ from any point in $T,$ …” I think it would be clearer if I added that this works for any choice by player A.
• Graham Jones says:
Here is that part of the argument more explicitly:
Let $\eta = 1/\sqrt n$ and let $s = \eta(1,1,\dots,1)$, that is, $s$ is a unit vector normal to the hyperplane $\sum_1^n q_i=1$. Let
$y = q - (q\cdot s)s = q - \eta s$
and
$a_i = A_i - (A_i \cdot s)s$
Then
$p_i A_i \cdot q = p_i((A_i \cdot s)s + a_i) \cdot (\eta s + y)$
and expanding that out and using $s \cdot y = 0$ leads to
$V = (\sum_1^m p_i a_i) \cdot y + f(p,A)$
where $f(p,A)$ does not depend on $q$.
• John Baez says:
Graham wrote:
I mean the hyperplane $\sum_1^n q_i = 1.$
Oh! I hadn’t considered that option. In math, half the time words like ‘line’, ‘plane’ and ‘hyperplane’ mean things that contain the origin, so that they’re vector spaces. And half the time they don’t. It’s about linear versus affine geometry.
http://mathhelpforum.com/calculus/124864-clarification-definite-integrals.html
# Thread:
1. ## Clarification on Definite Integrals
Hello, and thanks in advance for your help.
The problem I'm facing is to integrate by parts the following function:
lnx/x^2 with the interval of integration being from 1 to 2.
I don't think I need a full solution, it's just that after I do the integration by parts method, I'm left with no integrals, and this makes me concerned. Is the answer to this problem simply my simplified 1/3x^3lnx - 1/9x^3 or is it this simplification with the values 2 inputted for x subtracted from it with 1 inputted for x (like shown below)?
[1/3*2^3*ln2 - 1/9*2^3] - [1/3*1^3*ln1 - 1/9*1^3]
Thanks again for the assistance...I think it's just a matter of not understanding how the interval of integration given at the start of the problem relates to my solution.
2. integrate ln(x)/x^2 - Wolfram|Alpha
click show steps on the right to see the integration by parts
3. Originally Posted by NBrunk
Hello, and thanks in advance for your help.
The problem I'm facing is to integrate by parts the following function:
lnx/x^2 with the interval of integration being from 1 to 2.
I don't think I need a full solution, it's just that after I do the integration by parts method, I'm left with no integrals, and this makes me concerned. Is the answer to this problem simply my simplified 1/3x^3lnx - 1/9x^3 or is it this simplification with the values 2 inputted for x subtracted from it with 1 inputted for x (like shown below)?
[1/3*2^3*ln2 - 1/9*2^3] - [1/3*1^3*ln1 - 1/9*1^3]
Thanks again for the assistance...I think it's just a matter of not understanding how the interval of integration given at the start of the problem relates to my solution.
Try letting $\ln(x)=z$.
4. Thanks for the link, it showed how wrong my solution was...took dv = simply x^2 rather than x^-2 and it threw me off.
However, the link shows the solution to the indefinite integral, and I'm still not sure how to reflect the definite interval of integration of 1 to 2 in my answer...although I'm much closer!
5. Okay listen (actually read); we want to compute $\int_1^2\frac{\ln x}{x^2}\,dx.$
You already spotted that this needs to be integrated by parts; now the trick here is to express $\ln x$ or $\frac1{x^2}$ as a function being differentiated, but you'll probably say "I don't get your point." It's simple, let's think: do we have a function whose derivative gives me $\ln x$? Yes, it does exist, but do you know it fast enough to tell me what it is? No, so let's think about $\frac1{x^2}$: we can immediately see that $\left(-\frac1x\right)'=\frac1{x^2},$ so we have $\int_{1}^{2}{\frac{\ln x}{x^{2}}\,dx}=\int_{1}^{2}{\left( -\frac{1}{x} \right)'\ln (x)\,dx}.$ This will be equal to the following: multiply $-\frac{1}{x}$ and $\ln x$ evaluated for $1\le x\le2$, and the remaining integral will be $\int_1^2\frac1{x}(\ln x)'\,dx$ (minus the integral of $-\frac{1}{x}$ times $(\ln x)'$), so our original integral becomes $\left. -\frac{1}{x}\ln x \right|_{1}^{2}+\int_{1}^{2}{\frac{1}{x}\left( \ln x \right)'\,dx}=-\frac{1}{2}\ln 2+\int_{1}^{2}{\frac{dx}{x^{2}}}=-\frac{1}{2}\ln 2+\left( \left. -\frac{1}{x} \right|_{1}^{2} \right).$
So our integral equals $-\frac{1}{2}\ln 2+1-\frac{1}{2}=\frac{1-\ln 2}{2}.$
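As a quick independent check (my addition, not from the thread), sympy agrees with this value:

```python
import sympy as sp

x = sp.symbols('x')
val = sp.integrate(sp.log(x) / x**2, (x, 1, 2))
print(sp.simplify(val))   # 1/2 - log(2)/2, i.e. (1 - ln 2)/2
print(float(val))         # ≈ 0.1534
```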
6. ## Correction and Restatement of Question
Pending Correction
7. 1) let ln x = t, so x = e^t and dx = e^t dt
2) the integral becomes int(t.e^(-t))dt
3) by parts integration
4) use ILATE rule (take u = t)
5) ans: -e^(-t)(t+1)
6) ans in terms of x: -(ln x + 1)/x
7) put in the limits 1 and 2 to get (1 - ln 2)/2
http://gilkalai.wordpress.com/2009/12/03/why-planar-graphs-are-so-exceptional/?like=1&source=post_flair&_wpnonce=8149a33672
Gil Kalai’s blog
## Why are Planar Graphs so Exceptional
Posted on December 3, 2009 by Gil Kalai
Harrison Brown asked the problem “Why are planar graphs so exceptional” over mathoverflow, and I was happy to read it since it is a problem I have often thought about over the years, as I am sure have many combinatorialists and graph theorists. Harrison was interested in comparing planar graphs to graphs embedded in other surfaces.
It will be nice to discuss this question further here (mathoverflow is not an ideal platform for discussions, but some interesting ones have emerged.) So let me offer my answer. Another interesting answer was offered by Joseph Malkevitch.
## Three exceptional characteristics of planar graphs
Duality: Perhaps duality is the crucial property of planar graphs. There is a theorem asserting that the dual of a graphic matroid M is a graphic matroid if and only if M is the matroid of a planar graph. In this case, the dual of M is the matroid of the dual graph of G. (See this wikipedia article). This means that the circuits of a planar graph are in one to one correspondence with cuts of the dual graph.
One important manifestation of the uniqueness of planar graphs (which I believe is related to duality) is Kasteleyn’s formula for the number of perfect matchings and the connection with counting trees.
Robust geometric descriptions: Another conceptual difference is that (3-connected or maximal) planar graphs are graphs of convex 3-dimensional polytopes and thus have extra geometric properties that graphs on surfaces do not share. (This is Steinitz’s theorem.)
The geometric definition of planar graphs (unlike various generalizations) is very robust. A graph is planar if it can be drawn in the plane such that the edges are represented by Jordan curves and do not intersect in their interiors; the same class of planar graphs is what we get if we replace “Jordan curves” by “line intervals,” or if we replace “no intersection” by “even number of crossings”. The Koebe-Andreev-Thurston theorem allows us to represent every planar graph by the “touching graph” of nonoverlapping circles. Both (related) representations, via convex polytopes and by circle packing, can respect the group of automorphisms of the graph and its dual.
Simple inductive constructions. Another exceptional property of the class of planar graphs is that planar graphs can be constructed by simple inductive constructions. (In this respect they are similar to the class of trees, although the inductive constructions are not as simple as for trees.) This fails for most generalizations of planar graphs.
A related important property of planar graphs, maps, and triangulations (with labeled vertices) is that they can be enumerated very nicely. This is Tutte theory. (It has deep extensions to surfaces.)
It is often the case that results about planar graphs extend to other classes. As I mentioned, Tutte theory extends to triangulations of other surfaces. Another example is the fundamental Lipton-Tarjan separator theorem, which extends to all graphs with a forbidden minor.
## Two interesting generalizations
There are many interesting generalizations of the notion of planar graphs (I mentioned quite a few in the mathoverflow answer and initiated a question to have a useful source for such extensions); let me mention two of them:
### The direct high-dimensional analog
$k$-dimensional simplicial complexes (or more general stratified topological spaces) that can be embedded in $R^{2k}$
### Elementary polytopes
I tried to propose the following analog of planar graphs (more precisely of 3-connected planar graphs):
An elementary $d$-polytope with $n$ vertices is defined by the property that when you triangulate every 2-face of the polytope by adding edges, the number of edges you get is $dn - {{d+1} \choose {2}}$. (For every polytope the number of such edges is at least $dn - {{d+1} \choose {2}}$.) Another way to say it is:
$f_1(P)=f_{02}(P)-3f_2(P) = dn - {{d+1} \choose {2}}.$
In this formula, $f_{02}(P)$ is the number of chains $F_0 \subset F_2$ of vertices included in 2-faces. See this post for a further discussion of such counting of chains (flags) of faces. Let me mention that for every $d$-polytope, $d \ge 3$, the left hand side in the formula above is at least as large as the right hand side.
Elementary $d$-polytopes and their graphs can be regarded as extensions of 3-polytopes and their graphs. Elementary polytopes form an interesting class of polytopes that include all 3-dimensional polytopes. It is known (but is not at all easy to prove) that the dual of an elementary polytope is elementary.
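To make the $d=3$ case concrete (my addition, not part of the original post): for a 3-polytope with triangulated 2-faces the count specializes to $f_1 = 3n - 6$, the familiar edge count of a maximal planar graph. A small sketch with networkx, using the octahedron (a simplicial 3-polytope) as the example:

```python
import networkx as nx

G = nx.octahedral_graph()              # graph of the octahedron: all 2-faces are triangles

is_planar, _embedding = nx.check_planarity(G)
n, e = G.number_of_nodes(), G.number_of_edges()

print(is_planar)          # True
print(e, 3 * n - 6)       # 12 12 -> matches f_1 = 3n - 6
```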
This entry was posted in Combinatorics, Convex polytopes.
### 2 Responses to Why are Planar Graphs so Exceptional
1. Joseph Malkevitch says:
Not only do interesting questions arise by considering the special class of planar graphs but additional special issues arise when one considers a specific plane drawing of a planar graph. This is because when a graph is drawn in the plane it becomes possible to order or number, say for example, the edges at a particular vertex of the graph, in an organized consistent way. One example of results which started from this reality for plane graphs is this paper of Grunbaum and Motzkin:
B. Grünbaum and T. S. Motzkin, The number of hexagons and the simplicity of geodesics on certain polyhedra. Canad. J. Math. 15 (1963), pp. 744–751.
In this paper the following is explored (along with results about what today are called fullerenes). Suppose one has a 3-valent plane graph. If one picks any edge and moves along that edge (in either direction), when one gets to a vertex one has the choice of going left or right and moving on to another edge. Suppose one goes left, and at the next vertex in one’s “traversal” one goes right, alternating left and right. Grunbaum and Motzkin refer to this as a left-right path, and they prove that for a plane 3-valent graph with each of its faces having a multiple of 3 for its number of sides, these left-right paths (starting on any edge) always generate “simple circuits.” I was able to extend this result in several directions. One such idea applies to the graph that serendipitously appears as the diagram that starts this “thread.” This graph is plane 4-valent, so when one moves along an edge and one gets to a vertex one might always choose to take the middle edge. For the graph above, when one does this, the graph breaks up into the union of simple closed curves. (It is easy enough to generate such graphs. Plop down a bunch of simple closed curves which cut each other when they meet transversely and think of the result as a 4-valent graph.) What are sufficient conditions for this (moving along middle edges to generate simple closed curves) to happen? Perhaps surprisingly, one such condition is that each of the faces of the plane 4-valent graph has a number of sides which is a multiple of 3. What sometimes happens, which is an interesting phenomenon in its own right, is that one generates an Eulerian circuit by always choosing the middle edge in a 4-valent plane graph. So the theorem I mentioned just a moment ago can be interpreted to say that there is no knot projection which results in a 4-valent graph all of whose faces have a multiple of 3 as their number of sides. (Knots would be Eulerian when one moves along a middle edge.)
I like to think of left-right (more precisely, far left, far right) paths (which makes sense for 4-valent graphs, too) or take the middle edge in a 4-valent graph as arising because one has numbered the edges from left to right as one gets to a vertex with the numbers 1, 2, 3. (Similarly, for 3-valent or 5-valent plane graphs.) Now one can ask a large number of questions about the behavior of paths which obey a certain “code.” Left-right paths (4-valent case) are the code 1,3 while take the middle edge is the 2 code.
Once more, because of the graph being planar (plane) one gets interesting mathematical ideas, and I suspect there are many interesting results and new ideas to be obtained from this line of thinking. So, yes, planar and plane graphs are exceptional.
2. Gil Kalai says:
Dear Joe, many thanks for your very interesting comment.
http://physics.stackexchange.com/questions/45647/rayleigh-benard-convection?answertab=votes
# Rayleigh-Benard Convection
I found this nice paper about RB convection. However I am confused by what is going on on page 6. In particular why we are suddenly using the Helmholtz equation to find spatially periodic solutions. Aren't we working with convection, so why are we looking at it from a wave-like point of view? Or maybe I'm just missing the point altogether.
Furthermore I would like to run a quick experiment to collect data. I was planning on using a heating plate, a small glass tube filled with olive oil and a thermometer. Any tips? Suggestions?
What exactly are you looking for? If you can get your hands on a copy of any of Drazin's books (tinyurl.com/d22r476 or tinyurl.com/cxr5hq6) there is a good and detailed treatment of the maths involved. Not sure what you expect to find through a statistical thermal dynamics lens... – Jaime Dec 2 '12 at 4:43
Alrighty sorry for the vague question. Now I am looking for information about why the RB convection is still being studied, a qualitative description of the process, which is backed up by a mathematical explanation. Also a mathematical explanation of where the critical Rayleigh number comes from would be nice. And on the side: how can I create an at home experiment to convince myself of the convection cells. @Qmechanic thanks for the edit! – kuantumbro Dec 2 '12 at 22:37
To add to my answer: A layer of oil being heated on a non-stick pan shows the formation of these Benard cells. Be careful.. I think it would be safer to use canola/vegetable oils instead of olive oil since olive oil smokes in a shorter period of time. – drN Mar 3 at 23:56
## 1 Answer
Hope it's not a little too late for this answer!!!
Why are we looking at this from a "wave-point-of-view"
The experiments Rayleigh analyzed were on spermaceti (whale oil) in a cylindrical container with an aspect ratio of depth to radius $h_0/r \ll 1$. This can be treated as a membrane with a certain stiffness. Hence, it is common to work with a wave-like description. The instabilities you see in RB/MB convection are called short wavelength instabilities and these wavelengths can be calculated using linear/non-linear stability analysis and can be faithfully captured with well controlled experiments.
Short note on (non)linear stability analysis
In case you aren't aware of what this is: it is simply perturbing the system with a disturbance of the form $A \sin (k x)$ and, with some clever maxima/minima differential calculus and algebra, figuring out the most destructive wavelength that emerges.
($A$ is the amplitude of the disturbance and is generally $0.01 h_0$; $k$ is the wavenumber, $k = 2\pi/\lambda$, with the admissible wavelengths $\lambda$ set by the domain size $L$.)
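To give a concrete flavour of where the critical Rayleigh number comes from (my addition, not part of the original answer): for the idealized free–free boundary case the linear analysis yields the marginal-stability curve $Ra(k) = (\pi^2 + k^2)^3/k^2$; minimizing over the wavenumber gives $k_c = \pi/\sqrt{2}$ and $Ra_c = 27\pi^4/4 \approx 657.5$. A minimal numerical sketch:

```python
import numpy as np

# Marginal-stability curve for free-free boundaries (Rayleigh's 1916 analysis)
k = np.linspace(0.1, 10.0, 200_000)          # dimensionless wavenumber
Ra = (np.pi**2 + k**2) ** 3 / k**2

i = np.argmin(Ra)
print(k[i], np.pi / np.sqrt(2))              # k_c ≈ 2.221
print(Ra[i], 27 * np.pi**4 / 4)              # Ra_c ≈ 657.5
```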
Some useful references/papers
Besides Killercam's reference to Drazin, if you are really interested in RB or MB (Marangoni Benard) convection, I would suggest that you read Rayleigh's 1916 paper and Thomson's 1855 paper in addition to Kundu's textbook, particularly chapter 12.
Running experiments with olive oil
As for running experiments with olive oil in a cylinder: be careful; the dimensions of the cylinder (aspect ratio) can change the structures from the hexagonal RB/MB cells to perhaps a Rayleigh-Taylor type instability.
Extra fun stuff
If you find that your interest is roused by RB/MB convection, eventually you look at long wave instabilities that affect liquid film.
References
Thomson, J. On certain curious motions observable at the surfaces of wine and other alcoholic liquors Phil. Mag. Ser., 1855, 10, 330-333
Rayleigh On convective currents in a horizontal layer of fluid when the higher temperature is on the under side. Philo. Mag. Series, 1916, 32, 529-546
http://math.stackexchange.com/questions/24854/what-are-the-issues-in-modern-set-theory/25476
What are the issues in modern set theory?
This is spurred by the comments to my answer here. I'm unfamiliar with set theory beyond Cohen's proof of the independence of the continuum hypothesis from ZFC. In particular, I haven't witnessed any real interaction between set-theoretic issues and the more conventional math I've studied, the sort of place where you realize "in order to really understand this problem in homotopy theory, I need to read about large cardinals." I've even gotten the feeling from several professional mathematicians I've talked to that set theory is no longer relevant, and that if someone were to find some set-theoretic flaw in their axioms (a non-standard model or somesuch), they would just ignore it and try again with different axioms.
I also don't personally care for the abstraction of set theory, but this is a bad reason to judge anything, especially at this early stage in my life, and I feel like I'd be more interested if I knew of some ways it interacted with the rest of the mathematical world. So:
• What do set theorists today care about?
• How does set theory interact with the rest of mathematics?
• (more subjective but) Would mathematicians working outside of set theory benefit from thinking about large cardinals, non-standard models, or their ilk?
• Could you recommend any books or papers that might convince a non-set theorist that the subject as it's currently practiced is worth studying?
Thanks a lot!
Clearly, we need JDH to weigh in on this. – Arturo Magidin Mar 7 '11 at 5:50
Arturo, you are very kind. – JDH Mar 7 '11 at 17:23
5 Answers
Set theory today is a vibrant, active research area, characterized by intense fundamental work both on set theory's own questions, arising from a deep historical wellspring of ideas, and also on the interaction of those ideas with other mathematical subjects. It is fascinating and I would encourage anyone to learn more about it.
Since the field is simply too vast to summarize easily, allow me merely to describe a few of the major topics that are actively studied in set theory today.
Large cardinals. These are the strong axioms of infinity, first studied by Cantor, which often generalize properties true of $\omega$ to a larger context, while providing a robust hierarchy of axioms increasing in consistency strength. Large cardinal axioms often express combinatorial aspects of infinity, which have powerful consequences, even low down. To give one deep example, if there are sufficiently many Woodin cardinals, then all projective sets of reals are Lebesgue measurable, a shocking but very welcome situation. You may recognize some of the various large cardinal concepts---inaccessible, Mahlo, weakly compact, indescribable, totally indescribable, unfoldable, Ramsey, measurable, tall, strong, strongly compact, supercompact, almost huge, huge and so on---and new large cardinal concepts are often introduced for a particular purpose. (For example, in recent work Thomas Johnstone and I proved that a certain forcing axiom was exactly equiconsistent with what we called the uplifting cardinals.) I encourage you to follow the Wikipedia link for more information.
Forcing. The subject of set theory came to maturity with the development of forcing, an extremely flexible technique for constructing new models of set theory from existing models. If one has a model of set theory $M$, one can construct a forcing extension $M[G]$ by adding a new ideal element $G$, which will be an $M$-generic filter for a forcing notion $\mathbb{P}$ in $M$, akin to a field extension in the sense that every object in $M[G]$ is constructible algebraically from $G$ and objects in $M$. The interaction of a model of set theory with its forcing extensions provides an extremely rich, intensely studied mathematical context.
Independence Phenomenon. The initial uses of forcing were focused on proving diverse independence results, which show that a statement of set theory is neither provable nor refutable from the basic ZFC axioms. For example, the Continuum Hypothesis is famously independent of ZFC, but we now have thousands of examples. Although it is now the norm for statements of infinite combinatorics to be independent, the phenomenon is particularly interesting when it is shown that a statement from outside set theory is independent, and there are many prominent examples.
Forcing Axioms. The first forcing axioms were often viewed as unifying combinatorial assertions that could be proved consistent by forcing and then applied by researchers with less knowledge of forcing. Thus, they tended to unify much of the power of forcing in a way that was easily employed outside the field. For example, one sees applications of Martin's Axiom undertaken by topologists or algebraists. Within set theory, however, these axioms are a focal point, viewed as expressing particularly robust collections of consequences, and there is intense work on various axioms and finding their large cardinal strength.
Inner model theory. This is a huge on-going effort to construct and understand the canonical fine-structural inner models that may exist for large cardinals, the analogues of Gödel's constructible universe $L$, but which may accommodate large cardinals. Understanding these inner models amounts in a sense to the ability to take the large cardinal concept completely apart and then fit it together again. These models have often provided a powerful tool for showing that other mathematical statements have large cardinal strength.
Cardinal characteristics of the continuum. This subject is concerned with the diverse cardinal characteristics of the continuum, such as the size of the smallest non-Lebesgue measurable set, the additivity of the null ideal or the cofinality of the order $\omega^\omega$ under eventual domination, and many others. These cardinals are all equal to the continuum under CH, but separate into a rich hierarchy of distinct notions when CH fails.
Descriptive set theory. This is the study of various complexity hierarchies at the level of the reals and sets of reals.
Borel equivalence relation theory. Arising from descriptive set theory, this subject is an exciting comparatively recent development in set theory, which provides a precise way to understand what otherwise might be a merely informal understanding of the comparative difficulty of classification problems in mathematics. The idea is that many classification problems arising in algebra, analysis or topology turn out naturally to correspond to equivalence relations on a standard Borel space. These relations fit into a natural hierarchy under the notion of Borel reducibility, and this notion provides us with a way to say that one classification problem in mathematics is at least as hard as or strictly harder than another. Researchers in this area are deeply knowledgable both about set theory and also about the subject area in which their equivalence relations arise.
Philosophy of set theory. Lastly, let me also mention the emerging subject known as the philosophy of set theory, which is concerned with some of the philosophical issues arising in set theoretic research, particularly in the context of large cardinals, such as: How can we decide when or whether to adopt new mathematical axioms? What does it mean to say that a mathematical statement is true? In what sense is there an intended model of the axioms of set theory? Much of the discussion in this area weaves together profoundly philosophical concerns with extremely technical mathematics concerning deep features of forcing, large cardinals and inner model theory.
Remark. I see in your answer to the linked question you mentioned that you may not have been exposed to much set theory at Harvard, and I find this a pity. I would encourage you to look beyond any limiting perspectives you may have encountered, and you will discover the rich, fascinating subject of set theory. The standard introductory level graduate texts would be Jech's book Set Theory and Kanamori's book The Higher Infinite, on large cardinals, and both of these are outstanding.
I apologize for this too-long answer...
Great answer! I would also refer to the preface of Jech's Set Theory, The Millennium Edition, as well as Kanamori's historical overview in The Higher Infinite. – Asaf Karagila Mar 7 '11 at 17:29
@JDH: No apologies necessary; exactly what I was hoping for. – Arturo Magidin Mar 7 '11 at 18:30
Apology not accepted! Great answer. – The Chaz 2.0 Mar 11 '11 at 2:39
– Charles Stewart Jun 28 '11 at 8:36
@Charles, of course that is excellent work, with some excellent people working on it, and it clearly has deep connections with set theory. But at the same time, my impression is that the theory arises principally from a category-theoretic perspective rather than a set-theoretic one, no? But surely set theory is a big tent. – JDH Jun 28 '11 at 11:50
Maybe start with looking at the chapters for the Handbook of Set Theory here. As a topologist, I can say that set theory is still very useful in General Topology, as people run into many questions there that are independent of set theory, or that are helped by techniques from set theory. Harvey Friedman has many (I think) interesting ideas about the role of large cardinals in "normal", combinatorial questions. Shelah is probably the most prolific author in set theory and covers quite a breadth of subjects.
One of the professors in my department was talking to me about it the other day; he joked that logic and set theory are to "conventional" mathematics what mathematics is to physics.
First of all, to me conventional mathematics means taking an idea and deriving more ideas from it by logical inference - this relates to what my Algebra I professor said in the very first math lecture I attended: mathematics is the science of deducing A from B and C.
Secondly, the aforementioned professor was joking, but it was partly true in a way. Set theory trickles into model theory and topology, which in turn trickle into algebra and analysis. Those fields are conventional mathematics, I think.
Lastly, I can't speak completely about set theorists, but I can say what I see from the very narrow point of view I have right now - the dominance of ZF was established, and now there is a search for "measures of consistency": how much more do you need to assume? The axiom of choice, its negation, the existence of some large cardinals, and so forth and so on. This is my narrow point of view, as someone who has mostly been studying forcing and large cardinals for the past few months, and I might as well be talking trash.
Is ZF's dominance established? It seems like it's nice to have some notion of "class," for category theory if nothing else. – Paul VanKoughnett Mar 4 '11 at 2:46
@Paul: ZF is very good in the sense that you don't "feel" you're working with the axioms, they are all very natural and describe what we thought are sets to begin with. There are ways to deal with classes, and there is the NBG axiomatic set theory which is a conservative extension of ZF, which essentially means you can prove the same things regarding to sets. NBG has classes. Still, you don't see many people work with it. – Asaf Karagila Mar 4 '11 at 5:14
One place where you need to care about set-theoretical issues is in category theory. Even once you get past the obvious 'problems', like the existence of the category of sheaves on a non-small category (one of the usual solutions is to assume at least one universe - a set that acts like $V_\kappa$ for some inaccessible $\kappa$ - and work just with sets of size bounded by the size of elements of $V_\kappa$, or perhaps with enough such universes that every set is contained in some universe), there are interesting questions that are affected by set theory, like Vopěnka's principle - see the nLab page for a brief discussion.
Two of the set theorists recently working and collaborating with algebraists on category-theoretic issues are Joan Bagaria, bcnsets.ub.es/researchers/Joan.Bagaria and Adrian Mathias. See for example arxiv.org/abs/1101.2792 "Definable orthogonality classes in accessible categories are small", by Bagaria, Mathias, Carles Casacuberta, and Jiri Rosicky. – Andres Caicedo Mar 7 '11 at 6:54
I'll take a stab at this part:
"How does set theory interact with the rest of mathematics?"
Henno Brandsma mentioned that set theory is useful in topology. Another area, which has quite a few analogies to topology, is measure theory, and by extension, probability theory. We are very interested in, for example, infinite product spaces, because many useful probabilistic models have infinitely many random variables. But we can't just summon them into existence without some care. To do so in first-order logic requires the axiom of choice (or something like it). But then non-measurable sets exist, so every set of outcomes must be accompanied by a $\sigma$-algebra of sets for which measurement (probability or volume) makes sense...
Further, much like in topology, our basic definitions rely quite heavily on "set-construction" axioms like comprehension, union and powerset. For example, the common operation "take the coarsest $\sigma$-algebra $\mathcal{A}$ such that $\phi(\mathcal{A})$" appeals directly to those axioms: powerset twice, then comprehension, and then union (with one indirection, from an intersection of uncountably many $\sigma$-algebras).
So probability theory is intricately tied with the axiom of choice, in such a way that anybody doing serious work in it is always aware of it. And we use the axioms directly, even if we often forget that that's what we're doing.
"I've even gotten the feeling from several professional mathematicians I've talked to that set theory is no longer relevant, and that if someone were to find some set-theoretic flaw in their axioms (a non-standard model or somesuch), they would just ignore it and try again with different axioms."
What else would we do but try again? :)
The real question is: how easy would it be? Much of analysis would be fine; the reals seem to have the same properties no matter what theory you build them in. Measure theory and probability theory, however, which are so tied to set theory, would (I think) come crashing down - everything would be suspect, and would have to be rebuilt almost from scratch. What a terrifying and exciting idea!
Actually, when you don't assume the axiom of choice the reals can be a countable union of countable sets, and have Baire first category properties. – Asaf Karagila Mar 7 '11 at 6:52
@Asaf: That's really fascinating! When I wrote "have the same properties" I was referring to the field axioms, arithmetic properties of limits, and such. But I wrote "seems to" in case there was some construction that would give them subtly different properties from the typical reals - like the one you brought up. :) – Neil Toronto Mar 7 '11 at 16:34
http://www.sagemath.org/doc/bordeaux_2008/nf_orders.html
# Orders and Relative Extensions¶
## Orders in Number Fields¶
An order in a number field $$K$$ is a subring of $$K$$ whose rank over $$\ZZ$$ equals the degree of $$K$$. For example, if $$K=\QQ(\sqrt{-1})$$, then $$\ZZ[7i]$$ is an order in $$K$$. A good first exercise is to prove that every element of an order is an algebraic integer.
```sage: K.<I> = NumberField(x^2 + 1)
sage: R = K.order(7*I)
sage: R
Order in Number Field in I with defining polynomial x^2 + 1
sage: R.basis()
[1, 7*I]
```
Using the discriminant command, we compute the discriminant of this order
```sage: factor(R.discriminant())
-1 * 2^2 * 7^2
```
## Constructing the order with given generators¶
You can give any list of elements of the number field, and it will generate the smallest ring $$R$$ that contains them.
```sage: K.<a> = NumberField(x^4 + 2)
sage: K.order([12*a^2, 4*a + 12]).basis()
[1, 4*a, 4*a^2, 16*a^3]
```
If $$R$$ isn’t of rank equal to the degree of the number field (i.e., $$R$$ isn’t an order), then you’ll get an error message.
```sage: K.order([a^2])
Traceback (most recent call last):
...
ValueError: the rank of the span of gens is wrong
```
## Computing Maximal Orders¶
We can also compute the maximal order, using the maximal_order command, which behind the scenes finds an integral basis using Pari's nfbasis command. For example, $$\QQ(\sqrt[4]{2})$$ has maximal order $$\ZZ[\sqrt[4]{2}]$$, and if $$\alpha$$ is a root of $$x^3 + x^2 - 2x+8$$, then $$\QQ(\alpha)$$ has maximal order with $$\ZZ$$-basis
$1, \frac{1}{2} a^{2} + \frac{1}{2} a, a^{2}.$
```sage: K.<a> = NumberField(x^4 + 2)
sage: K.maximal_order().basis()
[1, a, a^2, a^3]
sage: L.<a> = NumberField(x^3 + x^2 - 2*x+8)
sage: L.maximal_order().basis()
[1, 1/2*a^2 + 1/2*a, a^2]
sage: L.maximal_order().basis()[1].minpoly()
x^3 - 2*x^2 + 3*x - 10
```
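As a quick cross-check, the discriminant of an order $$R$$ and of the maximal order $$\mathcal{O}_K$$ are related by $$\mathrm{disc}(R) = [\mathcal{O}_K : R]^2\,\mathrm{disc}(\mathcal{O}_K)$$, so for the order $$\ZZ[7i]$$ from above the index should come out as $$7$$. A minimal sketch (the output shown is the expected one; the printed formatting may vary between Sage versions):
```sage: K.<I> = NumberField(x^2 + 1)
sage: R = K.order(7*I)
sage: OK = K.maximal_order()
sage: R.discriminant(), OK.discriminant()
(-196, -4)
sage: (R.discriminant() / OK.discriminant()).sqrt()
7
```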
## Functionality for non-maximal orders is minimal¶
There is still much important functionality for computing with non-maximal orders that is missing in Sage. For example, there is no support at all in Sage for computing with modules over orders or with ideals in non-maximal orders.
```sage: K.<a> = NumberField(x^3 + 2)
sage: R = K.order(3*a)
sage: R.ideal(5)
Traceback (most recent call last):
...
NotImplementedError: ideals of non-maximal orders not
yet supported.
```
## Relative Extensions¶
A relative number field $$L$$ is a number field of the form $$K(\alpha)$$, where $$K$$ is a number field, and an absolute number field is a number field presented in the form $$\QQ(\alpha)$$. By the primitive element theorem, any relative number field $$K(\alpha)$$ can be written as $$\QQ(\beta)$$ for some $$\beta\in L$$. However, in practice it is often convenient to view $$L$$ as $$K(\alpha)$$. In Symbolic Expressions, we constructed the number field $$\QQ(\sqrt{2})(\alpha)$$, where $$\alpha$$ is a root of $$x^3 + \sqrt{2} x + 5$$, but not as a relative field–we obtained just the number field defined by a root of $$x^6 + 10x^3 - 2x^2 + 25$$.
## Constructing a relative number field step by step¶
To construct this number field as a relative number field, first we let $$K$$ be $$\QQ(\sqrt{2})$$.
```sage: K.<sqrt2> = QuadraticField(2)
```
Next we create the univariate polynomial ring $$R = K[X]$$. In Sage, we do this by typing R.<X> = K[]. Here R.<X> means "create the object $$R$$ with generator $$X$$" and K[] means a "polynomial ring over $$K$$", where the generator is named based on the aforementioned $$X$$ (to create a polynomial ring in two variables $$X,Y$$ simply replace R.<X> by R.<X,Y>).
```sage: R.<X> = K[]
sage: R
Univariate Polynomial Ring in X over Number Field in sqrt2
with defining polynomial x^2 - 2
```
Now we can make a polynomial over the number field $$K=\QQ(\sqrt{2})$$, and construct the extension of $$K$$ obtained by adjoining a root of that polynomial to $$K$$.
```sage: L.<a> = K.extension(X^3 + sqrt2*X + 5)
sage: L
Number Field in a with defining polynomial X^3 + sqrt2*X + 5...
```
Finally, $$L$$ is the number field $$\QQ(\sqrt{2})(\alpha)$$, where $$\alpha$$ is a root of $$X^3 + \sqrt{2}X + 5$$. We can now do arithmetic in this number field, and of course include $$\sqrt{2}$$ in expressions.
```sage: a^3
-sqrt2*a - 5
sage: a^3 + sqrt2*a
-5
```
## Functions on relative number fields¶
The relative number field $$L$$ also has numerous functions, many of which have both relative and absolute versions. For example the relative_degree function on $$L$$ returns the relative degree of $$L$$ over $$K$$; the degree of $$L$$ over $$\QQ$$ is given by the absolute_degree function. To avoid possible ambiguity, degree is not implemented for relative number fields.
```sage: L.relative_degree()
3
sage: L.absolute_degree()
6
```
## Extra structure on relative number fields¶
Given any relative number field you can also construct an absolute number field that is isomorphic to it. Below we create $$M = \QQ(b)$$, which is isomorphic to $$L$$, but is an absolute field over $$\QQ$$.
```sage: M.<b> = L.absolute_field()
sage: M
Number Field in b with defining
polynomial x^6 + 10*x^3 - 2*x^2 + 25
```
The structure function returns isomorphisms in both directions between $$M$$ and $$L$$.
```sage: M.structure()
(Isomorphism map:
From: Number Field in b with defining polynomial x^6 + 10*x^3 - 2*x^2 + 25
To: Number Field in a with defining polynomial X^3 + sqrt2*X + 5 over its base field, Isomorphism map:
From: Number Field in a with defining polynomial X^3 + sqrt2*X + 5 over its base field
To: Number Field in b with defining polynomial x^6 + 10*x^3 - 2*x^2 + 25)
```
## Arbitrary towers of relative number fields¶
In Sage one can create arbitrary towers of relative number fields (unlike in Pari, where a relative extension must be a single extension of an absolute field).
```sage: R.<X> = L[]
sage: Z.<b> = L.extension(X^3 - a)
sage: Z
Number Field in b with defining polynomial X^3 - a over its base field
sage: Z.absolute_degree()
18
```
Note
Exercise: Construct the relative number field $$L = K(\sqrt[3]{\sqrt{2}+\sqrt{3}})$$, where $$K=\QQ(\sqrt{2}, \sqrt{3})$$.
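One possible sketch of a solution, assuming the tower is built exactly as in the preceding sections (the expected degrees are $$3$$ relatively and $$12$$ absolutely; the printed form may differ):
```sage: K.<sqrt2, sqrt3> = NumberField([x^2 - 2, x^2 - 3])
sage: R.<X> = K[]
sage: L.<c> = K.extension(X^3 - (sqrt2 + sqrt3))
sage: L.relative_degree(), L.absolute_degree()
(3, 12)
```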
## Relative number field arithmetic can be slow¶
One shortcoming with relative extensions in Sage is that behind the scenes all arithmetic is done in terms of a single absolute defining polynomial, and in some cases this can be very slow (much slower than Magma). Perhaps this could be fixed by using Singular’s multivariate polynomials modulo an appropriate ideal, since Singular polynomial arithmetic is extremely fast. Also, Sage has very little direct support for constructive class field theory, which is a major motivation for explicit computation with relative orders; it would be good to expose more of Pari’s functionality in this regard.
http://mathoverflow.net/questions/89423/peter-weyl-theorem-as-proven-in-cartiers-primer/89429
## Peter-Weyl theorem as proven in Cartier’s Primer
I'm reading Pierre Cartier's A primer of Hopf algebras to educate myself. In its subsection 3.3 (which doesn't need any Hopf algebra theory), he sketches a proof why compact Lie groups are algebraic. One step in this proof is the Peter-Weyl theorem. Here is something I don't understand in the proof of this theorem: Why is the space $C_{\lambda, f}$ invariant under left translations $L_g$? It is clear to me that $R_f$ commutes with $L_g$, but I don't see why $R_f^{\ast}$ should also commute with $L_g$, and I would need this assumption to prove the $L_g$-invariance of $C_{\lambda,f}$.
[Disclosure: I am neither an analyst nor a Lie group theorist, so this might be a trivial question.]
## 3 Answers
(Same as pm's answer, with details.)
Well, if $$R_f(\varphi)(h) = \int_G \varphi(g) f(g^{-1}h) \ dg$$ then the adjoint satisfies
\begin{align*}
(R_f^*(\varphi)|\psi) &= (\varphi|R_f(\psi))
= \int_{G\times G} \varphi(h) \overline{ \psi(g) f(g^{-1}h) } \ dg \ dh \\
&= \int_{G\times G} \varphi(h) \tilde f(h^{-1}g) \overline{\psi(g)} \ dh \ dg
= (R_{\tilde f}(\varphi) | \psi)
\end{align*}
where $\tilde f(g) = \overline{ f(g^{-1}) }$. So $R_f^* = R_{\tilde f}$ is again a right translation operator.
Nice answer, thanks a lot! – darij grinberg Feb 24 2012 at 17:58
Ah-- the overlines denote complex conjugation (I always work over the complex numbers); the tilde means do the group inverse, and take complex conjugate (the latter not being needed if you are working over the reals). – Matthew Daws Feb 24 2012 at 18:13
More generally, if $A$ is a bounded $G$-invariant operator acting on a unitary representation $(\rho,\mathcal{H})$ of $G$ and $A^*$ is it's adjoint (i.e. $(Ax,y) = (x,A^*y)$ for all $x,y\in \mathcal{H}$), then we have
$$(x,A^*\rho(g)y) = (Ax,\rho(g)y) = (\rho(g^{-1})Ax,y)$$
where in the last step we used unitarity of $\rho$. Now use $G$-invariance of $A$ and retrace the steps back
$$(\rho(g^{-1})Ax,y) = (A\rho(g^{-1})x,y) = (\rho(g^{-1})x,A^*y) = (x,\rho(g)A^*y).$$
QED
Are you aware of the relations $R_{f}^* = R_{f^\ast}$ and $R_f R_h = R_{f \ast h}$? So if $R_f$ commutes with $L_g$ for all $f$, then so does $R_{f^\ast}$.
That is to say, $R$ is a $*$-algebra representation by right convolution and $L$ is the left convolution. As an intuition for why they commute: these are equivalent to right and left translation by group elements, which commute in every group. But you can also just check the integrals. The facts are all purely topological; no smooth structures are required, and the proofs work perfectly fine for every compact group.
Actually conversely, every compact group is a projective limit of compact (possibly finite) Lie groups by the Peter-Weyl theorem, but that is another story (see the comments).
Thanks - I'd have asked you for what $f^{\ast}$ means, but now I am sure it's what Matthew calls $\overline f$. – darij grinberg Feb 24 2012 at 17:58
Yes Matthew is correct. – Marc Palm Feb 24 2012 at 18:22
I'm now curious - Peter-Weyl gives an embedding of the group as a subgroup of a product of fin-dim unitary groups, but how does it give smooth structure on, say, a finite group? A profinite one? – Yemon Choi Feb 24 2012 at 18:56
Yemon, pm may mean "every compact group with no small subgroups". – BR Feb 24 2012 at 20:19
No, I meant every compact group is a projective limit of lie groups. In the profinite case, you simply get a projective limit of finite groups. The article en.wikipedia.org/wiki/Peter%E2%80%93Weyl_theorem explains it well – Marc Palm Feb 25 2012 at 10:49
http://wiki.panotools.org/index.php?title=Lens_correction_model&oldid=14463
# Lens correction model
From PanoTools.org Wiki
## Lens correction model
The panorama tools have a very flexible model to correct for typical geometrical lens errors. Even better, it can often even estimate the correction parameters directly from the images in a panorama.
There are a total of 6 parameters that have to do with lens correction.
• First of all there is the lens Field of View (FoV) - not exactly an error, but a parameter that determines the image perspective distortion.
• The actual lens correction parameters a, b and c which are used to correct for barrel distortion, pincushion distortion and even wavy distortion.
• The lens shift parameters d and e that correct for the lens optical axis not being in the image center.
Two more parameters correct for image errors that are not induced by the lens but by a scanner or scanning camera for example. These are the shear parameters g and t.
### Field of View
The Focal Length is a physical property of the lens. Together with the effective sensor or film size and the focusing distance it approximates the image Field of View (there are other factors that influence it). Caution: Cropping the image changes the Field of View. If you need to crop your source images for a panorama, crop them all to the same size!
The Field of View together with the lens projection (rectilinear, fisheye or cylindrical for swing lens cameras) determines the image perspective distortion. Perspective distortion is less with a smaller Field of View. See Helmut Dersch's page [1] for details about different wide angle perspectives.
### Lens distortion a, b & c parameters
For perfect rectilinear camera optics, all you would need to know is the field of view. Perfect results could be achieved by simply mapping pixels in the image to the tangent plane. Real lenses deviate from this perfect tangent plane projection. The deviations push and pull fixed points in the scene away from where they would have fallen. Luckily, rather than arbitrary pushes and pulls, almost all deviations occur radially, towards or away from some common center, and luckily the deviation amount is almost the same at a given radius around that center. Hence a model that corrects for this deviation based on the radius gives pretty good results.
The lens distortion a, b and c parameters correspond to a third degree polynomial describing radial lens distortion:
$r_{src} = ({a}r_{dest}^3+{b}r_{dest}^2+{c}r_{dest}+d)r_{dest}$
where $r_{dest}$ and $r_{src}$ refer to the normalized radius of an image pixel. The center point of this radius is where the optical axis hits the image - normally the image center. Normalized means here that the largest circle that completely fits into an image is said to have radius=1.0 . (In other words, radius=1.0 is half the smaller side of the image.) A perfect lens would have a=b=c=0.0 and d=1.0 which resolves into $r_{src} = r_{dest}$.
Sometimes the above formula is written as
$r_{src} = {a}r_{dest}^4+{b}r_{dest}^3+{c}r_{dest}^2+d{r_{dest}}$
which is essentially the same.
Usual values for a, b and c are below 1.0, in most cases below 0.01. Too high values suggest that you chose a wrong lens type, f.e. fisheye instead of rectilinear or vice versa. This refers to the absolute values of course since a, b and c can be positive or negative (f.e. both 4.5 and -4.5 are considered too high values).
The fourth parameter (d) is only available in the Correct, Radial Shift filter of the Panorama Tools Plugins. It is calculated implicitly by pano12 (used by PTOptimizer, PTStitcher and the GUIs) in order to keep the same image size:
$d = 1 - (a+b+c)$
Hence it is not available in the different GUI front-ends (you can see it in the PTOptimizer result script).
Unfortunately a different parameter also named d refers to image shift in PTStitcher and PTOptimizer scripts and the GUIs. This sometimes causes confusion. (See more discussion below.)
This polynomial approach is never exact, but can give a pretty good approximation to the real behaviour of a given lens. If you need better correction you must use a distortion matrix, as used by Distortion Remove (see link below).
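To make the destination-to-source remapping concrete, here is a minimal NumPy sketch; it is not PanoTools' own implementation, the function name is made up, and nearest-neighbour sampling is used only for brevity. If d is omitted it defaults to 1 - (a+b+c) as described earlier, so the image keeps roughly the same size.
```
import numpy as np

def correct_radial(img, a, b, c, d=None):
    """Resample img so that the radial distortion described by a, b, c, d is undone.

    For each destination pixel at normalized radius r, the source image is read at
    r_src = (a*r^3 + b*r^2 + c*r + d) * r, where r = 1.0 is half the shorter side.
    """
    h, w = img.shape[:2]
    if d is None:
        d = 1.0 - (a + b + c)              # keep the image roughly the same size
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0  # geometric image centre
    norm = min(h, w) / 2.0                 # r = 1.0 at half the smaller side
    yy, xx = np.mgrid[0:h, 0:w]
    dx, dy = (xx - cx) / norm, (yy - cy) / norm
    r = np.hypot(dx, dy)
    scale = a * r**3 + b * r**2 + c * r + d
    sx = np.clip(cx + dx * scale * norm, 0, w - 1).round().astype(int)
    sy = np.clip(cy + dy * scale * norm, 0, h - 1).round().astype(int)
    return img[sy, sx]                     # nearest-neighbour sampling for brevity
```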
#### Lens distortion and fisheyes
Unlike rectilinear lenses, fish-eye lenses do not follow the tangent-plane geometry, but instead have built-in distortions designed to achieve wide fields of view. The radial lens distortion parameters are used the same way for rectilinear lenses and fisheye lenses, but they should never be used to attempt to remap a fisheye to a rectilinear image. This is done by selecting the proper source and destination projection. Fisheye geometry follows a rapidly-changing trigonometric function which can hardly be approximated by a third degree polynomial.
For fisheyes, the lens correction parameters correct for the deviation between a real lens and the ideal fisheye geometry.
### Lens or image shift d & e parameters
Sometimes a lens and image sensor might not be centered with respect to each other. In this case the optical axis doesn't fall on the image center. This is particularly the case for scanned images where you never can say whether the film is centered on the scanner or not.
If the above lens correction algorithm is used on such images both lens correction and perspective correction work on the wrong center point. The lens shift parameters d (horizontal shift) and e (vertical shift) compensate for that problem. They contain values in pixel units which determine how far the center for radial correction is shifted outside the geometrical image center.
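With the shift parameters, the only change to the sketch above is the centre about which the radius is measured; a minimal helper (hypothetical name) makes the pixel units concrete:
```
def distortion_centre(width, height, d=0.0, e=0.0):
    # Centre for radial correction: the geometric image centre displaced by the
    # lens-shift parameters d (horizontal) and e (vertical), both in pixels.
    return (width - 1) / 2.0 + d, (height - 1) / 2.0 + e
```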
### Image shear g & t parameters
Image shear is not a lens distortion but nevertheless is part of the panotools lens correction model. It corrects for a distortion induced by scanners or scanning cameras that causes a rectangular image being sheared to the form of a parallelogram (one side of the images is shifted parallel to the opposite side)
### Determine lens correction
a, b, c and FoV are physical properties of a lens/camera-combination at a given focus distance. If you always shoot at the same focus setting, f.e. infinity or the hyperfocal distance, then you can safely reuse the parameters. At different focus settings, FoV will change noticeably, but usually it is fine to reuse a, b, and c even then.
There are a number of ways to determine the a, b, c and fov parameters to calibrate a particular lens/camera combination:
• Taking a single photograph of a subject containing straight lines, defining one or more sets of straight line control points (types t3, t4, etc.), and optimising for just a, b, c. You need to set the output format to Rectilinear Projection for this technique to work. This method is used by the author of PTLens. The calibrate_lens tool also uses this technique and can operate with Fisheye Projection images greater than 180°.
• Taking a single photograph of a rectangular or grid object, selecting lots of horizontal and vertical control points, then optimising roll, pitch, yaw, fov, a, b & c. You need to set the output format to Rectilinear Projection for this technique to work. The process is similar to this hugin architectural tutorial:
• Taking two or more overlapping photographs and selecting lots of normal control points, then optimising roll, pitch, yaw, fov, a, b & c. This technique works with any output projection format but requires parallax free images shot exactly from the Nodal Point. Note that to get a precise measure of the Field of View, you have to take a full 360 degree panorama.
• Using points that are known to be directly above each other such as edges of buildings, windows, reflections in ponds etc... This is the vertical control points method and works with Equirectangular Projection or Rectilinear Projection output and all lenses including those wider than 180°.
• Using a tool such as PTLens, lensfun or fulla to read the photo EXIF metadata and correct the image automatically by looking up the lens in an existing database.
### Optimize for lens correction
If you optimize for lens correction in order to calibrate your lens you should keep some facts in mind:
Since lens correction parameters are determined by evaluating the distortion at different radius values you should provide enough control points at a large range of radii from the image center.
• If you use a rectangular pattern or straight lines for that task, make sure you set control points in all distances from the center.
• If you use two or more images make sure you overlap regions with large potential distortion (f.e. the corners) with regions with low possible distortion (f.e. the center). An only horizontal overlap would do, but use at least 50% in order to overlap the image center of one image with the border of the other.
a, b and c parameters influence Field of View, especially for images in landscape orientation but slightly for portrait oriented ones, too. This is because although the implicit calculation of the fourth polynomial parameter tries to keep the image at the same size, this is only possible at the radius $r_{src} = 1.0$.
Outside this radius, especially in the image corners, the size and hence the Field of View might differ. Since they are interconnected in this way, you should always allow the optimization for FoV too, if you optimize for a, b and c with more than one image. (You cannot optimize for FoV with only one image). As noted above you need a full 360 degree panorama in order to get an accurate measure of the Field of View.
The a and c parameters control more complex forms of distortion. In most cases it will be enough to optimize for the b parameter only, which is good at correcting normal barrel distortion and pincushion distortion.
If you want to see how changing the parameters influences distortion correction go to http://4pi.org/downloads/ and get abc.xls. Don't deactivate macros on loading.
See also Helmut Dersch's barrel distortion page.
There's an excellent tutorial on how to optimize by John Houghton: [2]
### Tools to correct barrel and pincushion distortion
• The original PTStitcher can be scripted to batch process images with known a, b & c parameters. It can also be operated with one of the GUI front-ends.
• nona or nona_gui (both part of the hugin distribution) can be used identically to PTStitcher.
• The Correct Radial Shift filter in the Panorama Tools Plugins for the gimp or photoshop uses the same a, b & c parameters as PTStitcher. Note that it doesn't know about d & e shift parameters and uses 'd' as an overall scaling factor instead, which should be d = 1-(a+b+c) to keep the image roughly the same size. If you need to shift the correction center like with the d & e parameter you must combine it with Vertical Shift and/or Horizontal Shift.
• PTLens is a Photoshop plugin and a stand-alone Windows tool that uses the same a, b & c parameters and comes with a database of popular lenses.
• Clens is a command line version of PTLens.
• fulla is a command-line tool that uses the same a, b, c & d parameters to correct barrel distortion. It can also correct chromatic aberration and vignetting at the same time.
• PTShift determines different a, b & c parameters for the three color channels in order to correct for Chromatic aberration with the Correct Radial Shift filter.
• Gimp wideangle plugin uses a different formula altogether to correct distortion.
• Gimp phfluuh plugin is another tool that corrects lens distortion using yet another formula.
• CamChecker is a tool for automatically determining lens distortion and generates a different set of parameters from everything else.
• zhang_undistort is a tool distributed with hugin that uses CamChecker parameters to actually correct distortion.
• Distortion Remove uses a completely different approach with a distortion matrix. Page in german only: http://www.stoske.de/digicam/Artikel/verzeichnung.html
### See also
Image positioning model
http://nrich.maths.org/7074/note
# Agile Algebra
##### Stage: 5 Challenge Level:
Why do this problem?
Making substitutions to make the task in hand easier is an example of a valuable technique in many areas of mathematics (eg integration, transformations of functions, change of axes, diagonalisation of matrices etc.) By introducing students to this technique as an example of a general process it will help them to understand what is going on when they meet the process in many different mathematical situations. These equations give students useful practice in algebraic manipulation. They will need to look for symmetrical features in the expressions and exploit the symmetry to make it easier to solve the equations.
When we use mathematical language to communicate ideas, a frame of reference is usually taken for granted unless we specify otherwise (eg base ten numbers). However the same mathematical relationships can be expressed with different frames of reference, and some mathematical tasks are simpler in one frame of reference than in another. Typically, let's refer to two frames of reference as A and B and say we have a problem stated in A; the technique is then to map the given relationships to B, work in B and then map the results back to A.
Possible approach
It is helpful to introduce this problem with some discussion about switching frames of reference to make the equations easier to solve and how you are looking for symmetry in order to choose a good substitution.
One approach would be to divide the class into 4 groups and give each group one of the equations and ask them to discuss the symmetries they notice, to try possible substitutions and to solve their equation. You could then give assistance to the groups separately helping them as far as possible to do the work for themselves with your guidance rather than you as the teacher doing too much for them. This could lead into a homework. In the next lesson you could start with a class discussion of the symmetries in each equation in turn and then a representative from each group could present the solution to the class on the board.
Either before this lesson, or as part of the homework between your lessons, you might get the class to read the introduction to the article "The Why and How of Substitution" and perhaps to work through Example 1 in that article.
Possible extension
(1a) $\frac{x^2 - 10x + 15}{x^2 - 6x + 15}=\frac{3x}{x^2 - 8x + 15}.$
(1b) $\frac{(x^2 + x + 1)^2}{(x^2 + 1)(x^2 - x + 1)} = \frac {1}{3}.$
(2) $(2x - 3)^4 + (2x - 5)^4 = 2.$
(3a) $(x - 2)(x + 1)(x + 4)(x + 7) = 19.$
(3b) $(12x - 1)(6x - 1)(4x - 1)(3x - 1) = 5.$
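By way of illustration, equation (2) yields quickly to a symmetric substitution: putting $t = 2x - 4$ turns $2x-3$ and $2x-5$ into $t+1$ and $t-1$, so the equation becomes $(t+1)^4 + (t-1)^4 = 2$, that is $2t^4 + 12t^2 + 2 = 2$, giving $t^2(t^2+6)=0$; the only real solution is $t=0$, i.e. $x=2$.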
Possible support
Make up your own equation by taking a simple equation and making a substitution to make it more complicated. For example take any quadratic equation in $x$ and turn it into a quartic equation by substituting $x=X+\frac{1}{X}.$
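Concretely: starting from $x^2 - 3x + 1 = 0$, the substitution $x=X+\frac{1}{X}$ followed by multiplying through by $X^2$ gives the palindromic quartic $X^4 - 3X^3 + 3X^2 - 3X + 1 = 0$, whose roots come in reciprocal pairs and can be recovered by reversing the substitution.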
http://physics.stackexchange.com/questions/27587/higgs-mass-and-the-hierarchy-problem
# Higgs mass and the hierarchy problem
I was wondering what the opinion is about the importance of the hierarchy problem in the hep community? I'm still a student and I don't really understand why there is so much attention around this issue.
One-loop corrections to the Higgs mass are divergent - in the cut-off regularization they are proportional to $\Lambda^2$ - and therefore require large fine tuning between the parameters to make those corrections small. But this kind of problem does not appear in dimensional regularization.
People like the value of $\Lambda$ to be very large, with the argument that it should correspond to some energy scale at which our theory breaks down. I don't think that we should treat the scale $\Lambda$ as some kind of physical cut-off scale of our model, as it is just a parameter to regularize the integral. Just like the $4+\epsilon$ dimensions in dimensional regularization are not a physical thing. Why do we attach a physical meaning to $\Lambda$? Not to mention the troubles with Lorentz invariance.
Maybe the hierarchy problem is an argument that the cut-off regularization scheme is just not right to use?
I fixed your latex. You can include it yourself by enclosing it in dollar signs (like an inline equation in latex). – Joe Fitzsimons Apr 18 '12 at 8:25
## 2 Answers
Whether you do your calculations using a cutoff regularization or dimensional regularization or another regularization is just a technical detail that has nothing to do with the existence of the hierarchy problem. Order by order, you will get the same results whatever your chosen regularization or scheme is.
The schemes and algorithms may differ by the precise moment at which you subtract some unphysical infinite terms etc. Indeed, the dimensional regularization cures power-law divergences from scratch. But the hierarchy problem may be expressed in a way that is manifestly independent of these technicalities.
The hierarchy problem is the problem that one has to fine-tune actual physical parameters of a theory expressed at a high energy scale with a huge accuracy – with error margins smaller than $(E_{low}/E_{high})^k$ where $k$ is a positive power – in order for this high-energy theory to produce the low-energy scale and light objects at all.
If I formulate the problem in this way, it's clear that it doesn't matter what scheme you are using to do the calculations. In particular, your miraculous "cure" based on the dimensional regularization may hide the explicit $\Lambda^2$ in intermediate results. But it doesn't change anything about the dependence on the high-energy parameters.
What you would really need for a "cure" of the physical problem is to pretend that no high-energy scale physics exists at all. But it does. It's clear that the Standard Model breaks before we reach the Planck energy and probably way before that. There have to be more detailed physical laws that operate at the GUT scale or the Planck scale and those new laws have new parameters.
The low-energy parameters such as the LHC-measured Higgs mass 125 GeV are complicated functions of the more fundamental high-energy parameters governing the GUT-scale or Planck-scale theory. And if you figure out what condition is needed for the high-scale parameters to make the Higgs $10^{15}$ times lighter than the reduced Planck scale, you will see that they're unnaturally fine-tuned conditions requiring some dimensionful parameters to be in some precise ranges.
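To put a rough number on this degree of tuning, here is a schematic back-of-the-envelope estimate in Python; the $1/16\pi^2$ prefactor and the choice of the reduced Planck mass as the cutoff are illustrative assumptions, not a precise calculation.
```
import math

m_h = 125.0        # observed Higgs mass in GeV
cutoff = 2.4e18    # assumed cutoff: the reduced Planck mass, in GeV

# Schematic one-loop correction to the Higgs mass-squared parameter; the exact
# prefactor (top Yukawa, gauge and self-couplings) only changes this by O(1).
delta_m2 = cutoff**2 / (16 * math.pi**2)

# The high-scale mass-squared parameter has to cancel this correction down to
# the tiny observed value -- roughly one part in 10^31, i.e. the square of the
# ~10^15 ratio of mass scales mentioned above.
print(f"required relative cancellation ~ {m_h**2 / delta_m2:.0e}")
```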
More generally, it's very important to distinguish true physical insights and true physical problems from some artifacts depending on a formalism. One common misconception is the belief of some people that if the space is discretized, converted to a lattice, a spin network, or whatever, one cures the problem of non-renormalizability of theories such as gravity.
But this is a deep misunderstanding. The actual physical problem hiding under the "nonrenormalizability" label isn't the appearance of the symbol $\infty$ which is just a symbol that one should interpret rationally. We know that this $\infty$ as such isn't a problem because at the end, it gets subtracted in one way or another; it is unphysical. The main physical problem is the need to specify infinitely many coupling constants – coefficients of the arbitrarily-high-order terms in the Lagrangian – to uniquely specify the theory. The cutoff approach makes it clear because there are many kinds of divergences that differ and each of these divergent expressions has to be "renamed" as a finite constant, producing a finite unspecific parameter along the way. But even if you avoid infinities and divergent terms from scratch, the unspecified parameters – the finite remainders of the infinite subtractions – are still there. A theory with infinitely many terms in the Lagrangian has infinitely many pieces of data that must be measured before one may predict anything: it remains unpredictive at any point.
In a similar way, fine-tuning required for the high-energy parameters is a problem because using the Bayesian inference, one may argue that it was "highly unlikely" for the parameters to conspire in such a way that the high-energy physical laws produce e.g. the light Higgs boson. The degree of fine-tuning (parameterized by a small number) is therefore translated as a small probability (given by the same small number) that the original theory (a class of theory with some parameters) agrees with the observations.
When this fine-tuning is of order $0.1$ or even $0.01$, it's probably OK. Physicists have different tastes what degree of fine-tuning they're ready to tolerate. For example, many phenomenologists have thought that even a $0.1$-style fine-tuning is a problem – the little hierarchy problem – that justifies the production of hundreds of complicated papers. Many others disagree that the term "little hierarchy problem" deserves to be viewed as a real one at all. But pretty much everyone who understands the actual "flow of information" in quantum field theory calculations as well as the basic Bayesian inference seems to agree that fine-tuning and the hierarchy problem is a problem when it becomes too severe. The problem isn't necessarily an "inconsistency" but it does mean that there should exist an improved explanation why the Higgs is so unnaturally light. The role of this explanation is to modify the naive Bayesian measure – with a uniform probability distribution for the parameters – that made the observed Higgs mass look very unlikely. Using a better conceptual framework, the prior probabilities are modified so that the small parameters observed at low energies are no longer unnatural i.e. unlikely.
Symmetries such as the supersymmetry and new physics near the electroweak scale are two major representatives of the solution to the hierarchy problem. They eliminate the huge "power law" dependence on the parameters describing the high-energy theory. One still has to explain why the parameters at the high energy scale are so that the Higgs is much lighter than the GUT scale but the amount of fine-tuning needed to explain such a thing may be just "logarithmic", i.e. "one in $15\ln 10$" where 15 is the base-ten logarithm of the ratio of the mass scales. And this is of course a huge improvement over the fine-tuning at precision "1 in 1 quadrillion".
"One common misconception is the belief of some people that if the space is discretized, converted to a lattice, a spin network, or whatever, one cures the problem of non-renormalizability of theories such as gravity." But once you have a natural lattice/spin network/etc. and this is actually the fundamental theory (and not a field thoery with infinitely many parameters to specify), then you can compute your observable values and the problem is really gone, isn't it? – Nick Kidman May 20 '12 at 12:53
Lubos - Just to play devil's advocate - "It's clear that the Standard Model breaks ... There have to be more detailed physical laws that operate at the GUT scale or the Planck scale and those new laws have new parameters." I agree that there is certainly new physics and thus new mass parameters above the weak scale, but is it obvious that it will affect the Higgs mass? For example, you could say that eventually the Planck mass will cause problems, but this naively only appears in inverse powers as $\frac{1}{m_{pl}}$ (in perturbation theory) so it wouldn't lead to tuning of the Higgs mass. – DJBunk Jun 20 '12 at 15:22
Dear @Nick, there can't be any fundamental theory on a lattice or any other similar discrete background, that was really my point. You can't compute anything because all the coefficients of the non-renormalizable interactions in the continuum limit simply get translated to infinitely many terms you may construct on the lattice. And this ignorance about the infinitely many parameters is the problem, not the question whether they're hiding under the sign $\infty$. So no real problem can ever be solved by discretizing the spacetime. – Luboš Motl Jun 25 '12 at 19:08
In quantum field theory, the actual rules that may remove the infinitely many parameters is the scale invariance. If one requires that the theory is scale-invariant in the short-distance limit, the ultimate UV, and that's what's pretty much true for almost all consistent QFTs, this determines all the infinitely many parameters up to a finite number of them. This is the actual source of the knowledge and removal of the infinite ignorance and it requires a continuum spacetime because lattices aren't self-similar and can't produce scale-invariant theories. – Luboš Motl Jun 25 '12 at 19:10
Dear @DJBunk, it's trivial to show that the Higgs mass is hugely affected by pretty much everything at the GUT or Planck scale unless one may show that the effect is canceled. Your $1/m_{Pl}$ coefficients appear in front of nonrenormalizable interactions induced by the Planck-scale physics, in this case. But the Higgs mass isn't a nonrenormalizable interaction. Quite on the contrary, it's a relevant term, with a positive power of mass, so $m_h^2 h^2/2$ gets corrected by terms like $M^2 h^2$ etc. where $M$ is of order the Planck mass; a correction of Planck-scale physics looks like this: huge. – Luboš Motl Jun 25 '12 at 19:13
It is not clear to me whether you are making reference to the physical mass (propagator's pole) or to the renormalized mass (in certain renormalization scheme) which has not to be anything measurable at all.
The physical mass does not depend on the energy scale at which you are performing your experiments. However, physical coefficients of interacting terms (which are actually probability amplitudes) do depend on the energy scale at which you make the experiment, even when classically they don't. For example, one does not talk about the value of the electron's mass at 1 MeV or at 10 GeV, but one does talk about the value of the fine structure constant $\alpha$ at 1 MeV or at 10 GeV. Right?
So, taking Lubos's definition of the hierarchy problem:
The hierarchy problem is the problem that one has to fine-tune actual physical parameters of a theory expressed at a high energy scale with a huge accuracy – with error margins smaller than (Elow/Ehigh)k where k is a positive power – in order for this high-energy theory to produce the low-energy scale and light objects at all.
I do not see what "physical parameters" (observables) one has to fine-tune because, in my opinion, the coefficient of the quadratic term in the Hamiltonian is not the physical mass (even if you see the theory a la Wilson and this coefficient depends on an energy scale $\Lambda$, it does not represent a physical mass at that energy contrary to what some people must think).
Maybe I'm wrong... Why?
The point is that there are Planck scales in the world, and the coefficient of the quadratic term has to be fine tuned to make the physical mass be what it is. It isn't tuned to a special point, either. I don't understand the confusion--- the cutoff is physical, it's where gravity kicks in to regulate the integrals. – Ron Maimon Jul 13 '12 at 20:35
Thank you. I know that the coefficient of the quartic term has to be fine tuned to cancel the contribution of the cut-off scale (Planck energy or GUT scale or whatever) so that the physical mass be much lower than the cut-off scale. I also know that quartic interactions of scalar fields leads to quadratic dependences on the energy scale in the running of the coefficient – drake Jul 13 '12 at 21:03
So why is there still a question? – Ron Maimon Jul 13 '12 at 21:05
Sorry, I am new here and I don't know how to use these comments... Let me continue with my previous comment: My point is that the coefficient of the quadratic term is not physical because it is not the physical mass. Thus, I do not see the problem in fine-tunning a parameter that is not physical. However, I would see a problem if one had to fine-tune an observable to cancel the contribution of the cut-off scale. Is it clear? I think we are allowed to choose the coefficient of this term as we want. Thank you. – drake Jul 13 '12 at 21:15
http://crypto.stackexchange.com/users/1564/henrick-hellstrom?tab=activity&sort=comments&page=5
# Henrick Hellström
# 177 Comments
| Date | Type | Comment |
|------|------|---------|
| Mar12 | comment | Diffie-Hellman key agreement with both Server Authentication and Perfect Forward Secrecy: I guess you meant to describe a repeated adaptive attack that would reduce the security bounds, rather than an immediate break? Choosing 2048/256 bit DH parameters and a properly designed HMAC-SHA256 based KDF, seeding AES-GCM, would seem to be adequate for at least 100 bits of CCA2 security. Would anything be gained by changing the key derivation step 4, in such a way that the server hello message includes a random nonce that is mixed in with the Diffie-Hellman shared secret $M^1_{tmp}$ when $k_{tmp}$ is derived? |
| Mar11 | comment | Diffie-Hellman key agreement with both Server Authentication and Perfect Forward Secrecy: Thank you for your comments. Regarding step 4, Diffie-Hellman works like this: Client calculates $M^1_{tmp} \leftarrow (S^0_{publ})^{C^1_{priv}}$ and server calculates $M^1_{tmp} \leftarrow (C^1_{publ})^{S^0_{priv}}$ |
| Mar11 | comment | Diffie-Hellman key agreement with both Server Authentication and Perfect Forward SecrecyI think you might have to spell out the steps involved in this attack. The presumption is that $Adversary$ might see $C^1_{publ}$, but not $C^1_{priv}$. How does $Adversary$ make $Client$ perform step 4 based on $S^0_{publ}$ and $Adv^1_{priv}$, which seems to be necessary in order for the client to perform step 7 without failure, given that the step 6 has been injected by $Adversary$ in the way you describe? |
| Mar11 | comment | Diffie-Hellman key agreement with both Server Authentication and Perfect Forward SecrecyWell, fwiw, I have already done that. (Read it, analyzed it, implemented it.) But that is largely beside the point, since TLS 1.2 clearly isn't the protocol with the least overhead that meets the requirements listed in the question. Even if you fix the cipher suite (and possibly invent a new one for a DH_DHE key agreement), you will have to send and process redundant negotiation messages. Hence my question. |
| Mar11 | comment | Method and explanation for calculating difference in speed between DES and RSAYou have not provided sufficient information. Firstly, different implementations of both algorithms have different speed. Secondly, what RSA padding is used? Thirdly, what size of the RSA modulus is assumed? Fourthly, what is the length of the plain text to be encrypted? |
| Mar10 | comment | Where to store the private key and the public key in a communication protocolSecure protocols are designed to be secure even if they are known. Security through obscurity is not real security, at least not in the cryptographic sense. However, you are right that the flexibility of SSL/TLS is a valid security concern (in the sense that it complicates both the protocol and the implementations). Many existing SSL/TLS implementations will allow you to restrict the set of allowed protocol versions and cipher suites. Do that, and ISTM you will have accomplished what you want. |
| Mar8 | comment | How difficult is it to check if a group element is in a sub group?@Poncho: Thanks, fixed. |
| Mar7 | comment | Timing attack on modular exponentiation@SmitJohnth: The second part of my answer concerns the case where the attacker doesn't only vary the input to the function, but also varies other external factors. In this case it is possible to not only count the bits of the exponent, but determine their position. |
| Mar7 | comment | Equivalents to a physical hat+shaking? |
| Mar6 | comment | Timing attack on modular exponentiation@SmitJohnth: The purpose of the side channel attack I describe is to determine the position of the 1 bits with arbitrary precision. |
| Mar6 | comment | Timing attack on modular exponentiation@SmitJohnth: Well, presumably you implemented the critical parts, such as multiplication, in assembler. If you don't, it will be hard to avoid input dependent conditional branches when dealing with carries. |
| Mar5 | comment | is AES secure for java application licensingRunning the application in a debugger will reveal enough information to calculate every possible key that will unlock your software. |
| Mar4 | comment | Are there any secure commutative ciphers?This question probably has to be rephrased. Commutativity is, by itself, more of a security vulnerability than a security feature. If you really need this commutativity, you probably have to build an entire protocol around it, to make sure it can't be exploited by an attacker. Just asking for the security of the commutative cipher itself doesn't make much sense. |
| Mar3 | comment | In RSA, how to make sure that $p-1$ and $q-1$ are still hard to factorize?Ah, no, I was comparing apples to oranges: The algorithms I was timing were too different to make sense of comparing them. |
| Mar3 | comment | In RSA, how to make sure that $p-1$ and $q-1$ are still hard to factorize?@Poncho: If you look closely at FIPS 186 prime generation, it contains a counter that makes the search for $p$ break and generate a new $q$. If you don't have that step, you might end up with a subprime for which finding a prime is infeasible. |
| Mar2 | comment | In RSA, how to make sure that $p-1$ and $q-1$ are still hard to factorize?@Poncho: Well, our results show that finding primes $p$ with $p-1$ having large prime factors might take about 10 times longer. Another possibility is that Rabin-Miller is more likely to return false positive in the first iterations for $p = rk + 1$, which would also cause a slow down. |
| Mar2 | comment | In RSA, how to make sure that $p-1$ and $q-1$ are still hard to factorize?IME step 2 is more expensive in the sense that it is less likely, given a fixed prime $r$ and a randomly selected $k$ of a fixed size, a number of the form $rk + 1$ is a prime, compared to a completely random odd integer of the same size. I only have experimental evidence to support this conjecture, though. |
| Mar2 | comment | What is “Blinding” used for in cryptography? |
| Mar2 | comment | Timing attack on modular exponentiationThen again, some of that info doesn't necessarily help in common scenarios. |
| Mar2 | comment | What is “Blinding” used for in cryptography?Then again, it is usually not the value $x$ to-be-raised that has to be blinded, but the private exponent $d$. Your method does not blind $d$, so if the timing attack works independently of the value-to-be-raised, it has no effect at all. |
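The Diffie-Hellman step quoted in the Mar11 comment above relies on the two sides reaching the same value by exponentiating in a different order. Below is a minimal, self-contained sketch of just that commutativity; the group parameters are toy placeholders of my own choosing (a Mersenne prime and generator 3), not the parameters of the protocol under discussion, and no key derivation or authentication is shown.

```python
import secrets

# Toy group: NOT a secure choice, only to illustrate the algebra.
p = 2**127 - 1            # a Mersenne prime (placeholder modulus)
g = 3                     # placeholder generator

server_priv = 2 + secrets.randbelow(p - 3)   # S_priv
client_priv = 2 + secrets.randbelow(p - 3)   # C_priv
server_publ = pow(g, server_priv, p)         # S_publ
client_publ = pow(g, client_priv, p)         # C_publ

# Client computes (S_publ)^(C_priv); server computes (C_publ)^(S_priv).
m_client = pow(server_publ, client_priv, p)
m_server = pow(client_publ, server_priv, p)
assert m_client == m_server                  # the shared M_tmp agrees
print("shared secret agrees on both sides")
```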
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.935390830039978, "perplexity_flag": "middle"}
|
http://lucatrevisan.wordpress.com/tag/sat/
in theory
"Marge, I agree with you - in theory. In theory, communism works. In theory." -- Homer Simpson
# Tag Archive
You are currently browsing the tag archive for the ‘SAT’ tag.
## Bounded Independence, AC0, and Random 3SAT
January 23, 2009 in theory | Tags: Mark Braverman, Pseudorandomness, SAT | 18 comments
As reported here, here and here, Mark Braverman has just announced a proof of a 1990 conjecture by Linial and Nisan.
Mark proves that if ${C}$ is an AC0 boolean circuit (with NOT gates and with AND gates and OR gates of unbounded fan-in) of depth ${d}$ and size ${S}$, and if ${X}$ is any ${k}$-wise independent distribution with ${k = (\log S)^{O(d^2)}}$, then
$\displaystyle {\mathbb E} C(U_n) \approx {\mathbb E} C(X)$
that is, ${X}$ “fools” the circuit ${C}$ into thinking that ${X}$ is the uniform distribution ${U_n}$ over ${\{0,1\}^n}$. Plausibly, this might be true even for ${k = O((\log S)^{d-1})}$.
Nothing was known for depth 3 or more, and the depth-2 case was settled only recently by Bazzi, with a proof that, as you may remember, has been significantly simplified by Razborov about six months ago.
Mark’s proof relies on approximating ${C}$ via low-degree polynomials. The point is that if ${p}$ is an ${n}$-variate (real valued) polynomial of degree ${\leq k}$, and ${X}$ is a ${k}$-wise independent distribution ranging over ${\{0,1\}^n}$, then
$\displaystyle {\mathbb E} p(X) = {\mathbb E} p(U_n)$
Now if we could show that ${p}$ approximates ${C}$ both under ${X}$ and under ${U_n}$, in the sense that ${{\mathbb E} p(X) \approx {\mathbb E} C(X)}$, and also ${{\mathbb E} p(U_n) \approx {\mathbb E} C(U_n)}$, then we would be done.
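As a quick sanity check of the fact used above (my own toy example, not anything from Braverman's proof): a $k$-wise independent distribution gives every multilinear polynomial of degree at most $k$ exactly the same expectation as the uniform distribution, while higher-degree monomials need not be fooled. Here $n=3$ and $X$ is uniform over the even-parity strings, which is $2$-wise independent.

```python
from itertools import product
from random import random, seed

n = 3
uniform = list(product((0, 1), repeat=n))
X = [bits for bits in uniform if sum(bits) % 2 == 0]   # 2-wise independent

def expectation(poly, support):
    """Average of a multilinear polynomial {monomial_tuple: coeff}
    over a uniformly weighted support set."""
    total = 0.0
    for bits in support:
        total += sum(c * all(bits[i] for i in mono) for mono, c in poly.items())
    return total / len(support)

seed(0)
# a random multilinear polynomial of degree <= 2
poly2 = {(): random(), (0,): random(), (1,): random(), (2,): random(),
         (0, 1): random(), (0, 2): random(), (1, 2): random()}
assert abs(expectation(poly2, X) - expectation(poly2, uniform)) < 1e-9

# a degree-3 monomial is NOT fooled by a merely 2-wise independent distribution
poly3 = {(0, 1, 2): 1.0}
print(expectation(poly3, X), "vs", expectation(poly3, uniform))   # 0.0 vs 0.125
```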
The Razborov-Smolenski lower bound technique gives a probabilistic construction of a polynomial ${p}$ such that for every input ${x}$ one has a high probability that ${p(x) = C(x)}$. In particular, one gets a polynomial ${p}$ such that both
$\displaystyle {\mathbb P}_{x\sim X} [p(x) = C(x) ] = 1-o(1) \ \ \ \ \ (1)$
and
$\displaystyle {\mathbb P}_{x\sim U_n} [p(x) = C(x) ] = 1-o(1) \ \ \ \ \ (2)$
Unfortunately this is not sufficient, because the polynomial ${p}$ might be very large at a few points, and so even if ${p}$ agrees with ${C}$ with high probability there is no guarantee that the average of ${p}$ is close to the average of ${C}$.
Using a result of Linial, Mansour and Nisan (developed in the context of learning theory), one can construct a different kind of low-degree approximating polynomial ${p}$, which is such that
$\displaystyle {\mathbb E}_{x\sim U_n} | C(x) - p(x)|^2 = o(1) \ \ \ \ \ (3)$
The Linial-Mansour-Nisan approximation, however, says nothing about the relation between ${p}$ and ${C}$ under the distribution ${X}$.
Using ideas of Bazzi’s, however, if we had a single polynomial ${p}$ such that properties (1), (2) and (3) are satisfied simultaneously, then we could construct another low-degree polynomial ${p'}$ such that ${{\mathbb E} p'(X) \approx {\mathbb E} C(X)}$, and also ${{\mathbb E} p'(U_n) \approx {\mathbb E} C(U_n)}$, giving us that ${C}$ is fooled by ${X}$.
As far as I understand, Mark constructs a polynomial satisfying properties (1), (2) and (3) by starting from the Razborov-Smolenski polynomial ${p}$, and then observing that the indicator function ${E}$ of the points on which ${p \neq C}$ is itself a boolean function admitting a Linial-Mansour-Nisan approximation ${e}$. Defining ${p' := p\cdot (1-e)}$, we have that ${p'}$ has all the required properties, because multiplying by ${1-e}$ “zeroes out” the points on which ${p}$ is excessively large.
I have been interested in this problem for some time because of a connection with the complexity of 3SAT on random instances.
## More ways to prove unsatisfiability of random k-SAT
August 21, 2007 in theory | Tags: SAT | 1 comment
Earlier, here and here, we discussed the following problem: we pick at random a k-CNF formula with $n$ variables and $m$ clauses; if $m$ is at least $c_k n$, for a constant $c_k$, then we know that with high probability the formula is unsatisfiable; is there an algorithmic way of certifying this unsatisfiability?
One approach we discussed is a reduction to 2SAT, which works provided $m$ is at least of the order of $n^{k-1}$. What about sparser formulas?
Here is another possible reduction. Starting from the formula, construct an hypergraph that has $2n$ vertices and $m$ hyperedges as follows. For every variable $x_i$ we have the two vertices $[x_i=0]$ and $[x_i=1]$, and, for every clause with $k$ variables, we have the hyperedge that connects the $k$ vertices corresponding to the unique assignment to those $k$ variables that contradicts the clause. For example, the clause
$(x_3 \vee \neg x_5 \vee x_6)$
corresponds to the hyperedge
$([x_3=0],[x_5=1],[x_6=0])$.
Now, if the formula is random, we have a random hypergraph. Also, if the formula is satisfiable we have an independent set of size $n$; half as big as the total number of vertices: just take the vertices consistent with the assignment. (An independent set is a set of vertices such that no hyperedge is completely contained in the set.)
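A small sketch of this clause-to-hyperedge translation (my own code; clauses are represented DIMACS-style as tuples of signed integers, so 3 means $x_3$ and -5 means $\neg x_5$):

```python
def formula_to_hypergraph(clauses):
    """Each clause maps to the hyperedge of vertices [x_i = b] describing
    the unique assignment to its variables that falsifies it."""
    hyperedges = []
    for clause in clauses:
        # (x3 OR NOT x5 OR x6) is falsified exactly by x3=0, x5=1, x6=0
        hyperedges.append(frozenset((abs(lit), 0 if lit > 0 else 1)
                                    for lit in clause))
    return hyperedges

def assignment_vertices(assignment):
    """The vertices [x_i = value] consistent with a {var: value} map."""
    return set(assignment.items())

clauses = [(3, -5, 6)]
print(formula_to_hypergraph(clauses))
# [frozenset({(3, 0), (5, 1), (6, 0)})]

# A satisfying assignment gives an independent set: no hyperedge inside it.
assignment = {3: 1, 5: 1, 6: 0}                    # satisfies the clause
S = assignment_vertices(assignment)
assert not any(e <= S for e in formula_to_hypergraph(clauses))
```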
Unfortunately, we don't know of any algorithm for this problem, except for the case of graphs. As I may discuss in a future post, spectral methods allow us to certify that a given random graph with $n$ vertices and average degree $d$ has no independent set larger than about $n/O(\sqrt{d})$. By this, I mean that there is a definition of certificate that, when existing, always correctly proves such an upper bound, and when we pick at random a graph of average degree $d$ there is a high probability that a certificate proving an upper bound of $n/O(\sqrt{d})$ to the size of the largest independent set exists and can be found efficiently.
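To make the flavour of such a spectral certificate concrete, here is a sketch of my own for the $d$-regular case (a simplification of the random-graph setting discussed above): Hoffman's ratio bound certifies $\alpha(G) \le n\,(-\lambda_{\min})/(d-\lambda_{\min})$ from the smallest adjacency eigenvalue, and for a random $d$-regular graph $\lambda_{\min} \approx -2\sqrt{d-1}$, which gives an $n/O(\sqrt{d})$ bound.

```python
import networkx as nx
import numpy as np

n, d = 2000, 16
G = nx.random_regular_graph(d, n, seed=1)
A = nx.to_numpy_array(G)
lam_min = np.linalg.eigvalsh(A)[0]            # smallest adjacency eigenvalue
bound = n * (-lam_min) / (d - lam_min)        # certified upper bound on alpha(G)
print(f"lambda_min ~ {lam_min:.2f}; certified alpha(G) <= {bound:.0f} "
      f"(compare n/sqrt(d) ~ {n / d**0.5:.0f})")
```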
It is too bad that the above reduction produces a graph only when we start from 2SAT, a problem for which we already know how to decide (and hence certify) satisfiability in polynomial time.
But, and here is a great idea of Goerdt and Krivelevich from 2000, we can reduce the problem of certifying non-existence of large independent sets in random hypergraphs to the problem of certifying non-existence of large independent sets in random graphs.
Suppose we have an hypergraph with $n$ vertices and $m$ hyperedges, each involving 4 vertices. Construct now a graph with $n^2$ vertices, one for every pair of vertices in the hypergraph, and for every hyperedge $(a,b,c,d)$ in the hypergraph create the edge $([a,b],[c,d])$ in the graph. (Assume for now that we choose the ordering of vertices at random, even if this means that we only achieve a randomized reduction. There are ways to make the reduction deterministic.) Now, if the hypergraph has an independent set of size $\geq t$ then clearly the graph has an independent set of size $\geq t^2$. Furthermore, if we started from a random hypergraph then we are getting a random graph. So if $m$ is of the order of $n^2$ we are able to refute the claim that there is an independent set of size $n/2$ in the hypergraph (by refuting the claim that there is an independent set of size $n^2 /4$ in the graph).
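A sketch of that pairing construction (my own illustration; I keep the given vertex ordering rather than randomizing it):

```python
from itertools import product

def hypergraph_to_graph(hyperedges_4):
    """Each 4-uniform hyperedge (a, b, c, d) becomes the edge
    {(a, b), (c, d)} on the n^2 'pair' vertices."""
    return {frozenset({(a, b), (c, d)}) for (a, b, c, d) in hyperedges_4}

def pair_vertices(S):
    """An independent set S of size t in the hypergraph yields the t^2
    pair vertices S x S in the derived graph."""
    return set(product(S, S))

hyperedges = [(1, 2, 3, 4), (2, 5, 1, 3)]
edges = hypergraph_to_graph(hyperedges)

S = {1, 2, 5}                      # independent: contains no hyperedge above
P = pair_vertices(S)
assert not any(e <= P for e in edges)   # S x S is independent in the graph
```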
In general, for even $k$, these ideas give a way of refuting a random $k$-SAT instance with $n$ variables and $n^{k/2}$ clauses.
(The original paper of Goerdt and Krivelevich had an extra polylog term needed to make the spectral techniques work. But later more sophisticated analyses have removed the polylog bound, either by using slightly different reductions or by directly improving the bounds on the sparsity of random graphs for which one can certify the non-existence of large independent sets. See this paper by Feige and Ofek for the latter approach.)
Instead of thinking in terms of reductions to graph and hypergraph problems, one may directly see the method as associating a matrix to the formula and proving that certain properties of the matrix imply the unsatisfiability of the formula.
A generalization of this way of seeing the argument has led to an algorithm by Friedman, Goerdt and Krivelevich that certifies unsatisfiability of random kSAT instances with about $n^{k/2}$ clauses even if $k$ is odd. I think it would be interesting to have a combinatorial view of what happens in that algorithm.
This is the state of the art for algorithms that find certificates of unsatisfiability.
There is also some intuition for why $n^{1.5}$ is a natural barrier. The algorithmic techniques described so far are “no more powerful” than semidefinite programming: the standard semidefinite relaxation of Max 2SAT proves that a given 2SAT formula is unsatisfiable, whenever it is the case, and a standard semidefinite programming relaxation of independent set (the Lovasz theta function) proves with high probability that a random graph has no large independent set. It is conjectured, however, that no “simple” reduction of random 3SAT to a semidefinite programming problem yields a refutation if the number of clauses is less than $n^{1.5}$. This has been verified by Feige and Ofek for a natural reduction.
Recently, Feige, Kim and Ofek have defined a new type of witness of unsatisfiability that is verifiable in polynomial time and that exists with high probability for formulas with about $n^{1.4}$ clauses. (It is not known, however, how to construct such witnesses in polynomial time given a formula.) As could be expected, their witness-verification algorithm employs something that we know how to do in polynomial time but that is very hard for semidefinite programs: verifying the unsatisfiability of a given linear system over GF(2).
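That last subroutine is just Gaussian elimination modulo 2. A minimal sketch of it (my own code, with each equation encoded as a variable bitmask plus a right-hand side bit):

```python
def gf2_unsatisfiable(rows):
    """rows: iterable of (bitmask, rhs) pairs, each encoding an equation
    'sum of the selected variables = rhs' over GF(2).  Returns True iff
    elimination derives the contradiction 0 = 1."""
    basis = {}                               # leading bit -> (mask, rhs)
    for mask, rhs in rows:
        while mask:
            p = mask.bit_length() - 1        # current leading variable
            if p not in basis:
                basis[p] = (mask, rhs)
                break
            bmask, brhs = basis[p]
            mask ^= bmask
            rhs ^= brhs
        else:                                # row reduced to 0 = rhs
            if rhs == 1:
                return True
    return False

# x1 + x2 = 0, x2 + x3 = 0, x1 + x3 = 1 is inconsistent over GF(2)
system = [(0b011, 0), (0b110, 0), (0b101, 1)]
print(gf2_unsatisfiable(system))             # True
```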
## Proving unsatisfiability of random kSAT
May 16, 2007 in theory | Tags: SAT | 4 comments
In the previous random kSAT post we saw that for every $k$ there is a constant $c_k$ such that
1. A random kSAT formula with $n$ variables and $m$ clauses is conjectured to be almost surely satisfiable when $m/n < c_k - \epsilon$ and almost surely unsatisfiable when $m/n > c_k + \epsilon$;
2. There is an algorithm that is conjectured to find satisfying assignments with high probability when given a random kSAT formula with $n$ variables and fewer than $(c_k - \epsilon) n$ clauses.
So, conjecturally, the probability of satisfiability of a random kSAT formula has a sudden jump at a certain threshold value of the ratio of clauses to variables, and in the regime where the formula is likely to be satisfiable, the kSAT problem is easy-on-average.
What about the regime where the formula is likely to be unsatisfiable? Is the problem still easy on average? And what would that exactly mean? The natural question about average-case complexity is: is there an efficient algorithm that, in the unsatisfiable regime, finds with high probability a certifiably correct answer? In other words, is there an algorithm that efficiently delivers a proof of unsatisfiability given a random formula with $m$ clauses and $n$ variables, $m > (c_k + \epsilon) n$?
Some non-trivial algorithms, that I am going to describe shortly, find such unsatisfiability proofs but only in regimes of fairly high density. It is also known that certain broad classes of algorithms fail for all constant densities. It is plausible that finding unsatisfiability proofs for random kSAT formulas with any constant density is an intractable problem. If so, its intractability has a number of interesting consequences, as shown by Feige.
A first observation is that if we have an unsatisfiable 2SAT formula then we can easily prove its unsatisfiability, and so we may try to come up with some kind of reduction from 3SAT to 2SAT. In general, this is of course hopeless. But consider a random 3SAT formula $\phi$ with $n$ variables and $10 n^2$ clauses. Now, set $x_1 \leftarrow 0$ in $\phi$, and consider the resulting formula $\phi'$. The variable $x_1$ occurred in about $30 n$ clauses, positively in about $15 n$ of them (which have now become 2SAT clauses in $\phi'$) and negatively in about $15 n$ clauses, which have now disappeared in $\phi'$. Let's look at the 2SAT clauses of $\phi'$: there are about $15 n$ such clauses, they are random, so they are extremely likely to be unsatisfiable, and, if so, we can easily prove that they are. If the 2SAT subset of $\phi'$ is unsatisfiable, then so is $\phi'$, and so we have a proof of unsatisfiability for $\phi'$.
Now set $x_1 \leftarrow 1$ in $\phi$, thus constructing a new formula $\phi''$. As before, the 2SAT part of $\phi''$ is likely to be unsatisfiable, and, if so, its unsatisfiability is easily provable in polynomial time.
Overall, we have that we can prove that $\phi$ is unsatisfiable when setting $x_1 \leftarrow 0$, and also unsatisfiable when setting $x_1 \leftarrow 1$, and so $\phi$ is unsatisfiable.
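A sketch of this refutation strategy in code (my own; clauses are tuples of signed integers, and the 2SAT check is the standard implication-graph criterion: a 2-CNF is satisfiable iff no variable shares a strongly connected component with its negation):

```python
import networkx as nx

def two_sat_satisfiable(two_clauses):
    """Standard 2SAT test via SCCs of the implication graph."""
    G = nx.DiGraph()
    for a, b in two_clauses:
        G.add_edge(-a, b)                    # NOT a  implies  b
        G.add_edge(-b, a)                    # NOT b  implies  a
    comp = {}
    for i, scc in enumerate(nx.strongly_connected_components(G)):
        for lit in scc:
            comp[lit] = i
    variables = {abs(l) for c in two_clauses for l in c}
    return all(comp[v] != comp[-v] for v in variables)

def refute_by_fixing_x1(clauses):
    """True only if fixing x1 = 0 and x1 = 1 both leave an unsatisfiable
    2-CNF among the shrunken clauses, certifying that the 3-CNF is
    unsatisfiable.  (A False answer proves nothing.)"""
    for true_literal in (-1, 1):             # x1 = 0, then x1 = 1
        two = []
        for c in clauses:
            if true_literal in c:
                continue                     # clause already satisfied
            rest = tuple(l for l in c if l != -true_literal)
            if len(rest) == 2:               # clause shrank to width 2
                two.append(rest)
        if two_sat_satisfiable(two):
            return False
    return True
```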
This works when $m$ is about $n^2$ for 3SAT, and when $m$ is about $n^{k-1}$ for kSAT. By fixing $O(\log n)$ variables at a time it is possible to shave another polylog factor. These ideas are due to Beame, Karp, Pitassi, and Saks.
A limitation of this approach is that it produces polynomial-size resolution proofs of unsatisfiability and, in fact, tree-like resolution proofs. It is known that polynomial-size resolution proofs do not exist for random 3SAT formulas with fewer than $n^{1.5-\epsilon}$ clauses, and tree-like resolution proofs do not exist even when the number of clauses is just less than $n^{2-\epsilon}$. This is a limitation that afflicts all backtracking algorithms, and so all approaches of the form “let's fix some variables, then apply the 2SAT algorithm.”
Besides the 2SAT algorithm, what other algorithms do we have to prove that no solution exists for a given problem? There are algorithms for linear and semidefinite programming, and there is Gaussian elimination. We’ll see how they can be applied to random kSAT in the next theory post.
## Random kSAT
May 7, 2007 in theory | Tags: SAT | 9 comments
Pick a random instance of 3SAT by picking at random $m$ of the possible $8 {n\choose 3}$ clauses that can be constructed over $n$ variables. It is easy to see that if one sets $m=cn$, for a sufficiently large constant $c$, then the formula will be unsatisfiable with very high probability (at least $1-2^n \cdot (7/8)^m$), and it is also possible (but less easy) to see that if $c$ is a sufficiently small constant, then the formula is satisfiable with very high probability.
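This model is easy to play with directly. The toy experiment below (my own; $n$ is kept tiny so the satisfiability check can be brute force, which also smears out the transition) samples formulas at several densities $c$ and estimates the probability of satisfiability, previewing the plot asked about in question 1 below.

```python
from itertools import combinations, product
from random import sample, seed

def random_3cnf(n, m):
    """m distinct clauses drawn from the 8 * C(n,3) possible ones."""
    all_clauses = [tuple(v if s else -v for v, s in zip(trip, signs))
                   for trip in combinations(range(1, n + 1), 3)
                   for signs in product((True, False), repeat=3)]
    return sample(all_clauses, m)

def satisfiable(clauses, n):
    """Brute force over all 2^n assignments (only viable for tiny n)."""
    for bits in product((False, True), repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

seed(0)
n, trials = 10, 100
for c in (2, 3, 4, 5, 6):
    m = c * n
    frac = sum(satisfiable(random_3cnf(n, m), n) for _ in range(trials)) / trials
    print(f"density c = {c}: estimated Pr[satisfiable] ~ {frac:.2f}")
```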
A number of questions come to mind:
1. If I plot, for large $n$, the probability that a random 3SAT formula with $n$ variables and $cn$ clauses is satisfiable, against the density $c$, what does the graph look like? We just said the probability is going to be close to 1 for small $c$ and close to $0$ for large $c$, but does it go down smoothly or sharply?
Here the conjecture, supported by experimental evidence, is that the graph looks like a step function: that there is a constant $c_3$ such that the probability of satisfiability is $1-o_n(1)$ for density $< c_3$ and $o_n(1)$ for density $> c_3$. A similar behavior is conjectured for kSAT for all $k$, with the threshold value $c_k$ being dependent on $k$.
Friedgut proved a result that comes quite close to establishing the conjecture.
For, say, 3SAT, the statement of the conjecture is that there is a value $c_3$ such that for every interval size $\epsilon$, every confidence $\delta$ and every sufficiently large $n$, if you pick a 3SAT formula with $(c_3+\epsilon)n$ clauses and $n$ variables, the probability of satisfiability is at most $\delta$, but if you pick a formula with $(c_3-\epsilon)n$ clauses then the probability of satisfiability is at least $1-\delta$.
Friedgut proved that for every $n$ there is a density $c_{3,n}$, such that for every interval size $\epsilon$, every confidence $\delta$ and every sufficiently large $n$, if you pick a 3SAT formula with $(c_{3,n}+\epsilon)n$ clauses and $n$ variables, the probability of satisfiability is at most $\delta$, but if you pick a formula with $(c_{3,n}-\epsilon)n$ clauses then the probability of satisfiability is at least $1-\delta$.
So, for larger and larger $n$, the graph of probability of satisfiability versus density does look more and more like a step function, but Friedgut's proof does not guarantee that the location of the step stays the same. Of course the location is not going to move, but nobody has been able to prove that yet.
Semi-rigorous methods (by which I mean, methods where you make things up as you go along) from statistical physics predict the truth of the conjecture and predict a specific value for $c_3$ (and for $c_k$ for each $k$) that agrees with experiments. It remains a long-term challenge to turn these arguments into a rigorous proof.
For large $k$, work by Achlioptas, Moore, and Peres shows almost matching upper and lower bounds on $c_k$ by a second moment approach. They show that if you pick a random kSAT formula for large $k$ the variance of the number of satisfying assignments of the formula is quite small, and so the formula is likely to be unsatisfiable when the average number of assignments is close to zero (which actually just follows from Markov's inequality), but also the formula is likely to be satisfiable when the average number of assignments is large. Their methods, however, do not improve previous results for 3SAT. Indeed, it is known that the variance is quite large for 3SAT, and the conjectured location of $c_3$ is not the place where the average number of assignments goes from being small to being large. (The conjectured value of $c_3$ is smaller.)
2. Pick a random formula with a density that makes it very likely that the formula is satisfiable: is this a distribution of inputs that makes 3SAT hard-on-average?
Before addressing the question we need to better specify what we mean by hard-on-average (and, complementarily, easy-on-average) in this case. For example, the algorithm that always says “satisfiable” works quite well; over the random choice of the formula, the error probability of the algorithm is extremely small. In such settings, however, what one would like from an algorithm is to produce an actual satisfying assignment. So far, all known lower bounds for $c_3$ are algorithmic, so in the density range in which we rigorously know that a random 3SAT formula is likely to be satisfiable we also know how to produce, with high probability, a satisfying assignment in polynomial time. The results for large $k$, however, are non-constructive and it remains an open question to match them with an algorithmic approach.
The statistical physics methods that suggest the existence of sharp thresholds also inspired an algorithm (the survey propagation algorithm) that, in experiments, efficiently finds satisfying assignments in the full range of density in which 3SAT formulas are believed to be satisfiable with high probability. It is an exciting, but very difficult, question to rigorously analyze the behavior of this algorithm.
3. Pick a random formula with a density that makes it very likely that the formula is unsatisfiable: is this a distribution of inputs that makes 3SAT hard-on-average?
Again, an algorithm that simply says “unsatisfiable” works with high probability. The interesting question, however, is whether there is an algorithm that efficiently and with high probability delivers certificates of unsatisfiability. (Just like the survey propagation algorithm delivers certificates of satisfiability in the density range in which they exist, or so it is conjectured.) This will be the topic of the next post.