In fluid mechanics, hydrostatic equilibrium (also called hydrostatic balance and hydrostasy) is the condition of a fluid or plastic solid at rest, which occurs when external forces, such as gravity, are balanced by a pressure-gradient force.[1] In the planetary physics of Earth, the pressure-gradient force prevents gravity from collapsing the atmosphere of Earth into a thin, dense shell, whereas gravity prevents the pressure-gradient force from diffusing the atmosphere into outer space.[2][3] In general, it is what causes objects in space to be spherical.
Hydrostatic equilibrium is the distinguishing criterion between dwarf planets and small Solar System bodies, and it features in astrophysics and planetary geology. This qualification of equilibrium indicates that the shape of the object is symmetrically rounded, mostly due to rotation, into an ellipsoid, where any irregular surface features are the consequence of a relatively thin solid crust. In addition to the Sun, there are a dozen or so equilibrium objects confirmed to exist in the Solar System.
For a hydrostatic fluid on Earth: $dP = -\rho(P)\,g(h)\,dh$
Newton's laws of motion state that a volume of a fluid that is not in motion, or that is in a state of constant velocity, must have zero net force on it. This means the sum of the forces in a given direction must be opposed by an equal sum of forces in the opposite direction. This force balance is called hydrostatic equilibrium.
The fluid can be split into a large number of cuboid volume elements; by considering a single element, the action of the fluid can be derived.
There are three forces: the force downwards onto the top of the cuboid from the pressure, P, of the fluid above it is, from the definition of pressure, $F_{\text{top}} = -P_{\text{top}} A$. Similarly, the force on the volume element from the pressure of the fluid below pushing upwards is $F_{\text{bottom}} = P_{\text{bottom}} A$.
Finally, the weight of the volume element causes a force downwards. If the density is $\rho$, the volume is V and g is the standard gravity, then $F_{\text{weight}} = -\rho g V$. The volume of this cuboid is equal to the area of the top or bottom times the height, the formula for finding the volume of a cuboid: $F_{\text{weight}} = -\rho g A h$.
By balancing these forces, the total force on the fluid is
$\sum F = F_{\text{bottom}} + F_{\text{top}} + F_{\text{weight}} = P_{\text{bottom}} A - P_{\text{top}} A - \rho g A h.$
This sum equals zero if the fluid's velocity is constant. Dividing by A,
$0 = P_{\text{bottom}} - P_{\text{top}} - \rho g h$
or
$P_{\text{top}} - P_{\text{bottom}} = -\rho g h.$
$P_{\text{top}} - P_{\text{bottom}}$ is a change in pressure, and h is the height of the volume element, a change in the distance above the ground. By saying these changes are infinitesimally small, the equation can be written in differential form:
$dP = -\rho g\,dh.$
Density changes with pressure, and gravity changes with height, so the equation would be:
$dP = -\rho(P)\,g(h)\,dh.$
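The integrated form of this balance is easy to check numerically. As a minimal sketch (assuming an incompressible fluid such as water, with ρ ≈ 1000 kg/m³ and g ≈ 9.81 m/s² taken as illustrative constants), stepping dP = −ρg dh down from the surface should reproduce the closed form P = ρgh:

```python
# Numerically integrate dP = -rho * g * dh for an incompressible fluid
# and compare with the closed form P(depth) = rho * g * depth.
rho = 1000.0   # density of water, kg/m^3 (assumed constant)
g = 9.81       # gravitational acceleration, m/s^2

def pressure_at_depth(depth, steps=100_000):
    """Gauge pressure after descending `depth` metres, by Euler steps."""
    dh = depth / steps
    P = 0.0
    for _ in range(steps):
        P += rho * g * dh   # moving downward, height decreases, pressure rises
    return P

numeric = pressure_at_depth(10.0)   # 10 m under water
analytic = rho * g * 10.0           # 98100 Pa
print(numeric, analytic)
```

Because the integrand is constant here, the Euler sum agrees with ρgh to floating-point accuracy; the step-by-step form matters only once ρ and g vary, as in the equation above.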
Note finally that this last equation can be derived by solving the three-dimensional Navier–Stokes equations for the equilibrium situation where
$u = v = \frac{\partial p}{\partial x} = \frac{\partial p}{\partial y} = 0.$
Then the only non-trivial equation is the z-equation, which now reads
$\frac{\partial p}{\partial z} + \rho g = 0.$
Thus, hydrostatic balance can be regarded as a particularly simple equilibrium solution of the Navier–Stokes equations.
By plugging the energy–momentum tensor for a perfect fluid
$T^{\mu\nu} = \left(\rho c^2 + P\right) u^\mu u^\nu + P g^{\mu\nu}$
into the Einstein field equations
$R_{\mu\nu} = \frac{8\pi G}{c^4}\left(T_{\mu\nu} - \frac{1}{2} g_{\mu\nu} T\right)$
and using the conservation condition
$\nabla_\mu T^{\mu\nu} = 0,$
one can derive the Tolman–Oppenheimer–Volkoff equation for the structure of a static, spherically symmetric relativistic star in isotropic coordinates:
$\frac{dP}{dr} = -\frac{G M(r)\,\rho(r)}{r^2}\left(1 + \frac{P(r)}{\rho(r) c^2}\right)\left(1 + \frac{4\pi r^3 P(r)}{M(r) c^2}\right)\left(1 - \frac{2 G M(r)}{r c^2}\right)^{-1}$
In practice, P and ρ are related by an equation of state of the form f(P, ρ) = 0, with f specific to the makeup of the star. M(r) is a foliation of spheres weighted by the mass density ρ(r), with the largest sphere having radius r:
$M(r) = 4\pi \int_0^r dr'\, r'^2 \rho(r').$
Per standard procedure in taking the nonrelativistic limit, we let c → ∞, so that the factor
$\left(1 + \frac{P(r)}{\rho(r) c^2}\right)\left(1 + \frac{4\pi r^3 P(r)}{M(r) c^2}\right)\left(1 - \frac{2 G M(r)}{r c^2}\right)^{-1} \rightarrow 1.$
Therefore, in the nonrelativistic limit the Tolman–Oppenheimer–Volkoff equation reduces to Newton's hydrostatic equilibrium:
$\frac{dP}{dr} = -\frac{G M(r)\,\rho(r)}{r^2} = -g(r)\,\rho(r) \longrightarrow dP = -\rho(h)\,g(h)\,dh$
(we have made the trivial notation change h = r and have used f(P, ρ) = 0 to express ρ in terms of P).[4] A similar equation can be computed for rotating, axially symmetric stars, which in its gauge-independent form reads:
$\frac{\partial_i P}{P + \rho} - \partial_i \ln u^t + u_t u^\varphi\,\partial_i \frac{u_\varphi}{u_t} = 0$
Unlike the TOV equilibrium equation, these are two equations (for instance, if as usual when treating stars one chooses spherical coordinates as basis coordinates $(t, r, \theta, \varphi)$, the index i runs over the coordinates r and $\theta$).
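The TOV equation can be integrated outward from the center once an equation of state is chosen. The following is a toy sketch only: it assumes a polytropic equation of state P = Kρ², illustrative values of K and the central density in units G = c = 1, and a simple Euler stepper; it is not a realistic stellar model.

```python
# Toy integration of the Tolman-Oppenheimer-Volkoff (TOV) equation,
#   dP/dr = -(rho + P)(M + 4*pi*r^3*P) / (r*(r - 2M)),   dM/dr = 4*pi*r^2*rho,
# with a polytropic equation of state P = K * rho**2, in units G = c = 1.
import math

K = 100.0        # polytropic constant (illustrative, not a real star)
rho_c = 1.28e-3  # central density (illustrative)

def rho_of_P(P):
    return math.sqrt(P / K) if P > 0 else 0.0

def integrate_tov(P_c, dr=0.01):
    r, P, M = dr, P_c, 0.0
    while P > 1e-12 * P_c:
        rho = rho_of_P(P)
        dPdr = -(rho + P) * (M + 4.0 * math.pi * r**3 * P) / (r * (r - 2.0 * M))
        dMdr = 4.0 * math.pi * r**2 * rho
        P += dPdr * dr
        M += dMdr * dr
        r += dr
    return r, M   # stellar radius and gravitational mass (code units)

R, M = integrate_tov(K * rho_c**2)
print(R, M)
```

A sanity check on any such integration is that the star lies outside its own Schwarzschild radius, R > 2GM/c² (here R > 2M).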
Hydrostatic equilibrium pertains to hydrostatics and the principles of equilibrium of fluids. A hydrostatic balance is a particular balance for weighing substances in water; it allows the discovery of their specific gravities. This equilibrium is strictly applicable when an ideal fluid is in steady horizontal laminar flow, and when any fluid is at rest or in vertical motion at constant speed. It can also be a satisfactory approximation when flow speeds are low enough that acceleration is negligible.
From the time of Isaac Newton, much work has been done on the subject of the equilibrium attained when a fluid rotates in space. This has application both to stars and to objects like planets, which may have been fluid in the past or in which the solid material deforms like a fluid when subjected to very high stresses.
In any given layer of a star, there is a hydrostatic equilibrium between the outward-pushing pressure gradient and the weight of the material above pressing inward. One can also study planets under the assumption of hydrostatic equilibrium. A rotating star or planet in hydrostatic equilibrium is usually an oblate spheroid, an ellipsoid in which two of the principal axes are equal and longer than the third.
An example of this phenomenon is the star Vega, which has a rotation period of 12.5 hours. Consequently, Vega is about 20% larger at the equator than from pole to pole.
In his 1687 Philosophiæ Naturalis Principia Mathematica, Newton correctly stated that a rotating fluid of uniform density under the influence of gravity would take the form of a spheroid, and that the gravity (including the effect of centrifugal force) would be weaker at the equator than at the poles by an amount equal (at least asymptotically) to five-fourths the centrifugal force at the equator.[5] In 1742, Colin Maclaurin published his treatise on fluxions, in which he showed that the spheroid was an exact solution. If we designate the equatorial radius by $r_e$, the polar radius by $r_p$, and the eccentricity by $\epsilon$, with
he found that the gravity at the poles is[6]
where G is the gravitational constant, $\rho$ is the (uniform) density, and M is the total mass. The ratio of this to $g_0$, the gravity if the fluid is not rotating, is asymptotic to
as $\epsilon$ goes to zero, where f is the flattening:
The gravitational attraction on the equator (not including centrifugal force) is
Asymptotically, we have:
Maclaurin showed (still in the case of uniform density) that the component of gravity toward the axis of rotation depended only on the distance from the axis and was proportional to that distance, and the component in the direction toward the plane of the equator depended only on the distance from that plane and was proportional to that distance. Newton had already pointed out that the gravity felt on the equator (including the lightening due to centrifugal force) has to be $\frac{r_p}{r_e} g_p$ in order to have the same pressure at the bottom of channels from the pole or from the equator to the centre, so the centrifugal force at the equator must be
Defining the latitude to be the angle between a tangent to the meridian and the axis of rotation, the total gravity felt at latitude $\phi$ (including the effect of centrifugal force) is
This spheroid solution is stable up to a certain (critical) angular momentum (normalized by $M\sqrt{G \rho r_e}$), but in 1834, Carl Jacobi showed that it becomes unstable once the eccentricity reaches 0.81267 (or f reaches 0.3302).
Above the critical value, the solution becomes a Jacobi, or scalene, ellipsoid (one with all three axes different). Henri Poincaré in 1885 found that at still higher angular momentum it will no longer be ellipsoidal but piriform or oviform. The symmetry drops from the 8-fold D2h point group to the 4-fold C2v, with its axis perpendicular to the axis of rotation.[7] Other shapes satisfy the equations beyond that, but are not stable, at least not near the point of bifurcation.[7][8] Poincaré was unsure what would happen at higher angular momentum but concluded that eventually the blob would split into two.
The assumption of uniform density may apply more or less to a molten planet or a rocky planet, but does not apply to a star or to a planet like the Earth, which has a dense metallic core. In 1737, Alexis Clairaut studied the case of density varying with depth.[9] Clairaut's theorem states that the variation of the gravity (including centrifugal force) is proportional to the square of the sine of the latitude, with the proportionality depending linearly on the flattening (f) and the ratio at the equator of centrifugal force to gravitational attraction. (Compare with the exact relation above for the case of uniform density.) Clairaut's theorem is a special case, for an oblate spheroid, of a connexion found later by Pierre-Simon Laplace between the shape and the variation of gravity.[10]
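Clairaut's relation can be illustrated numerically. The sketch below uses the first-order form g(φ) ≈ g_e [1 + (5m/2 − f) sin²φ], where m is the equatorial ratio of centrifugal force to gravitational attraction; the Earth values plugged in (g_e ≈ 9.7803 m/s², f ≈ 1/298.257, m ≈ 0.00346) are approximate figures assumed for illustration, not taken from the text above.

```python
# Clairaut's theorem to first order:
#   g(phi) ≈ g_e * (1 + (5*m/2 - f) * sin(phi)^2)
# with m the centrifugal-to-gravity ratio at the equator, f the flattening.
import math

g_e = 9.7803        # equatorial surface gravity, m/s^2 (approximate)
f = 1.0 / 298.257   # Earth's flattening (approximate)
m = 0.00346         # centrifugal/gravity ratio at the equator (approximate)

def gravity(lat_deg):
    s = math.sin(math.radians(lat_deg))
    return g_e * (1.0 + (2.5 * m - f) * s * s)

print(round(gravity(90.0), 4))   # polar gravity; measured value is about 9.832 m/s^2
```

Even this linearized formula reproduces the observed pole-to-equator gravity difference of roughly 0.05 m/s².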
If the star has a massive nearby companion object, tidal forces come into play as well, distorting the star into a scalene shape where rotation alone would make it a spheroid. An example of this is Beta Lyrae.
Hydrostatic equilibrium is also important for the intracluster medium, where it restricts the amount of fluid that can be present in the core of a cluster of galaxies.
We can also use the principle of hydrostatic equilibrium to estimate the velocity dispersion of dark matter in clusters of galaxies. Only baryonic matter (or, rather, the collisions thereof) emits X-ray radiation. The absolute X-ray luminosity per unit volume takes the form
$\mathcal{L}_X = \Lambda(T_B)\,\rho_B^2$
where $T_B$ and $\rho_B$ are the temperature and density of the baryonic matter, and $\Lambda(T)$ is some function of temperature and fundamental constants. The baryonic density satisfies the above equation $dP = -\rho g\,dr$:
$p_B(r + dr) - p_B(r) = -dr\,\frac{\rho_B(r)\,G}{r^2} \int_0^r 4\pi r^2\,\rho_M(r)\,dr.$
The integral is a measure of the total mass of the cluster, with r being the proper distance to the center of the cluster. Using the ideal gas law $p_B = k T_B \rho_B / m_B$ (k is the Boltzmann constant and $m_B$ is a characteristic mass of the baryonic gas particles) and rearranging, we arrive at
$\frac{d}{dr}\left(\frac{k T_B(r)\,\rho_B(r)}{m_B}\right) = -\frac{\rho_B(r)\,G}{r^2} \int_0^r 4\pi r^2\,\rho_M(r)\,dr.$
Multiplying by $r^2/\rho_B(r)$ and differentiating with respect to r yields
$\frac{d}{dr}\left[\frac{r^2}{\rho_B(r)} \frac{d}{dr}\left(\frac{k T_B(r)\,\rho_B(r)}{m_B}\right)\right] = -4\pi G r^2 \rho_M(r).$
If we make the assumption that cold dark matter particles have an isotropic velocity distribution, the same derivation applies to these particles, and their density $\rho_D = \rho_M - \rho_B$ satisfies the non-linear differential equation
$\frac{d}{dr}\left[\frac{r^2}{\rho_D(r)} \frac{d}{dr}\left(\frac{k T_D(r)\,\rho_D(r)}{m_D}\right)\right] = -4\pi G r^2 \rho_M(r).$
With perfect X-ray and distance data, we could calculate the baryon density at each point in the cluster and thus the dark matter density. We could then calculate the velocity dispersion $\sigma_D^2$ of the dark matter, which is given by
$\sigma_D^2 = \frac{k T_D}{m_D}.$
The central density ratio $\rho_B(0)/\rho_M(0)$ depends on the redshift z of the cluster and is given by
$\rho_B(0)/\rho_M(0) \propto (1 + z)^2 \left(\frac{\theta}{s}\right)^{3/2}$
where $\theta$ is the angular width of the cluster and s the proper distance to the cluster. Values for the ratio range from 0.11 to 0.14 for various surveys.[11]
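In practice the same balance is often inverted to weigh a cluster: rearranging $dP = -\rho g\,dr$ with the ideal gas law gives the standard hydrostatic mass estimate $M(<r) = -\frac{k T r}{G \mu m_p}\left(\frac{d\ln\rho}{d\ln r} + \frac{d\ln T}{d\ln r}\right)$. The sketch below assumes an isothermal beta-model gas profile with hypothetical parameter values (temperature, core radius, β), chosen only to show the order of magnitude:

```python
# Hydrostatic mass estimate for a galaxy cluster from an assumed isothermal
# beta-model gas density profile rho(r) ∝ (1 + (r/r_c)^2)^(-3*beta/2).
# For an isothermal gas, the d(ln T)/d(ln r) term vanishes.
G = 6.674e-11     # m^3 kg^-1 s^-2
k_B = 1.381e-23   # J/K
m_p = 1.673e-27   # kg
kpc = 3.086e19    # m

mu = 0.6          # mean molecular weight of an ionized plasma (assumed)
T = 5e7           # gas temperature in K (hypothetical, ~4 keV)
beta = 0.7        # beta-model slope (hypothetical)
r_c = 200 * kpc   # core radius (hypothetical)

def dlnrho_dlnr(r):
    # logarithmic density slope of the beta model
    return -3.0 * beta * r**2 / (r**2 + r_c**2)

def mass_within(r):
    """Hydrostatic mass inside radius r, in kg."""
    return -(k_B * T * r / (G * mu * m_p)) * dlnrho_dlnr(r)

M_cluster = mass_within(1000 * kpc)
print(f"{M_cluster / 2e30:.2e} solar masses")   # of order 1e14, typical of clusters
```

With these illustrative numbers the enclosed mass comes out at a few times 10¹⁴ solar masses, the expected scale for a rich cluster.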
The concept of hydrostatic equilibrium has also become important in determining whether an astronomical object is a planet, dwarf planet, or small Solar System body. According to the definition of planet adopted by the International Astronomical Union in 2006, one defining characteristic of planets and dwarf planets is that they are objects that have sufficient gravity to overcome their own rigidity and assume hydrostatic equilibrium. Such a body often has the differentiated interior and geology of a world (a planemo), but near-hydrostatic or formerly hydrostatic bodies such as the proto-planet 4 Vesta may also be differentiated, and some hydrostatic bodies (notably Callisto) have not thoroughly differentiated since their formation. Often the equilibrium shape is an oblate spheroid, as is the case with Earth. However, in the case of moons in synchronous orbit, nearly unidirectional tidal forces create a scalene ellipsoid. Also, the purported dwarf planet Haumea is scalene because of its rapid rotation, though it may not currently be in equilibrium.
Icy objects were previously believed to need less mass to attain hydrostatic equilibrium than rocky objects. The smallest object that appears to have an equilibrium shape is the icy moon Mimas at 396 km, while the largest icy object known to have an obviously non-equilibrium shape is the icy moon Proteus at 420 km, and the largest rocky bodies with an obviously non-equilibrium shape are the asteroids Pallas and Vesta at about 520 km. However, Mimas is not actually in hydrostatic equilibrium for its current rotation. The smallest body confirmed to be in hydrostatic equilibrium is the dwarf planet Ceres, which is icy, at 945 km, and the largest known body to have a noticeable deviation from hydrostatic equilibrium is Iapetus, which is made of mostly permeable ice and almost no rock.[12] At 1,469 km, Iapetus is neither spherical nor ellipsoidal; instead, it has a strange walnut-like shape owing to its unique equatorial ridge.[13] Some icy bodies may be in equilibrium at least partly due to a subsurface ocean, which is not the definition of equilibrium used by the IAU (gravity overcoming internal rigid-body forces). Even larger bodies deviate from hydrostatic equilibrium, although they are ellipsoidal: examples are Earth's Moon at 3,474 km (mostly rock)[14] and the planet Mercury at 4,880 km (mostly metal).[15]
In 2024, Kiss et al. found that Quaoar has an ellipsoidal shape incompatible with hydrostatic equilibrium for its current spin. They hypothesised that Quaoar originally had a rapid rotation and was in hydrostatic equilibrium, but that its shape became "frozen in" and did not change as it spun down because of tidal forces from its moon Weywot.[16] If so, this would resemble the situation of Iapetus, which is too oblate for its current spin.[17][18] Iapetus is generally still considered a planetary-mass moon nonetheless,[19] though not always.[20]
Solid bodies have irregular surfaces, but local irregularities may be consistent with global equilibrium. For example, the massive base of the tallest mountain on Earth, Mauna Kea, has deformed and depressed the level of the surrounding crust, so the overall distribution of mass approaches equilibrium.
In the atmosphere, the pressure of the air decreases with increasing altitude. This pressure difference causes an upward force called the pressure-gradient force. The force of gravity balances this out, keeping the atmosphere bound to Earth and maintaining pressure differences with altitude.
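For an isothermal ideal-gas atmosphere, integrating $dP = -\rho g\,dh$ with $\rho = PM/(RT)$ gives the barometric formula $P(h) = P_0 e^{-Mgh/RT}$. The sketch below checks this numerically; constant temperature and constant g are simplifying assumptions, and the sea-level values are standard approximate figures:

```python
import math

# Integrate dP = -rho * g * dh for an isothermal ideal gas (rho = P*M/(R*T))
# and compare with the barometric formula P(h) = P0 * exp(-M*g*h/(R*T)).
M_air = 0.02896   # molar mass of air, kg/mol (approximate)
R = 8.314         # gas constant, J/(mol K)
T = 288.0         # temperature, K (assumed constant with height)
g = 9.81          # m/s^2 (assumed constant with height)
P0 = 101325.0     # sea-level pressure, Pa

def pressure(h, steps=100_000):
    """Pressure at altitude h by Euler-stepping the hydrostatic equation."""
    dh = h / steps
    P = P0
    for _ in range(steps):
        rho = P * M_air / (R * T)
        P -= rho * g * dh
    return P

numeric = pressure(5000.0)                              # 5 km altitude
analytic = P0 * math.exp(-M_air * g * 5000.0 / (R * T))
print(numeric, analytic)   # both about 56 kPa; scale height RT/(Mg) ≈ 8.4 km
```

The agreement shows that the atmosphere's roughly exponential pressure profile is a direct consequence of hydrostatic balance plus the ideal gas law.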
Source: https://en.wikipedia.org/wiki/Hydrostatic_equilibrium
In mathematics, the method of steepest descent or saddle-point method is an extension of Laplace's method for approximating an integral, where one deforms a contour integral in the complex plane to pass near a stationary point (saddle point), in roughly the direction of steepest descent or stationary phase. The saddle-point approximation is used with integrals in the complex plane, whereas Laplace's method is used with real integrals.
The integral to be estimated is often of the form
where C is a contour and λ is large. One version of the method of steepest descent deforms the contour of integration C into a new path of integration C′ so that the following conditions hold:
The method of steepest descent was first published by Debye (1909), who used it to estimate Bessel functions and pointed out that it occurred in an unpublished note by Riemann (1863) about hypergeometric functions. The contour of steepest descent has a minimax property; see Fedoryuk (2001). Siegel (1932) described some other unpublished notes of Riemann, where he used this method to derive the Riemann–Siegel formula.
The method of steepest descent is a method to approximate a complex integral of the form
$I(\lambda) = \int_C f(z)\,e^{\lambda g(z)}\,dz$
for large $\lambda \to \infty$, where f(z) and g(z) are analytic functions of z. Because the integrand is analytic, the contour C can be deformed into a new contour C′ without changing the integral. In particular, one seeks a new contour on which the imaginary part, denoted $\Im(\cdot)$, of $g(z) = \Re[g(z)] + i\,\Im[g(z)]$ is constant ($\Re(\cdot)$ denotes the real part). Then
$I(\lambda) = e^{i\lambda \Im\{g(z)\}} \int_{C'} f(z)\,e^{\lambda \Re\{g(z)\}}\,dz,$
and the remaining integral can be approximated with other methods like Laplace's method.[1]
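As a worked sketch of the leading-order saddle-point approximation, take f(z) = 1 and g(z) = iz − z². The saddle point (g′ = 0) sits at z₀ = i/2, with g(z₀) = −1/4 and g″(z₀) = −2, so the leading term $\sqrt{2\pi/(\lambda|g''(z_0)|)}\,e^{\lambda g(z_0)}$ can be checked against direct numerical integration along the real axis (for this Gaussian example the leading term happens to be exact):

```python
import cmath, math

lam = 8.0
g = lambda z: 1j * z - z * z   # analytic phase function; saddle at z0 = i/2

# Direct numerical integration of I(lam) = ∫ exp(lam*g(x)) dx over the real line.
# The integrand decays like exp(-lam*x^2), so truncating at |x| = 10 is harmless.
n, L = 200_000, 10.0
dx = 2 * L / n
total = sum(cmath.exp(lam * g(-L + (i + 0.5) * dx)) for i in range(n)) * dx

# Leading-order saddle-point term: g(z0) = -1/4, g''(z0) = -2, giving
# sqrt(2*pi/(lam*2)) * exp(-lam/4) = sqrt(pi/lam) * exp(-lam/4).
approx = math.sqrt(math.pi / lam) * math.exp(-lam / 4.0)

print(abs(total - approx) / approx)   # tiny relative difference
```

Note that the original real-axis integrand oscillates; the deformation through z₀ = i/2 trades that oscillation for pure Gaussian decay, which is exactly what the method exploits.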
The method is called the method of steepest descent because for analytic g(z), constant-phase contours are equivalent to steepest-descent contours.
If $g(z) = X(z) + iY(z)$ is an analytic function of $z = x + iy$, it satisfies the Cauchy–Riemann equations
$\frac{\partial X}{\partial x} = \frac{\partial Y}{\partial y} \qquad\text{and}\qquad \frac{\partial X}{\partial y} = -\frac{\partial Y}{\partial x}.$
Then
$\frac{\partial X}{\partial x}\frac{\partial Y}{\partial x} + \frac{\partial X}{\partial y}\frac{\partial Y}{\partial y} = \nabla X \cdot \nabla Y = 0,$
so contours of constant phase are also contours of steepest descent.
Let $f, S : \mathbb{C}^n \to \mathbb{C}$ and $C \subset \mathbb{C}^n$. If
where $\Re(\cdot)$ denotes the real part, and there exists a positive real number $\lambda_0$ such that
then the following estimate holds:[2]
Proof of the simple estimate:
Let x be a complex n-dimensional vector, and
denote the Hessian matrix for a function S(x). If
is a vector function, then its Jacobian matrix is defined as
A non-degenerate saddle point, $z^0 \in \mathbb{C}^n$, of a holomorphic function S(z) is a critical point of the function (i.e., $\nabla S(z^0) = 0$) where the function's Hessian matrix has a non-vanishing determinant (i.e., $\det S''_{zz}(z^0) \neq 0$).
The following is the main tool for constructing the asymptotics of integrals in the case of a non-degenerate saddle point:
The Morse lemma for real-valued functions generalizes as follows[3] for holomorphic functions: near a non-degenerate saddle point $z^0$ of a holomorphic function S(z), there exist coordinates in terms of which $S(z) - S(z^0)$ is exactly quadratic. To make this precise, let S be a holomorphic function with domain $W \subset \mathbb{C}^n$, and let $z^0$ in W be a non-degenerate saddle point of S, that is, $\nabla S(z^0) = 0$ and $\det S''_{zz}(z^0) \neq 0$. Then there exist neighborhoods $U \subset W$ of $z^0$ and $V \subset \mathbb{C}^n$ of $w = 0$, and a bijective holomorphic function $\varphi : V \to U$ with $\varphi(0) = z^0$ such that
Here, the $\mu_j$ are the eigenvalues of the matrix $S''_{zz}(z^0)$.
The following proof is a straightforward generalization of the proof of the real Morse lemma, which can be found in [4]. We begin by demonstrating
From the identity
we conclude that
and
Without loss of generality, we translate the origin to $z^0$, such that $z^0 = 0$ and $S(0) = 0$. Using the Auxiliary Statement, we have
Since the origin is a saddle point,
we can also apply the Auxiliary Statement to the functions $g_i(z)$ and obtain
Recall that an arbitrary matrix A can be represented as a sum of symmetric $A^{(s)}$ and anti-symmetric $A^{(a)}$ matrices,
The contraction of any symmetric matrix B with an arbitrary matrix A is
i.e., the anti-symmetric component of A does not contribute because
Thus, $h_{ij}(z)$ in equation (1) can be assumed to be symmetric with respect to the interchange of the indices i and j. Note that
hence, $\det(h_{ij}(0)) \neq 0$ because the origin is a non-degenerate saddle point.
Let us show by induction that there are local coordinates $u = (u_1, \ldots, u_n)$, $z = \psi(u)$, $0 = \psi(0)$, such that
First, assume that there exist local coordinates $y = (y_1, \ldots, y_n)$, $z = \varphi(y)$, $0 = \varphi(0)$, such that
where $H_{ij}$ is symmetric due to equation (2). By a linear change of the variables $(y_r, \ldots, y_n)$, we can ensure that $H_{rr}(0) \neq 0$. From the chain rule, we have
Therefore:
whence,
The matrix $(H_{ij}(0))$ can be recast in the Jordan normal form: $(H_{ij}(0)) = LJL^{-1}$, where L gives the desired non-singular linear transformation and the diagonal of J contains the non-zero eigenvalues of $(H_{ij}(0))$. Since $H_{rr}(0) \neq 0$, due to continuity of $H_{rr}(y)$ it must also be non-vanishing in some neighborhood of the origin. Having introduced $\tilde{H}_{ij}(y) = H_{ij}(y)/H_{rr}(y)$, we write
Motivated by the last expression, we introduce new coordinates $z = \eta(x)$, $0 = \eta(0)$,
The change of the variables $y \leftrightarrow x$ is locally invertible since the corresponding Jacobian is non-zero,
Therefore,
Comparing equations (4) and (5), we conclude that equation (3) is verified. Denoting the eigenvalues of $S''_{zz}(0)$ by $\mu_j$, equation (3) can be rewritten as
Therefore,
From equation (6), it follows that $\det S''_{ww}(\varphi(0)) = \mu_1 \cdots \mu_n$. The Jordan normal form of $S''_{zz}(0)$ reads $S''_{zz}(0) = P J_z P^{-1}$, where $J_z$ is an upper diagonal matrix containing the eigenvalues and $\det P \neq 0$; hence, $\det S''_{zz}(0) = \mu_1 \cdots \mu_n$. We obtain from equation (7)
If $\det \varphi'_w(0) = -1$, then interchanging two variables assures that $\det \varphi'_w(0) = +1$.
Assume
Then, the following asymptotic holds
where $\mu_j$ are the eigenvalues of the Hessian $S''_{xx}(x^0)$ and $(-\mu_j)^{-\frac{1}{2}}$ are defined with arguments
This statement is a special case of more general results presented in Fedoryuk (1987).[5]
First, we deform the contour $I_x$ into a new contour $I'_x \subset \Omega_x$ passing through the saddle point $x^0$ and sharing the boundary with $I_x$. This deformation does not change the value of the integral I(λ). We employ the complex Morse lemma to change the variables of integration. According to the lemma, the function $\varphi(w)$ maps a neighborhood $x^0 \in U \subset \Omega_x$ onto a neighborhood $\Omega_w$ containing the origin. The integral I(λ) can be split into two: $I(\lambda) = I_0(\lambda) + I_1(\lambda)$, where $I_0(\lambda)$ is the integral over $U \cap I'_x$, while $I_1(\lambda)$ is over $I'_x \setminus (U \cap I'_x)$ (i.e., the remaining part of the contour $I'_x$). Since the latter region does not contain the saddle point $x^0$, the value of $I_1(\lambda)$ is exponentially smaller than $I_0(\lambda)$ as λ → ∞;[6] thus, $I_1(\lambda)$ is ignored. Introducing the contour $I_w$ such that $U \cap I'_x = \varphi(I_w)$, we have
Recalling that $x^0 = \varphi(0)$ as well as $\det \varphi'_w(0) = 1$, we expand the pre-exponential function $f[\varphi(w)]$ into a Taylor series and keep just the leading zero-order term
Here, we have substituted the integration region $I_w$ by $\mathbb{R}^n$ because both contain the origin, which is a saddle point, hence they are equal up to an exponentially small term.[7] The integrals on the r.h.s. of equation (11) can be expressed as
From this representation, we conclude that condition (9) must be satisfied in order for the r.h.s. and l.h.s. of equation (12) to coincide. According to assumption 2, $\Re\left(S''_{xx}(x^0)\right)$ is a negatively defined quadratic form (viz., $\Re(\mu_j) < 0$), implying the existence of the integral $\mathcal{I}_j$, which is readily calculated.
Equation (8) can also be written as
where the branch of
is selected as follows
Consider important special cases:
If the function S(x) has multiple isolated non-degenerate saddle points, i.e.,
where
is an open cover of $\Omega_x$, then the calculation of the integral asymptotic is reduced to the case of a single saddle point by employing the partition of unity. The partition of unity allows us to construct a set of continuous functions $\rho_k(x) : \Omega_x \to [0, 1]$, $1 \leq k \leq K$, such that
Whence,
Therefore, as λ → ∞ we have:
where equation (13) was utilized at the last stage, and the pre-exponential function f(x) must at least be continuous.
When $\nabla S(z^0) = 0$ and $\det S''_{zz}(z^0) = 0$, the point $z^0 \in \mathbb{C}^n$ is called a degenerate saddle point of the function S(z).
Calculating the asymptotic of
when λ → ∞, f(x) is continuous, and S(z) has a degenerate saddle point, is a very rich problem, whose solution heavily relies on catastrophe theory. Here, catastrophe theory replaces the Morse lemma, valid only in the non-degenerate case, to transform the function S(z) into one of a multitude of canonical representations. For further details see, e.g., Poston & Stewart (1978) and Fedoryuk (1987).
Integrals with degenerate saddle points naturally appear in many applications, including optical caustics and the multidimensional WKB approximation in quantum mechanics.
The other cases, such as when f(x) and/or S(x) are discontinuous or when an extremum of S(x) lies at the integration region's boundary, require special care (see, e.g., Fedoryuk (1987) and Wong (1989)).
An extension of the steepest descent method is the so-called nonlinear stationary phase/steepest descent method. Here, instead of integrals, one needs to evaluate asymptotically the solutions of Riemann–Hilbert factorization problems.
Given a contour C in the complex sphere, a function f defined on that contour, and a special point, say infinity, one seeks a function M holomorphic away from the contour C, with prescribed jump across C, and with a given normalization at infinity. If f and hence M are matrices rather than scalars, this is a problem that in general does not admit an explicit solution.
An asymptotic evaluation is then possible along the lines of the linear stationary phase/steepest descent method. The idea is to reduce asymptotically the solution of the given Riemann–Hilbert problem to that of a simpler, explicitly solvable, Riemann–Hilbert problem. Cauchy's theorem is used to justify deformations of the jump contour.
The nonlinear stationary phase was introduced by Deift and Zhou in 1993, based on earlier work of the Russian mathematician Alexander Its. A (properly speaking) nonlinear steepest descent method was introduced by Kamvissis, K. McLaughlin and P. Miller in 2003, based on previous work of Lax, Levermore, Deift, Venakides and Zhou. As in the linear case, steepest descent contours solve a min-max problem. In the nonlinear case they turn out to be "S-curves" (defined in a different context back in the 80s by Stahl, Gonchar and Rakhmanov).
The nonlinear stationary phase/steepest descent method has applications to the theory of soliton equations and integrable models, random matrices and combinatorics.
Another extension is the method of Chester–Friedman–Ursell for coalescing saddle points and uniform asymptotic expansions.
Source: https://en.wikipedia.org/wiki/Saddle-point_method
In mathematics, Laplace's method, named after Pierre-Simon Laplace, is a technique used to approximate integrals of the form
where f is a twice-differentiable function, M is a large number, and the endpoints a and b could be infinite. This technique was originally presented in the book by Laplace (1774).
In Bayesian statistics, Laplace's approximation can refer to either approximating the posterior normalizing constant with Laplace's method or approximating the posterior distribution with a Gaussian centered at the maximum a posteriori estimate.[1][2] Laplace approximations are used in the integrated nested Laplace approximations method for fast approximations of Bayesian inference.
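The normalizing-constant use can be sketched on a conjugate example where the exact answer is known: for an un-normalized Beta-shaped posterior $\theta^a (1-\theta)^b$, the exact integral is the Beta function B(a+1, b+1), while Laplace's method expands the log-density around its mode. The specific counts below are hypothetical:

```python
import math

# Laplace approximation to the normalizing constant Z = ∫ θ^a (1-θ)^b dθ,
# whose exact value is the Beta function B(a+1, b+1).
a, b = 20.0, 20.0   # hypothetical success/failure counts

logp = lambda t: a * math.log(t) + b * math.log(1.0 - t)
mode = a / (a + b)                         # maximizer of logp
d2 = -a / mode**2 - b / (1.0 - mode)**2    # second derivative of logp at the mode

# Laplace: Z ≈ exp(logp(mode)) * sqrt(2*pi / |logp''(mode)|)
Z_laplace = math.exp(logp(mode)) * math.sqrt(2.0 * math.pi / -d2)
# Exact: B(a+1, b+1) = Γ(a+1)Γ(b+1)/Γ(a+b+2), computed in log space
Z_exact = math.exp(math.lgamma(a + 1) + math.lgamma(b + 1) - math.lgamma(a + b + 2))

print(Z_laplace / Z_exact)   # close to 1; error shrinks as the counts grow
```

This is exactly the Gaussian-at-the-mode picture described above: for large counts the posterior is sharply peaked, and the quadratic expansion of the log-density captures almost all of the integral.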
Let the function f(x) have a unique global maximum at $x_0$, and let $M > 0$ be a constant. The following two functions are considered:
Then $x_0$ is the global maximum of g and h as well. Hence:
As M increases, the ratio for h will grow exponentially, while the ratio for g does not change. Thus, significant contributions to the integral of this function will come only from points x in a neighborhood of $x_0$, which can then be estimated.
To state and motivate the method, one must make several assumptions. It is assumed that $x_0$ is not an endpoint of the interval of integration and that the values f(x) cannot be very close to $f(x_0)$ unless x is close to $x_0$.
f(x) can be expanded around $x_0$ by Taylor's theorem,
where $R = O\left((x - x_0)^3\right)$ (see: big O notation).
Since f has a global maximum at $x_0$, and $x_0$ is not an endpoint, it is a stationary point, i.e. $f'(x_0) = 0$. Therefore, the second-order Taylor polynomial approximating f(x) is
Then, just one more step is needed to get a Gaussian distribution. Sincex0{\displaystyle x_{0}}is a global maximum of the functionf{\displaystyle f}it can be stated, by definition of thesecond derivative, thatf″(x0)≤0{\displaystyle f''(x_{0})\leq 0}, thus giving the relation
f(x)≈f(x0)−12|f″(x0)|(x−x0)2{\displaystyle f(x)\approx f(x_{0})-{\frac {1}{2}}|f''(x_{0})|(x-x_{0})^{2}}
forx{\displaystyle x}close tox0{\displaystyle x_{0}}. The integral can then be approximated with:
Iff″(x0)<0{\displaystyle f''(x_{0})<0}this latter integral becomes aGaussian integralif we replace the limits of integration by−∞{\displaystyle -\infty }and+∞{\displaystyle +\infty }; whenM{\displaystyle M}is large this creates only a small error because the exponential decays very fast away fromx0{\displaystyle x_{0}}. Computing this Gaussian integral we obtain:
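The resulting leading-order formula, ∫ab eMf(x) dx ≈ √(2π/(M|f″(x0)|)) eMf(x0), can be checked numerically. A minimal sketch (the test function f(x) = sin x on [0, π], with maximum at x0 = π/2 and f″(x0) = −1, is an illustrative assumption, not from the text):

```python
import math

def laplace_approx(f, fpp_x0, x0, M):
    # Laplace's estimate sqrt(2*pi / (M*|f''(x0)|)) * exp(M*f(x0))
    return math.sqrt(2 * math.pi / (M * abs(fpp_x0))) * math.exp(M * f(x0))

def numeric_integral(f, a, b, M, n=200_000):
    # midpoint rule for the integral of exp(M*f(x)) over [a, b]
    h = (b - a) / n
    return sum(math.exp(M * f(a + (i + 0.5) * h)) for i in range(n)) * h

M = 50.0
exact = numeric_integral(math.sin, 0.0, math.pi, M)
approx = laplace_approx(math.sin, -1.0, math.pi / 2, M)
rel_err = abs(exact - approx) / exact
print(rel_err)  # relative error of order 1/M: small for M = 50
```

As the derivation predicts, the relative (not absolute) error shrinks as M grows.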
A generalization of this method and extension to arbitrary precision is provided by the bookFog (2008).
Supposef(x){\displaystyle f(x)}is a twice continuously differentiable function on[a,b],{\displaystyle [a,b],}and there exists a unique pointx0∈(a,b){\displaystyle x_{0}\in (a,b)}such that:
Then:
Lower bound:Letε>0{\displaystyle \varepsilon >0}. Sincef″{\displaystyle f''}is continuous there existsδ>0{\displaystyle \delta >0}such that if|x0−c|<δ{\displaystyle |x_{0}-c|<\delta }thenf″(c)≥f″(x0)−ε.{\displaystyle f''(c)\geq f''(x_{0})-\varepsilon .}ByTaylor's Theorem, for anyx∈(x0−δ,x0+δ),{\displaystyle x\in (x_{0}-\delta ,x_{0}+\delta ),}
Then we have the following lower bound:
where the last equality was obtained by a change of variables
Recall thatf″(x0)<0{\displaystyle f''(x_{0})<0}, so we can take the square root of its negation.
If we divide both sides of the above inequality by
and take the limit we get:
since this is true for arbitraryε{\displaystyle \varepsilon }we get the lower bound:
Note that this proof works also whena=−∞{\displaystyle a=-\infty }orb=∞{\displaystyle b=\infty }(or both).
Upper bound:The proof is similar to that of the lower bound but there are a few inconveniences. Again we start by picking anε>0{\displaystyle \varepsilon >0}but in order for the proof to work we needε{\displaystyle \varepsilon }small enough so thatf″(x0)+ε<0.{\displaystyle f''(x_{0})+\varepsilon <0.}Then, as above, by continuity off″{\displaystyle f''}andTaylor's Theoremwe can findδ>0{\displaystyle \delta >0}so that if|x−x0|<δ{\displaystyle |x-x_{0}|<\delta }, then
Lastly, by our assumptions (assuminga,b{\displaystyle a,b}are finite) there exists anη>0{\displaystyle \eta >0}such that if|x−x0|≥δ{\displaystyle |x-x_{0}|\geq \delta }, thenf(x)≤f(x0)−η{\displaystyle f(x)\leq f(x_{0})-\eta }.
Then we can calculate the following upper bound:
If we divide both sides of the above inequality by
and take the limit we get:
Sinceε{\displaystyle \varepsilon }is arbitrary we get the upper bound:
And combining this with the lower bound gives the result.
Note that the above proof obviously fails whena=−∞{\displaystyle a=-\infty }orb=∞{\displaystyle b=\infty }(or both). To deal with these cases, we need some extra assumptions. A sufficient (not necessary) assumption is that forn=1,{\displaystyle n=1,}
and that the numberη{\displaystyle \eta }as above exists (note that this must be an assumption in the case when the interval[a,b]{\displaystyle [a,b]}is infinite). The proof proceeds otherwise as above, but with a slightly different approximation of integrals:
When we divide by
we get for this term
whose limit asn→∞{\displaystyle n\to \infty }is0{\displaystyle 0}. The rest of the proof (the analysis of the interesting term) proceeds as above.
The given condition in the infinite interval case is, as said above, sufficient but not necessary. However, the condition is fulfilled in many, if not in most, applications: the condition simply says that the integral we are studying must be well-defined (not infinite) and that the maximum of the function atx0{\displaystyle x_{0}}must be a "true" maximum (the numberη>0{\displaystyle \eta >0}must exist). There is no need to demand that the integral is finite forn=1{\displaystyle n=1}but it is enough to demand that the integral is finite for somen=N.{\displaystyle n=N.}
This method relies on four basic concepts:
The “approximation” in this method is related to therelative errorand not theabsolute error. Therefore, if we set
the integral can be written as
wheres{\displaystyle s}is small whenM{\displaystyle M}is large, and the relative error is
Now, separate this integral into two parts: they∈[−Dy,Dy]{\displaystyle y\in [-D_{y},D_{y}]}region and the rest.
Consider theTaylor expansionofM(f(x)−f(x0)){\displaystyle M(f(x)-f(x_{0}))}aroundx0{\displaystyle x_{0}}, translated fromx{\displaystyle x}toy{\displaystyle y}since the comparison is done in y-space:
Note thatf′(x0)=0{\displaystyle f'(x_{0})=0}becausex0{\displaystyle x_{0}}is a stationary point. This equation shows that the terms of order higher than the second derivative in this Taylor expansion are suppressed to the order of1M{\displaystyle {\tfrac {1}{\sqrt {M}}}}, so thatexp(M(f(x)−f(x0))){\displaystyle \exp(M(f(x)-f(x_{0})))}approaches theGaussian functionas shown in the figure. Moreover,
Because the comparison is done in y-space,y{\displaystyle y}is fixed iny∈[−Dy,Dy]{\displaystyle y\in [-D_{y},D_{y}]}, which corresponds tox∈[−sDy,sDy]{\displaystyle x\in [-sD_{y},sD_{y}]}; sinces{\displaystyle s}is inversely proportional toM{\displaystyle {\sqrt {M}}}, the chosen region ofx{\displaystyle x}shrinks asM{\displaystyle M}increases.
By the third concept, even if we chooseDy{\displaystyle D_{y}}very large,sDyeventually becomes very small asM{\displaystyle M}increases. How, then, can we guarantee that the integral over the rest of the domain tends to 0 whenM{\displaystyle M}is large enough?
The basic idea is to find a functionm(x){\displaystyle m(x)}such thatm(x)≥f(x){\displaystyle m(x)\geq f(x)}on the rest region and such that the integral ofeMm(x){\displaystyle e^{Mm(x)}}tends to zero whenM{\displaystyle M}grows. SinceeMf(x)≤eMm(x){\displaystyle e^{Mf(x)}\leq e^{Mm(x)}}, the integral ofeMf(x){\displaystyle e^{Mf(x)}}over that region then tends to zero as well. For simplicity, choosem(x){\displaystyle m(x)}as atangentthrough the pointx=sDy{\displaystyle x=sD_{y}}as shown in the figure:
If the interval of integration is finite, then sincef(x){\displaystyle f(x)}is continuous on the rest region, it will always be smaller than them(x){\displaystyle m(x)}shown above whenM{\displaystyle M}is large enough. (It will be proved later that the integral ofeMm(x){\displaystyle e^{Mm(x)}}tends to zero whenM{\displaystyle M}is large enough.)
If the interval of integration is infinite,m(x){\displaystyle m(x)}andf(x){\displaystyle f(x)}may keep crossing each other. If so, there is no guarantee that the integral ofeMf(x){\displaystyle e^{Mf(x)}}tends to zero. For example, in the case off(x)=sin(x)x,{\displaystyle f(x)={\tfrac {\sin(x)}{x}},}∫0∞eMf(x)dx{\displaystyle \int _{0}^{\infty }e^{Mf(x)}dx}always diverges. Therefore, we require that∫d∞eMf(x)dx{\displaystyle \int _{d}^{\infty }e^{Mf(x)}dx}converges in the infinite interval case. If so, this integral tends to zero whend{\displaystyle d}is large enough, and we can choose thisd{\displaystyle d}as the crossing ofm(x){\displaystyle m(x)}andf(x).{\displaystyle f(x).}
Why not simply require that∫d∞ef(x)dx{\displaystyle \int _{d}^{\infty }e^{f(x)}dx}converges? An example shows the reason: suppose the rest part off(x){\displaystyle f(x)}is−lnx,{\displaystyle -\ln x,}thenef(x)=1x{\displaystyle e^{f(x)}={\tfrac {1}{x}}}and its integral diverges; however, whenM=2,{\displaystyle M=2,}the integral ofeMf(x)=1x2{\displaystyle e^{Mf(x)}={\tfrac {1}{x^{2}}}}converges. So the integral of some functions diverges whenM{\displaystyle M}is not large, yet converges whenM{\displaystyle M}is large enough.
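This point can be made concrete with the closed forms of these tail integrals (a small illustrative sketch; the lower limit 1 is an arbitrary assumption):

```python
import math

def tail(M, U):
    # integral of x^(-M) over [1, U] in closed form;
    # here e^(M*f(x)) = x^(-M) with f(x) = -ln(x)
    return math.log(U) if M == 1 else (1 - U ** (1 - M)) / (M - 1)

print(tail(1, 1e3), tail(1, 1e9))  # grows without bound as U increases: divergent
print(tail(2, 1e3), tail(2, 1e9))  # approaches 1: convergent once M = 2
```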
Based on these four concepts, we can derive the relative error of this method.
Laplace's approximation is sometimes written as
whereh{\displaystyle h}is positive.
Importantly, the accuracy of the approximation depends on the variable of integration, that is, on what stays ing(x){\displaystyle g(x)}and what goes intoh(x).{\displaystyle h(x).}[3]
First, usex0=0{\displaystyle x_{0}=0}to denote the global maximum, which will simplify this derivation. We are interested in the relative error, written as|R|{\displaystyle |R|},
where
So, if we let
andA0≡e−πy2{\displaystyle A_{0}\equiv e^{-\pi y^{2}}}, we can get
since∫−∞∞A0dy=1{\displaystyle \int _{-\infty }^{\infty }A_{0}\,dy=1}.
For the upper bound, note that|A+B|≤|A|+|B|,{\displaystyle |A+B|\leq |A|+|B|,}thus we can separate this integration into 5 parts with 3 different types (a), (b) and (c), respectively. Therefore,
where(a1){\displaystyle (a_{1})}and(a2){\displaystyle (a_{2})}are similar, so only(a1){\displaystyle (a_{1})}is calculated;(b1){\displaystyle (b_{1})}and(b2){\displaystyle (b_{2})}are likewise similar, so only(b1){\displaystyle (b_{1})}is calculated.
For(a1){\displaystyle (a_{1})}, after the substitutionz≡πy2{\displaystyle z\equiv \pi y^{2}}, we can get
This means that as long asDy{\displaystyle D_{y}}is large enough, it will tend to zero.
For(b1){\displaystyle (b_{1})}, we can get
where
andh(x){\displaystyle h(x)}should have the same sign ash(0){\displaystyle h(0)}on this region. Let us choosem(x){\displaystyle m(x)}as the tangent through the point atx=sDy{\displaystyle x=sD_{y}}, i.e.m(sy)=g(sDy)−g(0)+g′(sDy)(sy−sDy){\displaystyle m(sy)=g(sD_{y})-g(0)+g'(sD_{y})\left(sy-sD_{y}\right)}which is shown in the figure
From this figure one can see that whens{\displaystyle s}orDy{\displaystyle D_{y}}gets smaller, the region satisfying the above inequality gets larger. Therefore, to find a suitablem(x){\displaystyle m(x)}covering the whole off(x){\displaystyle f(x)}on the interval of(b1){\displaystyle (b_{1})},Dy{\displaystyle D_{y}}has an upper limit. Also, because the integral ofe−αx{\displaystyle e^{-\alpha x}}is simple, we use it to estimate the relative error contributed by(b1){\displaystyle (b_{1})}.
Based on Taylor expansion, we can get
and
and then substitute them back into the calculation of(b1){\displaystyle (b_{1})}; the remainders of these two expansions are both inversely proportional to the square root ofM{\displaystyle M}, so they are dropped here to simplify the calculation (keeping them gives a sharper bound but makes the formulas unwieldy).
Therefore, it will tend to zero whenDy{\displaystyle D_{y}}gets larger, but don't forget that the upper bound ofDy{\displaystyle D_{y}}should be considered during this calculation.
About the integration nearx=0{\displaystyle x=0}, we can also useTaylor's Theoremto calculate it. Whenh′(0)≠0{\displaystyle h'(0)\neq 0}
and one can see that it is inversely proportional to the square root ofM{\displaystyle M}. In fact,(c){\displaystyle (c)}behaves the same way whenh(x){\displaystyle h(x)}is a constant.
In conclusion, the integral near the stationary point gets smaller asM{\displaystyle {\sqrt {M}}}gets larger, and the remaining parts tend to zero as long asDy{\displaystyle D_{y}}is large enough; note, however, thatDy{\displaystyle D_{y}}has an upper limit, decided by whether the functionm(x){\displaystyle m(x)}is always larger thang(x)−g(0){\displaystyle g(x)-g(0)}in the rest region. As long as onem(x){\displaystyle m(x)}satisfying this condition can be found, the upper bound ofDy{\displaystyle D_{y}}can be chosen directly proportional toM{\displaystyle {\sqrt {M}}}, sincem(x){\displaystyle m(x)}is a tangent through the point ofg(x)−g(0){\displaystyle g(x)-g(0)}atx=sDy{\displaystyle x=sD_{y}}. So, the biggerM{\displaystyle M}is, the biggerDy{\displaystyle D_{y}}can be.
In the multivariate case, wherex{\displaystyle \mathbf {x} }is ad{\displaystyle d}-dimensional vector andf(x){\displaystyle f(\mathbf {x} )}is a scalar function ofx{\displaystyle \mathbf {x} }, Laplace's approximation is usually written as:
whereH(f)(x0){\displaystyle H(f)(\mathbf {x} _{0})}is theHessian matrixoff{\displaystyle f}evaluated atx0{\displaystyle \mathbf {x} _{0}}and where|⋅|{\displaystyle |\cdot |}denotesmatrix determinant. Analogously to the univariate case, the Hessian is required to benegative-definite.[4]
Note that althoughx{\displaystyle \mathbf {x} }denotes ad{\displaystyle d}-dimensional vector, the termdx{\displaystyle d\mathbf {x} }here denotes aninfinitesimalvolumeelement, i.e.dx:=dx1dx2⋯dxd{\displaystyle d\mathbf {x} :=dx_{1}dx_{2}\cdots dx_{d}}.
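As a sanity check, for a purely quadratic f the multivariate formula (2π/M)^{d/2} |−H(f)(x0)|^{−1/2} e^{Mf(x0)} is exact, since the integrand is then exactly Gaussian. A minimal sketch, where the quadratic f(x, y) = −(x² + 2y²)/2 is an illustrative assumption:

```python
import math

def multivariate_laplace(M, f_x0, neg_hess_det, d):
    # (2*pi/M)^(d/2) * det(-H)^(-1/2) * exp(M*f(x0))
    return (2 * math.pi / M) ** (d / 2) / math.sqrt(neg_hess_det) * math.exp(M * f_x0)

# f(x, y) = -(x^2 + 2*y^2)/2: maximum at the origin, Hessian diag(-1, -2), det(-H) = 2
M = 10.0
approx = multivariate_laplace(M, f_x0=0.0, neg_hess_det=2.0, d=2)
# For this quadratic f the integral factors into two 1-D Gaussian integrals:
exact = math.sqrt(2 * math.pi / M) * math.sqrt(2 * math.pi / (2 * M))
print(approx, exact)  # the two values agree
```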
In extensions of Laplace's method,complex analysis, and in particularCauchy's integral formula, is used to find a contourof steepest descentfor an (asymptotically with largeM) equivalent integral, expressed as aline integral. In particular, if no pointx0where the derivative off{\displaystyle f}vanishes exists on the real line, it may be necessary to deform the integration contour to an optimal one, where the above analysis will be possible. Again, the main idea is to reduce, at least asymptotically, the calculation of the given integral to that of a simpler integral that can be explicitly evaluated. See the book of Erdelyi (1956) for a simple discussion (where the method is termedsteepest descents).
The appropriate formulation for the complexz-plane is
for a path passing through the saddle point atz0. Note the explicit appearance of a minus sign to indicate the direction of the second derivative: one mustnottake the modulus. Also note that if the integrand ismeromorphic, one may have to add residues corresponding to poles traversed while deforming the contour (see for example section 3 of Okounkov's paperSymmetric functions and random partitions).
An extension of thesteepest descent methodis the so-callednonlinear stationary phase/steepest descent method. Here, instead of integrals, one needs to evaluate asymptotically solutions ofRiemann–Hilbert factorization problems.
Given a contourCin thecomplex sphere, a functionf{\displaystyle f}defined on that contour and a special point, such as infinity, a holomorphic functionMis sought away fromC, with prescribed jump acrossC, and with a given normalization at infinity. Iff{\displaystyle f}and henceMare matrices rather than scalars this is a problem that in general does not admit an explicit solution.
An asymptotic evaluation is then possible along the lines of the linear stationary phase/steepest descent method. The idea is to reduce asymptotically the solution of the given Riemann–Hilbert problem to that of a simpler, explicitly solvable, Riemann–Hilbert problem. Cauchy's theorem is used to justify deformations of the jump contour.
The nonlinear stationary phase was introduced by Deift and Zhou in 1993, based on earlier work of Its. A (properly speaking) nonlinear steepest descent method was introduced by Kamvissis, K. McLaughlin and P. Miller in 2003, based on previous work of Lax, Levermore, Deift, Venakides and Zhou. As in the linear case, "steepest descent contours" solve a min-max problem. In the nonlinear case they turn out to be "S-curves" (defined in a different context back in the 80s by Stahl, Gonchar and Rakhmanov).
The nonlinear stationary phase/steepest descent method has applications to the theory of soliton equations andintegrable models,random matricesandcombinatorics.
In the generalization, evaluation of the integral is considered equivalent to finding the norm of the distribution with density
Denoting the cumulative distributionF(x){\displaystyle F(x)}, if there is a diffeomorphicGaussian distributionwith density
the norm is given by
and the correspondingdiffeomorphismis
whereΦ{\displaystyle \Phi }denotes cumulative standardnormal distributionfunction.
In general, any distribution diffeomorphic to the Gaussian distribution has density
and themedian-point is mapped to the median of the Gaussian distribution. Matching the logarithm of the density functions and their derivatives at the median point up to a given order yields a system of equations that determine the approximate values ofγ{\displaystyle \gamma }andg{\displaystyle g}.
The approximation was introduced in 2019 by D. Makogon and C. Morais Smith primarily in the context ofpartition functionevaluation for a system of interacting fermions.[5]
For complex integrals in the form:
witht≫1,{\displaystyle t\gg 1,}we make the substitutiont=iuand the change of variables=c+ix{\displaystyle s=c+ix}to get the bilateral Laplace transform:
We then splitg(c+ix){\displaystyle g(c+ix)}into its real and imaginary parts, after which we recoveru=t/i{\displaystyle u=t/i}. This is useful forinverse Laplace transforms, thePerron formulaand complex integration.
Laplace's method can be used to deriveStirling's approximation
for a largeintegerN. From the definition of theGamma function, we have
Now we change variables, lettingx=Nz{\displaystyle x=Nz}so thatdx=Ndz.{\displaystyle dx=Ndz.}Plug these values back in to obtain
This integral has the form necessary for Laplace's method with
which is twice-differentiable:
The maximum off(z){\displaystyle f(z)}lies atz0= 1, and the second derivative off(z){\displaystyle f(z)}has the value −1 at this point. Therefore, we obtain
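The resulting approximation, N! ≈ √(2πN) (N/e)^N, can be checked directly; a short sketch:

```python
import math

def stirling(n):
    # N! ~ sqrt(2*pi*N) * (N/e)^N, as derived above via Laplace's method
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (5, 10, 20):
    exact = math.factorial(n)
    rel = abs(stirling(n) - exact) / exact
    print(n, rel)  # relative error shrinks roughly like 1/(12*N)
```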
This article incorporates material from saddle point approximation onPlanetMath, which is licensed under theCreative Commons Attribution/Share-Alike License.
|
https://en.wikipedia.org/wiki/Laplace%27s_method
|
Inmathematical analysis, themaximumandminimum[a]of afunctionare, respectively, the greatest and least value taken by the function. Known generically asextremum,[b]they may be defined either within a givenrange(thelocalorrelativeextrema) or on the entiredomain(theglobalorabsoluteextrema) of a function.[1][2][3]Pierre de Fermatwas one of the first mathematicians to propose a general technique,adequality, for finding the maxima and minima of functions.
As defined inset theory, the maximum and minimum of asetare thegreatest and least elementsin the set, respectively. Unboundedinfinite sets, such as the set ofreal numbers, have no minimum or maximum.
Instatistics, the corresponding concept is thesample maximum and minimum.
A real-valuedfunctionfdefined on adomainXhas aglobal(orabsolute)maximum pointatx∗, iff(x∗) ≥f(x)for allxinX. Similarly, the function has aglobal(orabsolute)minimum pointatx∗, iff(x∗) ≤f(x)for allxinX. The value of the function at a maximum point is called themaximum valueof the function, denotedmax(f(x)){\displaystyle \max(f(x))}, and the value of the function at a minimum point is called theminimum valueof the function, (denotedmin(f(x)){\displaystyle \min(f(x))}for clarity). Symbolically, this can be written as follows:
The definition of global minimum point also proceeds similarly.
If the domainXis ametric space, thenfis said to have alocal(orrelative)maximum pointat the pointx∗, if there exists someε> 0 such thatf(x∗) ≥f(x)for allxinXwithin distanceεofx∗. Similarly, the function has alocal minimum pointatx∗, iff(x∗) ≤f(x) for allxinXwithin distanceεofx∗. A similar definition can be used whenXis atopological space, since the definition just given can be rephrased in terms of neighbourhoods. Mathematically, the given definition is written as follows:
The definition of local minimum point can also proceed similarly.
In both the global and local cases, the concept of astrict extremumcan be defined. For example,x∗is astrict global maximum pointif for allxinXwithx≠x∗, we havef(x∗) >f(x), andx∗is astrict local maximum pointif there exists someε> 0such that, for allxinXwithin distanceεofx∗withx≠x∗, we havef(x∗) >f(x). Note that a point is a strict global maximum point if and only if it is the unique global maximum point, and similarly for minimum points.
Acontinuousreal-valued function with acompactdomain always has a maximum point and a minimum point. An important example is a function whose domain is a closed and boundedintervalofreal numbers(see the graph above).
Finding global maxima and minima is the goal ofmathematical optimization. If a function is continuous on a closed interval, then by theextreme value theorem, global maxima and minima exist. Furthermore, a global maximum (or minimum) either must be a local maximum (or minimum) in the interior of the domain, or must lie on the boundary of the domain. So a method of finding a global maximum (or minimum) is to look at all the local maxima (or minima) in the interior, and also look at the maxima (or minima) of the points on the boundary, and take the greatest (or least) one.
Fordifferentiable functions,Fermat's theoremstates that local extrema in the interior of a domain must occur atcritical points(or points where the derivative equals zero).[4]However, not all critical points are extrema. One can often distinguish whether a critical point is a local maximum, a local minimum, or neither by using thefirst derivative test,second derivative test, orhigher-order derivative test, given sufficient differentiability.[5]
For any function that is definedpiecewise, one finds a maximum (or minimum) by finding the maximum (or minimum) of each piece separately, and then seeing which one is greatest (or least).
For a practical example,[6]assume a situation where someone has200{\displaystyle 200}feet of fencing and is trying to maximize the square footage of a rectangular enclosure, wherex{\displaystyle x}is the length,y{\displaystyle y}is the width, andxy{\displaystyle xy}is the area:
The derivative with respect tox{\displaystyle x}is:
Setting this equal to0{\displaystyle 0}
reveals thatx=50{\displaystyle x=50}is our onlycritical point.
Now find theendpointsby determining the interval to whichx{\displaystyle x}is restricted. Since the width is positive,x>0{\displaystyle x>0}, and sincex=100−y{\displaystyle x=100-y}, it follows thatx<100{\displaystyle x<100}. Plugging the critical point50{\displaystyle 50}and the endpoints0{\displaystyle 0}and100{\displaystyle 100}intoxy=x(100−x){\displaystyle xy=x(100-x)}gives2500,0,{\displaystyle 2500,0,}and0{\displaystyle 0}respectively.
Therefore, the greatest area attainable with a rectangle of200{\displaystyle 200}feet of fencing is50×50=2500{\displaystyle 50\times 50=2500}.[6]
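The endpoint-plus-critical-point check above can be sketched in a few lines (the helper name `area` is hypothetical):

```python
# Area A(x) = x*(100 - x) for the rectangle with perimeter 2x + 2y = 200.
def area(x):
    return x * (100 - x)

candidates = [0, 50, 100]          # endpoints plus the critical point x = 50
best = max(candidates, key=area)
print(best, area(best))  # 50 2500
```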
For functions of more than one variable, similar conditions apply. For example, in the figure on the right, the necessary conditions for alocalmaximum are similar to those of a function with only one variable. The firstpartial derivativesas toz(the variable to be maximized) are zero at the maximum (the glowing dot on top in the figure). The second partial derivatives are negative. These are only necessary, not sufficient, conditions for a local maximum, because of the possibility of asaddle point. For use of these conditions to solve for a maximum, the functionzmust also bedifferentiablethroughout. Thesecond partial derivative testcan help classify the point as a relative maximum or relative minimum.
In contrast, there are substantial differences between functions of one variable and functions of more than one variable in the identification of global extrema. For example, if a bounded differentiable functionfdefined on a closed interval in the real line has a single critical point, which is a local minimum, then it is also a global minimum (use theintermediate value theoremandRolle's theoremto prove this bycontradiction). In two and more dimensions, this argument fails. This is illustrated by the function
whose only critical point is at (0,0), which is a local minimum withf(0,0) = 0. However, it cannot be a global one, becausef(2,3) = −5.
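The function referred to above is not reproduced in the text; the standard example consistent with the stated values f(0,0) = 0 and f(2,3) = −5 is f(x, y) = x² + y²(1 − x)³, which is assumed in this sketch:

```python
def f(x, y):
    # Assumed standard example: f(x, y) = x^2 + y^2 * (1 - x)^3.
    # Its only critical point (0, 0) is a local minimum (Hessian diag(2, 2)),
    # yet it is not a global minimum.
    return x**2 + y**2 * (1 - x)**3

print(f(0, 0), f(2, 3))  # 0 -5
```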
If the domain of a function for which an extremum is to be found consists itself of functions (i.e. if an extremum is to be found of afunctional), then the extremum is found using thecalculus of variations.
Maxima and minima can also be defined for sets. In general, if anordered setShas agreatest elementm, thenmis amaximal elementof the set, also denoted asmax(S){\displaystyle \max(S)}. Furthermore, ifSis a subset of an ordered setTandmis the greatest element ofSwith (respect to order induced byT), thenmis aleast upper boundofSinT. Similar results hold forleast element,minimal elementandgreatest lower bound. The maximum and minimum function for sets are used indatabases, and can be computed rapidly, since the maximum (or minimum) of a set can be computed from the maxima of a partition; formally, they are self-decomposable aggregation functions.
In the case of a generalpartial order, aleast element(i.e., one that is less than all others) should not be confused with theminimal element(nothing is lesser). Likewise, agreatest elementof apartially ordered set(poset) is anupper boundof the set which is contained within the set, whereas themaximal elementmof a posetAis an element ofAsuch that ifm≤b(for anybinA), thenm=b. Any least element or greatest element of a poset is unique, but a poset can have several minimal or maximal elements. If a poset has more than one maximal element, then these elements will not be mutually comparable.
In atotally orderedset, orchain, all elements are mutually comparable, so such a set can have at most one minimal element and at most one maximal element. Then, due to mutual comparability, the minimal element will also be the least element, and the maximal element will also be the greatest element. Thus in a totally ordered set, we can simply use the termsminimumandmaximum.
If a chain is finite, then it will always have a maximum and a minimum. If a chain is infinite, then it need not have a maximum or a minimum. For example, the set ofnatural numbershas no maximum, though it has a minimum. If an infinite chainSis bounded, then theclosureCl(S) of the set occasionally has a minimum and a maximum, in which case they are called thegreatest lower boundand theleast upper boundof the setS, respectively.
|
https://en.wikipedia.org/wiki/Maximum_and_minimum
|
In the study ofdynamical systems, ahyperbolic equilibrium pointorhyperbolic fixed pointis afixed pointthat does not have anycenter manifolds. Near ahyperbolicpoint the orbits of a two-dimensional,non-dissipativesystem resemble hyperbolas. This fails to hold in general.Strogatznotes that "hyperbolic is an unfortunate name—it sounds like it should mean 'saddle point'—but it has become standard."[1]Several properties hold about a neighborhood of a hyperbolic point, notably[2]
IfT:Rn→Rn{\displaystyle T\colon \mathbb {R} ^{n}\to \mathbb {R} ^{n}}is aC1map andpis afixed pointthenpis said to be ahyperbolic fixed pointwhen theJacobian matrixDT(p){\displaystyle \operatorname {D} T(p)}has noeigenvalueson the complex unit circle.
One example of amapwhose only fixed point is hyperbolic isArnold's cat map:
Since the eigenvalues are given by
We know that the Lyapunov exponents are:
Therefore it is a saddle point.
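The eigenvalue computation for the cat map's constant Jacobian [[2, 1], [1, 1]] can be sketched as follows:

```python
import math

# Arnold's cat map T(x, y) = (2x + y, x + y) mod 1 has Jacobian [[2, 1], [1, 1]].
a, b, c, d = 2, 1, 1, 1
tr, det = a + d, a * d - b * c
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
print(lam1, lam2)  # (3 +/- sqrt(5))/2: one > 1, one in (0, 1), neither on the unit circle
lyap1, lyap2 = math.log(lam1), math.log(lam2)
print(lyap1, lyap2)  # Lyapunov exponents of opposite sign: a saddle
```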
LetF:Rn→Rn{\displaystyle F\colon \mathbb {R} ^{n}\to \mathbb {R} ^{n}}be aC1vector fieldwith a critical pointp, i.e.,F(p) = 0, and letJdenote theJacobian matrixofFatp. If the matrixJhas no eigenvalues with zero real parts thenpis calledhyperbolic. Hyperbolic fixed points may also be calledhyperbolic critical pointsorelementary critical points.[3]
TheHartman–Grobman theoremstates that the orbit structure of a dynamical system in aneighbourhoodof a hyperbolic equilibrium point istopologically equivalentto the orbit structure of thelinearizeddynamical system.
Consider the nonlinear system
(0, 0) is the only equilibrium point. The Jacobian matrix of the linearization at the equilibrium point is
The eigenvalues of this matrix are−α±α2−42{\displaystyle {\frac {-\alpha \pm {\sqrt {\alpha ^{2}-4}}}{2}}}. For all values ofα≠ 0, the eigenvalues have non-zero real part. Thus, this equilibrium point is a hyperbolic equilibrium point. The linearized system will behave similar to the non-linear system near (0, 0). Whenα= 0, the system has a nonhyperbolic equilibrium at (0, 0).
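The system's equations are not reproduced above; a Jacobian consistent with the quoted eigenvalues is [[0, 1], [−1, −α]] (an assumption used here), whose characteristic equation λ² + αλ + 1 = 0 has roots (−α ± √(α² − 4))/2. A sketch of the hyperbolicity check:

```python
import cmath

def eigenvalues(alpha):
    # Roots of lambda^2 + alpha*lambda + 1 = 0, matching the eigenvalues quoted above.
    disc = cmath.sqrt(alpha ** 2 - 4)
    return (-alpha + disc) / 2, (-alpha - disc) / 2

for alpha in (0.0, 1.0, 3.0):
    l1, l2 = eigenvalues(alpha)
    hyperbolic = l1.real != 0 and l2.real != 0
    print(alpha, hyperbolic)  # hyperbolic exactly when alpha != 0
```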
In the case of an infinite dimensional system—for example systems involving a time delay—the notion of the "hyperbolic part of the spectrum" refers to the above property.
|
https://en.wikipedia.org/wiki/Hyperbolic_equilibrium_point
|
Inmathematics,hyperbolic geometry(also calledLobachevskian geometryorBolyai–Lobachevskiangeometry) is anon-Euclidean geometry. Theparallel postulateofEuclidean geometryis replaced with:
(Compare the above withPlayfair's axiom, the modern version ofEuclid'sparallel postulate.)
Thehyperbolic planeis aplanewhere every point is asaddle point.
Hyperbolic planegeometryis also the geometry ofpseudospherical surfaces, surfaces with a constant negativeGaussian curvature.Saddle surfaceshave negative Gaussian curvature in at least some regions, where theylocallyresemble the hyperbolic plane.
Thehyperboloid modelof hyperbolic geometry provides a representation ofeventsone temporal unit into the future inMinkowski space, the basis ofspecial relativity. Each of these events corresponds to arapidityin some direction.
When geometers first realised they were working with something other than the standard Euclidean geometry, they described their geometry under many different names;Felix Kleinfinally gave the subject the namehyperbolic geometryto include it in the now rarely used sequenceelliptic geometry(spherical geometry), parabolic geometry (Euclidean geometry), and hyperbolic geometry.
In theformer Soviet Union, it is commonly called Lobachevskian geometry, named after one of its discoverers, the Russian geometerNikolai Lobachevsky.
Hyperbolic geometry is more closely related to Euclidean geometry than it seems: the onlyaxiomaticdifference is theparallel postulate.
When the parallel postulate is removed from Euclidean geometry the resulting geometry isabsolute geometry.
There are two kinds of absolute geometry, Euclidean and hyperbolic.
All theorems of absolute geometry, including the first 28 propositions of book one ofEuclid'sElements, are valid in Euclidean and hyperbolic geometry.
Propositions 27 and 28 of Book One of Euclid'sElementsprove the existence of parallel/non-intersecting lines.
This difference also has many consequences: concepts that are equivalent in Euclidean geometry are not equivalent in hyperbolic geometry; new concepts need to be introduced.
Further, because of theangle of parallelism, hyperbolic geometry has anabsolute scale, a relation between distance and angle measurements.
Single lines in hyperbolic geometry have exactly the same properties as single straight lines in Euclidean geometry. For example, two points uniquely define a line, and line segments can be infinitely extended.
Two intersecting lines have the same properties as two intersecting lines in Euclidean geometry. For example, two distinct lines can intersect in no more than one point, intersecting lines form equal opposite angles, and adjacent angles of intersecting lines aresupplementary.
When a third line is introduced, then there can be properties of intersecting lines that differ from intersecting lines in Euclidean geometry. For example, given two intersecting lines there are infinitely many lines that do not intersect either of the given lines.
These properties are all independent of themodelused, even if the lines may look radically different.
Non-intersecting lines in hyperbolic geometry also have properties that differ from non-intersecting lines inEuclidean geometry:
This implies that there are throughPan infinite number of coplanar lines that do not intersectR.
These non-intersecting lines are divided into two classes: limiting parallel (asymptotically parallel) lines, which approach R asymptotically in one direction, and ultraparallel (diverging parallel) lines, which diverge from R in both directions.
Some geometers simply use the phrase "parallel lines" to mean "limiting parallel lines", with ultraparallel lines meaning just non-intersecting lines.
These limiting parallels make an angle θ with PB, where B is the foot of the perpendicular from P to R; this angle depends only on the Gaussian curvature of the plane and the distance PB and is called the angle of parallelism.
For ultraparallel lines, the ultraparallel theorem states that for every pair of ultraparallel lines there is a unique line in the hyperbolic plane that is perpendicular to both.
In hyperbolic geometry, the circumference of a circle of radiusris greater than2πr{\displaystyle 2\pi r}.
Let {\displaystyle R={\frac {1}{\sqrt {-K}}}}, where {\displaystyle K} is the Gaussian curvature of the plane. In hyperbolic geometry, {\displaystyle K} is negative, so the square root is of a positive number.
Then the circumference of a circle of radius r is equal to: {\displaystyle 2\pi R\sinh {\frac {r}{R}}.}
And the area of the enclosed disk is: {\displaystyle 4\pi R^{2}\sinh ^{2}{\frac {r}{2R}}=2\pi R^{2}\left(\cosh {\frac {r}{R}}-1\right).}
Therefore, in hyperbolic geometry the ratio of a circle's circumference to its radius is always strictly greater than2π{\displaystyle 2\pi }, though it can be made arbitrarily close by selecting a small enough circle.
If the Gaussian curvature of the plane is −1 then the geodesic curvature of a circle of radius r is: {\displaystyle {\frac {1}{\tanh(r)}}}[1]
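Taking K = −1 (so R = 1), these formulas are easy to check numerically. The sketch below (plain Python; function names are illustrative) verifies that the ratio of circumference to 2πr always exceeds 1, as the text states:

```python
import math

def circumference(r, R=1.0):
    # C = 2*pi*R*sinh(r/R) for a hyperbolic circle of radius r
    return 2 * math.pi * R * math.sinh(r / R)

def disk_area(r, R=1.0):
    # A = 4*pi*R^2*sinh^2(r/(2R)) = 2*pi*R^2*(cosh(r/R) - 1)
    return 4 * math.pi * R**2 * math.sinh(r / (2 * R)) ** 2

# The ratio C/(2*pi*r) is always > 1 and tends to 1 as r -> 0.
for r in (1e-6, 0.5, 1.0, 2.0):
    assert circumference(r) / (2 * math.pi * r) > 1.0

print(circumference(1.0) / (2 * math.pi * 1.0))  # sinh(1) ~ 1.1752
```

Selecting a smaller and smaller circle drives the ratio toward 1, matching the claim that the excess over 2π can be made arbitrarily small.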
In hyperbolic geometry, there is no line whose points are all equidistant from another line. Instead, the points that are all the same distance from a given line lie on a curve called ahypercycle.
Another special curve is thehorocycle, whosenormalradii (perpendicularlines) are alllimiting parallelto each other (all converge asymptotically in one direction to the sameideal point, the centre of the horocycle).
Through every pair of points there are two horocycles. The centres of the horocycles are theideal pointsof theperpendicular bisectorof the line-segment between them.
Given any three distinct points, they all lie on either a line, hypercycle,horocycle, or circle.
Thelengthof a line-segment is the shortest length between two points.
The arc-length of a hypercycle connecting two points is longer than that of the line segment and shorter than that of the arc horocycle, connecting the same two points.
The lengths of the arcs of both horocycles connecting two points are equal, and are longer than the arclength of any hypercycle connecting the points and shorter than the arc of any circle connecting the two points.
If the Gaussian curvature of the plane is −1, then thegeodesic curvatureof a horocycle is 1 and that of a hypercycle is between 0 and 1.[1]
Unlike Euclidean triangles, where the angles always add up to π radians (180°, a straight angle), in hyperbolic space the sum of the angles of a triangle is always strictly less than π radians (180°). The difference is called the defect. Generally, the defect of a convex hyperbolic polygon with {\displaystyle n} sides is its angle sum subtracted from {\displaystyle (n-2)\cdot 180^{\circ }}.
The area of a hyperbolic triangle is given by its defect in radians multiplied byR2, which is also true for all convex hyperbolic polygons.[2]Therefore all hyperbolic triangles have an area less than or equal toR2π. The area of a hyperbolicideal trianglein which all three angles are 0° is equal to this maximum.
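The defect-to-area rule is one line of code. A minimal sketch, assuming curvature −1 (R = 1) and angles given in radians:

```python
import math

def hyperbolic_triangle_area(alpha, beta, gamma, R=1.0):
    # Area = R^2 * defect, where defect = pi - (alpha + beta + gamma)
    defect = math.pi - (alpha + beta + gamma)
    assert defect > 0, "angle sum must be less than pi in hyperbolic geometry"
    return R**2 * defect

# An ideal triangle (all three angles 0) attains the maximal area pi*R^2.
print(hyperbolic_triangle_area(0, 0, 0))        # pi ~ 3.14159
print(hyperbolic_triangle_area(0.5, 0.6, 0.7))  # pi - 1.8 ~ 1.3416
```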
As inEuclidean geometry, each hyperbolic triangle has anincircle. In hyperbolic space, if all three of its vertices lie on ahorocycleorhypercycle, then the triangle has nocircumscribed circle.
As insphericalandelliptical geometry, in hyperbolic geometry if two triangles are similar, they must be congruent.
Special polygons in hyperbolic geometry are the regular apeirogon and pseudogon, uniform polygons with an infinite number of sides.
InEuclidean geometry, the only way to construct such a polygon is to make the side lengths tend to zero and the apeirogon is indistinguishable from a circle, or make the interior angles tend to 180° and the apeirogon approaches a straight line.
However, in hyperbolic geometry, a regular apeirogon or pseudogon has sides of any length (i.e., it remains a polygon with noticeable sides).
The side and anglebisectorswill, depending on the side length and the angle between the sides, be limiting or diverging parallel. If the bisectors are limiting parallel then it is an apeirogon and can be inscribed and circumscribed by concentrichorocycles.
If the bisectors are diverging parallel then it is a pseudogon and can be inscribed and circumscribed byhypercycles(since all its vertices are the same distance from a line, the axis, and the midpoints of its sides are also equidistant from that same axis).
Like the Euclidean plane it is also possible to tessellate the hyperbolic plane withregular polygonsasfaces.
There are an infinite number of uniform tilings based on theSchwarz triangles(pqr) where 1/p+ 1/q+ 1/r< 1, wherep,q,rare each orders of reflection symmetry at three points of thefundamental domain triangle, the symmetry group is a hyperbolictriangle group. There are also infinitely many uniform tilings that cannot be generated from Schwarz triangles, some for example requiring quadrilaterals as fundamental domains.[3]
Though hyperbolic geometry applies for any surface with a constant negativeGaussian curvature, it is usual to assume a scale in which the curvatureKis −1.
This results in some formulas becoming simpler; for example, the area of a hyperbolic triangle is then simply equal to its angle defect in radians.
Compared to Euclidean geometry, hyperbolic geometry presents many difficulties for a coordinate system: the angle sum of aquadrilateralis always less than 360°; there are no equidistant lines, so a proper rectangle would need to be enclosed by two lines and two hypercycles; parallel-transporting a line segment around a quadrilateral causes it to rotate when it returns to the origin; etc.
There are however different coordinate systems for hyperbolic plane geometry. All are based around choosing a point (the origin) on a chosen directed line (thex-axis) and after that many choices exist.
The Lobachevsky coordinatesxandyare found by dropping a perpendicular onto thex-axis.xwill be the label of the foot of the perpendicular.ywill be the distance along the perpendicular of the given point from its foot (positive on one side and negative on the other).
Another coordinate system measures the distance from the point to thehorocyclethrough the origin centered around(0,+∞){\displaystyle (0,+\infty )}and the length along this horocycle.[5]
Other coordinate systems use the Klein model or the Poincaré disk model described below, and take the Euclidean coordinates as hyperbolic.
A Cartesian-like coordinate system (x, y) on the oriented hyperbolic plane is constructed as follows. Choose a line in the hyperbolic plane together with an orientation and an origin o on this line. Then the x-coordinate of a point is the signed distance from o along the chosen line to the foot of the perpendicular dropped from the point, and the y-coordinate is the signed length of that perpendicular segment.
The distance between two points represented by (x_i, y_i), i = 1, 2, in this coordinate system is{\displaystyle \operatorname {dist} (\langle x_{1},y_{1}\rangle ,\langle x_{2},y_{2}\rangle )=\operatorname {arcosh} \left(\cosh y_{1}\cosh(x_{2}-x_{1})\cosh y_{2}-\sinh y_{1}\sinh y_{2}\right)\,.}
This formula can be derived from the formulas abouthyperbolic triangles.
The corresponding metric tensor field is:(ds)2=cosh2y(dx)2+(dy)2{\displaystyle (\mathrm {d} s)^{2}=\cosh ^{2}y\,(\mathrm {d} x)^{2}+(\mathrm {d} y)^{2}}.
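The distance formula above translates directly into code. A small sketch (the function name `dist` is illustrative) that also checks the two special cases where the formula collapses to a plain coordinate difference:

```python
import math

def dist(p1, p2):
    # Hyperbolic distance in the axial ("Cartesian-like") coordinates above:
    # arcosh(cosh y1 * cosh(x2 - x1) * cosh y2 - sinh y1 * sinh y2)
    (x1, y1), (x2, y2) = p1, p2
    arg = (math.cosh(y1) * math.cosh(x2 - x1) * math.cosh(y2)
           - math.sinh(y1) * math.sinh(y2))
    return math.acosh(max(arg, 1.0))  # clamp guards against rounding below 1

# Along the x-axis (y = 0) the distance reduces to |x2 - x1| ...
assert math.isclose(dist((0.0, 0.0), (1.5, 0.0)), 1.5)
# ... and along a common perpendicular (x1 = x2) to |y2 - y1|.
assert math.isclose(dist((0.7, -0.2), (0.7, 1.1)), 1.3)
```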
In this coordinate system, straight lines take one of these forms ((x,y) is a point on the line;x0,y0,A, andαare parameters):
ultraparallel to thex-axis
asymptotically parallel on the negative side
asymptotically parallel on the positive side
intersecting perpendicularly
intersecting at an angleα
Generally, these equations will only hold in a bounded domain (ofxvalues). At the edge of that domain, the value ofyblows up to ±infinity.
Since the publication of Euclid's Elements circa 300 BC, many geometers tried to prove the parallel postulate. Some tried to prove it by assuming its negation and trying to derive a contradiction. Foremost among these were Proclus, Ibn al-Haytham (Alhacen), Omar Khayyám,[6] Nasīr al-Dīn al-Tūsī, Witelo, Gersonides, Alfonso, and later Giovanni Gerolamo Saccheri, John Wallis, Johann Heinrich Lambert, and Legendre.[7] Their attempts were doomed to failure (as we now know, the parallel postulate is not provable from the other postulates), but their efforts led to the discovery of hyperbolic geometry.
The theorems of Alhacen, Khayyam and al-Tūsī onquadrilaterals, including theIbn al-Haytham–Lambert quadrilateralandKhayyam–Saccheri quadrilateral, were the first theorems on hyperbolic geometry. Their works on hyperbolic geometry had a considerable influence on its development among later European geometers, including Witelo, Gersonides, Alfonso, John Wallis and Saccheri.[8]
In the 18th century,Johann Heinrich Lambertintroduced thehyperbolic functions[9]and computed the area of ahyperbolic triangle.[10]
In the 19th century, hyperbolic geometry was explored extensively byNikolai Lobachevsky,János Bolyai,Carl Friedrich GaussandFranz Taurinus. Unlike their predecessors, who just wanted to eliminate the parallel postulate from the axioms of Euclidean geometry, these authors realized they had discovered a new geometry.[11][12]
Gauss wrote in an 1824 letter to Franz Taurinus that he had constructed it, but Gauss did not publish his work. Gauss called it "non-Euclidean geometry"[13]causing several modern authors to continue to consider "non-Euclidean geometry" and "hyperbolic geometry" to be synonyms. Taurinus published results on hyperbolic trigonometry in 1826, argued that hyperbolic geometry is self-consistent, but still believed in the special role of Euclidean geometry. The complete system of hyperbolic geometry was published by Lobachevsky in 1829/1830, while Bolyai discovered it independently and published in 1832.
In 1868, Eugenio Beltrami provided models of hyperbolic geometry, and used this to prove that hyperbolic geometry was consistent if and only if Euclidean geometry was.
The term "hyperbolic geometry" was introduced byFelix Kleinin 1871.[14]Klein followed an initiative ofArthur Cayleyto use the transformations ofprojective geometryto produceisometries. The idea used aconic sectionorquadricto define a region, and usedcross ratioto define ametric. The projective transformations that leave the conic section or quadricstableare the isometries. "Klein showed that if theCayley absoluteis a real curve then the part of the projective plane in its interior is isometric to the hyperbolic plane..."[15]
The discovery of hyperbolic geometry had importantphilosophicalconsequences. Before its discovery many philosophers (such asHobbesandSpinoza) viewed philosophical rigor in terms of the "geometrical method", referring to the method of reasoning used inEuclid'sElements.
KantinCritique of Pure Reasonconcluded that space (inEuclidean geometry) and time are not discovered by humans as objective features of the world, but are part of an unavoidable systematic framework for organizing our experiences.[16]
It is said that Gauss did not publish anything about hyperbolic geometry out of fear of the "uproar of theBoeotians" (stereotyped as dullards by the ancient Athenians[17]), which would ruin his status asprinceps mathematicorum(Latin, "the Prince of Mathematicians").[18]The "uproar of the Boeotians" came and went, and gave an impetus to great improvements inmathematical rigour,analytical philosophyandlogic. Hyperbolic geometry was finally proved consistent and is therefore another valid geometry.
Because Euclidean, hyperbolic and elliptic geometry are all consistent, the question arises: which is the real geometry of space, and if it is hyperbolic or elliptic, what is its curvature?
Lobachevsky had already tried to measure the curvature of the universe by measuring the parallax of Sirius and treating Sirius as the ideal point of an angle of parallelism. He realized that his measurements were not precise enough to give a definite answer, but he did reach the conclusion that if the geometry of the universe is hyperbolic, then the absolute length is at least one million times the diameter of Earth's orbit (2,000,000 AU, about 10 parsecs).[19] Some argue that his measurements were methodologically flawed.[20]
Henri Poincaré, with hissphere-worldthought experiment, came to the conclusion that everyday experience does not necessarily rule out other geometries.
Thegeometrization conjecturegives a complete list of eight possibilities for the fundamental geometry of our space. The problem in determining which one applies is that, to reach a definitive answer, we need to be able to look at extremely large shapes – much larger than anything on Earth or perhaps even in our galaxy.[21]
Special relativityplaces space and time on equal footing, so that one considers the geometry of a unifiedspacetimeinstead of considering space and time separately.[22][23]Minkowski geometryreplacesGalilean geometry(which is the 3-dimensional Euclidean space with time ofGalilean relativity).[24]
In relativity, rather than Euclidean, elliptic and hyperbolic geometry, the appropriate geometries to consider areMinkowski space,de Sitter spaceandanti-de Sitter space,[25][26]corresponding to zero, positive and negative curvature respectively.
Hyperbolic geometry enters special relativity throughrapidity, which stands in forvelocity, and is expressed by ahyperbolic angle. The study of this velocity geometry has been calledkinematic geometry. The space of relativistic velocities has a three-dimensional hyperbolic geometry, where the distance function is determined from the relative velocities of "nearby" points (velocities).[27]
There exist variouspseudospheresin Euclidean space that have a finite area of constant negative Gaussian curvature.
ByHilbert's theorem, one cannot isometricallyimmersea complete hyperbolic plane (a complete regular surface of constant negativeGaussian curvature) in a 3-D Euclidean space.
Other usefulmodelsof hyperbolic geometry exist in Euclidean space, in which the metric is not preserved. A particularly well-known paper model based on thepseudosphereis due toWilliam Thurston.
The art ofcrochethas beenusedto demonstrate hyperbolic planes, the first such demonstration having been made byDaina Taimiņa.[28]
In 2000, Keith Henderson demonstrated a quick-to-make paper model dubbed the "hyperbolic soccerball" (more precisely, atruncated order-7 triangular tiling).[29][30]
Instructions on how to make a hyperbolic quilt, designed byHelaman Ferguson,[31]have been made available byJeff Weeks.[32]
Variouspseudospheres– surfaces with constant negative Gaussian curvature – can be embedded in 3-D space under the standard Euclidean metric, and so can be made into tangible models. Of these, thetractoid(or pseudosphere) is the best known; using the tractoid as a model of the hyperbolic plane is analogous to using aconeorcylinderas a model of the Euclidean plane. However, the entire hyperbolic plane cannot be embedded into Euclidean space in this way, and various other models are more convenient for abstractly exploring hyperbolic geometry.
There are fourmodelscommonly used for hyperbolic geometry: theKlein model, thePoincaré disk model, thePoincaré half-plane model, and the Lorentz orhyperboloid model. These models define a hyperbolic plane which satisfies the axioms of a hyperbolic geometry. Despite their names, the first three mentioned above were introduced as models of hyperbolic space byBeltrami, not byPoincaréorKlein. All these models are extendable to more dimensions.
TheBeltrami–Klein model, also known as the projective disk model, Klein disk model andKlein model, is named afterEugenio BeltramiandFelix Klein.
For the two dimensions this model uses the interior of theunit circlefor the complete hyperbolicplane, and thechordsof this circle are the hyperbolic lines.
For higher dimensions this model uses the interior of theunit ball, and thechordsof thisn-ball are the hyperbolic lines.
ThePoincaré disk model, also known as the conformal disk model, also employs the interior of theunit circle, but lines are represented by arcs of circles that areorthogonalto the boundary circle, plus diameters of the boundary circle.
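The hyperbolic distance between two points u, v of the Poincaré disk model has a well-known closed form, d(u, v) = arcosh(1 + 2‖u − v‖² / ((1 − ‖u‖²)(1 − ‖v‖²))), which makes the model convenient to compute with. A short sketch:

```python
import math

def poincare_dist(u, v):
    # d(u, v) = arcosh(1 + 2*|u - v|^2 / ((1 - |u|^2) * (1 - |v|^2)))
    duv = (u[0] - v[0])**2 + (u[1] - v[1])**2
    nu = u[0]**2 + u[1]**2
    nv = v[0]**2 + v[1]**2
    return math.acosh(1 + 2 * duv / ((1 - nu) * (1 - nv)))

# Distance from the centre to a point at Euclidean radius r is 2*artanh(r),
# so points near the boundary circle are infinitely far away.
r = 0.9
assert math.isclose(poincare_dist((0, 0), (r, 0)), 2 * math.atanh(r))
```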
ThePoincaré half-plane modeltakes one-half of the Euclidean plane, bounded by a lineBof the plane, to be a model of the hyperbolic plane. The lineBis not included in the model.
The Euclidean plane may be taken to be a plane with theCartesian coordinate systemand thex-axisis taken as lineBand the half plane is the upper half (y> 0 ) of this plane.
The hyperboloid model or Lorentz model employs a 2-dimensional hyperboloid of revolution (of two sheets, but using one) embedded in 3-dimensional Minkowski space. This model is generally credited to Poincaré, but Reynolds[33] says that Wilhelm Killing used this model in 1885.
The hemisphere model is not often used as a model by itself, but it functions as a useful tool for visualizing transformations between the other models.
The hemisphere model uses the upper half of theunit sphere:x2+y2+z2=1,z>0.{\displaystyle x^{2}+y^{2}+z^{2}=1,z>0.}
The hyperbolic lines are half-circles orthogonal to the boundary of the hemisphere.
The hemisphere model is part of aRiemann sphere, and different projections give different models of the hyperbolic plane:
All models essentially describe the same structure. The difference between them is that they represent differentcoordinate chartslaid down on the samemetric space, namely the hyperbolic plane. The characteristic feature of the hyperbolic plane itself is that it has a constant negativeGaussian curvature, which is indifferent to the coordinate chart used. Thegeodesicsare similarly invariant: that is, geodesics map to geodesics under coordinate transformation. Hyperbolic geometry is generally introduced in terms of the geodesics and their intersections on the hyperbolic plane.[34]
Once we choose a coordinate chart (one of the "models"), we can alwaysembedit in a Euclidean space of same dimension, but the embedding is clearly not isometric (since the curvature of Euclidean space is 0). The hyperbolic space can be represented by infinitely many different charts; but the embeddings in Euclidean space due to these four specific charts show some interesting characteristics.
Since the four models describe the same metric space, each can be transformed into the other.
See, for example:
In 1966 David Gans proposed aflattened hyperboloid modelin the journalAmerican Mathematical Monthly.[35]It is anorthographic projectionof the hyperboloid model onto the xy-plane.
This model is not as widely used as other models but nevertheless is quite useful in the understanding of hyperbolic geometry.
The conformal square model of the hyperbolic plane arises from usingSchwarz–Christoffel mappingto convert thePoincaré diskinto a square.[37]This model has finite extent, like the Poincaré disk. However, all of the points are inside a square. This model is conformal, which makes it suitable for artistic applications.
The band model employs a portion of the Euclidean plane between two parallel lines.[38]Distance is preserved along one line through the middle of the band. Assuming the band is given by{z∈C:|Imz|<π/2}{\displaystyle \{z\in \mathbb {C} :|\operatorname {Im} z|<\pi /2\}}, the metric is given by|dz|sec(Imz){\displaystyle |dz|\sec(\operatorname {Im} z)}.
Everyisometry(transformationormotion) of the hyperbolic plane to itself can be realized as the composition of at most threereflections. Inn-dimensional hyperbolic space, up ton+1 reflections might be required. (These are also true for Euclidean and spherical geometries, but the classification below is different.)
All isometries of the hyperbolic plane can be classified into these classes:
M. C. Escher's famous printsCircle Limit IIIandCircle Limit IVillustrate the conformal disc model (Poincaré disk model) quite well. The white lines inIIIare not quite geodesics (they arehypercycles), but are close to them. It is also possible to see the negativecurvatureof the hyperbolic plane, through its effect on the sum of angles in triangles and squares.
For example, inCircle Limit IIIevery vertex belongs to three triangles and three squares. In the Euclidean plane, their angles would sum to 450°; i.e., a circle and a quarter. From this, we see that the sum of angles of a triangle in the hyperbolic plane must be smaller than 180°. Another visible property isexponential growth. InCircle Limit III, for example, one can see that the number of fishes within a distance ofnfrom the center rises exponentially. The fishes have an equal hyperbolic area, so the area of a ball of radiusnmust rise exponentially inn.
The art ofcrochethasbeen usedto demonstrate hyperbolic planes (pictured above) with the first being made byDaina Taimiņa,[28]whose bookCrocheting Adventures with Hyperbolic Planeswon the 2009Bookseller/Diagram Prize for Oddest Title of the Year.[39]
HyperRogueis aroguelikegame set on various tilings of the hyperbolic plane.
Hyperbolic geometry is not limited to 2 dimensions; a hyperbolic geometry exists for every higher number of dimensions.
Hyperbolic space of dimension n is a special case of a Riemannian symmetric space of noncompact type, as it is isomorphic to the quotient {\displaystyle \mathrm {O} (1,n)/(\mathrm {O} (1)\times \mathrm {O} (n)).}
Theorthogonal groupO(1,n)actsby norm-preserving transformations onMinkowski spaceR1,n, and it actstransitivelyon the two-sheet hyperboloid of norm 1 vectors. Timelike lines (i.e., those with positive-norm tangents) through the origin pass through antipodal points in the hyperboloid, so the space of such lines yields a model of hyperbolicn-space. Thestabilizerof any particular line is isomorphic to theproductof the orthogonal groups O(n) and O(1), where O(n) acts on the tangent space of a point in the hyperboloid, and O(1) reflects the line through the origin. Many of the elementary concepts in hyperbolic geometry can be described inlinear algebraicterms: geodesic paths are described by intersections with planes through the origin, dihedral angles between hyperplanes can be described by inner products of normal vectors, and hyperbolic reflection groups can be given explicit matrix realizations.
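The linear-algebraic description can be sketched concretely: points of the hyperboloid model are vectors of Minkowski norm −1, distance is the arcosh of minus the Minkowski inner product, and any Lorentz boost is an isometry. A minimal illustration (coordinates ordered (t, x, y); the sample points are illustrative):

```python
import math

def mink(u, v):
    # Minkowski bilinear form of signature (1, 2): -t1*t2 + x1*x2 + y1*y2
    return -u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def hyp_dist(u, v):
    # Distance between points on the sheet <u,u> = <v,v> = -1, t > 0
    return math.acosh(max(-mink(u, v), 1.0))

def boost(p, s):
    # Lorentz boost in the t-x plane: an isometry of the hyperboloid model
    t, x, y = p
    return (math.cosh(s) * t + math.sinh(s) * x,
            math.sinh(s) * t + math.cosh(s) * x,
            y)

p = (math.cosh(0.7), math.sinh(0.7), 0.0)
q = (1.5, 1.0, 0.5)  # -<q,q> = 2.25 - 1 - 0.25 = 1, so q lies on the sheet
assert math.isclose(hyp_dist(p, q),
                    hyp_dist(boost(p, 1.3), boost(q, 1.3)))  # boost preserves distance
```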
In small dimensions, there are exceptional isomorphisms ofLie groupsthat yield additional ways to consider symmetries of hyperbolic spaces. For example, in dimension 2, the isomorphismsSO+(1, 2) ≅ PSL(2,R) ≅ PSU(1, 1)allow one to interpret the upper half plane model as the quotientSL(2,R)/SO(2)and the Poincaré disc model as the quotientSU(1, 1)/U(1). In both cases, the symmetry groups act by fractional linear transformations, since both groups are the orientation-preserving stabilizers inPGL(2,C)of the respective subspaces of the Riemann sphere. The Cayley transformation not only takes one model of the hyperbolic plane to the other, but realizes the isomorphism of symmetry groups as conjugation in a larger group. In dimension 3, the fractional linear action ofPGL(2,C)on the Riemann sphere is identified with the action on the conformal boundary of hyperbolic 3-space induced by the isomorphismO+(1, 3) ≅ PGL(2,C). This allows one to study isometries of hyperbolic 3-space by considering spectral properties of representative complex matrices. For example, parabolic transformations are conjugate to rigid translations in the upper half-space model, and they are exactly those transformations that can be represented byunipotentupper triangularmatrices.
"Three scientists, Ibn al-Haytham, Khayyam and al-Tūsī, had made the most considerable contribution to this branch of geometry whose importance came to be completely recognized only in the 19th century. In essence their propositions concerning the properties of quadrangles which they considered assuming that some of the angles of these figures were acute or obtuse, embodied the first few theorems of the hyperbolic and the elliptic geometries. Their other proposals showed that various geometric statements were equivalent to the Euclidean postulate V. It is extremely important that these scholars established the mutual connection between this postulate and the sum of the angles of a triangle and a quadrangle. By their works on the theory of parallel lines Arab mathematicians directly influenced the relevant investigations of their European counterparts. The first European attempt to prove the postulate on parallel lines – made by Witelo, the Polish scientist of the 13th century, while revising Ibn al-Haytham's Book of Optics (Kitab al-Manazir) – was undoubtedly prompted by Arabic sources. The proofs put forward in the 14th century by the Jewish scholar Levi ben Gerson, who lived in southern France, and by the above-mentioned Alfonso from Spain directly border on Ibn al-Haytham's demonstration. Above, we have demonstrated that Pseudo-Tusi's Exposition of Euclid had stimulated both J. Wallis's and G. Saccheri's studies of the theory of parallel lines."
|
https://en.wikipedia.org/wiki/Hyperbolic_geometry
|
In the mathematical area of game theory and of convex optimization, a minimax theorem is a theorem that claims that{\displaystyle \max _{x\in X}\min _{y\in Y}f(x,y)=\min _{y\in Y}\max _{x\in X}f(x,y)}
under certain conditions on the setsX{\displaystyle X}andY{\displaystyle Y}and on the functionf{\displaystyle f}.[1]It is always true that the left-hand side is at most the right-hand side (max–min inequality) but equality only holds under certain conditions identified by minimax theorems. The first theorem in this sense isvon Neumann's minimax theorem about two-playerzero-sum gamespublished in 1928,[2]which is considered the starting point ofgame theory. Von Neumann is quoted as saying "As far as I can see, there could be no theory of games ... without that theorem ... I thought there was nothing worth publishing until the Minimax Theorem was proved".[3]Since then, several generalizations and alternative versions of von Neumann's original theorem have appeared in the literature.[4][5]
Von Neumann's original theorem[2] was motivated by game theory and applies to the case where X and Y are standard simplexes (sets of probability vectors) and f(x, y) = xᵀAy is a bilinear function determined by a matrix A.
Under these assumptions, von Neumann proved that{\displaystyle \max _{x\in X}\min _{y\in Y}x^{\mathsf {T}}Ay=\min _{y\in Y}\max _{x\in X}x^{\mathsf {T}}Ay.}
In the context of two-playerzero-sum games, the setsX{\displaystyle X}andY{\displaystyle Y}correspond to the strategy sets of the first and second player, respectively, which consist of lotteries over their actions (so-calledmixed strategies), and their payoffs are defined by thepayoff matrixA{\displaystyle A}. The functionf(x,y){\displaystyle f(x,y)}encodes theexpected valueof the payoff to the first player when the first player plays the strategyx{\displaystyle x}and the second player plays the strategyy{\displaystyle y}.
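For a small concrete check of the minimax equality, the classic 2×2 "matching pennies" game (the payoff matrix here is chosen as an illustration) can be solved by brute force over a grid of mixed strategies; the max–min and min–max values coincide at the game value 0:

```python
# Brute-force check of max-min = min-max for the 2x2 matching-pennies game.
# The row player receives x^T A y.
A = [[1, -1],
     [-1, 1]]

def payoff(p, q):
    # Expected payoff when row mixes (p, 1-p) and column mixes (q, 1-q)
    x, y = (p, 1 - p), (q, 1 - q)
    return sum(x[i] * A[i][j] * y[j] for i in range(2) for j in range(2))

grid = [i / 200 for i in range(201)]  # mixed strategies sampled on a grid
lower = max(min(payoff(p, q) for q in grid) for p in grid)  # max-min
upper = min(max(payoff(p, q) for p in grid) for q in grid)  # min-max
assert lower <= upper + 1e-12   # the max-min inequality always holds
assert abs(upper - lower) < 1e-9  # equality: the game value is 0, at p = q = 1/2
```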
Von Neumann's minimax theorem can be generalized to domains that are compact and convex, and to functions that are concave in their first argument and convex in their second argument (known as concave-convex functions). Formally, letX⊆Rn{\displaystyle X\subseteq \mathbb {R} ^{n}}andY⊆Rm{\displaystyle Y\subseteq \mathbb {R} ^{m}}becompactconvexsets. Iff:X×Y→R{\displaystyle f:X\times Y\rightarrow \mathbb {R} }is a continuous function that is concave-convex, i.e. f(·, y) : X → R is concave for every fixed y ∈ Y, and f(x, ·) : Y → R is convex for every fixed x ∈ X,
then we have that{\displaystyle \max _{x\in X}\min _{y\in Y}f(x,y)=\min _{y\in Y}\max _{x\in X}f(x,y).}
Sion's minimax theorem is a generalization of von Neumann's minimax theorem due toMaurice Sion,[6]relaxing the requirement that X and Y be standard simplexes and that f be bilinear. It states:[6][7]
Let X be a convex subset of a linear topological space and let Y be a compact convex subset of a linear topological space. If f is a real-valued function on X×Y with f(x, ·) lower semicontinuous and quasi-convex on Y for every x ∈ X, and f(·, y) upper semicontinuous and quasi-concave on X for every y ∈ Y,
then we have that{\displaystyle \sup _{x\in X}\min _{y\in Y}f(x,y)=\min _{y\in Y}\sup _{x\in X}f(x,y).}
|
https://en.wikipedia.org/wiki/Minimax_theorem
|
In mathematics, the max–min inequality is as follows: for any function{\displaystyle \ f:Z\times W\to \mathbb {R} \ ,}{\displaystyle \sup _{z\in Z}\inf _{w\in W}f(z,w)\leq \inf _{w\in W}\sup _{z\in Z}f(z,w)\ .}
When equality holds one says thatf,W, andZsatisfy a strong max–min property (or asaddle-pointproperty). The example functionf(z,w)=sin(z+w){\displaystyle \ f(z,w)=\sin(z+w)\ }illustrates that the equality does not hold for every function.
A theorem giving conditions onf,W, andZwhich guarantee the saddle point property is called aminimax theorem.
Defineg(z)≜infw∈Wf(z,w).{\displaystyle g(z)\triangleq \inf _{w\in W}f(z,w)\ .}For allz∈Z{\displaystyle z\in Z}, we getg(z)≤f(z,w){\textstyle g(z)\leq f(z,w)}for allw∈W{\displaystyle w\in W}by definition of the infimum being a lower bound. Next, for allw∈W{\textstyle w\in W},f(z,w)≤supz∈Zf(z,w){\displaystyle f(z,w)\leq \sup _{z\in Z}f(z,w)}for allz∈Z{\textstyle z\in Z}by definition of the supremum being an upper bound. Thus, for allz∈Z{\displaystyle z\in Z}andw∈W{\displaystyle w\in W},g(z)≤f(z,w)≤supz∈Zf(z,w){\displaystyle g(z)\leq f(z,w)\leq \sup _{z\in Z}f(z,w)}makingh(w)≜supz∈Zf(z,w){\displaystyle h(w)\triangleq \sup _{z\in Z}f(z,w)}an upper bound ong(z){\displaystyle g(z)}for any choice ofw∈W{\displaystyle w\in W}. Because the supremum is the least upper bound,supz∈Zg(z)≤h(w){\displaystyle \sup _{z\in Z}g(z)\leq h(w)}holds for allw∈W{\displaystyle w\in W}. From this inequality, we also see thatsupz∈Zg(z){\displaystyle \sup _{z\in Z}g(z)}is a lower bound onh(w){\displaystyle h(w)}. By the greatest lower bound property of infimum,supz∈Zg(z)≤infw∈Wh(w){\displaystyle \sup _{z\in Z}g(z)\leq \inf _{w\in W}h(w)}. Putting all the pieces together, we get
supz∈Zinfw∈Wf(z,w)=supz∈Zg(z)≤infw∈Wh(w)=infw∈Wsupz∈Zf(z,w){\displaystyle \sup _{z\in Z}\inf _{w\in W}f(z,w)=\sup _{z\in Z}g(z)\leq \inf _{w\in W}h(w)=\inf _{w\in W}\sup _{z\in Z}f(z,w)}
which proves the desired inequality.◼{\displaystyle \blacksquare }
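The inequality can be illustrated numerically with the example function sin(z + w) mentioned above, for which it is strict: for each fixed z the infimum over a full period of w is −1, while for each fixed w the supremum over z is +1.

```python
import math

# Approximate sup-inf and inf-sup for f(z, w) = sin(z + w) on a grid over
# [0, 2*pi] x [0, 2*pi]; the grid makes both values accurate to ~1e-4.
grid = [2 * math.pi * i / 400 for i in range(401)]
f = lambda z, w: math.sin(z + w)

sup_inf = max(min(f(z, w) for w in grid) for z in grid)  # ~ -1
inf_sup = min(max(f(z, w) for z in grid) for w in grid)  # ~ +1
assert sup_inf <= inf_sup               # the max-min inequality
assert sup_inf < -0.99 and inf_sup > 0.99  # and here it is strict
```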
|
https://en.wikipedia.org/wiki/Max%E2%80%93min_inequality
|
Themountain pass theoremis anexistence theoremfrom thecalculus of variations, originally due toAntonio AmbrosettiandPaul Rabinowitz.[1][2]Given certain conditions on a function, the theorem demonstrates the existence of asaddle point. The theorem is unusual in that there are many other theorems regarding the existence ofextrema, but few regarding saddle points.
The assumptions of the theorem are: {\displaystyle I\in C^{1}(H;\mathbb {R} )} for a real Hilbert space H, with I satisfying the Palais–Smale compactness condition; {\displaystyle I[0]=0}; there exist constants r, a > 0 such that {\displaystyle I[u]\geq a} whenever {\displaystyle \Vert u\Vert =r}; and there exists some {\displaystyle v\in H} with {\displaystyle \Vert v\Vert >r} such that {\displaystyle I[v]\leq 0}.
If we define:{\displaystyle \Gamma =\{\mathbf {g} \in C([0,1];H)\mid \mathbf {g} (0)=0,\ \mathbf {g} (1)=v\}}
and:{\displaystyle c=\inf _{\mathbf {g} \in \Gamma }\max _{0\leq t\leq 1}I[\mathbf {g} (t)],}
then the conclusion of the theorem is thatcis a critical value ofI.
The intuition behind the theorem is in the name "mountain pass." ConsiderIas describing elevation. Then we know two low spots in the landscape: the origin becauseI[0]=0{\displaystyle I[0]=0}, and a far-off spotvwhereI[v]≤0{\displaystyle I[v]\leq 0}. In between the two lies a range of mountains (at‖u‖=r{\displaystyle \Vert u\Vert =r}) where the elevation is high (higher thana>0). In order to travel along a pathgfrom the origin tov, we must pass over the mountains—that is, we must go up and then down. SinceIis somewhat smooth, there must be a critical point somewhere in between. (Think along the lines of themean-value theorem.) The mountain pass lies along the path that passes at the lowest elevation through the mountains. Note that this mountain pass is almost always asaddle point.
For a proof, see section 8.5 of Evans.
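The "lowest pass over the mountains" intuition can be made concrete on a discretized landscape. The sketch below uses an illustrative radially symmetric example (not taken from Evans), I(x, y) = r² − r⁴/4 with r² = x² + y², which has I(0) = 0, a ring of mountains of height 1 at r = √2, and I ≤ 0 for r ≥ 2; a Dijkstra-style bottleneck search over grid paths then recovers c ≈ 1:

```python
import heapq
import math

def I(x, y):
    # Illustrative landscape: I(0,0) = 0, barrier of height 1 on the
    # circle x^2 + y^2 = 2, and I <= 0 once x^2 + y^2 >= 4.
    r2 = x * x + y * y
    return r2 - r2 * r2 / 4

def mountain_pass_value(h=0.1, extent=3.0):
    # Bottleneck ("minimax") Dijkstra on a grid: the cost of a path is the
    # maximum of I over its nodes; c is the smallest achievable maximum.
    n = int(round(extent / h))
    start, goal = (0, 0), (n, 0)      # from the origin to v = (3, 0), I(v) < 0
    best = {start: I(0.0, 0.0)}
    heap = [(best[start], start)]
    while heap:
        c, (i, j) = heapq.heappop(heap)
        if (i, j) == goal:
            return c
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if abs(ni) > n or abs(nj) > n:
                continue
            nc = max(c, I(ni * h, nj * h))
            if nc < best.get((ni, nj), math.inf):
                best[(ni, nj)] = nc
                heapq.heappush(heap, (nc, (ni, nj)))

c = mountain_pass_value()
assert abs(c - 1.0) < 0.05  # every path must climb to roughly the pass height 1
```

Because the landscape is radially symmetric, every path from the origin to v must cross the ring of mountains, so the computed bottleneck value approximates the critical value c of the theorem.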
Let {\displaystyle X} be a Banach space. The assumptions of the theorem are: {\displaystyle \Phi \in C(X;\mathbb {R} )} has a Gateaux derivative {\displaystyle \Phi '\colon X\to X^{*}} which is continuous from the norm topology of X to the weak* topology of X*, and Φ satisfies the Palais–Smale condition; and there exist {\displaystyle r>0} and {\displaystyle x'\in X} with {\displaystyle \Vert x'\Vert >r} such that {\displaystyle m(r)\triangleq \inf _{\Vert x\Vert =r}\Phi (x)>\max\{\Phi (0),\Phi (x')\}.}
In this case there is acritical pointx¯∈X{\displaystyle {\overline {x}}\in X}ofΦ{\displaystyle \Phi }satisfyingm(r)≤Φ(x¯){\displaystyle m(r)\leq \Phi ({\overline {x}})}. Moreover, if we define{\displaystyle c=\inf _{\mathbf {g} \in \Gamma }\max _{0\leq t\leq 1}\Phi (\mathbf {g} (t)),\qquad \Gamma =\{\mathbf {g} \in C([0,1];X)\mid \mathbf {g} (0)=0,\ \mathbf {g} (1)=x'\},}
then {\displaystyle m(r)\leq c} and c is a critical value of {\displaystyle \Phi }.
For a proof, see section 5.5 of Aubin and Ekeland.
|
https://en.wikipedia.org/wiki/Mountain_pass_theorem
|