Nanoparticle tracking analysis (NTA) is a method for visualizing and analyzing particles in liquids that relates the rate of Brownian motion to particle size. The rate of movement is related only to the viscosity and temperature of the liquid; it is not influenced by particle density or refractive index. NTA allows the determination of a size distribution profile of small particles with a diameter of approximately 10–1000 nm in liquid suspension.
The technique is used in conjunction with an ultramicroscope and a laser illumination unit that together allow small particles in liquid suspension to be visualized moving under Brownian motion. The light scattered by the particles is captured using a CCD or EMCCD camera over multiple frames. Computer software is then used to track the motion of each particle from frame to frame. The rate of particle movement is related to a sphere-equivalent hydrodynamic radius as calculated through the Stokes–Einstein equation. The technique calculates particle size on a particle-by-particle basis, overcoming inherent weaknesses in ensemble techniques such as dynamic light scattering.[1] Since video clips form the basis of the analysis, accurate characterization of real-time events such as aggregation and dissolution is possible. Samples require minimal preparation, minimizing the time required to process each sample. It has been suggested that the analysis may eventually be done in real time with no preparation, e.g. when detecting the presence of airborne viruses or biological weapons.
NTA currently operates for particles from about 10 to 1000 nm in diameter, depending on particle type. Analysis of particles at the lowest end of this range is possible only for particles composed of materials with a high refractive index, such as gold and silver. The upper size limit is restricted by the limited Brownian motion of large particles; because a large particle moves very slowly, accuracy is diminished. The viscosity of the solvent also influences the movement of particles, and it, too, plays a part in determining the upper size limit for a specific system.
NTA has been used by commercial, academic, and government laboratories working with nanoparticle toxicology, drug delivery, exosomes, microvesicles, bacterial membrane vesicles, and other small biological particles, virology and vaccine production, ecotoxicology, protein aggregation, orthopedic implants, inks and pigments, and nanobubbles.[citation needed]
Interferometric nanoparticle tracking analysis (iNTA) is the next generation of NTA technology. It is based on interferometric scattering microscopy (iSCAT), which enhances the signal of weak scatterers. In contrast to NTA, iNTA has a superior resolution based on a two-parameter analysis, including the size and the scattering cross-section of the particle.[2]
Both dynamic light scattering (DLS) and nanoparticle tracking analysis (NTA) measure the Brownian motion of nanoparticles whose speed of motion, or diffusion constant, $D_t$, is related to particle size through the Stokes–Einstein equation
$D_t = \frac{k_B T}{3\pi\eta d},$

where $k_B$ is the Boltzmann constant, $T$ is the absolute temperature, $\eta$ is the solvent viscosity, and $d$ is the sphere-equivalent hydrodynamic diameter.
In NTA this motion is analyzed by video – individual particle positional changes are tracked in two dimensions, from which the particle diffusion is determined. Knowing $D_t$, the particle hydrodynamic diameter can then be determined.
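This last step, converting a measured diffusion coefficient into a sphere-equivalent hydrodynamic diameter via the Stokes–Einstein relation, can be sketched in a few lines. The temperature and water-viscosity values below are illustrative assumptions, not values from the text:

```python
import math

def hydrodynamic_diameter(D, T=298.15, eta=8.9e-4):
    """Sphere-equivalent hydrodynamic diameter (m) from a measured diffusion
    coefficient D (m^2/s) via the Stokes-Einstein relation d = k_B*T / (3*pi*eta*D)."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return k_B * T / (3.0 * math.pi * eta * D)

# Example: D = 4.3e-12 m^2/s in water at 25 C corresponds to roughly 110 nm.
print(hydrodynamic_diameter(4.3e-12))
```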
In contrast, DLS does not visualize the particles individually but analyzes, using a digital correlator, the time-dependent scattering intensity fluctuations. These fluctuations are caused by interference effects arising from the relative Brownian movements of an ensemble of a large number of particles within a sample. Through analysis of the resultant exponential autocorrelation function, average particle size can be calculated as well as a polydispersity index. For multi-exponential autocorrelation functions arising from polydisperse samples, deconvolution can give limited information about the particle size distribution profile.
NTA and related technologies were developed by Bob Carr.[3] Along with John Knowles, Carr founded NanoSight Ltd in 2003. This United Kingdom-based company, of which Knowles is the chairman and Carr is the chief technology officer, manufactures instruments that use NTA to detect and analyze small particles in industrial and academic laboratories.[4] In 2004 Particle Metrix GmbH was founded in Germany by Hanno Wachernig. Particle Metrix makes the ZetaView, which operates on the same NTA principle but uses different optics and fluidics in an attempt to improve sampling, zeta potential, and fluorescence detection.
|
https://en.wikipedia.org/wiki/Nanoparticle_tracking_analysis
|
The narrow escape problem[1][2] is a ubiquitous problem in biology, biophysics and cellular biology.
The mathematical formulation is the following: a Brownian particle (ion, molecule, or protein) is confined to a bounded domain (a compartment or a cell) by a reflecting boundary, except for a small window through which it can escape. The narrow escape problem is that of calculating the mean escape time. This time diverges as the window shrinks, thus rendering the calculation a singular perturbation problem.[3][4][5][6][7][8][9]
When escape is even more stringent due to severe geometrical restrictions at the place of escape, the narrow escape problem becomes the dire strait problem.[10][11]
The narrow escape problem was proposed in the context of biology and biophysics by D. Holcman and Z. Schuss,[12] and later on with A. Singer; it led to the narrow escape theory in applied mathematics and computational biology.[13][14][15]
The motion of a particle is described by the Smoluchowski limit of the Langevin equation:[16][17]

$dX_t = \sqrt{2D}\,dB_t + \frac{1}{\gamma}F(x)\,dt,$

where D is the diffusion coefficient of the particle, γ is the friction coefficient per unit of mass, F(x) the force per unit of mass, and $B_t$ is a Brownian motion.
A common question is to estimate the mean sojourn time of a particle diffusing in a bounded domain Ω before it escapes through a small absorbing window ∂Ω_a in its boundary ∂Ω. The time is estimated asymptotically in the limit $\varepsilon = \frac{|\partial\Omega_a|}{|\partial\Omega|} \ll 1$.
The probability density function (pdf) $p_\varepsilon(x,t)$ is the probability of finding the particle at position x at time t.
The pdf satisfies the Fokker–Planck equation:

$\frac{\partial}{\partial t}p_\varepsilon(x,t) = D\Delta p_\varepsilon(x,t) - \frac{1}{\gamma}\nabla\bigl(p_\varepsilon(x,t)F(x)\bigr)$

with initial condition $p_\varepsilon(x,0) = \rho_0(x)$ and mixed Dirichlet–Neumann boundary conditions (t > 0):

$p_\varepsilon(x,t) = 0 \ \text{ for } x\in\partial\Omega_a$

$D\frac{\partial}{\partial n}p_\varepsilon(x,t) - \frac{p_\varepsilon(x,t)}{\gamma}F(x)\cdot n(x) = 0 \ \text{ for } x\in\partial\Omega - \partial\Omega_a.$
The function $u_\varepsilon(y) = \int_\Omega\int_0^\infty p_\varepsilon(x,t\mid y)\,dt\,dx$ represents the mean sojourn time of the particle, conditioned on the initial position y. It is the solution of the boundary value problem
$D\Delta u_\varepsilon(y) + \frac{1}{\gamma}F(y)\cdot\nabla u_\varepsilon(y) = -1$

$u_\varepsilon(y) = 0 \ \text{ for } y\in\partial\Omega_a$

$\frac{\partial u_\varepsilon(y)}{\partial n} = 0 \ \text{ for } y\in\partial\Omega_r.$
The solution depends on the dimension of the domain. For a particle diffusing on a two-dimensional disk,

$u_\varepsilon(y) = \frac{A}{\pi D}\ln\frac{1}{\varepsilon} + O(1),$

where A is the area of the domain. The function $u_\varepsilon(y)$ does not depend on the initial position y, except for a small boundary layer near the absorbing boundary, due to the asymptotic form.
The first-order term matters in dimension 2: for a circular disk of radius R, the mean escape time of a particle starting at the center is

$E(\tau\mid x(0)=0) = \frac{R^2}{D}\left(\log\frac{1}{\varepsilon} + \log 2 + \frac{1}{4} + O(\varepsilon)\right).$
The escape time averaged with respect to a uniform initial distribution of the particle is given by

$E(\tau) = \frac{R^2}{D}\left(\log\frac{1}{\varepsilon} + \log 2 + \frac{1}{8} + O(\varepsilon)\right).$
The geometry of the small opening can affect the escape time: if the absorbing window is located at a corner of angle α, then:
$E\tau = \frac{|\Omega|}{\alpha D}\left[\log\frac{1}{\varepsilon} + O(1)\right].$
More surprisingly, near a cusp in a two-dimensional domain, the escape time Eτ grows algebraically, rather than logarithmically: in the domain bounded between two tangent circles, the escape time is

$E\tau = \frac{|\Omega|}{(d-1)D}\left(\frac{1}{\varepsilon} + O(1)\right),$

where d > 1 is the ratio of the radii. Finally, when the domain is an annulus, the escape time to a small opening located on the inner circle involves a second parameter, $\beta = \frac{R_1}{R_2} < 1$, the ratio of the inner to the outer radii; the escape time, averaged with respect to a uniform initial distribution, is

$E\tau = \frac{R_2^2 - R_1^2}{D}\left[\log\frac{1}{\varepsilon} + \log 2 + 2\beta^2\right] + \frac{1}{2}\frac{R_2^2}{1-\beta^2}\log\frac{1}{\beta} - \frac{1}{4}R_2^2 + O(\varepsilon,\beta^4)R_2^2.$
This equation contains the first two terms of the asymptotic expansion of Eτ, where 2ε is the angle of the absorbing boundary. The case of β close to 1 remains open, and for general domains the asymptotic expansion of the escape time remains an open problem, as does the problem of computing the escape time near a cusp point in three-dimensional domains. For Brownian motion in a field of force F(x) ≠ 0, the gap in the spectrum between the first and the second eigenvalues is not necessarily small; depending on the relative size of the small hole and of the force barriers the particle has to overcome in order to escape, the escape stream is not necessarily Poissonian.
A theorem that relates the Brownian motion escape problem to a (deterministic) partial differential equation problem is the following.
Theorem — Let Ω be a bounded domain with smooth boundary ∂Ω and Γ be a closed subset of ∂Ω. For each x ∈ Ω, let τ_x be the first time a particle hits Γ, assuming that the particle starts from x, is subject to Brownian motion in Ω, and reflects from ∂Ω. Then the mean first passage time, $T(x) := \mathbb{E}[\tau_x]$, and its variance, $v(x) := \mathbb{E}[(\tau_x - T(x))^2]$, are solutions of the following boundary value problems:

$-\Delta T = 2 \ \text{ in } \Omega,\quad T = 0 \ \text{ on } \Gamma,\quad \partial_n T = 0 \ \text{ on } \partial\Omega\setminus\Gamma$

$-\Delta v = 2|\nabla T|^2 \ \text{ in } \Omega,\quad v = 0 \ \text{ on } \Gamma,\quad \partial_n v = 0 \ \text{ on } \partial\Omega\setminus\Gamma$
Here $\partial_n := n\cdot\nabla$ is the derivative in the direction n, the exterior normal to ∂Ω. Moreover, the average of the variance can be calculated from the formula

$\bar{v} := \frac{1}{|\Omega|}\int_\Omega v(x)\,dx = \frac{1}{|\Omega|}\int_\Omega T^2(x)\,dx =: \overline{T^2}.$
The first part of the theorem is a classical result, while the average variance was proved in 2011 by Carey Caginalp and Xinfu Chen.[18][19][20]
The escape time has been the subject of a number of studies using the small gate as an asymptotically small parameter. The following closed-form result[18][19][20] gives an exact solution that confirms these asymptotic formulae and extends them to gates that are not necessarily small.
Theorem (Carey Caginalp and Xinfu Chen closed formula) — In 2D, with points identified by complex numbers, let

$\Omega := \{re^{i\theta} \mid 0 \le r < 1,\ -\varepsilon \le \theta \le 2\pi - \varepsilon\},\qquad \Gamma := \{e^{i\theta} \mid |\theta| \le \varepsilon\}.$
Then the mean first passage time T(z), for $z\in\bar\Omega$, is given by

$T(z) = \frac{1-|z|^2}{2} + 2\log\left|\frac{1 - z + \sqrt{(1-ze^{-i\varepsilon})(1-ze^{i\varepsilon})}}{2\sin\frac{\varepsilon}{2}}\right|.$
Another set of results concerns the probability density of the location of exit.[19]
Theorem (Carey Caginalp and Xinfu Chen probability density) — The probability density of the location of a particle at the time of its exit is given by

$\bar{j}(e^{i\theta}) := -\frac{1}{2\pi}\frac{\partial}{\partial r}T(e^{i\theta}) = \begin{cases} 0, & \text{if } \varepsilon < \theta < 2\pi - \varepsilon \\[1ex] \dfrac{1}{2\pi}\dfrac{\cos\frac{\theta}{2}}{\sqrt{\sin^2\frac{\varepsilon}{2} - \sin^2\frac{\theta}{2}}}, & \text{if } |\theta| < \varepsilon \end{cases}$
That is, for any (Borel set) γ ⊂ ∂Ω, the probability that a particle, starting either at the origin or uniformly distributed in Ω, exhibiting Brownian motion in Ω, reflecting when it hits ∂Ω∖Γ, and escaping once it hits Γ, ends up escaping from γ is

$P(\gamma) = \int_\gamma \bar{j}(y)\,dS_y,$

where $dS_y$ is the surface element of ∂Ω at y ∈ ∂Ω.
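The two closed formulas above are straightforward to evaluate numerically. A minimal Python sketch (not taken from the cited papers; the gate half-angle is an illustrative assumption) that evaluates T at the centre and checks that the exit density integrates to one over the gate:

```python
import cmath
import math
import numpy as np

def mean_exit_time(z, eps):
    """Caginalp-Chen closed formula for T(z) in the unit disk with absorbing
    gate {e^{i*theta} : |theta| <= eps} (normalization -Delta T = 2)."""
    num = 1 - z + cmath.sqrt((1 - z * cmath.exp(-1j * eps)) * (1 - z * cmath.exp(1j * eps)))
    return (1 - abs(z) ** 2) / 2 + 2 * math.log(abs(num) / (2 * math.sin(eps / 2)))

def exit_density(theta, eps):
    """Probability density of the exit location on the gate (|theta| < eps)."""
    return (np.cos(theta / 2) / (2 * np.pi)
            / np.sqrt(np.sin(eps / 2) ** 2 - np.sin(theta / 2) ** 2))

eps = 0.1
print(mean_exit_time(0.0, eps))   # equals 1/2 + 2*log(1/sin(eps/2)) at the centre

# Midpoint-rule check that the exit density integrates to 1 over the gate.
n = 200000
theta = -eps + (np.arange(n) + 0.5) * (2 * eps / n)
print(np.sum(exit_density(theta, eps)) * (2 * eps / n))
```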
In simulation there is a random error due to the statistical sampling process. This error can be limited by appealing to thecentral limit theoremand using a large number of samples. There is also a discretization error due to the finite size approximation of the step size in approximating the Brownian motion. One can then obtain empirical results as step size and gate size vary. Using the exact result quoted above for the particular case of the circle, it is possible to make a careful comparison of the exact solution with the numerical solution.[21][22]This illuminates the distinction between finite steps and continuous diffusion. A distribution of exit locations was also obtained through simulations for this problem.
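A crude Monte Carlo version of such a simulation can be sketched as follows. The time step, the reflection rule (inversion in the circle) and the sample count are illustrative assumptions, and the discretization bias discussed above is visible for coarse steps:

```python
import numpy as np

rng = np.random.default_rng(0)

def escape_time(eps, dt=1e-3, max_steps=10**6):
    """One sample of the exit time of a standard Brownian particle started at the
    origin of the unit disk, reflected at the boundary except on the arc |theta| <= eps."""
    x = np.zeros(2)
    for step in range(1, max_steps + 1):
        x = x + np.sqrt(dt) * rng.standard_normal(2)
        r = np.hypot(x[0], x[1])
        if r >= 1.0:
            if abs(np.arctan2(x[1], x[0])) <= eps:
                return step * dt              # absorbed at the gate
            x = x / r**2                      # crude reflection: inversion in the circle
    return np.nan

eps = 0.2
samples = [escape_time(eps) for _ in range(200)]
print(np.nanmean(samples))    # compare with T(0) = 1/2 + 2*log(1/sin(eps/2)) ~ 5.1
```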
The forward rate of chemical reactions is the reciprocal of the narrow escape time, which generalizes the classical Smoluchowski formula for Brownian particles located in an infinite medium. A Markov description can be used to estimate the binding and unbinding to a small number of sites.[23]
|
https://en.wikipedia.org/wiki/Narrow_escape_problem
|
Osmosis (/ɒzˈmoʊsɪs/, US also /ɒs-/)[1] is the spontaneous net movement or diffusion of solvent molecules through a selectively permeable membrane from a region of high water potential (region of lower solute concentration) to a region of low water potential (region of higher solute concentration),[2] in the direction that tends to equalize the solute concentrations on the two sides.[3][4][5] It may also be used to describe a physical process in which any solvent moves across a selectively permeable membrane (permeable to the solvent, but not the solute) separating two solutions of different concentrations.[6][7] Osmosis can be made to do work.[8] Osmotic pressure is defined as the external pressure required to prevent net movement of solvent across the membrane. Osmotic pressure is a colligative property, meaning that the osmotic pressure depends on the molar concentration of the solute but not on its identity.
Osmosis is a vital process in biological systems, as biological membranes are semipermeable. In general, these membranes are impermeable to large and polar molecules, such as ions, proteins, and polysaccharides, while being permeable to non-polar or hydrophobic molecules like lipids as well as to small molecules like oxygen, carbon dioxide, nitrogen, and nitric oxide. Permeability depends on solubility, charge, or chemistry, as well as solute size. Water molecules travel through the plasma membrane, tonoplast membrane (vacuole) or organelle membranes by diffusing across the phospholipid bilayer via aquaporins (small transmembrane proteins similar to those responsible for facilitated diffusion and ion channels). Osmosis provides the primary means by which water is transported into and out of cells. The turgor pressure of a cell is largely maintained by osmosis across the cell membrane between the cell interior and its relatively hypotonic environment.
Some kinds of osmotic flow have been observed since ancient times, e.g., in the construction of Egyptian pyramids.[9] Jean-Antoine Nollet first documented observation of osmosis in 1748.[10][a] The word "osmosis" descends from the words "endosmose" and "exosmose", which were coined by French physician René Joachim Henri Dutrochet (1776–1847) from the Greek words ἔνδον (éndon, "within"), ἔξω (éxō, "outer, external"), and ὠσμός (ōsmós, "push, impulsion").[n 1] In 1867, Moritz Traube invented highly selective precipitation membranes, advancing the art and technique of measurement of osmotic flow.[9]
Osmosis is the movement of a solvent across a semipermeable membrane toward a higher concentration of solute. In biological systems, the solvent is typically water, but osmosis can occur in other liquids, supercritical liquids, and even gases.[11][12]
When a cell is submerged in water, the water molecules pass through the cell membrane from an area of low solute concentration to high solute concentration. For example, if the cell is submerged in saltwater, water molecules move out of the cell. If a cell is submerged in freshwater, water molecules move into the cell.
When the membrane has a volume of pure water on both sides, water molecules pass in and out in each direction at exactly the same rate. There is no net flow of water through the membrane.
Osmosis can be demonstrated when potato slices are added to a high salt solution. The water from inside the potato moves out to the solution, causing the potato to shrink and to lose its 'turgor pressure'. The more concentrated the salt solution, the bigger the loss in size and weight of the potato slice.
Chemical gardens demonstrate the effect of osmosis in inorganic chemistry.
The mechanism responsible for driving osmosis has commonly been represented in biology and chemistry texts as either the dilution of water by solute (resulting in lower concentration of water on the higher solute concentration side of the membrane and therefore a diffusion of water along a concentration gradient) or by a solute's attraction to water (resulting in less free water on the higher solute concentration side of the membrane and therefore net movement of water toward the solute). Both of these notions have been conclusively refuted.
The diffusion model of osmosis is rendered untenable by the fact that osmosis can drive water across a membrane toward a higher concentration of water.[13]The "bound water" model is refuted by the fact that osmosis is independent of the size of the solute molecules—a colligative property[14]—or how hydrophilic they are.
It is difficult to describe osmosis without a mechanical or thermodynamic explanation, but essentially there is an interaction between the solute and water that counteracts the pressure that otherwise free solute molecules would exert. One fact to take note of is that heat from the surroundings is able to be converted into mechanical energy (water rising).
Many thermodynamic explanations go into the concept of chemical potential and how the function of the water on the solution side differs from that of pure water due to the higher pressure and the presence of the solute counteracting such that the chemical potential remains unchanged. The virial theorem demonstrates that attraction between the molecules (water and solute) reduces the pressure, and thus the pressure exerted by water molecules on each other in solution is less than in pure water, allowing pure water to "force" the solution until the pressure reaches equilibrium.[14]
Osmotic pressure is the main agent of support in many plants. The osmotic entry of water raises the turgor pressure exerted against the cell wall, until it equals the osmotic pressure, creating a steady state.[15]
When a plant cell is placed in a solution that is hypertonic relative to the cytoplasm, water moves out of the cell and the cell shrinks. In doing so, the cell becomes flaccid. In extreme cases, the cell becomes plasmolyzed – the cell membrane disengages from the cell wall due to lack of water pressure on it.[16]
When a plant cell is placed in a solution that is hypotonic relative to the cytoplasm, water moves into the cell and the cell swells to become turgid.[15]
Osmosis also plays a vital role in human cells by facilitating the movement of water across cell membranes. This process is crucial for maintaining proper cell hydration, as cells can be sensitive to dehydration or overhydration. In human cells, osmosis is essential for maintaining the balance of water and solutes, ensuring optimal cellular function. Imbalances in osmotic pressure can lead to cellular dysfunction, highlighting the importance of osmosis in sustaining the health and integrity of human cells.[citation needed]
In certain environments, osmosis can be harmful to organisms. Freshwater and saltwater aquarium fish, for example, will quickly die should they be placed in water of a maladaptive salinity. The osmotic effect of table salt to kill leeches and slugs is another example of a way osmosis can cause harm to organisms.[16]
Suppose an animal or plant cell is placed in a solution of sugar or salt in water.
This means that if a cell is put in a solution which has a solute concentration higher than its own, it will shrivel, and if it is put in a solution with a lower solute concentration than its own, the cell will swell and may even burst.[citation needed]
Osmosis may be opposed by increasing the pressure in the region of high solute concentration with respect to that in the low solute concentration region. The force per unit area, or pressure, required to prevent the passage of water (or any other high-liquidity solution) through a selectively permeable membrane and into a solution of greater concentration is equivalent to the osmotic pressure of the solution, or turgor. Osmotic pressure is a colligative property, meaning that the property depends on the concentration of the solute, but not on its content or chemical identity.
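Because osmotic pressure is colligative, a dilute solution's osmotic pressure can be estimated from its molar concentration alone using the standard van 't Hoff relation Π = iMRT. The relation and the numbers below are a minimal illustrative sketch, not values from the text above:

```python
def osmotic_pressure(molarity, temperature_K=298.15, vant_hoff_i=1.0):
    """Dilute-solution (van 't Hoff) estimate of osmotic pressure in pascals:
    Pi = i * M * R * T, with M converted from mol/L to mol/m^3."""
    R = 8.314  # gas constant, J/(mol*K)
    return vant_hoff_i * molarity * 1000.0 * R * temperature_K

# 0.15 mol/L NaCl (i ~ 2 when fully dissociated) at body temperature:
print(osmotic_pressure(0.15, temperature_K=310.15, vant_hoff_i=2.0) / 1e5, "bar")
```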
The osmotic gradient is the difference in concentration between two solutions on either side of a semipermeable membrane, and is used to tell the difference in percentages of the concentration of a specific particle dissolved in a solution.
Usually the osmotic gradient is used while comparing solutions that have a semipermeable membrane between them, allowing water to diffuse between the two solutions toward the hypertonic solution (the solution with the higher concentration). Eventually, the force of the column of water on the hypertonic side of the semipermeable membrane will equal the force of diffusion on the hypotonic side (the side with a lesser concentration), creating equilibrium. When equilibrium is reached, water continues to flow, but it flows both ways in equal amounts as well as force, therefore stabilizing the solution.
Reverse osmosis is a separation process that uses pressure to force a solvent through a semi-permeable membrane that retains the solute on one side and allows the pure solvent to pass to the other side, forcing it from a region of high solute concentration through a membrane to a region of low solute concentration by applying a pressure in excess of the osmotic pressure. This process is known primarily for its role in turning seawater into drinking water, when salt and other unwanted substances are removed from the water.[17]
Osmosis may be used directly to achieve separation of water from a solution containing unwanted solutes. A "draw" solution of higher osmotic pressure than the feed solution is used to induce a net flow of water through a semi-permeable membrane, such that the feed solution becomes concentrated as the draw solution becomes dilute. The diluted draw solution may then be used directly (as with an ingestible solute like glucose), or sent to a secondary separation process for the removal of the draw solute. This secondary separation can be more efficient than a reverse osmosis process would be alone, depending on the draw solute used and the feedwater treated. Forward osmosis is an area of ongoing research, focusing on applications in desalination, water purification, water treatment, food processing, and other areas of study.
Future developments in osmosis and osmosis research hold promise for a range of applications. Researchers are exploring advanced materials for more efficient osmotic processes, leading to improved water desalination and purification technologies. Additionally, the integration of osmotic power generation, where the osmotic pressure difference between saltwater and freshwater is harnessed for energy, presents a sustainable and renewable energy source with significant potential. Furthermore, the field of medical research is looking at innovative drug delivery systems that utilize osmotic principles, offering precise and controlled administration of medications within the body. As technology and understanding in this field continue to evolve, the applications of osmosis are expected to expand, addressing various global challenges in water sustainability, energy generation, and healthcare.[18]
Original text:Avant que de finir ce Mémoire, je crois devoir rendre compte d'un fait que je dois au hasard, & qui me parut d'abord … singulier … j'en avois rempli une fiole cylindrique, longue de cinq pouces, & d'un pouce de diamètre ou environ; & l'ayant couverte d'un morceau de vessie mouillée & ficelée au col du vaisseau, je l'avois plongée dans un grand vase plein d'eau, afin d'être sûr qu'il ne rentrât aucun air dans l'esprit de vin. Au bout de cinq ou six heures, je fus tout surpris de voir que la fiole étoit plus pleine qu'au moment de son immersion, quoiqu'elle le fût alors autant que ses bords pouvoient le permettre; la vessie qui lui servoit de bouchon, étoit devenue convexe & si tendue, qu’en la piquant avec une épingle, il en sortit un jet de liqueur qui s'éleva à plus d'un pied de hauteur.
Translation:Before finishing this memoir, I think I should report an event that I owe to chance and which at first seemed to me … strange … I filled [with alcohol] a cylindrical vial, five inches long and about one inch in diameter; and [after] having covered it with piece of damp bladder [which was] tied to the neck of the vial, I immersed it in a large bowl full of water, in order to be sure that no air re-entered the alcohol. At the end of 5 or 6 hours, I was very surprised to see that the vial was fuller than at the moment of its immersion, although it [had been filled] as far as its sides would allow; the bladder that served as its cap, bulged and had become so stretched that on pricking it with a needle, there came from it a jet of alcohol that rose more than a foot high.
|
https://en.wikipedia.org/wiki/Osmosis
|
In probability theory, the Schramm–Loewner evolution with parameter κ, also known as stochastic Loewner evolution (SLE_κ), is a family of random planar curves that have been proven to be the scaling limit of a variety of two-dimensional lattice models in statistical mechanics. Given a parameter κ and a domain U in the complex plane, it gives a family of random curves in U, with κ controlling how much the curve turns. There are two main variants of SLE, chordal SLE which gives a family of random curves from two fixed boundary points, and radial SLE, which gives a family of random curves from a fixed boundary point to a fixed interior point. These curves are defined to satisfy conformal invariance and a domain Markov property.
It was discovered by Oded Schramm (2000) as a conjectured scaling limit of the planar uniform spanning tree (UST) and the planar loop-erased random walk (LERW) probabilistic processes, and developed by him together with Greg Lawler and Wendelin Werner in a series of joint papers.
Besides UST and LERW, the Schramm–Loewner evolution is conjectured or proven to describe the scaling limit of various stochastic processes in the plane, such as critical percolation, the critical Ising model, the double-dimer model, self-avoiding walks, and other critical statistical mechanics models that exhibit conformal invariance. The SLE curves are the scaling limits of interfaces and other non-self-intersecting random curves in these models. The main idea is that the conformal invariance and a certain Markov property inherent in such stochastic processes together make it possible to encode these planar curves into a one-dimensional Brownian motion running on the boundary of the domain (the driving function in Loewner's differential equation). This way, many important questions about the planar models can be translated into exercises in Itô calculus. Indeed, several mathematically non-rigorous predictions made by physicists using conformal field theory have been proven using this strategy.
If D is a simply connected, open complex domain not equal to ℂ, and γ is a simple curve in D starting on the boundary (a continuous function with γ(0) on the boundary of D and γ((0,∞)) a subset of D), then for each t ≥ 0, the complement $D_t = D\smallsetminus\gamma([0,t])$ of γ([0,t]) is simply connected and therefore conformally isomorphic to D by the Riemann mapping theorem. If $f_t$ is a suitably normalized isomorphism from D to $D_t$, then it satisfies a differential equation found by Loewner (1923, p. 121) in his work on the Bieberbach conjecture.
Sometimes it is more convenient to use the inverse function $g_t$ of $f_t$, which is a conformal mapping from $D_t$ to D.
In Loewner's equation, z ∈ D, t ≥ 0, and the boundary values at time t = 0 are $f_0(z) = z$ or $g_0(z) = z$. The equation depends on a driving function ζ(t) taking values in the boundary of D. If D is the unit disk and the curve γ is parameterized by "capacity", then Loewner's equation is

$\frac{\partial f_t(z)}{\partial t} = -z f_t'(z)\,\frac{\zeta(t)+z}{\zeta(t)-z} \qquad\text{or}\qquad \frac{dg_t(z)}{dt} = g_t(z)\,\frac{\zeta(t)+g_t(z)}{\zeta(t)-g_t(z)}.$
When D is the upper half plane, the Loewner equation differs from this by changes of variable and is

$\frac{\partial f_t(z)}{\partial t} = \frac{2 f_t'(z)}{\zeta(t)-z} \qquad\text{or}\qquad \frac{dg_t(z)}{dt} = \frac{2}{g_t(z)-\zeta(t)}.$
The driving function ζ and the curve γ are related by

$\gamma(t) = f_t(\zeta(t)) \qquad\text{or}\qquad \zeta(t) = g_t(\gamma(t)),$

where $f_t$ and $g_t$ are extended by continuity.
Let D be the upper half plane and consider an SLE_0, so the driving function ζ is a Brownian motion of diffusivity zero. The function ζ is thus identically zero almost surely and

$f_t(z) = \sqrt{z^2 - 4t},\qquad g_t(z) = \sqrt{z^2 + 4t},\qquad \gamma(t) = 2i\sqrt{t},$

so the curve is a straight vertical segment growing from the origin.
Schramm–Loewner evolution is the random curve γ given by the Loewner equation as in the previous section, for the driving function

$\zeta(t) = \sqrt{\kappa}\,B(t),$
where B(t) is Brownian motion on the boundary of D, scaled by some real κ. In other words, Schramm–Loewner evolution is a probability measure on planar curves, given as the image of Wiener measure under this map.
In general the curve γ need not be simple, and the domain $D_t$ is not the complement of γ([0,t]) in D, but is instead the unbounded component of the complement.
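As a concrete illustration of this construction, the chordal case in the upper half plane can be simulated by discretizing the driving function ζ(t) = √κ B(t) and composing the inverse slit maps of the Loewner equation. The sketch below uses the standard piecewise-constant (vertical-slit) approximation; it is an illustrative assumption of a scheme, not the Matlab code referenced later in this article:

```python
import numpy as np

def sle_trace(kappa, T=1.0, n_steps=2000, seed=0):
    """Approximate a chordal SLE_kappa trace in the upper half plane by composing
    the inverse maps of the Loewner flow driven piecewise-constantly by
    xi(t) = sqrt(kappa) * B(t)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    xi = np.concatenate([[0.0],
                         np.cumsum(np.sqrt(kappa * dt) * rng.standard_normal(n_steps))])

    def upper_sqrt(w):
        # complex square root chosen in the closed upper half plane
        s = np.sqrt(w + 0j)
        return s if s.imag >= 0 else -s

    trace = []
    for n in range(1, n_steps + 1):
        w = xi[n] + 2j * np.sqrt(dt)       # tip after mapping out the latest slit
        for k in range(n - 1, 0, -1):      # pull back through the earlier slit maps
            w = xi[k] + upper_sqrt((w - xi[k]) ** 2 - 4 * dt)
        trace.append(w)
    return np.array(trace)

pts = sle_trace(kappa=2.0, n_steps=500)
print(pts[:5])
```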
There are two versions of SLE, using two families of curves, each depending on a non-negative real parameter κ: chordal SLE, which is related to curves connecting two points on the boundary of the domain, and radial SLE, which is related to curves joining a point on the boundary to a point in the interior.
SLE depends on a choice of Brownian motion on the boundary of the domain, and there are several variations depending on what sort of Brownian motion is used: for example it might start at a fixed point, or start at a uniformly distributed point on the unit circle, or might have a built-in drift, and so on. The parameter κ controls the rate of diffusion of the Brownian motion, and the behavior of SLE depends critically on its value.
The two domains most commonly used in Schramm–Loewner evolution are the upper half plane and the unit disk. Although the Loewner differential equation in these two cases looks different, they are equivalent up to changes of variables, as the unit disk and the upper half plane are conformally equivalent. However, a conformal equivalence between them does not preserve the Brownian motion on their boundaries used to drive Schramm–Loewner evolution.
When SLE corresponds to some conformal field theory, the parameter κ is related to the central charge c of the conformal field theory by

$c = \frac{(3\kappa - 8)(6 - \kappa)}{2\kappa}.$
Each value of c < 1 corresponds to two values of κ, one value κ between 0 and 4, and a "dual" value 16/κ greater than 4 (see Bauer & Bernard (2002a), Bauer & Bernard (2002b)).
Beffara (2008) showed that the Hausdorff dimension of the paths (with probability 1) is equal to min(2, 1 + κ/8).
The probability of the chordal SLE_κ curve γ being on the left of the fixed point $x_0 + iy_0 = z_0 \in \mathbb{H}$ was computed by Schramm (2001a):[1]

$P\left[\gamma \text{ passes to the left of } z_0\right] = \frac{1}{2} + \frac{\Gamma\!\left(\tfrac{4}{\kappa}\right)}{\sqrt{\pi}\,\Gamma\!\left(\tfrac{8-\kappa}{2\kappa}\right)}\,\frac{x_0}{y_0}\;{}_2F_1\!\left(\tfrac{1}{2},\tfrac{4}{\kappa};\tfrac{3}{2};-\tfrac{x_0^2}{y_0^2}\right),$
where Γ is the Gamma function and ${}_2F_1(a,b;c;z)$ is the hypergeometric function. This was derived by using the martingale property of
and Itô's lemma to obtain the following partial differential equation for $w := \tfrac{x}{y}$
For κ = 4, the RHS is $1 - \tfrac{1}{\pi}\arg(z_0)$, which was used in the construction of the harmonic explorer,[2] and for κ = 6, we obtain Cardy's formula, which was used by Smirnov to prove conformal invariance in percolation.[3]
Lawler, Schramm & Werner (2001b) used SLE_6 to prove the conjecture of Mandelbrot (1982) that the boundary of planar Brownian motion has fractal dimension 4/3.
Critical percolation on the triangular lattice was proved to be related to SLE_6 by Stanislav Smirnov.[4] Combined with earlier work of Harry Kesten,[5] this led to the determination of many of the critical exponents for percolation.[6] This breakthrough, in turn, allowed further analysis of many aspects of this model.[7][8]
Loop-erased random walk was shown to converge to SLE_2 by Lawler, Schramm and Werner.[9] This allowed derivation of many quantitative properties of loop-erased random walk (some of which were derived earlier by Richard Kenyon[10]). The related random Peano curve outlining the uniform spanning tree was shown to converge to SLE_8.[9]
Rohde and Schramm showed that κ is related to the fractal dimension of a curve by the following relation

$d = \min\left(2,\ 1 + \frac{\kappa}{8}\right).$
Computer programs (Matlab) are presented in this GitHub repository to simulate Schramm–Loewner evolution planar curves.
|
https://en.wikipedia.org/wiki/Schramm%E2%80%93Loewner_evolution
|
Single-particle trajectories (SPTs) consist of a collection of successive discrete points causal in time. These trajectories are acquired from images in experimental data. In the context of cell biology, the trajectories are obtained by the transient activation by a laser of small dyes attached to a moving molecule.
Molecules can now be visualized using recent super-resolution microscopy, which allows routine collection of thousands of short and long trajectories.[1] These trajectories explore part of a cell, either on the membrane or in 3 dimensions, and their paths are critically influenced by the local crowded organization and molecular interactions inside the cell,[2] as emphasized in various cell types such as neuronal cells,[3] astrocytes, immune cells and many others.
SPT allows observing moving particles. These trajectories are used to investigate cytoplasm or membrane organization,[4] but also cell nucleus dynamics, remodeler dynamics or mRNA production. Due to the constant improvement of instrumentation, the spatial resolution is continuously decreasing, now reaching values of approximately 20 nm, while the acquisition time step is usually in the range of 10 to 50 ms to capture short events occurring in live tissues. A variant of super-resolution microscopy called sptPALM is used to detect the local and dynamically changing organization of molecules in cells, or events of DNA binding by transcription factors in the mammalian nucleus. Super-resolution image acquisition and particle tracking are crucial to guarantee high-quality data.[5][6][7]
Once points are acquired, the next step is to reconstruct a trajectory. This step is done using known tracking algorithms to connect the acquired points.[8] Tracking algorithms are based on a physical model of trajectories perturbed by an additive random noise.
The redundancy of many short SPTs is a key feature for extracting biophysical parameters from empirical data at a molecular level.[9] In contrast, long isolated trajectories have been used to extract information along trajectories, destroying the natural spatial heterogeneity associated with the various positions. The main statistical tool is to compute the mean-square displacement (MSD) or second-order statistical moment:

$\mathrm{MSD}(\Delta t) = \langle |X(t+\Delta t) - X(t)|^2 \rangle.$
For a Brownian motion, $\langle |X(t+\Delta t)-X(t)|^2\rangle = 2nD\,\Delta t$, where D is the diffusion coefficient and n is the dimension of the space. Some other properties can also be recovered from long trajectories, such as the radius of confinement for a confined motion.[12] The MSD has been widely used in early applications of long but not necessarily redundant single-particle trajectories in a biological context. However, the MSD applied to long trajectories suffers from several issues. First, it is not precise, in part because the measured points can be correlated. Second, it cannot be used to compute any physical diffusion coefficient when trajectories consist of switching episodes, for example alternating between free and confined diffusion. At low spatiotemporal resolution of the observed trajectories, the MSD behaves sublinearly with time, a process known as anomalous diffusion, which is due in part to the averaging of the different phases of the particle motion. In the context of cellular transport (amoeboid), high-resolution motion analysis of long SPTs[13] in micro-fluidic chambers containing obstacles revealed different types of cell motion depending on the obstacle density: crawling was found at low obstacle density, and directed motion and random phases could even be differentiated.
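A minimal sketch of the ensemble- and time-averaged MSD estimator for 2D trajectories sampled at a fixed time step follows; the synthetic diffusion coefficient, time step and trajectory count are illustrative assumptions used only for the self-check:

```python
import numpy as np

def ensemble_msd(trajectories, max_lag):
    """Ensemble- and time-averaged mean-square displacement.
    trajectories: list of (N_i, 2) arrays of positions sampled every dt."""
    msd = np.zeros(max_lag)
    counts = np.zeros(max_lag)
    for traj in trajectories:
        for lag in range(1, max_lag + 1):
            if len(traj) > lag:
                disp = traj[lag:] - traj[:-lag]
                msd[lag - 1] += np.sum(np.sum(disp ** 2, axis=1))
                counts[lag - 1] += len(disp)
    return msd / counts

# Synthetic check: 2D Brownian trajectories with D = 0.05 um^2/s, dt = 0.02 s.
rng = np.random.default_rng(1)
D, dt = 0.05, 0.02
trajs = [np.cumsum(np.sqrt(2 * D * dt) * rng.standard_normal((100, 2)), axis=0)
         for _ in range(500)]
lags = np.arange(1, 11) * dt
msd = ensemble_msd(trajs, 10)
print(np.polyfit(lags, msd, 1)[0] / 4)   # slope/4 should be close to D (MSD = 4*D*dt in 2D)
```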
Statistical methods to extract information from SPTs are based on stochastic models, such as the Langevin equation or its Smoluchowski limit and associated models that account for additional localization noise or a memory kernel.[14] The Langevin equation describes a stochastic particle driven by a Brownian force Ξ and a field of force (e.g., electrostatic, mechanical, etc.) with an expression F(x,t):

$m\ddot{x} = -\Gamma\dot{x} + F(x,t) + \sqrt{2\varepsilon\Gamma}\,\Xi,$
where m is the mass of the particle and Γ = 6πaρ is the friction coefficient of a diffusing particle, with ρ the viscosity. Here Ξ is the δ-correlated Gaussian white noise. The force can be derived from a potential well U so that F(x,t) = −U′(x), and in that case the equation takes the form

$m\ddot{x} + \Gamma\dot{x} + U'(x) = \sqrt{2\varepsilon\Gamma}\,\Xi,$
where ε = k_B T is the energy, k_B the Boltzmann constant and T the temperature. Langevin's equation is used to describe trajectories where inertia or acceleration matters. For example, at very short timescales, when a molecule unbinds from a binding site or escapes from a potential well,[15] the inertia term allows the particle to move away from the attractor and thus prevents immediate rebinding that could plague numerical simulations.
In the large friction limit γ → ∞, the trajectories x(t) of the Langevin equation converge in probability to those of the Smoluchowski equation

$\gamma\dot{x} = F(x,t) + \sqrt{2\varepsilon\gamma}\,\dot{w}(t),$
where $\dot{w}(t)$ is δ-correlated. This equation is obtained when the diffusion coefficient is constant in space. When this is not the case, coarse-grained equations (at a coarse spatial resolution) should be derived from molecular considerations. The interpretation of the physical forces is not resolved by the Itô vs. Stratonovich integral representations or any others.
For a timescale much longer than the elementary molecular collision, the position of a tracked particle is described by a more general overdamped limit of the Langevin stochastic model. Indeed, if the acquisition timescale of empirically recorded trajectories is much longer than that of the thermal fluctuations, rapid events are not resolved in the data. Thus at this coarser spatiotemporal scale, the motion description is replaced by an effective stochastic equation

$\dot{X} = b(X) + B_e(X)\,\dot{w}(t),$
where b(X) is the drift field and $B_e$ the diffusion matrix. The effective diffusion tensor can vary in space, $D(X) = \tfrac{1}{2}B(X)B^{T}(X)$ (where $B^T$ denotes the transpose of B). This equation is not derived but assumed. However, the diffusion coefficient should be smooth enough, as any discontinuity in D should be resolved by a spatial scaling to analyse the source of the discontinuity (usually inert obstacles or transitions between two media). The observed effective diffusion tensor is not necessarily isotropic and can be state-dependent, whereas the friction coefficient γ remains constant as long as the medium stays the same and the microscopic diffusion coefficient (or tensor) could remain isotropic.
The development of statistical methods is based on stochastic models, possibly with a deconvolution procedure applied to the trajectories. Numerical simulations can also be used to identify specific features that could be extracted from single-particle trajectory data.[16] The goal of building a statistical ensemble from SPT data is to observe local physical properties of the particles, such as velocity, diffusion, confinement or attracting forces reflecting the interactions of the particles with their local nanometer-scale environments. It is possible to use stochastic modeling to construct, from the diffusion coefficient (or tensor), the confinement or local density of obstacles reflecting the presence of biological objects of different sizes.
Several empirical estimators have been proposed to recover the local diffusion coefficient, vector field and even organized patterns in the drift, such as potential wells.[17] Empirical estimators serving to recover physical properties are constructed from parametric and non-parametric statistics. Retrieving the statistical parameters of a diffusion process from one-dimensional time series uses the first-moment estimator or Bayesian inference.
The models and the analysis assume that processes are stationary, so that the statistical properties of trajectories do not change over time. In practice, this assumption is satisfied when trajectories are acquired for less than a minute, during which only few slow changes may occur on the surface of a neuron, for example. Non-stationary behavior is observed using a time-lapse analysis, with a delay of tens of minutes between successive acquisitions.
The coarse-grained model (the effective stochastic equation above) is recovered from the conditional moments of the trajectory by computing the increments ΔX = X(t+Δt) − X(t):

$a(x) = \lim_{\Delta t\to 0}\frac{E[\Delta X \mid X(t)=x]}{\Delta t},\qquad D(x) = \lim_{\Delta t\to 0}\frac{E[\Delta X\,\Delta X^{T} \mid X(t)=x]}{2\,\Delta t}.$
Here the notation $E[\cdot \mid X(t)=x]$ means averaging over all trajectories that are at point x at time t. The coefficients of the Smoluchowski equation can be statistically estimated at each point x from an infinitely large sample of trajectories in the neighborhood of the point x at time t.
In practice, the expectations for a and D are estimated by finite sample averages and Δt is the time resolution of the recorded trajectories. Formulas for a and D are approximated at the time step Δt, with tens to hundreds of points falling in each bin, which is usually enough for the estimation.
To estimate the local drift and diffusion coefficients, trajectories are first grouped within a small neighbourhood. The field of observation is partitioned into square bins $S(x_k, r)$ of side r and centre $x_k$, and the local drift and diffusion are estimated for each square. Considering a sample with $N_t$ trajectories $\{x^i(t_1),\dots,x^i(t_{N_s})\}$, where $t_j$ are the sampling times, the discretization of the equation for the drift $a(x_k) = (a_x(x_k), a_y(x_k))$ at position $x_k$ is given, for each spatial projection on the x and y axis, by the componentwise empirical average

$a(x_k) \approx \frac{1}{N_k}\sum_{i,j:\ x^{i}(t_j)\in S(x_k,r)} \frac{x^{i}(t_{j+1}) - x^{i}(t_j)}{\Delta t},$
where $N_k$ is the number of trajectory points that fall in the square $S(x_k, r)$. Similarly, the components of the effective diffusion tensor $D(x_k)$ are approximated by the empirical sums

$D^{mn}(x_k) \approx \frac{1}{N_k}\sum_{i,j:\ x^{i}(t_j)\in S(x_k,r)} \frac{\bigl(x^{i}_m(t_{j+1}) - x^{i}_m(t_j)\bigr)\bigl(x^{i}_n(t_{j+1}) - x^{i}_n(t_j)\bigr)}{2\,\Delta t},\qquad m,n\in\{x,y\}.$
The moment estimation requires a large number of trajectories passing through each point, which agrees precisely with the massive data generated by certain types of super-resolution acquisition such as sptPALM on biological samples. The exact inversion of Langevin's equation demands, in theory, an infinite number of trajectories passing through any point x of interest. In practice, the recovery of the drift and diffusion tensor is obtained after a region is subdivided by a square grid of radius r or by moving sliding windows (of the order of 50 to 100 nm).
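The binning procedure just described can be sketched directly. The grid size, domain extent and the synthetic drift/diffusion used for the self-test below are illustrative assumptions, not values from the text:

```python
import numpy as np

def bin_estimators(trajs, dt, r, extent):
    """Empirical drift a(x_k) and (scalar, isotropic) diffusion D(x_k) on a square
    grid of side r covering [-extent, extent]^2, from 2D trajectories sampled every dt."""
    n = int(2 * extent / r)
    sums_a = np.zeros((n, n, 2))
    sums_d = np.zeros((n, n))
    counts = np.zeros((n, n))
    for traj in trajs:
        dx = traj[1:] - traj[:-1]
        idx = np.floor((traj[:-1] + extent) / r).astype(int)
        ok = (idx >= 0).all(axis=1) & (idx < n).all(axis=1)
        for (i, j), step in zip(idx[ok], dx[ok]):
            sums_a[i, j] += step / dt
            sums_d[i, j] += (step ** 2).sum() / (4 * dt)   # 2D: <|dX|^2> = 4*D*dt
            counts[i, j] += 1
    with np.errstate(invalid="ignore", divide="ignore"):
        return sums_a / counts[..., None], sums_d / counts

# Synthetic Ornstein-Uhlenbeck-like data: drift -x, diffusion D = 0.1.
rng = np.random.default_rng(2)
dt, D = 0.01, 0.1
trajs = []
for _ in range(200):
    x = np.zeros((500, 2))
    for t in range(499):
        x[t + 1] = x[t] - x[t] * dt + np.sqrt(2 * D * dt) * rng.standard_normal(2)
    trajs.append(x)
a_hat, D_hat = bin_estimators(trajs, dt, r=0.25, extent=1.0)
print(np.nanmean(D_hat))   # should be close to 0.1
```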
Algorithms based on mapping the density of points extracted from trajectories make it possible to reveal local binding and trafficking interactions and the organization of dynamic subcellular sites. The algorithms can be applied to study regions of high density revealed by SPTs. Examples are organelles such as the endoplasmic reticulum or cell membranes. The method is based on spatiotemporal segmentation to detect local architecture and boundaries of high-density regions for domains measuring hundreds of nanometers.[18]
|
https://en.wikipedia.org/wiki/Single_particle_trajectories
|
Single-particle tracking (SPT) is the observation of the motion of individual particles within a medium. The coordinate time series, which can be either in two dimensions (x, y) or in three dimensions (x, y, z), is referred to as a trajectory. The trajectory is typically analyzed using statistical methods to extract information about the underlying dynamics of the particle.[1][2][3] These dynamics can reveal information about the type of transport being observed (e.g., thermal or active), the medium where the particle is moving, and interactions with other particles. In the case of random motion, trajectory analysis can be used to measure the diffusion coefficient.
In life sciences, single-particle tracking is broadly used to quantify the dynamics of molecules/proteins in live cells (of bacteria, yeast, mammalian cells and live Drosophila embryos).[4][5][6][7][8] It has been extensively used to study transcription factor dynamics in live cells,[9][10][11] and in the last decade to understand the target-search mechanism of proteins in live cells. It addresses fundamental biological questions such as: How does a protein of interest find its target in the complex cellular environment? How long does it take to find its target site for binding? What is the residence time of proteins binding to DNA?[5] Recently, SPT has been used to study the kinetics of protein translation and processing in vivo. For molecules which bind large structures such as ribosomes, SPT can be used to extract information about the binding kinetics. As ribosome binding increases the effective size of the smaller molecule, the diffusion rate decreases upon binding. By monitoring these changes in diffusion behavior, direct measurements of binding events are obtained.[12][13] Furthermore, exogenous particles are employed as probes to assess the mechanical properties of the medium, a technique known as passive microrheology.[14] This technique has been applied to investigate the motion of lipids and proteins within membranes,[15][16] molecules in the nucleus[8] and cytoplasm,[17] organelles and molecules therein,[18] lipid granules,[19][20][21] vesicles, and particles introduced in the cytoplasm or the nucleus. Additionally, single-particle tracking has been extensively used in the study of reconstituted lipid bilayers,[22] intermittent diffusion between 3D and either 2D (e.g., a membrane)[23] or 1D (e.g., a DNA polymer) phases, and synthetic entangled actin networks.[24][25]
The most common types of particles used in single-particle tracking are based either on scatterers, such as polystyrene beads or gold nanoparticles that can be tracked using bright-field illumination, or on fluorescent particles. For fluorescent tags, there are many different options with their own advantages and disadvantages, including quantum dots, fluorescent proteins, organic fluorophores, and cyanine dyes.
On a fundamental level, once the images are obtained, single-particle tracking is a two-step process. First the particles are detected, and then the localized particles are connected across frames in order to obtain individual trajectories.
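The linking step can be illustrated with a toy greedy nearest-neighbour linker; real trackers use global cost minimization, gap closing and motion models, and the maximum-displacement threshold below is an illustrative assumption:

```python
import numpy as np

def link_detections(frames, max_disp):
    """Greedy nearest-neighbour linking of per-frame detections into trajectories.
    frames: list of (N_t, 2) arrays of localized particle positions."""
    tracks = [[tuple(p)] for p in frames[0]]
    active = list(range(len(tracks)))
    for dets in frames[1:]:
        unused = list(range(len(dets)))
        next_active = []
        for ti in active:
            if not unused:
                continue
            last = np.array(tracks[ti][-1])
            d = np.linalg.norm(dets[unused] - last, axis=1)
            j = int(np.argmin(d))
            if d[j] <= max_disp:
                tracks[ti].append(tuple(dets[unused[j]]))
                next_active.append(ti)
                unused.pop(j)
        for j in unused:                      # unmatched detections start new tracks
            tracks.append([tuple(dets[j])])
            next_active.append(len(tracks) - 1)
        active = next_active
    return tracks

frames = [np.array([[0.0, 0.0], [5.0, 5.0]]),
          np.array([[0.1, 0.2], [5.2, 4.9]]),
          np.array([[0.3, 0.1], [5.1, 5.1], [9.0, 9.0]])]
print(link_detections(frames, max_disp=1.0))
```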
Besides performing particle tracking in 2D, there are several imaging modalities for 3D particle tracking, including multifocal plane microscopy,[26] double-helix point spread function microscopy,[27] and introducing astigmatism via a cylindrical lens or adaptive optics.
|
https://en.wikipedia.org/wiki/Single_particle_tracking
|
In computational fluid dynamics, the Stochastic Eulerian Lagrangian Method (SELM)[1] is an approach to capture essential features of fluid-structure interactions subject to thermal fluctuations while introducing approximations which facilitate analysis and the development of tractable numerical methods. SELM is a hybrid approach utilizing an Eulerian description for the continuum hydrodynamic fields and a Lagrangian description for elastic structures. Thermal fluctuations are introduced through stochastic driving fields. Approaches are also introduced for the stochastic fields of the SPDEs to obtain numerical methods that take numerical discretization artifacts into account in order to maintain statistical principles, such as fluctuation-dissipation balance and other properties in statistical mechanics.[1]
The SELM fluid-structure equations typically used are

$\rho\,\frac{\partial u}{\partial t} = \mu\,\Delta u - \nabla p + \Lambda[\Upsilon(V - \Gamma u)] + \lambda + f_{\mathrm{thm}}$

$m\,\frac{dV}{dt} = -\Upsilon(V - \Gamma u) - \nabla_X\Phi(X) + \xi + F_{\mathrm{thm}}$

$\frac{dX}{dt} = V.$
The pressure p is determined by the incompressibility condition for the fluid,

$\nabla\cdot u = 0.$
The Γ, Λ operators couple the Eulerian and Lagrangian degrees of freedom. The X, V denote the composite vectors of the full set of Lagrangian coordinates for the structures. The Φ is the potential energy for a configuration of the structures. The $f_{\mathrm{thm}}, F_{\mathrm{thm}}$ are stochastic driving fields accounting for thermal fluctuations. The λ, ξ are Lagrange multipliers imposing constraints, such as local rigid-body deformations. To ensure that dissipation occurs only through the Υ coupling, and not as a consequence of the interconversion by the operators Γ, Λ, the following adjoint conditions are imposed: Γ and Λ are adjoints of one another, $\Gamma = \Lambda^{\dagger}$.
Thermal fluctuations are introduced through Gaussian random fields with mean zero and the covariance structure
To obtain simplified descriptions and efficient numerical methods, approximations in various limiting physical regimes have been considered to remove dynamics on small time-scales or inertial degrees of freedom. In different limiting regimes, the SELM framework can be related to the immersed boundary method, accelerated Stokesian dynamics, and the arbitrary Lagrangian Eulerian method. The SELM approach has been shown to yield stochastic fluid-structure dynamics that are consistent with statistical mechanics. In particular, the SELM dynamics have been shown to satisfy detailed balance for the Gibbs–Boltzmann ensemble. Different types of coupling operators have also been introduced, allowing for descriptions of structures involving generalized coordinates and additional translational or rotational degrees of freedom. For numerically discretizing the SELM SPDEs, general methods were also introduced for deriving numerical stochastic fields for SPDEs that take discretization artifacts into account to maintain statistical principles, such as fluctuation-dissipation balance and other properties in statistical mechanics.[1]
SELM methods have been used for simulations of viscoelastic fluids and soft materials,[2] particle inclusions within curved fluid interfaces,[3][4] and other microscopic systems and engineered devices.[5][6][7]
|
https://en.wikipedia.org/wiki/Stochastic_Eulerian_Lagrangian_method
|
Stokesian dynamics[1] is a solution technique for the Langevin equation, which is the relevant form of Newton's 2nd law for a Brownian particle. The method treats the suspended particles in a discrete sense while the continuum approximation remains valid for the surrounding fluid, i.e., the suspended particles are generally assumed to be significantly larger than the molecules of the solvent. The particles then interact through hydrodynamic forces transmitted via the continuum fluid, and when the particle Reynolds number is small, these forces are determined through the linear Stokes equations (hence the name of the method). In addition, the method can also resolve non-hydrodynamic forces, such as Brownian forces, arising from the fluctuating motion of the fluid, and interparticle or external forces. Stokesian dynamics can thus be applied to a variety of problems, including sedimentation, diffusion and rheology, and it aims to provide the same level of understanding for multiphase particulate systems as molecular dynamics does for statistical properties of matter. For N rigid particles of radius a suspended in an incompressible Newtonian fluid of viscosity η and density ρ, the motion of the fluid is governed by the Navier–Stokes equations, while the motion of the particles is described by the coupled equation of motion:

$\mathbf{m}\,\frac{d\mathbf{U}}{dt} = \mathbf{F}^{\mathrm{H}} + \mathbf{F}^{\mathrm{B}} + \mathbf{F}^{\mathrm{P}}.$
In the above equation $\mathbf{U}$ is the particle translational/rotational velocity vector of dimension 6N. $\mathbf{F}^{\mathrm{H}}$ is the hydrodynamic force, i.e., the force exerted by the fluid on the particles due to relative motion between them. $\mathbf{F}^{\mathrm{B}}$ is the stochastic Brownian force due to thermal motion of fluid particles. $\mathbf{F}^{\mathrm{P}}$ is the deterministic nonhydrodynamic force, which may be almost any form of interparticle or external force, e.g. electrostatic repulsion between like-charged particles. Brownian dynamics is one of the popular techniques of solving the Langevin equation, but the hydrodynamic interaction in Brownian dynamics is highly simplified and normally includes only the isolated body resistance. On the other hand, Stokesian dynamics includes the many-body hydrodynamic interactions. Hydrodynamic interaction is very important for non-equilibrium suspensions, like a sheared suspension, where it plays a vital role in its microstructure and hence its properties. Stokesian dynamics is used primarily for non-equilibrium suspensions where it has been shown to provide results which agree with experiments.[2]
When the motion on the particle scale is such that the particle Reynolds number is small, the hydrodynamic force exerted on the particles in a suspension undergoing a bulk linear shear flow is:
FH=−RFU⋅(U−U∞)+RFE:E∞{\displaystyle \mathbf {F} ^{\mathrm {H} }=-\mathbf {R} _{\mathrm {FU} }\cdot (\mathbf {U} -\mathbf {U} ^{\infty })+\mathbf {R} _{\mathrm {FE} }:\mathbf {E} ^{\infty }}
Here,U∞{\displaystyle \mathbf {U} ^{\infty }}is the velocity of the bulk shear flow evaluated at the particle
center,E∞{\displaystyle \mathbf {E} ^{\infty }}is the symmetric part of the velocity-gradient tensor;RFU{\displaystyle \mathbf {R} _{\mathrm {FU} }}andRFE{\displaystyle \mathbf {R} _{\mathrm {FE} }}are the configuration-dependent resistance matrices that give the hydrodynamic force/torque on the particles due to their motion relative to the fluid (RFU{\displaystyle \mathbf {R} _{\mathrm {FU} }}) and due to the imposed shear flow (RFE{\displaystyle \mathbf {R} _{\mathrm {FE} }}). Note that the subscripts on the matrices indicate the coupling between kinematic (U{\displaystyle \mathbf {U} }) and dynamic (F{\displaystyle \mathbf {F} }) quantities.
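As a minimal numerical sketch of how a configuration-dependent resistance matrix turns forces into particle velocities in the inertialess limit, the snippet below solves the force balance F^H + F^P = 0 for U. It is an illustration only, not the full Stokesian dynamics algorithm (which constructs the many-body resistance matrix from far-field mobility inversion plus near-field lubrication terms); the function name, the toy single-sphere resistance matrix, and all numerical values are assumptions.

```python
import numpy as np

def particle_velocities(R_FU, R_FE_E, F_P, U_inf):
    """Inertialess force balance F^H + F^P = 0 with
    F^H = -R_FU (U - U_inf) + R_FE : E_inf (R_FE_E stands for R_FE : E_inf),
    solved for the particle velocities U."""
    return U_inf + np.linalg.solve(R_FU, F_P + R_FE_E)

# Toy case: a single sphere (6 translational/rotational degrees of freedom)
# with only its isolated Stokes drag, pulled by a unit force along x.
a, eta = 1.0, 1.0                                   # placeholder radius and viscosity
R_FU = np.diag([6 * np.pi * eta * a] * 3 + [8 * np.pi * eta * a**3] * 3)
F_P = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0])
U = particle_velocities(R_FU, np.zeros(6), F_P, np.zeros(6))
print(U[:3])                                        # ~ F / (6*pi*eta*a) along x
```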
One of the key features of Stokesian dynamics is its handling of the hydrodynamic interactions, which is fairly accurate without being computationally inhibitive (likeboundary integral methods) for a large number of particles. Classical Stokesian dynamics requiresO(N3){\displaystyle O(N^{3})}operations whereNis the number of particles in the system (usually a periodic box). Recent advances have reduced the computational cost to aboutO(N1.25logN).{\displaystyle O(N^{1.25}\,\log N).}[3][4]
The stochastic or Brownian forceFB{\displaystyle \mathbf {F} ^{\mathrm {B} }}arises from the thermal fluctuations in the fluid and is characterized by:
⟨FB⟩=0{\displaystyle \langle \mathbf {F} ^{\mathrm {B} }\rangle =0}
⟨FB(0)FB(t)⟩=2kTRFUδ(t){\displaystyle \langle \mathbf {F} ^{\mathrm {B} }(0)\mathbf {F} ^{\mathrm {B} }(t)\rangle =2kT\mathbf {R} _{\mathrm {FU} }\delta (t)}
The angle brackets denote an ensemble average,k{\displaystyle k}is the Boltzmann constant,T{\displaystyle T}is the absolute temperature andδ(t){\displaystyle \delta (t)}is the delta function. The amplitude of the correlation between the Brownian forces at time0{\displaystyle 0}and at timet{\displaystyle t}results from the fluctuation-dissipation theorem for the N-body system.
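A hedged sketch of how this fluctuation-dissipation relation is typically applied in a discrete-time simulation: over a time step dt the Brownian force is drawn with covariance 2*kT*R_FU/dt, for example via a Cholesky factor of the resistance matrix. The function name, step size, and toy resistance matrix below are illustrative assumptions, not taken from any particular Stokesian dynamics code.

```python
import numpy as np

def brownian_force(R_FU, kT, dt, rng):
    """Sample a force with zero mean and covariance 2*kT*R_FU/dt, the
    discrete-time counterpart of <F^B(0) F^B(t)> = 2 kT R_FU delta(t)."""
    L = np.linalg.cholesky(2.0 * kT * R_FU / dt)    # R_FU must be symmetric positive definite
    return L @ rng.standard_normal(R_FU.shape[0])

rng = np.random.default_rng(0)
R_FU = np.diag([6.0 * np.pi] * 3)                   # toy resistance: one sphere, translation only
dt = 1e-3
samples = np.array([brownian_force(R_FU, kT=1.0, dt=dt, rng=rng) for _ in range(20000)])
print(samples.mean(axis=0))                         # ~ 0
print(np.cov(samples.T) * dt / 2.0)                 # ~ kT * R_FU
```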
|
https://en.wikipedia.org/wiki/Stokesian_dynamics
|
Surface diffusionis a general process involving the motion ofadatoms,molecules, and atomic clusters (adparticles) at solid materialsurfaces.[1]The process can generally be thought of in terms of particles jumping between adjacentadsorptionsites on a surface, as in figure 1. Just as in bulkdiffusion, this motion is typically a thermally promoted process with rates increasing with increasing temperature. Many systems display diffusion behavior that deviates from the conventional model of nearest-neighbor jumps.[2]Tunneling diffusion is a particularly interesting example of an unconventional mechanism wherein hydrogen has been shown to diffuse on cleanmetalsurfaces via thequantum tunnelingeffect.
Various analytical tools may be used toelucidatesurface diffusion mechanisms and rates, the most important of which arefield ion microscopyandscanning tunneling microscopy.[3]While in principle the process can occur on a variety of materials, most experiments are performed on crystalline metal surfaces. Due to experimental constraints most studies of surface diffusion are limited to well below themelting pointof thesubstrate, and much has yet to be discovered regarding how these processes take place at higher temperatures.[4]
Surface diffusion rates and mechanisms are affected by a variety of factors including the strength of the surface-adparticlebond, orientation of the surface lattice, attraction and repulsion between surface species andchemical potentialgradients. It is an important concept in surfacephase formation,epitaxial growth, heterogeneouscatalysis, and other topics insurface science.[5]As such, the principles of surface diffusion are critical for thechemical productionandsemiconductorindustries. Real-world applications relying heavily on these phenomena includecatalytic converters,integrated circuitsused in electronic devices, andsilver halidesalts used inphotographic film.[5]
Surface diffusion kinetics can be thought of in terms of adatoms residing atadsorptionsites on a 2Dlattice, moving between adjacent (nearest-neighbor) adsorption sites by a jumping process.[1][6]The jump rate is characterized by an attemptfrequencyand athermodynamicfactor that dictates the probability of an attempt resulting in a successful jump. The attempt frequency ν is typically taken to be simply thevibrational frequencyof the adatom, while the thermodynamic factor is aBoltzmann factordependent on temperature and Ediff, thepotential energybarrier to diffusion. Equation 1 describes the relationship:
Γ=νexp(−Ediff/kBT){\displaystyle \Gamma =\nu \exp(-E_{\text{diff}}/k_{\mathrm {B} }T)}
WhereνandEdiffare as described above,Γis the jump or hopping rate, T is temperature, andkBis theBoltzmann constant. Ediffmust be smaller than the energy of desorption for diffusion to occur, otherwise desorption processes would dominate. Importantly, equation 1 tells us how strongly the jump rate varies with temperature. The manner in which diffusion takes place is dependent on the relationship betweenEdiffandkBTas is given in the thermodynamic factor: whenEdiff< kBTthe thermodynamic factor approaches unity andEdiffceases to be a meaningful barrier to diffusion. This case, known asmobile diffusion, is relatively uncommon and has only been observed in a few systems.[7]For the phenomena described throughout this article, it is assumed thatEdiff>> kBTand thereforeΓ<<ν. In the case ofFickian diffusionit is possible to extract both theνandEdifffrom anArrhenius plotof the logarithm of the diffusion coefficient,D, versus 1/T. For cases where more than one diffusion mechanism is present (see below), there may be more than oneEdiffsuch that the relative distribution between the different processes would change with temperature.
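As an illustration of how equation 1 is used in practice, the sketch below computes hopping rates from assumed values of ν and Ediff and then recovers the barrier from the slope of an Arrhenius plot, as described above for Fickian diffusion. All numerical values are placeholders, not measurements.

```python
import numpy as np

k_B = 8.617333e-5                 # Boltzmann constant in eV/K

def hop_rate(nu, E_diff, T):
    """Equation 1: Gamma = nu * exp(-E_diff / (k_B * T))."""
    return nu * np.exp(-E_diff / (k_B * T))

nu, E_diff = 1e13, 0.5            # assumed attempt frequency (Hz) and barrier (eV)
T = np.linspace(200.0, 500.0, 20)
Gamma = hop_rate(nu, E_diff, T)

# Arrhenius analysis: the slope of ln(Gamma) versus 1/T equals -E_diff / k_B.
slope, intercept = np.polyfit(1.0 / T, np.log(Gamma), 1)
print(-slope * k_B)               # recovers ~0.5 eV
print(np.exp(intercept))          # recovers ~1e13 Hz
```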
Random walkstatistics describe themean squared displacementof diffusing species in terms of the number of jumpsNand the distance per jumpa. The number of successful jumps is simplyΓmultiplied by the time allowed for diffusion,t. In the most basic model only nearest-neighbor jumps are considered andacorresponds to the spacing between nearest-neighbor adsorption sites. The root mean squared displacement goes as:
⟨Δr2⟩1/2=a√N=a√(Γt){\displaystyle \langle \Delta r^{2}\rangle ^{1/2}=a{\sqrt {N}}=a{\sqrt {\Gamma t}}}
The diffusion coefficient is given as:
D=Γa2/z{\displaystyle D={\frac {\Gamma a^{2}}{z}}}
wherez=2{\displaystyle z=2}for 1D diffusion as would be the case for in-channel diffusion,z=4{\displaystyle z=4}for 2D diffusion, andz=6{\displaystyle z=6}for 3D diffusion.[8]
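The random-walk relation between the hop rate and the diffusion coefficient can be checked with a short lattice simulation. The sketch below follows the simple nearest-neighbor model described above on a 2D square lattice; the hop rate, jump length, and walker count are arbitrary assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)
a, Gamma, t = 1.0, 1.0, 1000.0        # jump length, hop rate, total time (arbitrary units)
n_jumps, n_walkers = int(Gamma * t), 5000

# 2D nearest-neighbor walk: each jump moves one lattice spacing in +/-x or +/-y.
steps = rng.integers(0, 4, size=(n_walkers, n_jumps))
dx = a * ((steps == 0).astype(float) - (steps == 1))
dy = a * ((steps == 2).astype(float) - (steps == 3))
msd = (dx.sum(axis=1) ** 2 + dy.sum(axis=1) ** 2).mean()

D_sim = msd / (4.0 * t)               # <r^2> = 4 D t in two dimensions
D_theory = Gamma * a**2 / 4.0         # D = Gamma a^2 / z with z = 4
print(D_sim, D_theory)                # should agree to within statistical noise
```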
There are four different general schemes in which diffusion may take place.[9]Tracer diffusion and chemical diffusion differ in the level of adsorbate coverage at the surface, while intrinsic diffusion and mass transfer diffusion differ in the nature of the diffusion environment. Tracer diffusion and intrinsic diffusion both refer to systems where adparticles experience a relatively homogeneous environment, whereas in chemical and mass transfer diffusion adparticles are more strongly affected by their surroundings.
Orientational anisotropy takes the form of a difference in both diffusion rates and mechanisms at the varioussurface orientationsof a given material. For a given crystalline material eachMiller Indexplane may display unique diffusion phenomena.Close packedsurfaces such as thefcc(111) tend to have higher diffusion rates than the correspondingly more "open" faces of the same material such as fcc (100).[10][11]
Directional anisotropy refers to a difference in diffusion mechanism or rate in a particular direction on a given crystallographic plane. These differences may be a result of either anisotropy in the surface lattice (e.g. arectangular lattice) or the presence of steps on a surface. One of the more dramatic examples of directional anisotropy is the diffusion of adatoms on channeled surfaces such as fcc (110), where diffusion along the channel is much faster than diffusion across the channel.
Diffusion of adatoms may occur by a variety of mechanisms. The manner in which they diffuse is important as it may dictate the kinetics of movement, temperature dependence, and overall mobility of surface species, among other parameters. The following is a summary of the most important of these processes:[12]
Recent theoretical work as well as experimental work performed since the late 1970s has brought to light a remarkable variety of surface diffusion phenomena both with regard to kinetics as well as to mechanisms. Following is a summary of some of the more notable phenomena:
Cluster diffusion involves motion of atomic clusters ranging in size fromdimersto islands containing hundreds of atoms. Motion of the cluster may occur via the displacement of individual atoms, sections of the cluster, or the entire cluster moving at once.[23]All of these processes involve a change in the cluster’scenter of mass.
Surface diffusion is a critically important concept in heterogeneous catalysis, as reaction rates are often dictated by the ability of reactants to "find" each other at a catalyst surface. With increased temperature adsorbed molecules, molecular fragments, atoms, and clusters tend to have much greater mobility (see equation 1). However, with increased temperature the lifetime of adsorption decreases as the factor kBT becomes large enough for the adsorbed species to overcome the barrier to desorption, Q (see figure 2). Reactionthermodynamicsaside, because of the interplay between increased rates of diffusion and decreased lifetime of adsorption, increased temperature may in some cases decrease the overall rate of the reaction.
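This interplay can be made concrete with a back-of-the-envelope estimate: the hop rate from equation 1 grows with temperature, while a simple Arrhenius estimate of the residence time before desorption, here taken as τ ≈ ν⁻¹ exp(Q/kBT), shrinks, so the number of hops an adsorbate makes before desorbing can fall as temperature rises. The barriers, prefactor, and the residence-time expression are assumptions for illustration only, not values from the text.

```python
import numpy as np

k_B = 8.617333e-5                    # Boltzmann constant in eV/K
nu, E_diff, Q = 1e13, 0.3, 1.0       # assumed prefactor (Hz), diffusion barrier and desorption barrier (eV)

for T in (300.0, 500.0, 800.0):
    hop_rate = nu * np.exp(-E_diff / (k_B * T))          # jumps per second (rises with T)
    residence = np.exp(Q / (k_B * T)) / nu               # rough lifetime before desorption (falls with T)
    hops_before_desorption = hop_rate * residence        # = exp((Q - E_diff) / (k_B T)), falls with T
    print(T, hop_rate, residence, hops_before_desorption)
```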
Surface diffusion may be studied by a variety of techniques, including both direct and indirect observations. Two experimental techniques that have proved very useful in this area of study are field ion microscopy andscanning tunneling microscopy.[3]By visualizing the displacement of atoms or clusters over time, it is possible to extract useful mechanistic and rate-related information regarding the manner in which the relevant species diffuse. In order to study surface diffusion on the atomistic scale it is unfortunately necessary to perform studies on rigorously clean surfaces and inultra high vacuum(UHV) conditions or in the presence of small amounts ofinertgas, as is the case when using He or Ne as imaging gas infield-ion microscopyexperiments.
|
https://en.wikipedia.org/wiki/Surface_diffusion
|
Twophysical systemsare inthermal equilibriumif there is no net flow of thermal energy between them when they are connected by a path permeable toheat. Thermal equilibrium obeys thezeroth law of thermodynamics. A system is said to be in thermal equilibrium with itself if the temperature within the system is spatially uniform and temporally constant.
Systems inthermodynamic equilibriumare always in thermal equilibrium, but the converse is not always true. If the connection between the systems allows transfer of energy as 'change ininternal energy' but does not allow transfer of matter or transfer of energy aswork, the two systems may reach thermal equilibrium without reaching thermodynamic equilibrium.
The relation of thermal equilibrium is an instance of equilibrium between two bodies, which means that it refers to transfer through a selectively permeable partition of matter or work; it is called a diathermal connection. According to Lieb and Yngvason, the essential meaning of the relation of thermal equilibrium includes that it is reflexive and symmetric. It is not included in the essential meaning whether it is or is not transitive. After discussing the semantics of the definition, they postulate a substantial physical axiom, that they call the "zeroth law of thermodynamics", that thermal equilibrium is a transitive relation. They comment that the equivalence classes of systems so established are called isotherms.[1]
Thermal equilibrium of a body in itself refers to the body when it is isolated. The background is that no heat enters or leaves it, and that it is allowed unlimited time to settle under its own intrinsic characteristics. When it is completely settled, so that macroscopic change is no longer detectable, it is in its own thermal equilibrium. It is not implied that it is necessarily in other kinds of internal equilibrium. For example, it is possible that a body might reach internal thermal equilibrium but not be in internal chemical equilibrium; glass is an example.[2]
One may imagine an isolated system, initially not in its own state of internal thermal equilibrium. It could be subjected to a fictive thermodynamic operation of partition into two subsystems separated by nothing, no wall. One could then consider the possibility of transfers of energy as heat between the two subsystems. A long time after the fictive partition operation, the two subsystems will reach a practically stationary state, and so be in the relation of thermal equilibrium with each other. Such an adventure could be conducted in indefinitely many ways, with different fictive partitions. All of them will result in subsystems that could be shown to be in thermal equilibrium with each other, testing subsystems from different partitions. For this reason, an isolated system, initially not in its own state of internal thermal equilibrium, but left for a long time, practically always will reach a final state which may be regarded as one of internal thermal equilibrium. Such a final state is one of spatial uniformity or homogeneity of temperature.[3]The existence of such states is a basic postulate of classical thermodynamics.[4][5]This postulate is sometimes, but not often, called the minus first law of thermodynamics.[6]A notable exception exists for isolated quantum systems which aremany-body localizedand whichneverreach internal thermal equilibrium.
Heat can flowinto or out of aclosed systemby way ofthermal conductionor ofthermal radiationto or from a thermal reservoir, and when this process is effecting net transfer of heat, the system is not in thermal equilibrium. While the transfer of energy as heat continues, the system's temperature can be changing.
If bodies are prepared with separately microscopically stationary states, and are then put into purely thermal connection with each other, by conductive or radiative pathways, they will be in thermal equilibrium with each other just when the connection is followed by no change in either body. But if initially they are not in a relation of thermal equilibrium, heat will flow from the hotter to the colder, by whatever pathway, conductive or radiative, is available, and this flow will continue until thermal equilibrium is reached and then they will have the same temperature.
One form of thermal equilibrium is radiative exchange equilibrium.[7][8]Two bodies, each with its own uniform temperature, in solely radiative connection, no matter how far apart, or what partially obstructive, reflective, or refractive, obstacles lie in their path of radiative exchange, not moving relative to one another, will exchange thermal radiation, in net the hotter transferring energy to the cooler, and will exchange equal and opposite amounts just when they are at the same temperature. In this situation,Kirchhoff's law of equality of radiative emissivity and absorptivityand theHelmholtz reciprocityprinciple are in play.
If an initiallyisolated physical system, without internal walls that establishadiabatically isolatedsubsystems, is left long enough, it will usually reach a state of thermal equilibrium in itself, in which its temperature will beuniformthroughout, but not necessarily a state of thermodynamic equilibrium, if there is some structural barrier that can prevent some possible processes in the system from reaching equilibrium; glass is an example. Classical thermodynamics in general considers idealized systems that have reached internal equilibrium, and idealized transfers of matter andenergybetween them.
An isolated physical system may beinhomogeneous, or may be composed of several subsystems separated from each other by walls. If an initially inhomogeneous physical system, without internal walls, is isolated by a thermodynamic operation, it will in general over time change its internal state. Or if it is composed of several subsystems separated from each other by walls, it may change its state after a thermodynamic operation that changes its walls. Such changes may include change of temperature or spatial distribution of temperature, by changing the state of constituent materials. A rod of iron, initially prepared to be hot at one end and cold at the other, when isolated, will change so that its temperature becomes uniform all along its length; during the process, the rod is not in thermal equilibrium until its temperature is uniform. In a system prepared as a block of ice floating in a bath of hot water, and then isolated, the ice can melt; during the melting, the system is not in thermal equilibrium; but eventually, its temperature will become uniform; the block of ice will not re-form. A system prepared as a mixture of petrol vapour and air can be ignited by a spark and produce carbon dioxide and water; if this happens in an isolated system, it will increase the temperature of the system, and during the increase, the system is not in thermal equilibrium; but eventually, the system will settle to a uniform temperature.
Such changes in isolated systems are irreversible in the sense that while such a change will occur spontaneously whenever the system is prepared in the same way, the reverse change will practically never occur spontaneously within the isolated system; this is a large part of the content of thesecond law of thermodynamics. Truly perfectly isolated systems do not occur in nature, and always are artificially prepared.
One may consider a system contained in a very tall adiabatically isolating vessel with rigid walls initially containing a thermally heterogeneous distribution of material, left for a long time under the influence of a steady gravitational field, along its tall dimension, due to an outside body such as the earth. It will settle to a state of uniform temperature throughout, though not of uniform pressure or density, and perhaps containing several phases. It is then in internal thermal equilibrium and even in thermodynamic equilibrium. This means that all local parts of the system are in mutual radiative exchange equilibrium. This means that the temperature of the system is spatially uniform.[8]This is so in all cases, including those of non-uniform external force fields. For an externally imposed gravitational field, this may be proved in macroscopic thermodynamic terms, by the calculus of variations, using the method of Lagrange multipliers.[9][10][11][12][13][14]Considerations of kinetic theory or statistical mechanics also support this statement.[15][16][17][18][19][20][21]
There is an important distinction between thermal andthermodynamic equilibrium. According to Münster (1970), in states of thermodynamic equilibrium, the state variables of a system do not change at a measurable rate. Moreover, "The proviso 'at a measurable rate' implies that we can consider an equilibrium only with respect to specified processes and defined experimental conditions." Also, a state of thermodynamic equilibrium can be described by fewer macroscopic variables than any other state of a given body of matter. A single isolated body can start in a state which is not one of thermodynamic equilibrium, and can change till thermodynamic equilibrium is reached. Thermal equilibrium is a relation between two bodies or closed systems, in which transfers are allowed only of energy and take place through a partition permeable to heat, and in which the transfers have proceeded till the states of the bodies cease to change.[22]
An explicit distinction between 'thermal equilibrium' and 'thermodynamic equilibrium' is made by C.J. Adkins. He allows that two systems might be allowed to exchange heat but be constrained from exchanging work; they will naturally exchange heat till they have equal temperatures, and reach thermal equilibrium, but in general, will not be in thermodynamic equilibrium. They can reach thermodynamic equilibrium when they are allowed also to exchange work.[23]
Another explicit distinction between 'thermal equilibrium' and 'thermodynamic equilibrium' is made by B. C. Eu. He considers two systems in thermal contact, one a thermometer, the other a system in which several irreversible processes are occurring. He considers the case in which, over the time scale of interest, it happens that both the thermometer reading and the irreversible processes are steady. Then there is thermal equilibrium without thermodynamic equilibrium. Eu proposes consequently that the zeroth law of thermodynamics can be considered to apply even when thermodynamic equilibrium is not present; also he proposes that if changes are occurring so fast that a steady temperature cannot be defined, then "it is no longer possible to describe the process by means of a thermodynamic formalism. In other words, thermodynamics has no meaning for such a process."[24]
A planet is in thermal equilibrium when the incident energy reaching it (typically thesolar irradiancefrom its parent star) is equal to theinfraredenergy radiated away to space.
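Setting absorbed stellar power equal to radiated infrared power gives the familiar equilibrium-temperature estimate T = [S(1 − A)/(4σ)]^(1/4). The sketch below applies it with Earth-like numbers as an assumed illustration, treating the planet as a uniform blackbody emitter with no greenhouse effect; the specific flux and albedo values are placeholders.

```python
sigma = 5.670374419e-8      # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temperature(S, albedo):
    """Radiative balance: S * (1 - albedo) / 4 = sigma * T**4, solved for T."""
    return (S * (1.0 - albedo) / (4.0 * sigma)) ** 0.25

print(equilibrium_temperature(S=1361.0, albedo=0.3))   # ~255 K for Earth-like inputs
```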
|
https://en.wikipedia.org/wiki/Thermal_equilibrium
|
Thermodynamic equilibriumis a notion ofthermodynamicswithaxiomaticstatus referring to an internalstateof a singlethermodynamic system, or a relation between several thermodynamic systems connected by more or less permeable or impermeablewalls. In thermodynamic equilibrium, there are no netmacroscopicflowsof mass nor of energy within a system or between systems. In a system that is in its own state of internal thermodynamic equilibrium, not only is there an absence ofmacroscopicchange, but there is an “absence of anytendencytoward change on a macroscopic scale.”[1]
Systems in mutual thermodynamic equilibrium are simultaneously in mutualthermal,mechanical,chemical, andradiativeequilibria. Systems can be in one kind of mutual equilibrium, while not in others. In thermodynamic equilibrium, all kinds of equilibrium hold at once and indefinitely, unless disturbed by athermodynamic operation. In a macroscopic equilibrium, perfectly or almost perfectly balanced microscopic exchanges occur; this is the physical explanation of the notion of macroscopic equilibrium.
A thermodynamic system in a state of internal thermodynamic equilibrium has a spatially uniform temperature. Itsintensive properties, other than temperature, may be driven to spatial inhomogeneity by an unchanging long-range force field imposed on it by its surroundings.
In systems that are at a state ofnon-equilibriumthere are, by contrast, net flows of matter or energy. If such changes can be triggered to occur in a system in which they are not already occurring, the system is said to be in a "meta-stable equilibrium".
Though not a widely named "law," it is anaxiomof thermodynamics that there exist states of thermodynamic equilibrium. Thesecond law of thermodynamicsstates that when anisolatedbody of material starts from an equilibrium state, in which portions of it are held at different states by more or less permeable or impermeable partitions, and a thermodynamic operation removes or makes the partitions more permeable, then it spontaneously reaches its own new state of internal thermodynamic equilibrium and this is accompanied by an increase in the sum of theentropiesof the portions.
Classical thermodynamics deals with states ofdynamic equilibrium. The state of a system at thermodynamic equilibrium is the one for which somethermodynamic potentialis minimized (in the absence of an applied voltage),[2]or for which theentropy(S) is maximized, for specified conditions. One such potential is theHelmholtz free energy(A), for a closed system at constant volume and temperature (controlled by a heat bath):
A=U−TS{\displaystyle A=U-TS}
Another potential, theGibbs free energy(G), is minimized at thermodynamic equilibrium in a closed system at constant temperature and pressure, both controlled by the surroundings:
G=U+PV−TS{\displaystyle G=U+PV-TS}
whereTdenotes the absolute thermodynamic temperature,Pthe pressure,Sthe entropy,Vthe volume, andUthe internal energy of the system. In other words,ΔG=0{\displaystyle \Delta G=0}is a necessary condition forchemical equilibriumunder these conditions (in the absence of an applied voltage).
Thermodynamic equilibrium is the unique stable stationary state that is approached or eventually reached as the system interacts with its surroundings over a long time. The above-mentioned potentials are mathematically constructed to be the thermodynamic quantities that are minimized under the particular conditions in the specified surroundings.
The various types of equilibrium are achieved as follows: two systems are in thermal equilibrium when their temperatures are equal, in mechanical equilibrium when their pressures are equal, and in diffusive (chemical) equilibrium when their chemical potentials are equal.
Often the surroundings of a thermodynamic system may also be regarded as another thermodynamic system. In this view, one may consider the system and its surroundings as two systems in mutual contact, with long-range forces also linking them. The enclosure of the system is the surface of contiguity or boundary between the two systems. In the thermodynamic formalism, that surface is regarded as having specific properties of permeability. For example, the surface of contiguity may be supposed to be permeable only to heat, allowing energy to transfer only as heat. Then the two systems are said to be in thermal equilibrium when the long-range forces are unchanging in time and the transfer of energy as heat between them has slowed and eventually stopped permanently; this is an example of a contact equilibrium. Other kinds of contact equilibrium are defined by other kinds of specific permeability.[3]When two systems are in contact equilibrium with respect to a particular kind of permeability, they have common values of the intensive variable that belongs to that particular kind of permeability. Examples of such intensive variables are temperature, pressure, chemical potential.
A contact equilibrium may be regarded also as an exchange equilibrium. There is a zero balance of rate of transfer of some quantity between the two systems in contact equilibrium. For example, for a wall permeable only to heat, the rates of diffusion of internal energy as heat between the two systems are equal and opposite. An adiabatic wall between the two systems is 'permeable' only to energy transferred as work; at mechanical equilibrium the rates of transfer of energy as work between them are equal and opposite. If the wall is a simple wall, then the rates of transfer of volume across it are also equal and opposite; and the pressures on either side of it are equal. If the adiabatic wall is more complicated, with a sort of leverage, having an area-ratio, then the pressures of the two systems in exchange equilibrium are in the inverse ratio of the volume exchange ratio; this keeps the zero balance of rates of transfer as work.
A radiative exchange can occur between two otherwise separate systems. Radiative exchange equilibrium prevails when the two systems have the same temperature.[4]
A collection of matter may be entirelyisolatedfrom its surroundings. If it has been left undisturbed for an indefinitely long time, classical thermodynamics postulates that it is in a state in which no changes occur within it, and there are no flows within it. This is a thermodynamic state of internal equilibrium.[5][6](This postulate is sometimes, but not often, called the "minus first" law of thermodynamics.[7]One textbook[8]calls it the "zeroth law", remarking that the authors think this more befitting that title than itsmore customary definition, which apparently was suggested byFowler.)
Such states are a principal concern in what is known as classical or equilibrium thermodynamics, for they are the only states of the system that are regarded as well defined in that subject. A system in contact equilibrium with another system can by athermodynamic operationbe isolated, and upon the event of isolation, no change occurs in it. A system in a relation of contact equilibrium with another system may thus also be regarded as being in its own state of internal thermodynamic equilibrium.
The thermodynamic formalism allows that a system may have contact with several other systems at once, which may or may not also have mutual contact, the contacts having respectively different permeabilities. If these systems are all jointly isolated from the rest of the world those of them that are in contact then reach respective contact equilibria with one another.
If several systems are free of adiabatic walls between each other, but are jointly isolated from the rest of the world, then they reach a state of multiple contact equilibrium, and they have a common temperature, a total internal energy, and a total entropy.[9][10][11][12]Amongst intensive variables, this is a unique property of temperature. It holds even in the presence of long-range forces. (That is, there is no "force" that can maintain temperature discrepancies.) For example, in a system in thermodynamic equilibrium in a vertical gravitational field, the pressure on the top wall is less than that on the bottom wall, but the temperature is the same everywhere.
A thermodynamic operation may occur as an event restricted to the walls that are within the surroundings, directly affecting neither the walls of contact of the system of interest with its surroundings, nor its interior, and occurring within a definitely limited time. For example, an immovable adiabatic wall may be placed or removed within the surroundings. Consequent upon such an operation restricted to the surroundings, the system may be for a time driven away from its own initial internal state of thermodynamic equilibrium. Then, according to the second law of thermodynamics, the whole undergoes changes and eventually reaches a new and final equilibrium with the surroundings. Following Planck, this consequent train of events is called a naturalthermodynamic process.[13]It is allowed in equilibrium thermodynamics just because the initial and final states are of thermodynamic equilibrium, even though during the process there is transient departure from thermodynamic equilibrium, when neither the system nor its surroundings are in well defined states of internal equilibrium. A natural process proceeds at a finite rate for the main part of its course. It is thereby radically different from a fictive quasi-static 'process' that proceeds infinitely slowly throughout its course, and is fictively 'reversible'. Classical thermodynamics allows that even though a process may take a very long time to settle to thermodynamic equilibrium, if the main part of its course is at a finite rate, then it is considered to be natural, and to be subject to the second law of thermodynamics, and thereby irreversible. Engineered machines and artificial devices and manipulations are permitted within the surroundings.[14][15]The allowance of such operations and devices in the surroundings but not in the system is the reason why Kelvin in one of his statements of the second law of thermodynamics spoke of"inanimate" agency; a system in thermodynamic equilibrium is inanimate.[16]
Otherwise, a thermodynamic operation may directly affect a wall of the system.
It is often convenient to suppose that some of the surrounding subsystems are so much larger than the system that the process can affect the intensive variables only of the surrounding subsystems, and they are then called reservoirs for relevant intensive variables.
It can be useful to distinguish between global and local thermodynamic equilibrium. In thermodynamics, exchanges within a system and between the system and the outside are controlled byintensiveparameters. As an example, temperature controlsheat exchanges.Global thermodynamic equilibrium(GTE) means that thoseintensiveparameters are homogeneous throughout the whole system, whilelocal thermodynamic equilibrium(LTE) means that those intensive parameters are varying in space and time, but are varying so slowly that, for any point, one can assume thermodynamic equilibrium in some neighborhood about that point.
If the description of the system requires variations in the intensive parameters that are too large, the very assumptions upon which the definitions of these intensive parameters are based will break down, and the system will be in neither global nor local equilibrium. For example, it takes a certain number of collisions for a particle to equilibrate to its surroundings. If the average distance it has moved during these collisions removes it from the neighborhood it is equilibrating to, it will never equilibrate, and there will be no LTE. Temperature is, by definition, proportional to the average internal energy of an equilibrated neighborhood. Since there is no equilibrated neighborhood, the concept of temperature doesn't hold, and the temperature becomes undefined.
This local equilibrium may apply only to a certain subset of particles in the system. For example, LTE is usually applied only tomassive particles. In aradiatinggas, thephotonsbeing emitted and absorbed by the gas do not need to be in a thermodynamic equilibrium with each other or with the massive particles of the gas for LTE to exist. In some cases, it is not considered necessary for free electrons to be in equilibrium with the much more massive atoms or molecules for LTE to exist.
As an example, LTE will exist in a glass of water that contains a meltingice cube. The temperature inside the glass can be defined at any point, but it is colder near the ice cube than far away from it. If energies of the molecules located near a given point are observed, they will be distributed according to theMaxwell–Boltzmann distributionfor a certain temperature. If the energies of the molecules located near another point are observed, they will be distributed according to the Maxwell–Boltzmann distribution for another temperature.
Local thermodynamic equilibrium does not require either local or global stationarity. In other words, each small locality need not have a constant temperature. However, it does require that each small locality change slowly enough to practically sustain its local Maxwell–Boltzmann distribution of molecular velocities. A global non-equilibrium state can be stably stationary only if it is maintained by exchanges between the system and the outside. For example, a globally-stable stationary state could be maintained inside the glass of water by continuously adding finely powdered ice into it to compensate for the melting, and continuously draining off the meltwater. Naturaltransport phenomenamay lead a system from local to global thermodynamic equilibrium. Going back to our example, thediffusionof heat will lead our glass of water toward global thermodynamic equilibrium, a state in which the temperature of the glass is completely homogeneous.[17]
Careful and well informed writers about thermodynamics, in their accounts of thermodynamic equilibrium, often enough make provisos or reservations to their statements. Some writers leave such reservations merely implied or more or less unstated.
For example, one widely cited writer,H. B. Callenwrites in this context: "In actuality, few systems are in absolute and true equilibrium." He refers to radioactive processes and remarks that they may take "cosmic times to complete, [and] generally can be ignored". He adds "In practice, the criterion for equilibrium is circular.Operationally, a system is in an equilibrium state if its properties are consistently described by thermodynamic theory!"[18]
J.A. Beattie and I. Oppenheim write: "Insistence on a strict interpretation of the definition of equilibrium would rule out the application of thermodynamics to practically all states of real systems."[19]
Another author, cited by Callen as giving a "scholarly and rigorous treatment",[20]and cited by Adkins as having written a "classic text",[21]A.B. Pippardwrites in that text: "Given long enough a supercooled vapour will eventually condense, ... . The time involved may be so enormous, however, perhaps 10^100 years or more, ... . For most purposes, provided the rapid change is not artificially stimulated, the systems may be regarded as being in equilibrium."[22]
Another author, A. Münster, writes in this context. He observes that thermonuclear processes often occur so slowly that they can be ignored in thermodynamics. He comments: "The concept 'absolute equilibrium' or 'equilibrium with respect to all imaginable processes', has therefore, no physical significance." He therefore states that: "... we can consider an equilibrium only with respect to specified processes and defined experimental conditions."[23]
According toL. Tisza: "... in the discussion of phenomena near absolute zero. The absolute predictions of the classical theory become particularly vague because the occurrence of frozen-in nonequilibrium states is very common."[24]
The most general kind of thermodynamic equilibrium of a system is through contact with the surroundings that allows simultaneous passages of all chemical substances and all kinds of energy.[clarification needed]A system in thermodynamic equilibrium may move with uniform acceleration through space but must not change its shape or size while doing so; thus it is defined by a rigid volume in space. It may lie within external fields of force, determined by external factors of far greater extent than the system itself, so that events within the system cannot in an appreciable amount affect the external fields of force. The system can be in thermodynamic equilibrium only if the external force fields are uniform, and are determining its uniform acceleration, or if it lies in a non-uniform force field but is held stationary there by local forces, such as mechanical pressures, on its surface.
Thermodynamic equilibrium is aprimitive notionof the theory of thermodynamics. According toP.M. Morse: "It should be emphasized that the fact that there are thermodynamic states, ..., and the fact that there are thermodynamic variables which are uniquely specified by the equilibrium state ... arenotconclusions deduced logically from some philosophical first principles. They are conclusions ineluctably drawn from more than two centuries of experiments."[25]This means that thermodynamic equilibrium is not to be defined solely in terms of other theoretical concepts of thermodynamics. M. Bailyn proposes a fundamental law of thermodynamics that defines and postulates the existence of states of thermodynamic equilibrium.[26]
Textbook definitions of thermodynamic equilibrium are often stated carefully, with some reservation or other.
For example, A. Münster writes: "An isolated system is in thermodynamic equilibrium when, in the system, no changes of state are occurring at a measurable rate." There are two reservations stated here; the system is isolated; any changes of state are immeasurably slow. He discusses the second proviso by giving an account of a mixture of oxygen and hydrogen at room temperature in the absence of a catalyst. Münster points out that a thermodynamic equilibrium state is described by fewer macroscopic variables than is any other state of a given system. This is partly, but not entirely, because all flows within and through the system are zero.[27]
R. Haase's presentation of thermodynamics does not start with a restriction to thermodynamic equilibrium because he intends to allow for non-equilibrium thermodynamics. He considers an arbitrary system with time invariant properties. He tests it for thermodynamic equilibrium by cutting it off from all external influences, except external force fields. If after insulation, nothing changes, he says that the system was inequilibrium.[28]
In a section headed "Thermodynamic equilibrium", H.B. Callen defines equilibrium states in a paragraph. He points out that they "are determined by intrinsic factors" within the system. They are "terminal states", towards which the systems evolve, over time, which may occur with "glacial slowness".[29]This statement does not explicitly say that for thermodynamic equilibrium, the system must be isolated; Callen does not spell out what he means by the words "intrinsic factors".
Another textbook writer, C.J. Adkins, explicitly allows thermodynamic equilibrium to occur in a system which is not isolated. His system is, however, closed with respect to transfer of matter. He writes: "In general, the approach to thermodynamic equilibrium will involve both thermal and work-like interactions with the surroundings." He distinguishes such thermodynamic equilibrium from thermal equilibrium, in which only thermal contact is mediating transfer of energy.[30]
Another textbook author,J.R. Partington, writes: "(i)An equilibrium state is one which is independent of time." But, referring to systems "which are only apparently in equilibrium", he adds : "Such systems are in states of ″false equilibrium.″" Partington's statement does not explicitly state that the equilibrium refers to an isolated system. Like Münster, Partington also refers to the mixture of oxygen and hydrogen. He adds a proviso that "In a true equilibrium state, the smallest change of any external condition which influences the state will produce a small change of state ..."[31]This proviso means that thermodynamic equilibrium must be stable against small perturbations; this requirement is essential for the strict meaning of thermodynamic equilibrium.
A student textbook by F.H. Crawford has a section headed "Thermodynamic Equilibrium". It distinguishes several drivers of flows, and then says: "These are examples of the apparently universal tendency of isolated systems toward a state of complete mechanical, thermal, chemical, and electrical—or, in a single word,thermodynamic—equilibrium."[32]
A monograph on classical thermodynamics by H.A. Buchdahl considers the "equilibrium of a thermodynamic system", without actually writing the phrase "thermodynamic equilibrium". Referring to systems closed to exchange of matter, Buchdahl writes: "If a system is in a terminal condition which is properly static, it will be said to be inequilibrium."[33]Buchdahl's monograph also discusses amorphous glass, for the purposes of thermodynamic description. It states: "More precisely, the glass may be regarded as beingin equilibriumso long as experimental tests show that 'slow' transitions are in effect reversible."[34]It is not customary to make this proviso part of the definition of thermodynamic equilibrium, but the converse is usually assumed: that if a body in thermodynamic equilibrium is subject to a sufficiently slow process, that process may be considered to be sufficiently nearly reversible, and the body remains sufficiently nearly in thermodynamic equilibrium during the process.[35]
A. Münster carefully extends his definition of thermodynamic equilibrium for isolated systems by introducing a concept ofcontact equilibrium. This specifies particular processes that are allowed when considering thermodynamic equilibrium for non-isolated systems, with special concern for open systems, which may gain or lose matter from or to their surroundings. A contact equilibrium is between the system of interest and a system in the surroundings, brought into contact with the system of interest, the contact being through a special kind of wall; for the rest, the whole joint system is isolated. Walls of this special kind were also considered byC. Carathéodory, and are mentioned by other writers also. They are selectively permeable. They may be permeable only to mechanical work, or only to heat, or only to some particular chemical substance. Each contact equilibrium defines an intensive parameter; for example, a wall permeable only to heat defines an empirical temperature. A contact equilibrium can exist for each chemical constituent of the system of interest. In a contact equilibrium, despite the possible exchange through the selectively permeable wall, the system of interest is changeless, as if it were in isolated thermodynamic equilibrium. This scheme follows the general rule that "... we can consider an equilibrium only with respect to specified processes and defined experimental conditions."[23]Thermodynamic equilibrium for an open system means that, with respect to every relevant kind of selectively permeable wall, contact equilibrium exists when the respective intensive parameters of the system and surroundings are equal.[3]This definition does not consider the most general kind of thermodynamic equilibrium, which is through unselective contacts. This definition does not simply state that no current of matter or energy exists in the interior or at the boundaries; but it is compatible with the following definition, which does so state.
M. Zemanskyalso distinguishes mechanical, chemical, and thermal equilibrium. He then writes: "When the conditions for all three types of equilibrium are satisfied, the system is said to be in a state of thermodynamic equilibrium".[36]
P.M. Morsewrites that thermodynamics is concerned with "states of thermodynamic equilibrium". He also uses the phrase "thermal equilibrium" while discussing transfer of energy as heat between a body and a heat reservoir in its surroundings, though not explicitly defining a special term 'thermal equilibrium'.[37]
J.R. Waldram writes of "a definite thermodynamic state". He defines the term "thermal equilibrium" for a system "when its observables have ceased to change over time". But shortly below that definition he writes of a piece of glass that has not yet reached its "fullthermodynamic equilibrium state".[38]
Considering equilibrium states, M. Bailyn writes: "Each intensive variable has its own type of equilibrium." He then defines thermal equilibrium, mechanical equilibrium, and material equilibrium. Accordingly, he writes: "If all the intensive variables become uniform,thermodynamic equilibriumis said to exist." He is not here considering the presence of an external force field.[39]
J.G. Kirkwoodand I. Oppenheim define thermodynamic equilibrium as follows: "A system is in a state ofthermodynamic equilibriumif, during the time period allotted for experimentation, (a) its intensive properties are independent of time and (b) no current of matter or energy exists in its interior or at its boundaries with the surroundings." It is evident that they are not restricting the definition to isolated or to closed systems. They do not discuss the possibility of changes that occur with "glacial slowness", and proceed beyond the time period allotted for experimentation. They note that for two systems in contact, there exists a small subclass of intensive properties such that if all those of that small subclass are respectively equal, then all respective intensive properties are equal. States of thermodynamic equilibrium may be defined by this subclass, provided some other conditions are satisfied.[40]
A thermodynamic system consisting of a single phase in the absence of external forces, in its own internal thermodynamic equilibrium, is homogeneous.[41]This means that the material in any small volume element of the system can be interchanged with the material of any other geometrically congruent volume element of the system, and the effect is to leave the system thermodynamically unchanged. In general, a strong external force field makes a system of a single phase in its own internal thermodynamic equilibrium inhomogeneous with respect to someintensive variables. For example, a relatively dense component of a mixture can be concentrated by centrifugation.
Such equilibrium inhomogeneity, induced by external forces, does not occur for the intensive variable temperature. According toE.A. Guggenheim, "The most important conception of thermodynamics is temperature."[42]Planck introduces his treatise with a brief account of heat and temperature and thermal equilibrium, and then announces: "In the following we shall deal chiefly with homogeneous, isotropic bodies of any form, possessing throughout their substance the same temperature and density, and subject to a uniform pressure acting everywhere perpendicular to the surface."[41]As did Carathéodory, Planck was setting aside surface effects and external fields and anisotropic crystals. Though referring to temperature, Planck did not there explicitly refer to the concept of thermodynamic equilibrium. In contrast, Carathéodory's scheme of presentation of classical thermodynamics for closed systems postulates the concept of an "equilibrium state" following Gibbs (Gibbs speaks routinely of a "thermodynamic state"), though not explicitly using the phrase 'thermodynamic equilibrium', nor explicitly postulating the existence of a temperature to define it.
Although thermodynamic laws are immutable, systems can be created that delay the time to reach thermodynamic equilibrium. In a thought experiment, Reed A. Howald conceived of a system called "The Fizz Keeper",[43]consisting of a cap with a nozzle that can re-pressurize any standard bottle of carbonated beverage. Nitrogen and oxygen, of which air is mostly composed, would keep getting pumped in, which would slow down the rate at which the carbon dioxide fizzes out of the system. This is possible because the thermodynamic equilibrium between the unconverted and converted carbon dioxide inside the bottle would stay the same. To come to this conclusion, he also appeals toHenry's Law, which states that gases dissolve in direct proportion to their partial pressures. By influencing the partial pressure at the top of a closed system, this would help slow down the rate at which carbonated beverages fizz out, a rate governed by thermodynamic equilibrium. The equilibria of carbon dioxide and other gases would not change; however, the partial pressure on top would slow down the rate of dissolution, extending the time a gas stays in a particular state, because of the nature of thermal equilibrium of the remainder of the beverage. The equilibrium constant of carbon dioxide would be completely independent of the nitrogen and oxygen pumped into the system, which would slow down the diffusion of gas, and yet not have an impact on the thermodynamics of the entire system.
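Henry's law itself is easy to state numerically: the equilibrium dissolved concentration of a gas depends only on the partial pressure of that gas, so adding nitrogen and oxygen leaves the carbon dioxide value untouched. The sketch below illustrates this; the Henry constant is an assumed round number for illustration, not a tabulated value, and the function name is hypothetical.

```python
def dissolved_concentration(k_H, partial_pressure):
    """Henry's law: c = k_H * p, with p the partial pressure of this gas only."""
    return k_H * partial_pressure

k_H_co2 = 0.034      # assumed Henry constant for CO2, mol L^-1 atm^-1 (illustrative)
p_co2 = 2.0          # CO2 partial pressure in the headspace, atm

# Pumping in extra N2/O2 raises the total pressure but not p_CO2,
# so the equilibrium dissolved CO2 concentration is unchanged:
print(dissolved_concentration(k_H_co2, p_co2))   # same with or without the added air
```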
The temperature within a system in thermodynamic equilibrium is uniform in space as well as in time. In a system in its own state of internal thermodynamic equilibrium, there are no net internal macroscopic flows. In particular, this means that all local parts of the system are in mutual radiative exchange equilibrium. This means that the temperature of the system is spatially uniform.[4]This is so in all cases, including those of non-uniform external force fields. For an externally imposed gravitational field, this may be proved in macroscopic thermodynamic terms, by the calculus of variations, using the method of Lagrange multipliers.[44][45][46][47][48][49]Considerations of kinetic theory or statistical mechanics also support this statement.[50][51][52][53][54][55][56]
In order that a system may be in its own internal state of thermodynamic equilibrium, it is of course necessary, but not sufficient, that it be in its own internal state of thermal equilibrium; it is possible for a system to reach internal mechanical equilibrium before it reaches internal thermal equilibrium.[57]
In his exposition of his scheme of closed system equilibrium thermodynamics, C. Carathéodory initially postulates that experiment reveals that a definite number of real variables define the states that are the points of the manifold of equilibria.[9]In the words of Prigogine and Defay (1945): "It is a matter of experience that when we have specified a certain number of macroscopic properties of a system, then all the other properties are fixed."[58][59]As noted above, according to A. Münster, the number of variables needed to define a thermodynamic equilibrium is the least for any state of a given isolated system. As noted above, J.G. Kirkwood and I. Oppenheim point out that a state of thermodynamic equilibrium may be defined by a special subclass of intensive variables, with a definite number of members in that subclass.
If the thermodynamic equilibrium lies in an external force field, it is only the temperature that can in general be expected to be spatially uniform. Intensive variables other than temperature will in general be non-uniform if the external force field is non-zero. In such a case, in general, additional variables are needed to describe the spatial non-uniformity.
As noted above, J.R. Partington points out that a state of thermodynamic equilibrium is stable against small transient perturbations. Without this condition, in general, experiments intended to study systems in thermodynamic equilibrium are in severe difficulties.
When a body of material starts from a non-equilibrium state of inhomogeneity or chemical non-equilibrium, and is then isolated, it spontaneously evolves towards its own internal state of thermodynamic equilibrium. It is not necessary that all aspects of internal thermodynamic equilibrium be reached simultaneously; some can be established before others. For example, in many cases of such evolution, internal mechanical equilibrium is established much more rapidly than the other aspects of the eventual thermodynamic equilibrium.[57]Another example is that, in many cases of such evolution, thermal equilibrium is reached much more rapidly than chemical equilibrium.[60]
In an isolated system, thermodynamic equilibrium by definition persists over an indefinitely long time. In classical physics it is often convenient to ignore the effects of measurement and this is assumed in the present account.
To consider the notion of fluctuations in an isolated thermodynamic system, a convenient example is a system specified by its extensive state variables, internal energy, volume, and mass composition. By definition they are time-invariant. By definition, they combine with time-invariant nominal values of their conjugate intensive functions of state, inverse temperature, pressure divided by temperature, and the chemical potentials divided by temperature, so as to exactly obey the laws of thermodynamics.[61]But the laws of thermodynamics, combined with the values of the specifying extensive variables of state, are not sufficient to provide knowledge of those nominal values. Further information is needed, namely, of the constitutive properties of the system.
It may be admitted that on repeated measurement of those conjugate intensive functions of state, they are found to have slightly different values from time to time. Such variability is regarded as due to internal fluctuations. The different measured values average to their nominal values.
If the system is truly macroscopic as postulated by classical thermodynamics, then the fluctuations are too small to detect macroscopically. This is called the thermodynamic limit. In effect, the molecular nature of matter and the quantal nature of momentum transfer have vanished from sight, too small to see. According to Buchdahl: "... there is no place within the strictly phenomenological theory for the idea of fluctuations about equilibrium (see, however, Section 76)."[62]
If the system is repeatedly subdivided, eventually a system is produced that is small enough to exhibit obvious fluctuations. This is a mesoscopic level of investigation. The fluctuations are then directly dependent on the natures of the various walls of the system. The precise choice of independent state variables is then important. At this stage, statistical features of the laws of thermodynamics become apparent.
If the mesoscopic system is further repeatedly divided, eventually a microscopic system is produced. Then the molecular character of matter and the quantal nature of momentum transfer become important in the processes of fluctuation. One has left the realm of classical or macroscopic thermodynamics, and one needs quantum statistical mechanics. The fluctuations can become relatively dominant, and questions of measurement become important.
The statement that 'the system is in its own internal thermodynamic equilibrium' may be taken to mean that 'indefinitely many such measurements have been taken from time to time, with no trend in time in the various measured values'. Thus the statement, that 'a system is in its own internal thermodynamic equilibrium, with stated nominal values of its functions of state conjugate to its specifying state variables', is far more informative than a statement that 'a set of single simultaneous measurements of those functions of state have those same values'. This is because the single measurements might have been made during a slight fluctuation, away from another set of nominal values of those conjugate intensive functions of state, that is due to unknown and different constitutive properties. A single measurement cannot tell whether that might be so, unless there is also knowledge of the nominal values that belong to the equilibrium state.
An explicit distinction between 'thermal equilibrium' and 'thermodynamic equilibrium' is made by B. C. Eu. He considers two systems in thermal contact, one a thermometer, the other a system in which several irreversible processes are occurring, entailing non-zero fluxes; the two systems are separated by a wall permeable only to heat. He considers the case in which, over the time scale of interest, it happens that both the thermometer reading and the irreversible processes are steady. Then there is thermal equilibrium without thermodynamic equilibrium. Eu proposes consequently that the zeroth law of thermodynamics can be considered to apply even when thermodynamic equilibrium is not present; also he proposes that if changes are occurring so fast that a steady temperature cannot be defined, then "it is no longer possible to describe the process by means of a thermodynamic formalism. In other words, thermodynamics has no meaning for such a process."[63]This illustrates the importance for thermodynamics of the concept of temperature.
Thermal equilibriumis achieved when two systems inthermal contactwith each other cease to have a net exchange of energy. It follows that if two systems are in thermal equilibrium, then their temperatures are the same.[64]
Thermal equilibrium occurs when a system'smacroscopicthermal observables have ceased to change with time. For example, anideal gaswhosedistribution functionhas stabilised to a specificMaxwell–Boltzmann distributionwould be in thermal equilibrium. This outcome allows a single temperature andpressureto be attributed to the whole system. For an isolated body, it is quite possible for mechanical equilibrium to be reached before thermal equilibrium is reached, but eventually, all aspects of equilibrium, including thermal equilibrium, are necessary for thermodynamic equilibrium.[65]
A system's internal state of thermodynamic equilibrium should be distinguished from a "stationary state" in which thermodynamic parameters are unchanging in time but the system is not isolated, so that there are, into and out of the system, non-zero macroscopic fluxes which are constant in time.[66]
Non-equilibrium thermodynamics is a branch of thermodynamics that deals with systems that are not in thermodynamic equilibrium. Most systems found in nature are not in thermodynamic equilibrium because they are changing or can be triggered to change over time, and are continuously and discontinuously subject to flux of matter and energy to and from other systems. The thermodynamic study of non-equilibrium systems requires more general concepts than are dealt with by equilibrium thermodynamics.[67]Many natural systems still today remain beyond the scope of currently known macroscopic thermodynamic methods.
Laws governing systems which are far from equilibrium are also debatable. One of the guiding principles for these systems is the maximum entropy production principle.[68][69]It states that a non-equilibrium system evolves such as to maximize its entropy production.[70][71]
|
https://en.wikipedia.org/wiki/Thermodynamic_equilibrium
|
TheTyndall effectislight scattering by particlesin acolloidsuch as a very finesuspension(asol). Also known asTyndall scattering, it is similar toRayleigh scattering, in that the intensity of the scattered light isinversely proportionalto the fourth power of thewavelength, soblue lightis scattered much more strongly than red light. An example in everyday life is the blue colour sometimes seen in the smoke emitted bymotorcycles, in particulartwo-strokemachines where the burnt engine oil provides these particles.[1]The same effect can also be observed withtobacco smokewhose fine particles also preferentially scatter blue light.
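As a rough worked example of this inverse-fourth-power dependence (a minimal sketch; the two wavelengths below are just representative values for blue and red light, not figures from the text):

```python
# Relative Tyndall/Rayleigh-type scattering intensity, which scales as 1/wavelength^4.
# The wavelengths are illustrative choices for blue and red light.
blue_nm = 450.0
red_nm = 650.0

ratio = (red_nm / blue_nm) ** 4
print(f"Blue light is scattered roughly {ratio:.1f} times more strongly than red light")
# -> roughly 4.4 for these wavelengths
```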
Under the Tyndall effect, the longer wavelengths aretransmittedmore, while the shorter wavelengths are morediffusely reflectedviascattering.[1]The Tyndall effect is seen when light-scatteringparticulate matteris dispersed in an otherwise light-transmitting medium, where thediameterof an individualparticleis in the range of roughly 40 to 900nm, i.e. somewhat below or near the wavelengths ofvisible light(400–750 nm).
It is particularly applicable to colloidal mixtures; for example, the Tyndall effect is used innephelometersto determine the size and density of particles inaerosols[1]and other colloidal matter. Investigation of the phenomenon led directly to the invention of theultramicroscopeandturbidimetry.
It is named after the 19th-century physicistJohn Tyndall, who first studied the phenomenon extensively.[1]
Prior to his discovery of the phenomenon, Tyndall was primarily known for his work on the absorption and emission of radiant heat on a molecular level. In his investigations in that area, it had become necessary to use air from which all traces of floating dust and otherparticulateshad been removed, and the best way to detect these particulates was to bathe the air in intenselight.[2]In the 1860s, Tyndall did a number of experiments with light, shining beams through various gases and liquids and recording the results. In doing so, Tyndall discovered that when gradually filling the tube with smoke and then shining a beam of light through it, the beam appeared to be blue from the sides of the tube but red from the far end.[3]This observation enabled Tyndall to first propose the phenomenon which would later bear his name.
In 1902, theultramicroscopewas developed byRichard Adolf Zsigmondy(1865–1929) andHenry Siedentopf(1872–1940), working forCarl Zeiss AG. Curiosity about the Tyndall effect led them to apply bright sunlight for illumination, and they were able to determine the size of the gold nanoparticles, as small as 4 nm, that generate thecranberry glasscolour. This work led directly to Zsigmondy'sNobel Prize for chemistry.[4][5]
Rayleigh scatteringis defined by a mathematical formula that requires the light-scattering particles to be far smaller than the wavelength of the light.[6]For a dispersion of particles to qualify for the Rayleigh formula, the particle sizes need to be below roughly 40 nanometres (for visible light),[citation needed]and the particles may be individual molecules.[6]Colloidalparticles are bigger and are in the rough vicinity of the size of a wavelength of light. Tyndall scattering, i.e. colloidal particle scattering,[7]is much more intense than Rayleigh scattering due to the bigger particle sizes involved.[citation needed]The importance of the particle size factor for intensity can be seen in the large exponent it has in the mathematical statement of the intensity of Rayleigh scattering. If the colloid particles arespheroid, Tyndall scattering can be mathematically analyzed in terms ofMie theory, which admits particle sizes in the rough vicinity of the wavelength of light.[6]Light scatteringby particles of complex shape is described by theT-matrix method.[8]
The color of blueeyesis due to the Tyndallscatteringof light by atranslucentlayer ofturbidmedia in theiriscontaining numerous small particles of about 0.6 micrometers in diameter. These particles are finely suspended within the fibrovascular structure of thestromaor front layer of the iris.[9]Some brown irises have the same layer, except with moremelaninin it. Moderate amounts of melanin make hazel, dark blue and green eyes.
In eyes that contain both particles and melanin, melanin absorbs light. In the absence of melanin, the layer istranslucent(i.e. the light passing through is randomly and diffusely scattered by the particles) and a noticeable portion of the light that enters this translucent layer re-emerges via a radial scattered path. That is, there isbackscatter, the redirection of the light waves back out to the open air.
Scattering takes place to a greater extent at shorter wavelengths. The longer wavelengths tend to pass straight through the translucent layer with unaltered paths of yellow light, and then encounter the next layer further back in the iris, which is a light absorber called the epithelium oruveathat is colored brownish-black. The brightness or intensity of the blue light scattered by the particles is due to this layer along with the turbid medium of particles within the stroma.
Thus, the longer wavelengths are not reflected (by scattering) back to the open air as much as the shorter wavelengths. Because the shorter wavelengths are the blue wavelengths, this gives rise to a blue hue in the light that comes out of the eye.[10][11]The blue iris is an example of astructural colorbecause it relies only on the interference of light through the turbid medium to generate the color.
Blue eyes and brown eyes, therefore, are anatomically different from each other in a genetically non-variable way because of the difference between turbid media and melanin.[citation needed]Both kinds of eye color can remain functionally separate despite being "mixed" together.
When the day's sky isovercast,sunlightpasses through theturbiditylayer of the clouds, resulting in scattered,diffuse lighton the ground (sunbeam). This exhibitsMie scatteringinstead of Tyndall scattering because the cloud droplets are larger than the wavelength of the light and scatter all colors approximately equally.[citation needed]When the daytime sky iscloudless, the sky's color is blue due toRayleigh scatteringinstead of Tyndall scattering because the scattering particles are the air molecules, which are much smaller than the wavelengths of visible light.[12]Similarly, the termTyndall effectis incorrectly applied to light scattering by large,macroscopicdustparticles in the air; owing to their large size, they do not exhibit Tyndall scattering.[1]
|
https://en.wikipedia.org/wiki/Tyndall_effect
|
Anultramicroscopeis amicroscopewith a system that lights the object in a way that allows viewing of tinyparticlesvialight scattering, and notlight reflectionorabsorption. When the diameter of a particle is below or near thewavelengthofvisible light(around 500nanometers), the particle cannot be seen in alight microscopewith the usual methods of illumination. Theultra-inultramicroscoperefers to the ability to see objects whose diameter is shorter than the wavelength of visible light, on the model of theultra-inultraviolet.
In the system, the particles to be observed are dispersed in a liquid or gascolloid(or less often in a coarsersuspension). The colloid is placed in a light-absorbing, dark enclosure, and illuminated with a convergent beam of intense light entering from one side. Light hitting the colloid particles will be scattered. In discussions about light scattering, the converging beam is called a "Tyndall cone". The scene is viewed through an ordinary microscope placed at right angles to the direction of the lightbeam. Under the microscope, the individual particles will appear as small fuzzy spots of light moving irregularly. The spots are inherently fuzzy because light scattering produces fuzzier images than light reflection. The particles are inBrownian motionin most kinds of liquid and gas colloids, which causes the movement of the spots. The ultramicroscope system can also be used to observe tiny nontransparent particles dispersed in a transparent solid or gel.
Ultramicroscopes have been used for general observation ofaerosolsandcolloids, in studyingBrownian motion, in observingionizationtracks incloud chambers, and in studying biologicalultrastructure.
In 1902, the ultramicroscope was developed byRichard Adolf Zsigmondy(1865–1929) andHenry Siedentopf(1872–1940), working forCarl Zeiss AG.[1]Applying bright sunlight for illumination, they were able to determine the size of nanoparticles as small as 4 nm incranberry glass. Zsigmondy further improved the ultramicroscope and presented the immersion ultramicroscope in 1912, allowing the observation of suspended nanoparticles in defined fluidic volumes.[2][3]In 1925, he was awarded the Nobel Prize in Chemistry for his research on colloids and the ultramicroscope.
Later the development ofelectron microscopesprovided additional ways to see objects too small for light microscopy.
|
https://en.wikipedia.org/wiki/Ultramicroscope
|
In computer science, the iterated logarithm of n, written log* n (usually read "log star"), is the number of times the logarithm function must be iteratively applied before the result is less than or equal to 1.[1] The simplest formal definition is the result of this recurrence relation: log* n = 0 if n ≤ 1, and log* n = 1 + log*(log n) if n > 1.
In computer science,lg*is often used to indicate thebinary iterated logarithm, which iterates thebinary logarithm(with base2{\displaystyle 2}) instead of the natural logarithm (with basee). Mathematically, the iterated logarithm is well defined for any base greater thane1/e≈1.444667{\displaystyle e^{1/e}\approx 1.444667}, not only for base2{\displaystyle 2}and basee. The "super-logarithm" functionslogb(n){\displaystyle \mathrm {slog} _{b}(n)}is "essentially equivalent" to the baseb{\displaystyle b}iterated logarithm (although differing in minor details ofrounding) and forms an inverse to the operation oftetration.[2]
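A minimal Python sketch of the binary iterated logarithm described above (the function name is an illustrative choice; the last example also illustrates the bound for n ≤ 2^65536 mentioned below):

```python
import math

def log_star(n) -> int:
    """Binary iterated logarithm lg*: how many times log2 must be applied
    before the result is less than or equal to 1."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

print(log_star(16))          # 3   (16 -> 4 -> 2 -> 1)
print(log_star(65536))       # 4
print(log_star(2 ** 65536))  # 5
```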
The iterated logarithm is useful inanalysis of algorithmsandcomputational complexity, appearing in the time and space complexity bounds of some algorithms such as:
The iterated logarithm grows at an extremely slow rate, much slower than the logarithm itself or repeated applications of it. This is because tetration grows much faster than the iterated exponential:
yb=bb⋅⋅b⏟y≫bb⋅⋅by⏟n{\displaystyle {^{y}b}=\underbrace {b^{b^{\cdot ^{\cdot ^{b}}}}} _{y}\gg \underbrace {b^{b^{\cdot ^{\cdot ^{b^{y}}}}}} _{n}}
the inverse grows much slower:logb∗x≪logbnx{\displaystyle \log _{b}^{*}x\ll \log _{b}^{n}x}.
For all values ofnrelevant to counting the running times of algorithms implemented in practice (i.e.,n≤ 2^65536, which is far more than the estimated number of atoms in the known universe), the iterated logarithm with base 2 has a value no more than 5.
Higher bases give smaller iterated logarithms.
The iterated logarithm is closely related to thegeneralized logarithm functionused insymmetric level-index arithmetic. The additivepersistence of a number, the number of times one must replace the number by the sum of its digits before reaching itsdigital root, isO(log∗n){\displaystyle O(\log ^{*}n)}.
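A small sketch of the additive persistence just mentioned (the helper name is illustrative):

```python
def additive_persistence(n: int) -> int:
    """Number of times n must be replaced by its digit sum before reaching its digital root."""
    steps = 0
    while n >= 10:
        n = sum(int(d) for d in str(n))
        steps += 1
    return steps

print(additive_persistence(2718))  # 2718 -> 18 -> 9, so 2 steps
```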
Incomputational complexity theory, Santhanam[6]shows that thecomputational resourcesDTIME—computation timefor adeterministic Turing machine— andNTIME— computation time for anon-deterministic Turing machine— are distinct up tonlog∗n.{\displaystyle n{\sqrt {\log ^{*}n}}.}
|
https://en.wikipedia.org/wiki/Iterated_logarithm
|
Inmathematics, theWiener process(orBrownian motion, due to its historical connection withthe physical process of the same name) is a real-valuedcontinuous-timestochastic processdiscovered byNorbert Wiener.[1][2]It is one of the best knownLévy processes(càdlàgstochastic processes withstationaryindependent increments). It occurs frequently in pure andapplied mathematics,economics,quantitative finance,evolutionary biology, andphysics.
The Wiener process plays an important role in both pure and applied mathematics. In pure mathematics, the Wiener process gave rise to the study of continuous timemartingales. It is a key process in terms of which more complicated stochastic processes can be described. As such, it plays a vital role instochastic calculus,diffusion processesand evenpotential theory. It is the driving process ofSchramm–Loewner evolution. Inapplied mathematics, the Wiener process is used to represent the integral of awhite noiseGaussian process, and so is useful as a model of noise inelectronics engineering(seeBrownian noise), instrument errors infiltering theoryand disturbances incontrol theory.
The Wiener process has applications throughout the mathematical sciences. In physics it is used to study Brownian motion and other types of diffusion via theFokker–PlanckandLangevin equations. It also forms the basis for the rigorouspath integral formulationofquantum mechanics(by theFeynman–Kac formula, a solution to theSchrödinger equationcan be represented in terms of the Wiener process) and the study ofeternal inflationinphysical cosmology. It is also prominent in themathematical theory of finance, in particular theBlack–Scholesoption pricing model.[3]
The Wiener processWt{\displaystyle W_{t}}is characterised by the following properties:[4]
That the process has independent increments means that if0 ≤s1<t1≤s2<t2thenWt1−Ws1andWt2−Ws2are independent random variables, and the similar condition holds fornincrements.
An alternative characterisation of the Wiener process is the so-calledLévy characterisationthat says that the Wiener process is an almost surely continuousmartingalewithW0= 0andquadratic variation[Wt,Wt] =t(which means thatWt2−tis also a martingale).
A third characterisation is that the Wiener process has a spectral representation as a sine series whose coefficients are independentN(0, 1) random variables. This representation can be obtained using theKarhunen–Loève theorem.
Another characterisation of a Wiener process is thedefinite integral(from time zero to timet) of a zero mean, unit variance, delta correlated ("white")Gaussian process.[5]
The Wiener process can be constructed as thescaling limitof arandom walk, or other discrete-time stochastic processes with stationary independent increments. This is known asDonsker's theorem. Like the random walk, the Wiener process is recurrent in one or two dimensions (meaning that it returns almost surely to any fixedneighborhoodof the origin infinitely often) whereas it is not recurrent in dimensions three and higher (where a multidimensional Wiener process is a process such that its coordinates are independent Wiener processes).[6]Unlike the random walk, it isscale invariant, meaning thatα−1Wα2t{\displaystyle \alpha ^{-1}W_{\alpha ^{2}t}}is a Wiener process for any nonzero constantα. TheWiener measureis theprobability lawon the space ofcontinuous functionsg, withg(0) = 0, induced by the Wiener process. Anintegralbased on Wiener measure may be called aWiener integral.
Letξ1,ξ2,…{\displaystyle \xi _{1},\xi _{2},\ldots }bei.i.d.random variables with mean 0 and variance 1. For eachn, define a continuous time stochastic processWn(t)=1n∑1≤k≤⌊nt⌋ξk,t∈[0,1].{\displaystyle W_{n}(t)={\frac {1}{\sqrt {n}}}\sum \limits _{1\leq k\leq \lfloor nt\rfloor }\xi _{k},\qquad t\in [0,1].}This is a random step function. Increments ofWn{\displaystyle W_{n}}are independent because theξk{\displaystyle \xi _{k}}are independent. For largen,Wn(t)−Wn(s){\displaystyle W_{n}(t)-W_{n}(s)}is close toN(0,t−s){\displaystyle N(0,t-s)}by the central limit theorem.Donsker's theoremasserts that asn→∞{\displaystyle n\to \infty },Wn{\displaystyle W_{n}}approaches a Wiener process, which explains the ubiquity of Brownian motion.[7]
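A minimal simulation sketch of this scaled-random-walk construction (NumPy assumed; the ±1 steps, the value of n and the seed are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def donsker_path(n: int, num_points: int = 1000):
    """Random step function W_n(t) = (1/sqrt(n)) * sum_{k <= floor(n t)} xi_k on [0, 1]."""
    xi = rng.choice([-1.0, 1.0], size=n)              # i.i.d. steps with mean 0, variance 1
    partial_sums = np.concatenate(([0.0], np.cumsum(xi)))
    t = np.linspace(0.0, 1.0, num_points)
    W_n = partial_sums[np.floor(n * t).astype(int)] / np.sqrt(n)
    return t, W_n

t, w = donsker_path(n=100_000)
print(w[-1])   # W_n(1): across independent runs this is approximately N(0, 1)
```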
The unconditionalprobability density functionfollows anormal distributionwith mean = 0 and variance =t, at a fixed timet:fWt(x)=12πte−x2/(2t).{\displaystyle f_{W_{t}}(x)={\frac {1}{\sqrt {2\pi t}}}e^{-x^{2}/(2t)}.}
Theexpectationis zero:E[Wt]=0.{\displaystyle \operatorname {E} [W_{t}]=0.}
Thevariance, using the computational formula, ist:Var(Wt)=t.{\displaystyle \operatorname {Var} (W_{t})=t.}
These results follow immediately from the definition that increments have anormal distribution, centered at zero. ThusWt=Wt−W0∼N(0,t).{\displaystyle W_{t}=W_{t}-W_{0}\sim N(0,t).}
Thecovarianceandcorrelation(wheres≤t{\displaystyle s\leq t}):cov(Ws,Wt)=s,corr(Ws,Wt)=cov(Ws,Wt)σWsσWt=sst=st.{\displaystyle {\begin{aligned}\operatorname {cov} (W_{s},W_{t})&=s,\\\operatorname {corr} (W_{s},W_{t})&={\frac {\operatorname {cov} (W_{s},W_{t})}{\sigma _{W_{s}}\sigma _{W_{t}}}}={\frac {s}{\sqrt {st}}}={\sqrt {\frac {s}{t}}}.\end{aligned}}}
These results follow from the definition that non-overlapping increments are independent, of which only the property that they are uncorrelated is used. Suppose thatt1≤t2{\displaystyle t_{1}\leq t_{2}}.cov(Wt1,Wt2)=E[(Wt1−E[Wt1])⋅(Wt2−E[Wt2])]=E[Wt1⋅Wt2].{\displaystyle \operatorname {cov} (W_{t_{1}},W_{t_{2}})=\operatorname {E} \left[(W_{t_{1}}-\operatorname {E} [W_{t_{1}}])\cdot (W_{t_{2}}-\operatorname {E} [W_{t_{2}}])\right]=\operatorname {E} \left[W_{t_{1}}\cdot W_{t_{2}}\right].}
SubstitutingWt2=(Wt2−Wt1)+Wt1{\displaystyle W_{t_{2}}=(W_{t_{2}}-W_{t_{1}})+W_{t_{1}}}we arrive at:E[Wt1⋅Wt2]=E[Wt1⋅((Wt2−Wt1)+Wt1)]=E[Wt1⋅(Wt2−Wt1)]+E[Wt12].{\displaystyle {\begin{aligned}\operatorname {E} [W_{t_{1}}\cdot W_{t_{2}}]&=\operatorname {E} \left[W_{t_{1}}\cdot ((W_{t_{2}}-W_{t_{1}})+W_{t_{1}})\right]\\&=\operatorname {E} \left[W_{t_{1}}\cdot (W_{t_{2}}-W_{t_{1}})\right]+\operatorname {E} \left[W_{t_{1}}^{2}\right].\end{aligned}}}
SinceWt1=Wt1−Wt0{\displaystyle W_{t_{1}}=W_{t_{1}}-W_{t_{0}}}andWt2−Wt1{\displaystyle W_{t_{2}}-W_{t_{1}}}are independent,E[Wt1⋅(Wt2−Wt1)]=E[Wt1]⋅E[Wt2−Wt1]=0.{\displaystyle \operatorname {E} \left[W_{t_{1}}\cdot (W_{t_{2}}-W_{t_{1}})\right]=\operatorname {E} [W_{t_{1}}]\cdot \operatorname {E} [W_{t_{2}}-W_{t_{1}}]=0.}
Thuscov(Wt1,Wt2)=E[Wt12]=t1.{\displaystyle \operatorname {cov} (W_{t_{1}},W_{t_{2}})=\operatorname {E} \left[W_{t_{1}}^{2}\right]=t_{1}.}
A corollary useful for simulation is that we can write, fort1<t2:Wt2=Wt1+t2−t1⋅Z{\displaystyle W_{t_{2}}=W_{t_{1}}+{\sqrt {t_{2}-t_{1}}}\cdot Z}whereZis an independent standard normal variable.
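This corollary gives the standard way to simulate a Wiener path on a time grid (a minimal NumPy sketch; the horizon, step count and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)

def wiener_path(T: float = 1.0, steps: int = 1000):
    """Sample W at 0 = t_0 < ... < t_n = T using W_{t+dt} = W_t + sqrt(dt) * Z."""
    t = np.linspace(0.0, T, steps + 1)
    dt = T / steps
    increments = rng.normal(0.0, np.sqrt(dt), size=steps)
    W = np.concatenate(([0.0], np.cumsum(increments)))
    return t, W

t, W = wiener_path()
print(W[-1])   # W(T); over many paths its variance is close to T
```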
Wiener (1923) also gave a representation of a Brownian path in terms of a randomFourier series. Ifξn{\displaystyle \xi _{n}}are independent Gaussian variables with mean zero and variance one, thenWt=ξ0t+2∑n=1∞ξnsinπntπn{\displaystyle W_{t}=\xi _{0}t+{\sqrt {2}}\sum _{n=1}^{\infty }\xi _{n}{\frac {\sin \pi nt}{\pi n}}}andWt=2∑n=1∞ξnsin((n−12)πt)(n−12)π{\displaystyle W_{t}={\sqrt {2}}\sum _{n=1}^{\infty }\xi _{n}{\frac {\sin \left(\left(n-{\frac {1}{2}}\right)\pi t\right)}{\left(n-{\frac {1}{2}}\right)\pi }}}represent a Brownian motion on[0,1]{\displaystyle [0,1]}. The scaled processcW(tc){\displaystyle {\sqrt {c}}\,W\left({\frac {t}{c}}\right)}is a Brownian motion on[0,c]{\displaystyle [0,c]}(cf.Karhunen–Loève theorem).
The joint distribution of the running maximumMt=max0≤s≤tWs{\displaystyle M_{t}=\max _{0\leq s\leq t}W_{s}}andWtisfMt,Wt(m,w)=2(2m−w)t2πte−(2m−w)22t,m≥0,w≤m.{\displaystyle f_{M_{t},W_{t}}(m,w)={\frac {2(2m-w)}{t{\sqrt {2\pi t}}}}e^{-{\frac {(2m-w)^{2}}{2t}}},\qquad m\geq 0,w\leq m.}
To get the unconditional distribution offMt{\displaystyle f_{M_{t}}}, integrate over−∞ <w≤m:fMt(m)=∫−∞mfMt,Wt(m,w)dw=∫−∞m2(2m−w)t2πte−(2m−w)22tdw=2πte−m22t,m≥0,{\displaystyle {\begin{aligned}f_{M_{t}}(m)&=\int _{-\infty }^{m}f_{M_{t},W_{t}}(m,w)\,dw=\int _{-\infty }^{m}{\frac {2(2m-w)}{t{\sqrt {2\pi t}}}}e^{-{\frac {(2m-w)^{2}}{2t}}}\,dw\\[5pt]&={\sqrt {\frac {2}{\pi t}}}e^{-{\frac {m^{2}}{2t}}},\qquad m\geq 0,\end{aligned}}}
the probability density function of aHalf-normal distribution. The expectation[8]isE[Mt]=∫0∞mfMt(m)dm=∫0∞m2πte−m22tdm=2tπ{\displaystyle \operatorname {E} [M_{t}]=\int _{0}^{\infty }mf_{M_{t}}(m)\,dm=\int _{0}^{\infty }m{\sqrt {\frac {2}{\pi t}}}e^{-{\frac {m^{2}}{2t}}}\,dm={\sqrt {\frac {2t}{\pi }}}}
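A quick Monte Carlo check of the E[M_t] = sqrt(2t/π) formula (a sketch only; the discrete grid slightly underestimates the true running maximum):

```python
import numpy as np

rng = np.random.default_rng(1)
t, steps, paths = 1.0, 1000, 5000
dt = t / steps

increments = rng.normal(0.0, np.sqrt(dt), size=(paths, steps))
W = np.cumsum(increments, axis=1)
running_max = np.maximum(W.max(axis=1), 0.0)   # include W_0 = 0 in the maximum

print(running_max.mean())       # Monte Carlo estimate of E[M_t]
print(np.sqrt(2 * t / np.pi))   # theoretical value, about 0.798 for t = 1
```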
If at timet{\displaystyle t}the Wiener process has a known valueWt{\displaystyle W_{t}}, it is possible to calculate the conditional probability distribution of the maximum in interval[0,t]{\displaystyle [0,t]}(cf.Probability distribution of extreme points of a Wiener stochastic process). Thecumulative probability distribution functionof the maximum value,conditionedby the known valueWt{\displaystyle W_{t}}, is:FMWt(m)=Pr(MWt=max0≤s≤tW(s)≤m∣W(t)=Wt)=1−e−2m(m−Wt)t,m>max(0,Wt){\displaystyle \,F_{M_{W_{t}}}(m)=\Pr \left(M_{W_{t}}=\max _{0\leq s\leq t}W(s)\leq m\mid W(t)=W_{t}\right)=\ 1-\ e^{-2{\frac {m(m-W_{t})}{t}}}\ \,,\,\ \ m>\max(0,W_{t})}
For everyc> 0the processVt=(1/c)Wct{\displaystyle V_{t}=(1/{\sqrt {c}})W_{ct}}is another Wiener process.
The processVt=W1−t−W1{\displaystyle V_{t}=W_{1-t}-W_{1}}for0 ≤t≤ 1is distributed likeWtfor0 ≤t≤ 1.
The processVt=tW1/t{\displaystyle V_{t}=tW_{1/t}}is another Wiener process.
Consider a Wiener processW(t){\displaystyle W(t)},t∈R{\displaystyle t\in \mathbb {R} }, conditioned so thatlimt→±∞tW(t)=0{\displaystyle \lim _{t\to \pm \infty }tW(t)=0}(which holds almost surely) and as usualW(0)=0{\displaystyle W(0)=0}. Then the following are all Wiener processes (Takenaka 1988):W1,s(t)=W(t+s)−W(s),s∈RW2,σ(t)=σ−1/2W(σt),σ>0W3(t)=tW(−1/t).{\displaystyle {\begin{array}{rcl}W_{1,s}(t)&=&W(t+s)-W(s),\quad s\in \mathbb {R} \\W_{2,\sigma }(t)&=&\sigma ^{-1/2}W(\sigma t),\quad \sigma >0\\W_{3}(t)&=&tW(-1/t).\end{array}}}Thus the Wiener process is invariant under the projective groupPSL(2,R), being invariant under the generators of the group. The action of an elementg=[abcd]{\displaystyle g={\begin{bmatrix}a&b\\c&d\end{bmatrix}}}isWg(t)=(ct+d)W(at+bct+d)−ctW(ac)−dW(bd),{\displaystyle W_{g}(t)=(ct+d)W\left({\frac {at+b}{ct+d}}\right)-ctW\left({\frac {a}{c}}\right)-dW\left({\frac {b}{d}}\right),}which defines agroup action, in the sense that(Wg)h=Wgh.{\displaystyle (W_{g})_{h}=W_{gh}.}
LetW(t){\displaystyle W(t)}be a two-dimensional Wiener process, regarded as a complex-valued process withW(0)=0∈C{\displaystyle W(0)=0\in \mathbb {C} }. LetD⊂C{\displaystyle D\subset \mathbb {C} }be an open set containing 0, andτD{\displaystyle \tau _{D}}be associated Markov time:τD=inf{t≥0|W(t)∉D}.{\displaystyle \tau _{D}=\inf\{t\geq 0|W(t)\not \in D\}.}Iff:D→C{\displaystyle f:D\to \mathbb {C} }is aholomorphic functionwhich is not constant, such thatf(0)=0{\displaystyle f(0)=0}, thenf(Wt){\displaystyle f(W_{t})}is a time-changed Wiener process inf(D){\displaystyle f(D)}(Lawler 2005). More precisely, the processY(t){\displaystyle Y(t)}is Wiener inD{\displaystyle D}with the Markov timeS(t){\displaystyle S(t)}whereY(t)=f(W(σ(t))){\displaystyle Y(t)=f(W(\sigma (t)))}S(t)=∫0t|f′(W(s))|2ds{\displaystyle S(t)=\int _{0}^{t}|f'(W(s))|^{2}\,ds}σ(t)=S−1(t):t=∫0σ(t)|f′(W(s))|2ds.{\displaystyle \sigma (t)=S^{-1}(t):\quad t=\int _{0}^{\sigma (t)}|f'(W(s))|^{2}\,ds.}
If apolynomialp(x,t)satisfies thepartial differential equation(∂∂t+12∂2∂x2)p(x,t)=0{\displaystyle \left({\frac {\partial }{\partial t}}+{\frac {1}{2}}{\frac {\partial ^{2}}{\partial x^{2}}}\right)p(x,t)=0}then the stochastic processMt=p(Wt,t){\displaystyle M_{t}=p(W_{t},t)}is amartingale.
Example:Wt2−t{\displaystyle W_{t}^{2}-t}is a martingale, which shows that thequadratic variationofWon[0,t]is equal tot. It follows that the expectedtime of first exitofWfrom (−c,c) is equal toc2.
More generally, for every polynomialp(x,t)the following stochastic process is a martingale:Mt=p(Wt,t)−∫0ta(Ws,s)ds,{\displaystyle M_{t}=p(W_{t},t)-\int _{0}^{t}a(W_{s},s)\,\mathrm {d} s,}whereais the polynomiala(x,t)=(∂∂t+12∂2∂x2)p(x,t).{\displaystyle a(x,t)=\left({\frac {\partial }{\partial t}}+{\frac {1}{2}}{\frac {\partial ^{2}}{\partial x^{2}}}\right)p(x,t).}
Example:p(x,t)=(x2−t)2,{\displaystyle p(x,t)=\left(x^{2}-t\right)^{2},}a(x,t)=4x2;{\displaystyle a(x,t)=4x^{2};}the process(Wt2−t)2−4∫0tWs2ds{\displaystyle \left(W_{t}^{2}-t\right)^{2}-4\int _{0}^{t}W_{s}^{2}\,\mathrm {d} s}is a martingale, which shows that the quadratic variation of the martingaleWt2−t{\displaystyle W_{t}^{2}-t}on [0,t] is equal to4∫0tWs2ds.{\displaystyle 4\int _{0}^{t}W_{s}^{2}\,\mathrm {d} s.}
About functionsp(xa,t)more general than polynomials, seelocal martingales.
The set of all functionswwith these properties is of full Wiener measure. That is, a path (sample function) of the Wiener process has all these properties almost surely:
Law of the iterated logarithm:lim supt→+∞|w(t)|2tloglogt=1,almost surely.{\displaystyle \limsup _{t\to +\infty }{\frac {|w(t)|}{\sqrt {2t\log \log t}}}=1,\quad {\text{almost surely}}.}
Local modulus of continuity:lim supε→0+|w(ε)|2εloglog(1/ε)=1,almost surely.{\displaystyle \limsup _{\varepsilon \to 0+}{\frac {|w(\varepsilon )|}{\sqrt {2\varepsilon \log \log(1/\varepsilon )}}}=1,\qquad {\text{almost surely}}.}
Global modulus of continuity(Lévy):lim supε→0+sup0≤s<t≤1,t−s≤ε|w(s)−w(t)|2εlog(1/ε)=1,almost surely.{\displaystyle \limsup _{\varepsilon \to 0+}\sup _{0\leq s<t\leq 1,t-s\leq \varepsilon }{\frac {|w(s)-w(t)|}{\sqrt {2\varepsilon \log(1/\varepsilon )}}}=1,\qquad {\text{almost surely}}.}
The dimension doubling theorems say that theHausdorff dimensionof a set under a Brownian motion doubles almost surely.
The image of theLebesgue measureon [0,t] under the mapw(thepushforward measure) has a densityLt. Thus,∫0tf(w(s))ds=∫−∞+∞f(x)Lt(x)dx{\displaystyle \int _{0}^{t}f(w(s))\,\mathrm {d} s=\int _{-\infty }^{+\infty }f(x)L_{t}(x)\,\mathrm {d} x}for a wide class of functionsf(namely: all continuous functions; all locally integrable functions; all non-negative measurable functions). The densityLtis (more exactly, can and will be chosen to be) continuous. The numberLt(x) is called thelocal timeatxofwon [0,t]. It is strictly positive for allxof the interval (a,b) whereaandbare the least and the greatest value ofwon [0,t], respectively. (Forxoutside this interval the local time evidently vanishes.) Treated as a function of two variablesxandt, the local time is still continuous. Treated as a function oft(whilexis fixed), the local time is asingular functioncorresponding to anonatomicmeasure on the set of zeros ofw.
These continuity properties are fairly non-trivial. Consider that the local time can also be defined (as the density of the pushforward measure) for a smooth function. Then, however, the density is discontinuous, unless the given function is monotone. In other words, there is a conflict between good behavior of a function and good behavior of its local time. In this sense, the continuity of the local time of the Wiener process is another manifestation of non-smoothness of the trajectory.
Theinformation rateof the Wiener process with respect to the squared error distance, i.e. its quadraticrate-distortion function, is given by[10]R(D)=2π2Dln2≈0.29D−1.{\displaystyle R(D)={\frac {2}{\pi ^{2}D\ln 2}}\approx 0.29D^{-1}.}Therefore, it is impossible to encode{wt}t∈[0,T]{\displaystyle \{w_{t}\}_{t\in [0,T]}}using abinary codeof less thanTR(D){\displaystyle TR(D)}bitsand recover it with expected mean squared error less thanD{\displaystyle D}. On the other hand, for anyε>0{\displaystyle \varepsilon >0}, there existsT{\displaystyle T}large enough and abinary codeof no more than2TR(D){\displaystyle 2^{TR(D)}}distinct elements such that the expectedmean squared errorin recovering{wt}t∈[0,T]{\displaystyle \{w_{t}\}_{t\in [0,T]}}from this code is at mostD−ε{\displaystyle D-\varepsilon }.
In many cases, it is impossible toencodethe Wiener process withoutsamplingit first. When the Wiener process is sampled at intervalsTs{\displaystyle T_{s}}before applying a binary code to represent these samples, the optimal trade-off betweencode rateR(Ts,D){\displaystyle R(T_{s},D)}and expectedmean square errorD{\displaystyle D}(in estimating the continuous-time Wiener process) follows the parametric representation[11]R(Ts,Dθ)=Ts2∫01log2+[S(φ)−16θ]dφ,{\displaystyle R(T_{s},D_{\theta })={\frac {T_{s}}{2}}\int _{0}^{1}\log _{2}^{+}\left[{\frac {S(\varphi )-{\frac {1}{6}}}{\theta }}\right]d\varphi ,}Dθ=Ts6+Ts∫01min{S(φ)−16,θ}dφ,{\displaystyle D_{\theta }={\frac {T_{s}}{6}}+T_{s}\int _{0}^{1}\min \left\{S(\varphi )-{\frac {1}{6}},\theta \right\}d\varphi ,}whereS(φ)=(2sin(πφ/2))−2{\displaystyle S(\varphi )=(2\sin(\pi \varphi /2))^{-2}}andlog+[x]=max{0,log(x)}{\displaystyle \log ^{+}[x]=\max\{0,\log(x)\}}. In particular,Ts/6{\displaystyle T_{s}/6}is the mean squared error associated only with the sampling operation (without encoding).
The stochastic process defined byXt=μt+σWt{\displaystyle X_{t}=\mu t+\sigma W_{t}}is called aWiener process with drift μand infinitesimal variance σ2. These processes exhaust continuousLévy processes, which means that they are the only continuous Lévy processes, as a consequence of the Lévy–Khintchine representation.
Two random processes on the time interval [0, 1] appear, roughly speaking, when conditioning the Wiener process to vanish on both ends of [0,1]. With no further conditioning, the process takes both positive and negative values on [0, 1] and is calledBrownian bridge. Conditioned also to stay positive on (0, 1), the process is calledBrownian excursion.[12]In both cases a rigorous treatment involves a limiting procedure, since the formulaP(A|B) =P(A∩B)/P(B) does not apply whenP(B) = 0.
Ageometric Brownian motioncan be writteneμt−σ2t2+σWt.{\displaystyle e^{\mu t-{\frac {\sigma ^{2}t}{2}}+\sigma W_{t}}.}
It is a stochastic process which is used to model processes that can never take on negative values, such as the value of stocks.
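A minimal sketch of simulating geometric Brownian motion from a sampled Wiener path (the drift, volatility and grid are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(7)

def gbm_path(mu: float, sigma: float, T: float = 1.0, steps: int = 1000) -> np.ndarray:
    """Geometric Brownian motion exp(mu*t - sigma^2*t/2 + sigma*W_t), started at 1."""
    dt = T / steps
    t = np.linspace(0.0, T, steps + 1)
    W = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), size=steps))))
    return np.exp(mu * t - 0.5 * sigma ** 2 * t + sigma * W)

path = gbm_path(mu=0.05, sigma=0.2)
print(path[0], path.min(), path[-1])   # starts at 1.0 and stays strictly positive
```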
The stochastic processXt=e−tWe2t{\displaystyle X_{t}=e^{-t}W_{e^{2t}}}is distributed like theOrnstein–Uhlenbeck processwith parametersθ=1{\displaystyle \theta =1},μ=0{\displaystyle \mu =0}, andσ2=2{\displaystyle \sigma ^{2}=2}.
Thetime of hittinga single pointx> 0 by the Wiener process is a random variable with theLévy distribution. The family of these random variables (indexed by all positive numbersx) is aleft-continuousmodification of aLévy process. Theright-continuousmodificationof this process is given by times offirst exitfrom closed intervals [0,x].
Thelocal timeL= (Lxt)x∈R,t≥ 0of a Brownian motion describes the time that the process spends at the pointx. FormallyLx(t)=∫0tδ(x−Bs)ds{\displaystyle L^{x}(t)=\int _{0}^{t}\delta (x-B_{s})\,ds}whereδis theDirac delta function. The behaviour of the local time is characterised byRay–Knight theorems.
LetAbe an event related to the Wiener process (more formally: a set, measurable with respect to the Wiener measure, in the space of functions), andXtthe conditional probability ofAgiven the Wiener process on the time interval [0,t] (more formally: the Wiener measure of the set of trajectories whose concatenation with the given partial trajectory on [0,t] belongs toA). Then the processXtis a continuous martingale. Its martingale property follows immediately from the definitions, but its continuity is a very special fact – a special case of a general theorem stating that all Brownian martingales are continuous. A Brownian martingale is, by definition, amartingaleadapted to the Brownian filtration; and the Brownian filtration is, by definition, thefiltrationgenerated by the Wiener process.
The time-integral of the Wiener processW(−1)(t):=∫0tW(s)ds{\displaystyle W^{(-1)}(t):=\int _{0}^{t}W(s)\,ds}is calledintegrated Brownian motionorintegrated Wiener process. It arises in many applications and can be shown to have the distributionN(0,t3/3),[13]calculated using the fact that the covariance of the Wiener process ist∧s=min(t,s){\displaystyle t\wedge s=\min(t,s)}.[14]
For the general case of the process defined byVf(t)=∫0tf′(s)W(s)ds=∫0t(f(t)−f(s))dWs{\displaystyle V_{f}(t)=\int _{0}^{t}f'(s)W(s)\,ds=\int _{0}^{t}(f(t)-f(s))\,dW_{s}}Then, fora>0{\displaystyle a>0},Var(Vf(t))=∫0t(f(t)−f(s))2ds{\displaystyle \operatorname {Var} (V_{f}(t))=\int _{0}^{t}(f(t)-f(s))^{2}\,ds}cov(Vf(t+a),Vf(t))=∫0t(f(t+a)−f(s))(f(t)−f(s))ds{\displaystyle \operatorname {cov} (V_{f}(t+a),V_{f}(t))=\int _{0}^{t}(f(t+a)-f(s))(f(t)-f(s))\,ds}In fact,Vf(t){\displaystyle V_{f}(t)}is always a zero mean normal random variable. This allows for simulation ofVf(t+a){\displaystyle V_{f}(t+a)}givenVf(t){\displaystyle V_{f}(t)}by takingVf(t+a)=A⋅Vf(t)+B⋅Z{\displaystyle V_{f}(t+a)=A\cdot V_{f}(t)+B\cdot Z}whereZis a standard normal variable andA=cov(Vf(t+a),Vf(t))Var(Vf(t)){\displaystyle A={\frac {\operatorname {cov} (V_{f}(t+a),V_{f}(t))}{\operatorname {Var} (V_{f}(t))}}}B2=Var(Vf(t+a))−A2Var(Vf(t)){\displaystyle B^{2}=\operatorname {Var} (V_{f}(t+a))-A^{2}\operatorname {Var} (V_{f}(t))}The case ofVf(t)=W(−1)(t){\displaystyle V_{f}(t)=W^{(-1)}(t)}corresponds tof(t)=t{\displaystyle f(t)=t}. All these results can be seen as direct consequences ofItô isometry.
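For the integrated Wiener process, i.e. f(t) = t, the integrals above reduce to Var(V(t)) = t³/3 and cov(V(t+a), V(t)) = t³/3 + a·t²/2, so the conditional-update recipe can be sketched directly (a minimal illustration; the step size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

def step_integrated_bm(V_t: float, t: float, a: float) -> float:
    """One step of the update V(t+a) = A*V(t) + B*Z for integrated Brownian motion (f(t) = t),
    using Var(V(t)) = t^3/3 and cov(V(t+a), V(t)) = t^3/3 + a*t^2/2."""
    var_t = t ** 3 / 3.0
    var_ta = (t + a) ** 3 / 3.0
    cov = t ** 3 / 3.0 + a * t ** 2 / 2.0
    A = cov / var_t
    B = np.sqrt(var_ta - A ** 2 * var_t)
    return A * V_t + B * rng.normal()

# Walk the process forward on a coarse grid, starting from V(1) ~ N(0, 1/3).
t, V = 1.0, rng.normal(0.0, np.sqrt(1.0 / 3.0))
for _ in range(5):
    V = step_integrated_bm(V, t, a=0.5)
    t += 0.5
print(t, V)
```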
Then-times-integrated Wiener process is a zero-mean normal variable with variancet2n+1(tnn!)2{\displaystyle {\frac {t}{2n+1}}\left({\frac {t^{n}}{n!}}\right)^{2}}. This is given by theCauchy formula for repeated integration.
Every continuous martingale (starting at the origin) is a time changed Wiener process.
Example:2Wt=V(4t) whereVis another Wiener process (different fromWbut distributed likeW).
Example.Wt2−t=VA(t){\displaystyle W_{t}^{2}-t=V_{A(t)}}whereA(t)=4∫0tWs2ds{\displaystyle A(t)=4\int _{0}^{t}W_{s}^{2}\,\mathrm {d} s}andVis another Wiener process.
In general, ifMis a continuous martingale thenMt−M0=VA(t){\displaystyle M_{t}-M_{0}=V_{A(t)}}whereA(t) is thequadratic variationofMon [0,t], andVis a Wiener process.
Corollary.(See alsoDoob's martingale convergence theorems) LetMtbe a continuous martingale, andM∞−=lim inft→∞Mt,{\displaystyle M_{\infty }^{-}=\liminf _{t\to \infty }M_{t},}M∞+=lim supt→∞Mt.{\displaystyle M_{\infty }^{+}=\limsup _{t\to \infty }M_{t}.}
Then only the following two cases are possible:−∞<M∞−=M∞+<+∞,{\displaystyle -\infty <M_{\infty }^{-}=M_{\infty }^{+}<+\infty ,}−∞=M∞−<M∞+=+∞;{\displaystyle -\infty =M_{\infty }^{-}<M_{\infty }^{+}=+\infty ;}other cases (such asM∞−=M∞+=+∞,{\displaystyle M_{\infty }^{-}=M_{\infty }^{+}=+\infty ,}M∞−<M∞+<+∞{\displaystyle M_{\infty }^{-}<M_{\infty }^{+}<+\infty }etc.) are of probability 0.
Especially, a nonnegative continuous martingale has a finite limit (ast→ ∞) almost surely.
All stated (in this subsection) for martingales holds also forlocal martingales.
A wide class ofcontinuous semimartingales(especially, ofdiffusion processes) is related to the Wiener process via a combination of time change andchange of measure.
Using this fact, thequalitative propertiesstated above for the Wiener process can be generalized to a wide class of continuous semimartingales.[15][16]
The complex-valued Wiener process may be defined as a complex-valued random process of the formZt=Xt+iYt{\displaystyle Z_{t}=X_{t}+iY_{t}}whereXt{\displaystyle X_{t}}andYt{\displaystyle Y_{t}}areindependentWiener processes (real-valued). In other words, it is the 2-dimensional Wiener process, where we identifyR2{\displaystyle \mathbb {R} ^{2}}withC{\displaystyle \mathbb {C} }.[17]
Brownian scaling, time reversal, time inversion: the same as in the real-valued case.
Rotation invariance: for every complex numberc{\displaystyle c}such that|c|=1{\displaystyle |c|=1}the processc⋅Zt{\displaystyle c\cdot Z_{t}}is another complex-valued Wiener process.
Iff{\displaystyle f}is anentire functionthen the processf(Zt)−f(0){\displaystyle f(Z_{t})-f(0)}is a time-changed complex-valued Wiener process.
Example:Zt2=(Xt2−Yt2)+2XtYti=UA(t){\displaystyle Z_{t}^{2}=\left(X_{t}^{2}-Y_{t}^{2}\right)+2X_{t}Y_{t}i=U_{A(t)}}whereA(t)=4∫0t|Zs|2ds{\displaystyle A(t)=4\int _{0}^{t}|Z_{s}|^{2}\,\mathrm {d} s}andU{\displaystyle U}is another complex-valued Wiener process.
In contrast to the real-valued case, a complex-valued martingale is generally not a time-changed complex-valued Wiener process. For example, the martingale2Xt+iYt{\displaystyle 2X_{t}+iY_{t}}is not (hereXt{\displaystyle X_{t}}andYt{\displaystyle Y_{t}}are independent Wiener processes, as before).
The Brownian sheet is a multiparametric generalization. The definition varies among authors: some define the Brownian sheet to have specifically a two-dimensional time parametert{\displaystyle t}, while others define it for general dimensions.
|
https://en.wikipedia.org/wiki/Wiener_process
|
Ingraph theory,eigenvector centrality(also calledeigencentralityorprestige score[1]) is a measure of the influence of anodein a connectednetwork. Relative scores are assigned to all nodes in the network based on the concept that connections to high-scoring nodes contribute more to the score of the node in question than equal connections to low-scoring nodes. A high eigenvector score means that a node is connected to many nodes who themselves have high scores.[2][3]
Google'sPageRankand theKatz centralityare variants of the eigenvector centrality.[4]
For a given graph G := (V, E) with |V| vertices, let A = (a_{v,t}) be the adjacency matrix, i.e. a_{v,t} = 1 if vertex v is linked to vertex t, and a_{v,t} = 0 otherwise. The relative centrality score x_v of vertex v can be defined as x_v = (1/λ) Σ_{t ∈ M(v)} x_t = (1/λ) Σ_{t ∈ V} a_{v,t} x_t, where M(v) is the set of neighbors of v and λ is a constant. With a small rearrangement this can be rewritten in vector notation as the eigenvector equation A x = λ x.
In general, there will be many differenteigenvaluesλ{\displaystyle \lambda }for which a non-zero eigenvector solution exists. However, the connectedness assumption and the additional requirement that all the entries in the eigenvector be non-negative imply (by thePerron–Frobenius theorem) that only the greatest eigenvalue results in the desired centrality measure.[5]Thevth{\displaystyle v^{\text{th}}}component of the related eigenvector then gives the relative centrality score of the vertexv{\displaystyle v}in the network. The eigenvector is only defined up to a common factor, so only the ratios of the centralities of the vertices are well defined. To define an absolute score, one must normalise the eigenvector e.g. such that the sum over all vertices is 1 or the total number of verticesn.Power iterationis one of manyeigenvalue algorithmsthat may be used to find this dominant eigenvector.[4]Furthermore, this can be generalized so that the entries inAcan be real numbers representing connection strengths, as in astochastic matrix.
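A minimal power-iteration sketch of this computation (NumPy assumed; the example graph, tolerance and normalisation by the sum of scores are illustrative choices):

```python
import numpy as np

def eigenvector_centrality(A: np.ndarray, tol: float = 1e-10, max_iter: int = 1000) -> np.ndarray:
    """Eigenvector centrality of an adjacency matrix via power iteration,
    normalised so that the scores sum to 1."""
    n = A.shape[0]
    x = np.ones(n) / n
    for _ in range(max_iter):
        x_new = A @ x
        x_new /= x_new.sum()        # fix the arbitrary overall scale at every step
        if np.abs(x_new - x).max() < tol:
            break
        x = x_new
    return x_new

# Triangle {0, 1, 2} with an extra vertex 3 attached to vertex 0.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
print(eigenvector_centrality(A))   # vertex 0 scores highest, vertices 1 and 2 tie, vertex 3 lowest
```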
Google's PageRank is based on the normalized eigenvector centrality, or normalized prestige, combined with a random jump assumption.[1] The PageRank of a node v has recursive dependence on the PageRank of other nodes that point to it. The normalized adjacency matrix N is defined as N(u, v) = 1/od(u) if (u, v) ∈ E and 0 if (u, v) ∉ E, where od(u) is the out-degree of node u, or in vector form: N = (diag(A e))^{-1} A,
wheree{\displaystyle \mathbf {e} }is the vector of ones, anddiag(x){\displaystyle \mathbf {diag} (\mathbf {x} )}is the diagonal matrix of vectorx{\displaystyle \mathbf {x} }.N{\displaystyle \mathbf {N} }is a row-stochastic matrix.
The normalized eigenvector prestige score π is defined by π(v) = Σ_{u:(u,v)∈E} π(u)/od(u), or in vector form, π = N^T π (that is, π is a left eigenvector of N with eigenvalue 1).
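A small sketch combining the row-stochastic matrix N above with the random-jump assumption (a hedged illustration: the damping factor d = 0.85 is the conventional choice rather than something stated in the text, and nodes with no outgoing edges are not handled):

```python
import numpy as np

def pagerank(adj: np.ndarray, d: float = 0.85, iters: int = 200) -> np.ndarray:
    """Prestige scores from the damped power iteration p <- (1 - d)/n + d * N^T p,
    where N is the out-degree-normalised (row-stochastic) adjacency matrix.
    Assumes every node has at least one outgoing edge."""
    n = adj.shape[0]
    N = adj / adj.sum(axis=1, keepdims=True)   # row-stochastic matrix
    p = np.ones(n) / n
    for _ in range(iters):
        p = (1 - d) / n + d * (N.T @ p)
    return p

# Directed edges 0->1, 0->2, 1->2, 2->0.
adj = np.array([[0, 1, 1],
                [0, 0, 1],
                [1, 0, 0]], dtype=float)
print(pagerank(adj))   # approximately [0.39, 0.21, 0.40]
```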
Eigenvector centrality is a measure of the influence a node has on a network. If a node is pointed to by many nodes (which also have high eigenvector centrality) then that node will have high eigenvector centrality.[6]
The earliest use of eigenvector centrality is byEdmund Landauin an 1895 paper on scoring chess tournaments.[7][8]
More recently, researchers across many fields have analyzed applications, manifestations, and extensions of eigenvector centrality in a variety of domains:
|
https://en.wikipedia.org/wiki/Eigenvector_centrality
|
Incondensed matter physics,Anderson localization(also known asstrong localization)[1]is the absence of diffusion of waves in adisorderedmedium. This phenomenon is named after the American physicistP. W. Anderson, who was the first to suggest that electron localization is possible in a lattice potential, provided that the degree ofrandomness(disorder) in the lattice is sufficiently large, as can be realized for example in a semiconductor withimpuritiesordefects.[2]
Anderson localization is a general wave phenomenon that applies to the transport of electromagnetic waves, acoustic waves, quantum waves, spin waves, etc. This phenomenon is to be distinguished fromweak localization, which is the precursor effect of Anderson localization (see below), and fromMott localization, named after SirNevill Mott, where the transition from metallic to insulating behaviour isnotdue to disorder, but to a strong mutualCoulomb repulsionof electrons.
In the original Anderson tight-binding model, the evolution of the wave function ψ on the d-dimensional lattice Z^d is given by the Schrödinger equation iℏ ∂ψ/∂t = Hψ, where the Hamiltonian H is given by[2] (Hψ)_j = E_j ψ_j + Σ_{k≠j} V(|j−k|) ψ_k,
where j, k are lattice locations. The self-energy E_j is taken as random and independently distributed. The interaction potential V(r) = V(|j−k|) is required to fall off faster than 1/r^3 in the r → ∞ limit. For example, one may take E_j uniformly distributed within a band of energies [−W, +W], and V restricted to nearest neighbours, V(|j−k|) = 1 for |j−k| = 1 and 0 otherwise.
Starting withψ0{\displaystyle \psi _{0}}localized at the origin, one is interested in how fast the probability distribution|ψ|2{\displaystyle |\psi |^{2}}diffuses. Anderson's analysis shows the following:
The phenomenon of Anderson localization, particularly that of weak localization, finds its origin in thewave interferencebetween multiple-scattering paths. In the strong scattering limit, the severe interferences can completely halt the waves inside the disordered medium.
For non-interacting electrons, a highly successful approach was put forward in 1979 by Abrahamset al.[3]This scaling hypothesis of localization suggests that a disorder-inducedmetal-insulator transition(MIT) exists for non-interacting electrons in three dimensions (3D) at zero magnetic field and in the absence of spin-orbit coupling. Much further work has subsequently supported these scaling arguments both analytically and numerically (Brandeset al., 2003; see Further Reading). In 1D and 2D, the same hypothesis shows that there are no extended states and thus no MIT or only an apparent MIT.[4]However, since 2 is the lower critical dimension of the localization problem, the 2D case is in a sense close to 3D: states are only marginally localized for weak disorder and a smallspin-orbit couplingcan lead to the existence of extended states and thus an MIT. Consequently, the localization lengths of a 2D system with potential-disorder can be quite large so that in numerical approaches one can always find a localization-delocalization transition when either decreasing system size for fixed disorder or increasing disorder for fixed system size.
Most numerical approaches to the localization problem use the standard tight-binding AndersonHamiltonianwith onsite-potential disorder. Characteristics of the electroniceigenstatesare then investigated by studies of participation numbers obtained by exact diagonalization, multifractal properties, level statistics and many others. Especially fruitful is thetransfer-matrix method(TMM) which allows a direct computation of the localization lengths and further validates the scaling hypothesis by a numerical proof of the existence of a one-parameter scaling function. Direct numerical solution of Maxwell equations to demonstrate Anderson localization of light has been implemented (Conti and Fratalocchi, 2008).
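A minimal one-dimensional illustration of this kind of numerical study: exact diagonalization of a small Anderson chain, with the inverse participation ratio of the eigenstates as a localization diagnostic (the chain length, disorder strengths and unit nearest-neighbour hopping are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def anderson_chain_ipr(L: int = 400, W: float = 2.0) -> float:
    """Mean inverse participation ratio (IPR) of the eigenstates of a 1D Anderson chain
    with on-site energies uniform in [-W, W] and nearest-neighbour hopping 1.
    IPR = sum_j |psi_j|^4 is about 1/L for extended-looking states and O(1) for localized ones."""
    onsite = rng.uniform(-W, W, size=L)
    H = np.diag(onsite) + np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)
    _, vecs = np.linalg.eigh(H)                 # columns are normalised eigenstates
    ipr = np.sum(np.abs(vecs) ** 4, axis=0)
    return float(ipr.mean())

print(anderson_chain_ipr(W=0.1))  # weak disorder: small IPR (states spread over many sites)
print(anderson_chain_ipr(W=5.0))  # strong disorder: much larger IPR (strongly localized states)
```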
Recent work has shown that a non-interacting Anderson localized system can becomemany-body localizedeven in the presence of weak interactions. This result has been rigorously proven in 1D, while perturbative arguments exist even for two and three dimensions.
Anderson localization can be observed in a perturbed periodic potential where the transverse localization of light is caused by random fluctuations on a photonic lattice. Experimental realizations of transverse localization were reported for a 2D lattice (Schwartzet al., 2007) and a 1D lattice (Lahiniet al., 2006). Transverse Anderson localization of light has also been demonstrated in an optical fiber medium (Karbasiet al., 2012) and a biological medium (Choiet al., 2018), and has also been used to transport images through the fiber (Karbasiet al., 2014). It has also been observed by localization of aBose–Einstein condensatein a 1D disordered optical potential (Billyet al., 2008; Roatiet al., 2008).
In 3D, observations are more rare. Anderson localization of elastic waves in a 3D disordered medium has been reported (Huet al., 2008). The observation of the MIT has been reported in a 3D model with atomic matter waves (Chabéet al., 2008). The MIT, associated with the nonpropagative electron waves has been reported in a cm-sized crystal (Yinget al., 2016).Random laserscan operate using this phenomenon.
The existence of Anderson localization for light in 3D was debated for years (Skipetrovet al., 2016) and remains unresolved today. Reports of Anderson localization of light in 3D random media were complicated by the competing/masking effects of absorption (Wiersmaet al., 1997; Storzeret al., 2006; Scheffoldet al., 1999; see Further Reading) and/or fluorescence (Sperlinget al., 2016). Recent experiments (Naraghiet al., 2016; Cobuset al., 2023) support theoretical predictions that the vector nature of light prohibits the transition to Anderson localization (John, 1992; Skipetrovet al., 2019).
Standard diffusion has no localization property, being in disagreement with quantum predictions. However, it turns out that it is based on approximation of theprinciple of maximum entropy, which says that the probability distribution which best represents the current state of knowledge is the one with largest entropy. This approximation is repaired inmaximal entropy random walk, also repairing the disagreement: it turns out to lead to exactly the quantum ground state stationary probability distribution with its strong localization properties.[5][6]
|
https://en.wikipedia.org/wiki/Anderson_localization
|
Inphysics,critical phenomenais the collective name associated with the physics ofcritical points. Most of them stem from the divergence of thecorrelation length, but also the dynamics slows down. Critical phenomena includescalingrelations among different quantities,power-lawdivergences of some quantities (such as themagnetic susceptibilityin theferromagnetic phase transition) described bycritical exponents,universality,fractalbehaviour, andergodicitybreaking. Critical phenomena take place insecond order phase transitions, although not exclusively.
The critical behavior is usually different from themean-field approximationwhich is valid away from thephase transition, since the latter neglects correlations, which become increasingly important as the system approaches the critical point where the correlation length diverges. Many properties of the critical behavior of a system can be derived in the framework of therenormalization group.
In order to explain the physical origin of these phenomena, we shall use theIsing modelas a pedagogical example.
Consider a 2D square array of classical spins which may only take two positions: +1 and −1, at a certain temperature T, interacting through the Ising classical Hamiltonian H = −J Σ_{⟨i,j⟩} S_i S_j, where the sum is extended over the pairs of nearest neighbours and J is a coupling constant, which we will consider to be fixed.
At temperature zero, the system may only take one global sign, either +1 or -1. At higher temperatures, but belowTc{\displaystyle T_{c}}, the state is still globally magnetized, but clusters of the opposite sign appear. As the temperature increases, these clusters start to contain smaller clusters themselves, in a typical Russian dolls picture. Their typical size, called thecorrelation length,ξ{\displaystyle \xi }grows with temperature until it diverges atTc{\displaystyle T_{c}}. This means that the whole system is such a cluster, and there is no global magnetization. Above that temperature, the system is globally disordered, but with ordered clusters within it, whose size is again calledcorrelation length, but it is now decreasing with temperature. At infinite temperature, it is again zero, with the system fully disordered.
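A minimal Metropolis Monte Carlo sketch of this picture (a rough illustration only: the lattice size, sweep count and temperatures are arbitrary, and only the final configuration is measured; for reference, the exact critical temperature of the square-lattice model is T_c = 2J/ln(1 + √2) ≈ 2.269 J):

```python
import numpy as np

rng = np.random.default_rng(0)

def ising_magnetization(T: float, L: int = 24, sweeps: int = 400, J: float = 1.0) -> float:
    """|magnetization| per spin of the final configuration of a 2D Ising model,
    sampled with single-spin-flip Metropolis updates (cold start, periodic boundaries)."""
    spins = np.ones((L, L), dtype=int)          # cold start: all spins +1
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(L), rng.integers(L)
            nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * J * spins[i, j] * nn     # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1
    return abs(spins.mean())

for T in (1.5, 2.27, 3.5):
    print(T, ising_magnetization(T))   # close to 1 well below T_c, small well above it
```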
The correlation length diverges at the critical point: asT→Tc{\displaystyle T\to T_{c}},ξ→∞{\displaystyle \xi \to \infty }. This divergence poses no physical problem. Other physical observables diverge at this point, leading to some confusion at the beginning.
The most important is the susceptibility. Let us apply a very small magnetic field to the system at the critical point. A very small magnetic field is not able to magnetize a large coherent cluster, but with thesefractalclusters the picture changes. It easily affects the smallest clusters, since they have a nearlyparamagneticbehaviour. But this change, in its turn, affects the next-scale clusters, and the perturbation climbs the ladder until the whole system changes radically. Thus, critical systems are very sensitive to small changes in the environment.
Other observables, such as thespecific heat, may also diverge at this point. All these divergences stem from that of the correlation length.
As we approach the critical point, these diverging observables behave asA(T)∝(T−Tc)α{\displaystyle A(T)\propto (T-T_{c})^{\alpha }}for some exponentα,{\displaystyle \alpha \,,}where, typically, the value of the exponent α is the same above and below Tc. These exponents are calledcritical exponentsand are robust observables. Even more, they take the same values for very different physical systems. This intriguing phenomenon, calleduniversality, is explained, qualitatively and also quantitatively, by therenormalization group.[1]
Critical phenomena may also appear fordynamicquantities, not only forstaticones. In fact, the divergence of the characteristictimeτ{\displaystyle \tau }of a system is directly related to the divergence of the thermalcorrelation lengthξ{\displaystyle \xi }by the introduction of a dynamical exponentzand the relationτ=ξz{\displaystyle \tau =\xi ^{\,z}}.[2]The voluminousstatic universality classof a system splits into different, less voluminousdynamic universality classeswith different values ofzbut a common static critical behaviour, and by approaching the critical point one may observe all kinds of slowing-down phenomena. The divergence of relaxation timeτ{\displaystyle \tau }at criticality leads to singularities in various collective transport quantities, e.g., the interdiffusivity,shear viscosityη∼ξxη{\displaystyle \eta \sim \xi ^{x_{\eta }}},[3]and bulk viscosityζ∼ξxζ{\displaystyle \zeta \sim \xi ^{x_{\zeta }}}. The dynamic critical exponents follow certain scaling relations, viz.,z=d+xη{\displaystyle z=d+x_{\eta }}, where d is the space dimension. There is only one independent dynamic critical exponent. Values of these exponents are dictated by several universality classes. According to the Hohenberg−Halperin nomenclature,[4]for the model H[5]universality class (fluids)xη≃0.068,z≃3.068{\displaystyle x_{\eta }\simeq 0.068,z\simeq 3.068}.
Ergodicityis the assumption that a system, at a given temperature, explores the full phase space, with each state merely taken with a different probability. In an Ising ferromagnet below Tc this does not happen. If T < Tc, no matter how close T is to Tc, the system has chosen a global magnetization, and the phase space is divided into two regions. From one of them it is impossible to reach the other, unless a magnetic field is applied, or the temperature is raised above Tc.
See alsosuperselection sector
The main mathematical tools to study critical points arerenormalization group, which takes advantage of the Russian dolls picture or theself-similarityto explain universality and predict numerically the critical exponents, andvariational perturbation theory, which converts divergent perturbation expansions into convergent strong-coupling expansions relevant to critical phenomena. In two-dimensional systems,conformal field theoryis a powerful tool which has discovered many new properties of 2D critical systems, employing the fact that scale invariance, along with a few other requisites, leads to an infinitesymmetry group.
The critical point is described by aconformal field theory. According to therenormalization grouptheory, the defining property of criticality is that the characteristiclength scaleof the structure of the physical system, also known as thecorrelation lengthξ, becomes infinite. This can happen alongcritical linesinphase space. This effect is the cause of thecritical opalescencethat can be observed as a binary fluid mixture approaches its liquid–liquid critical point.
In systems in equilibrium, the critical point is reached only by precisely tuning a control parameter. However, in somenon-equilibriumsystems, the critical point is anattractorof the dynamics in a manner that is robust with respect to system parameters, a phenomenon referred to asself-organized criticality.[6]
Applications arise inphysicsandchemistry, but also in fields such associology. For example, it is natural to describe a system of twopolitical partiesby anIsing model. Thereby, at a transition from one majority to the other, the above-mentioned critical phenomena may appear.[7]
|
https://en.wikipedia.org/wiki/Critical_phenomena
|
In themathematicalfield ofgraph theory, aHamiltonian path(ortraceable path) is apathin an undirected or directed graph that visits eachvertexexactly once. AHamiltonian cycle(orHamiltonian circuit) is acyclethat visits each vertex exactly once. A Hamiltonian path that starts and ends at adjacent vertices can be completed by adding one more edge to form a Hamiltonian cycle, and removing any edge from a Hamiltonian cycle produces a Hamiltonian path. The computational problems of determining whether such paths and cycles exist in graphs areNP-complete; seeHamiltonian path problemfor details.
Hamiltonian paths and cycles are named afterWilliam Rowan Hamilton, who invented theicosian game, now also known asHamilton's puzzle, which involves finding a Hamiltonian cycle in the edge graph of thedodecahedron. Hamilton solved this problem using theicosian calculus, analgebraic structurebased onroots of unitywith many similarities to thequaternions(also invented by Hamilton). This solution does not generalize to arbitrary graphs.
Despite being named after Hamilton, Hamiltonian cycles in polyhedra had also been studied a year earlier byThomas Kirkman, who, in particular, gave an example of a polyhedron without Hamiltonian cycles.[1]Even earlier, Hamiltonian cycles and paths in theknight's graphof thechessboard, theknight's tour, had been studied in the 9th century inIndian mathematicsbyRudrata, and around the same time inIslamic mathematicsbyal-Adli ar-Rumi. In 18th century Europe, knight's tours were published byAbraham de MoivreandLeonhard Euler.[2]
AHamiltonian pathortraceable pathis apaththat visits each vertex of the graph exactly once. A graph that contains a Hamiltonian path is called atraceable graph. A graph isHamiltonian-connectedif for every pair of vertices there is a Hamiltonian path between the two vertices.
AHamiltonian cycle,Hamiltonian circuit,vertex tourorgraph cycleis acyclethat visits each vertex exactly once. A graph that contains a Hamiltonian cycle is called aHamiltonian graph.
Similar notions may be defined fordirected graphs, where each edge (arc) of a path or cycle can only be traced in a single direction (i.e., the vertices are connected with arrows and the edges traced "tail-to-head").
AHamiltonian decompositionis an edge decomposition of a graph into Hamiltonian circuits.
AHamilton mazeis a type of logic puzzle in which the goal is to find the unique Hamiltonian cycle in a given graph.[3][4]
Any Hamiltonian cycle can be converted to a Hamiltonian path by removing one of its edges, but a Hamiltonian path can be extended to a Hamiltonian cycle only if its endpoints are adjacent.
All Hamiltonian graphs arebiconnected, but a biconnected graph need not be Hamiltonian (see, for example, thePetersen graph).[9]
AnEulerian graphG(aconnected graphin which every vertex has even degree) necessarily has an Euler tour, a closed walk passing through each edge ofGexactly once. This tour corresponds to a Hamiltonian cycle in theline graphL(G), so the line graph of every Eulerian graph is Hamiltonian. Line graphs may have other Hamiltonian cycles that do not correspond to Euler tours, and in particular the line graphL(G)of every Hamiltonian graphGis itself Hamiltonian, regardless of whether the graphGis Eulerian.[10]
Atournament(with more than two vertices) is Hamiltonian if and only if it isstrongly connected.
The number of different Hamiltonian cycles in a complete undirected graph onnvertices is(n− 1)!/2and in a complete directed graph onnvertices is(n− 1)!. These counts assume that cycles that are the same apart from their starting point are not counted separately.
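As a quick sanity check on these counts, the following minimal Python sketch enumerates the undirected Hamiltonian cycles of a graph by brute force and runs it on the complete graph K_6; the graph representation and function name are illustrative assumptions, not from any particular library.

```python
from itertools import permutations
from math import factorial

def hamiltonian_cycles(adj):
    """Enumerate undirected Hamiltonian cycles of a graph by brute force.

    `adj` maps each vertex to the set of its neighbours.  The smallest vertex
    is fixed as the starting point, and each cycle is reported in only one of
    its two directions.
    """
    vertices = sorted(adj)
    start = vertices[0]
    for perm in permutations(vertices[1:]):
        if perm[0] > perm[-1]:          # skip the reversed copy of each cycle
            continue
        cycle = (start,) + perm
        if all(cycle[i + 1] in adj[cycle[i]] for i in range(len(cycle) - 1)) \
                and start in adj[cycle[-1]]:
            yield cycle

n = 6
k_n = {v: set(range(n)) - {v} for v in range(n)}    # complete graph K_6
cycles = list(hamiltonian_cycles(k_n))
print(len(cycles), factorial(n - 1) // 2)           # both print 60
```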
The best vertexdegreecharacterization of Hamiltonian graphs was provided in 1972 by theBondy–Chvátaltheorem, which generalizes earlier results byG. A. Dirac(1952) andØystein Ore. Both Dirac's and Ore's theorems can also be derived fromPósa's theorem(1962). Hamiltonicity has been widely studied with relation to various parameters such as graphdensity,toughness,forbidden subgraphsanddistanceamong other parameters.[11]Dirac and Ore's theorems basically state that a graph is Hamiltonian if it hasenough edges.
The Bondy–Chvátal theorem operates on theclosurecl(G)of a graphGwithnvertices, obtained by repeatedly adding a new edgeuvconnecting anonadjacentpair of verticesuandvwithdeg(v) + deg(u) ≥nuntil no more pairs with this property can be found.
Bondy–Chvátal Theorem (1976)—A graph is Hamiltonian if and only if its closure is Hamiltonian.
As complete graphs are Hamiltonian, all graphs whose closure is complete are Hamiltonian, which is the content of the following earlier theorems by Dirac and Ore.
Dirac's Theorem (1952)—A simple graph with n vertices (n ≥ 3) is Hamiltonian if every vertex has degree n/2 or greater.
Ore's Theorem (1960)—A simple graph with n vertices (n ≥ 3) is Hamiltonian if, for every pair of non-adjacent vertices, the sum of their degrees is n or greater.
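A minimal Python sketch of the closure construction described above; the graph representation (a dict of neighbour sets) and the function name are illustrative assumptions. The example graph satisfies Ore's condition, so its closure becomes complete and the Bondy–Chvátal theorem certifies it as Hamiltonian.

```python
def bondy_chvatal_closure(adj):
    """Return the Bondy-Chvatal closure of an undirected graph.

    `adj` maps each vertex to the set of its neighbours.  Repeatedly add an
    edge between any non-adjacent pair u, v with deg(u) + deg(v) >= n until
    no such pair remains.
    """
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    n = len(adj)
    changed = True
    while changed:
        changed = False
        for u in adj:
            for v in adj:
                if u != v and v not in adj[u] and len(adj[u]) + len(adj[v]) >= n:
                    adj[u].add(v)
                    adj[v].add(u)
                    changed = True
    return adj

# K4 minus the edge {0, 1}: deg(0) + deg(1) = 4 >= n, so the closure adds that
# edge and becomes the complete graph K4, hence the graph is Hamiltonian.
g = {0: {2, 3}, 1: {2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
closure = bondy_chvatal_closure(g)
print(all(len(nbrs) == len(g) - 1 for nbrs in closure.values()))   # True
```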
The following theorems can be regarded as directed versions:
Ghouila–Houri (1960)—A strongly connected simple directed graph with n vertices is Hamiltonian if every vertex has a full degree greater than or equal to n.
Meyniel (1973)—A strongly connected simple directed graph with n vertices is Hamiltonian if the sum of full degrees of every pair of distinct non-adjacent vertices is greater than or equal to 2n − 1.
The degree thresholds must be doubled relative to the undirected versions because each undirected edge corresponds to two directed arcs, and thus the full degree of a vertex in the directed graph is twice its degree in the undirected graph.
Rahman–Kaykobad (2005)—A simple graph with n vertices has a Hamiltonian path if, for every pair of non-adjacent vertices, the sum of their degrees and their shortest path length is greater than n.[12]
The above theorem can only recognize the existence of a Hamiltonian path in a graph and not a Hamiltonian cycle.
Many of these results have analogues for balancedbipartite graphs, in which the vertex degrees are compared to the number of vertices on a single side of the bipartition rather than the number of vertices in the whole graph.[13]
Theorem—A 4-connected planar triangulation has a Hamiltonian cycle.[14]
Theorem—A 4-connected planar graph has a Hamiltonian cycle.[15]
An algebraic representation of the Hamiltonian cycles of a given weighted digraph (whose arcs are assigned weights from a certain ground field) is theHamiltonian cycle polynomialof its weighted adjacency matrix defined as the sum of the products of the arc weights of the digraph's Hamiltonian cycles. This polynomial is not identically zero as a function in the arc weights if and only if the digraph is Hamiltonian. The relationship between the computational complexities of computing it andcomputing the permanentwas shown by Grigoriy Kogan.[16]
|
https://en.wikipedia.org/wiki/Hamiltonian_path
|
Aknight's touris a sequence of moves of aknighton achessboardsuch that the knight visits every square exactly once. If the knight ends on a square that is one knight's move from the beginning square (so that it could tour the board again immediately, following the same path), the tour is "closed", or "re-entrant"; otherwise, it is "open".[1][2]
Theknight's tour problemis themathematical problemof finding a knight's tour. Creating aprogramto find a knight's tour is a common problem given tocomputer sciencestudents.[3]Variations of the knight's tour problem involve chessboards of different sizes than the usual8 × 8, as well as irregular (non-rectangular) boards.
The knight's tour problem is an instance of the more generalHamiltonian path problemingraph theory. The problem of finding a closed knight's tour is similarly an instance of theHamiltonian cycle problem. Unlike the general Hamiltonian path problem, the knight's tour problem can be solved inlinear time.[4]
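To make the connection to graph theory concrete, here is a small Python sketch (the representation and names are illustrative) that builds the knight's graph of an n × n board; a knight's tour is exactly a Hamiltonian path in this graph, and a closed tour is a Hamiltonian cycle.

```python
def knight_graph(n):
    """Return the knight's graph of an n x n board as a dict of neighbour sets."""
    moves = [(1, 2), (2, 1), (-1, 2), (-2, 1), (1, -2), (2, -1), (-1, -2), (-2, -1)]
    adj = {}
    for r in range(n):
        for c in range(n):
            adj[(r, c)] = {
                (r + dr, c + dc)
                for dr, dc in moves
                if 0 <= r + dr < n and 0 <= c + dc < n
            }
    return adj

g = knight_graph(8)
print(len(g))                                  # 64 squares
print(len(g[(0, 0)]))                          # a corner square has only 2 moves
print(sum(len(v) for v in g.values()) // 2)    # 168 edges on the 8 x 8 board
```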
The earliest known reference to the knight's tour problem dates back to the 9th century AD. InRudrata'sKavyalankara[5](5.15), a Sanskrit work on Poetics, the pattern of a knight's tour on a half-board has been presented as an elaborate poetic figure (citra-alaṅkāra) called theturagapadabandhaor 'arrangement in the steps of a horse'. The same verse in four lines of eight syllables each can be read from left to right or by following the path of the knight on tour. Since theIndic writing systemsused for Sanskrit are syllabic, each syllable can be thought of as representing a square on a chessboard. Rudrata's example is as follows:
transliterated:
For example, the first line can be read from left to right or by moving from the first square to the second line, third syllable (2.3) and then to 1.5 to 2.7 to 4.8 to 3.6 to 4.4 to 3.2.
TheSri Vaishnavapoet and philosopherVedanta Desika, during the 14th century, in his 1,008-verse magnum opus praising the deityRanganatha's divine sandals ofSrirangam,Paduka Sahasram(in chapter 30:Chitra Paddhati) has composed two consecutiveSanskritverses containing 32 letters each (inAnushtubhmeter) where the second verse can be derived from the first verse by performing a Knight's tour on a4 × 8board, starting from the top-left corner.[6]The transliterated 19th verse is as follows:
Knight's-tour ordering of the verse's 32 syllables on the 4 × 8 board (each number gives the square's position in the tour, reading the board row by row):
(1) (30) (9) (20) (3) (24) (11) (26)
(16) (19) (2) (29) (10) (27) (4) (23)
(31) (8) (17) (14) (21) (6) (25) (12)
(18) (15) (32) (7) (28) (13) (22) (5)
The 20th verse that can be obtained by performing Knight's tour on the above verse is as follows:
sThi thA sa ma ya rA ja thpA
ga tha rA mA dha kE ga vi |
dhu ran ha sAm sa nna thA dhA
sA dhyA thA pa ka rA sa rA ||
It is believed that Desika composed all 1,008 verses (including the specialChaturanga Turanga Padabandhammentioned above) in a single night as a challenge.[7]
A tour reported in the fifth book of the Bhagavantabaskara by Bhat Nilakantha, a cyclopedic work in Sanskrit on ritual, law and politics, written either about 1600 or about 1700, describes three knight's tours. The tours are not only reentrant but also symmetrical, and the verses are based on the same tour, starting from different squares.[8] Nilakantha's work is an extraordinary achievement, being a fully symmetric closed tour, predating the work of Euler (1759) by at least 60 years.
After Nilakantha, one of the first mathematicians to investigate the knight's tour wasLeonhard Euler. The first procedure for completing the knight's tour was Warnsdorf's rule, first described in 1823 by H. C. von Warnsdorf.
In the 20th century, theOulipogroup of writers used it, among many others. The most notable example is the10 × 10knight's tour which sets the order of the chapters inGeorges Perec's novelLife a User's Manual.
The sixth game of theWorld Chess Championship 2010betweenViswanathan AnandandVeselin Topalovsaw Anand making 13 consecutive knight moves (albeit using both knights); online commentators jested that Anand was trying to solve the knight's tour problem during the game.
Schwenk[10] proved that for any m × n board with m ≤ n, a closed knight's tour is always possible unless one or more of these three conditions are met: m and n are both odd; m = 1, 2, or 4; or m = 3 and n = 4, 6, or 8.
Cullet al.and Conradet al.proved that on any rectangular board whose smaller dimension is at least 5, there is a (possibly open) knight's tour.[4][11]For anym×nboard withm≤n, a (possibly open) knight's tour is always possibleunlessone or more of these three conditions are met:
On an8 × 8board, there are exactly 26,534,728,821,064directedclosed tours (i.e. two tours along the same path that travel in opposite directions are counted separately, as arerotationsandreflections).[14][15][16]The number ofundirectedclosed tours is half this number, since every tour can be traced in reverse. There are 9,862 undirected closed tours on a6 × 6board.[17]
There are several ways to find a knight's tour on a given board with a computer. Some of these methods arealgorithms, while others areheuristics.
Abrute-force searchfor a knight's tour is impractical on all but the smallest boards.[18]On an8 × 8board, for instance, there are13,267,364,410,532knight's tours,[14]and a much greater number of sequences of knight moves of the same length. It is well beyond the capacity of modern computers (or networks of computers) to perform operations on such a large set. However, the size of this number is not indicative of the difficulty of the problem, which can be solved "by using human insight and ingenuity ... without much difficulty."[18]
By dividing the board into smaller pieces, constructing tours on each piece, and patching the pieces together, one can construct tours on most rectangular boards inlinear time– that is, in a time proportional to the number of squares on the board.[11][19]
Warnsdorf's rule is aheuristicfor finding a single knight's tour. The knight is moved so that it always proceeds to the square from which the knight will have thefewestonward moves. When calculating the number of onward moves for each candidate square, we do not count moves that revisit any square already visited. It is possible to have two or more choices for which the number of onward moves is equal; there are various methods for breaking such ties, including one devised by Pohl[20]and another by Squirrel and Cull.[21]
This rule may also more generally be applied to any graph. In graph-theoretic terms, each move is made to the adjacent vertex with the leastdegree.[22]Although theHamiltonian path problemisNP-hardin general, on many graphs that occur in practice this heuristic is able to successfully locate a solution inlinear time.[20]The knight's tour is such a special case.[23]
Theheuristicwas first described in "Des Rösselsprungs einfachste und allgemeinste Lösung" by H. C. von Warnsdorf in 1823.[23]
A computer program that finds a knight's tour for any starting position using Warnsdorf's rule was written by Gordon Horsington and published in 1984 in the bookCentury/Acorn User Book of Computer Puzzles.[24]
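The following Python sketch implements Warnsdorf's rule as described above. It breaks ties arbitrarily (the first candidate with the minimum count), which is one of the simplest tie-breaking choices and can fail on some boards and starting squares, so treat it as an illustration rather than a guaranteed solver; all names are illustrative.

```python
def knight_moves(pos, n):
    r, c = pos
    steps = [(1, 2), (2, 1), (-1, 2), (-2, 1), (1, -2), (2, -1), (-1, -2), (-2, -1)]
    return [(r + dr, c + dc) for dr, dc in steps
            if 0 <= r + dr < n and 0 <= c + dc < n]

def warnsdorf_tour(n, start=(0, 0)):
    """Try to build an open knight's tour using Warnsdorf's rule (arbitrary tie-break)."""
    visited = {start}
    tour = [start]
    pos = start
    for _ in range(n * n - 1):
        candidates = [m for m in knight_moves(pos, n) if m not in visited]
        if not candidates:
            return None                   # the greedy rule got stuck
        # move to the candidate square with the fewest onward moves to unvisited squares
        pos = min(candidates,
                  key=lambda m: sum(1 for x in knight_moves(m, n) if x not in visited))
        visited.add(pos)
        tour.append(pos)
    return tour

tour = warnsdorf_tour(8)
print(tour is not None and len(tour) == 64)
```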
The knight's tour problem also lends itself to being solved by aneural networkimplementation.[25]The network is set up such that every legal knight's move is represented by aneuron, and each neuron is initialized randomly to be either "active" or "inactive" (output of 1 or 0), with 1 implying that the neuron is part of the solution. Each neuron also has a state function (described below) which is initialized to 0.
When the network is allowed to run, each neuron can change its state and output based on the states and outputs of its neighbors (those exactly one knight's move away) according to the following transition rules:
where t represents discrete intervals of time, U(N_{i,j}) is the state of the neuron connecting square i to square j, V(N_{i,j}) is the output of the neuron from i to j, and G(N_{i,j}) is the set of neighbors of the neuron.
Although divergent cases are possible, the network should eventually converge, which occurs when no neuron changes its state from time t to t + 1. When the network converges, it encodes either a knight's tour or a series of two or more independent circuits within the same board.
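The transition rules themselves are not reproduced above. As a hedged illustration of the general idea only, the sketch below uses an update scheme commonly attributed to Takefuji and Lee: each neuron's state increases by 2 minus the number of active neighbouring move-neurons, and its output switches on above a threshold and off below zero. The specific constants, thresholds, and names here are assumptions for illustration, not necessarily those of the cited implementation, and convergence to a valid tour is not guaranteed on any given run.

```python
import random

def knight_move_neurons(n):
    """One neuron per undirected legal knight move on an n x n board."""
    steps = [(1, 2), (2, 1), (-1, 2), (-2, 1), (1, -2), (2, -1), (-1, -2), (-2, -1)]
    moves = set()
    for r in range(n):
        for c in range(n):
            for dr, dc in steps:
                r2, c2 = r + dr, c + dc
                if 0 <= r2 < n and 0 <= c2 < n:
                    moves.add(frozenset({(r, c), (r2, c2)}))
    return list(moves)

def run_network(n=6, max_iters=500, seed=0):
    random.seed(seed)
    neurons = knight_move_neurons(n)
    # neighbours of a move-neuron: other moves sharing one of its two squares
    neighbours = [
        [j for j, other in enumerate(neurons) if j != i and neurons[i] & other]
        for i in range(len(neurons))
    ]
    state = [0] * len(neurons)
    output = [random.randint(0, 1) for _ in neurons]
    for _ in range(max_iters):
        changed = False
        for i in range(len(neurons)):
            state[i] += 2 - sum(output[j] for j in neighbours[i])   # assumed update rule
            new_out = 1 if state[i] > 3 else 0 if state[i] < 0 else output[i]
            if new_out != output[i]:
                output[i], changed = new_out, True
        if not changed:
            break
    # with luck, the active moves form a closed tour or a set of disjoint circuits
    return [neurons[i] for i, v in enumerate(output) if v == 1]

print(len(run_network()))
```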
|
https://en.wikipedia.org/wiki/Knight%27s_tour
|
Snakeis agenreofaction video gameswhere the player maneuvers the end of a growing line, often themed as asnake. The player must keep the snake from colliding with both other obstacles and itself, which gets harder as the snake lengthens.
The genre originated in the 1976 competitivearcade video gameBlockadefromGremlin Industrieswhere the goal is to survive longer than the other player.Blockadeand the initial wave of clones that followed were purely abstract and did not usesnaketerminology. The concept evolved into a single-player variant where a line with a head and tail gets longer with each piece of food eaten—often apples or eggs—increasing the likelihood of self-collision. The simplicity and low technical requirements of snake games have resulted in hundreds of versions, some of which have the wordsnakeorwormin the title. The 1982Tronarcade video game, based on the film, includes snake gameplay for the single-playerLight Cyclessegment, and some later snake games borrow the theme.
After a version simply calledSnakewas preloaded onNokiamobile phonesin 1998, there was a resurgence of interest in snake games.
The originalBlockadefrom 1976 and its many clones are two-player games. Viewed from a top-down perspective, each player controls a "snake" with a fixed starting position. The "head" of the snake continually moves forward, unable to stop, growing ever longer. It must be steered left, right, up, and down to avoid hitting walls and the body of either snake. The player who survives the longest wins. Single-player versions are less prevalent and have one or more snakes controlled by the computer, as in the light cycles segment of the 1982Tronarcade game.
In the most common single-player game, the player's snake is of a certain length, so when the head moves the tail does too. Each item eaten by the snake causes the snake to get longer.Snake Bytehas the snake eating apples.Nibblerhas the snake eating abstract objects in a maze.
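The core single-player mechanic described above maps naturally onto a queue: the head is pushed on every tick, and the tail is popped unless food was just eaten. A minimal Python sketch, where the class, grid size, and coordinates are all illustrative assumptions:

```python
from collections import deque

class Snake:
    def __init__(self, start=(5, 5)):
        self.body = deque([start])       # head is body[-1], tail is body[0]
        self.alive = True

    def step(self, direction, food, grid=20):
        """Advance one tick; `direction` is a (dx, dy) unit vector."""
        hx, hy = self.body[-1]
        head = (hx + direction[0], hy + direction[1])
        # colliding with a wall or with the snake's own body ends the game
        if not (0 <= head[0] < grid and 0 <= head[1] < grid) or head in self.body:
            self.alive = False
            return
        self.body.append(head)
        if head != food:
            self.body.popleft()          # no growth: the tail moves with the head

s = Snake()
s.step((1, 0), food=(6, 5))              # eats food, length becomes 2
s.step((1, 0), food=(0, 0))              # ordinary move, length stays 2
print(s.alive, len(s.body))              # True 2
```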
The Snake genre began with the 1976arcade video gameBlockade[2][3]developed and published byGremlin Industries.[4]It was cloned asBigfoot Bonkersthe same year. In 1977,Atari, Inc.released twoBlockade-inspired games: the arcade gameDominosand Atari VCS gameSurround.[5]Surroundwas one of the nine Atari VCS launch titles in the US and was sold bySearsunder the nameChase. That same year, a similar game was launched for theBally AstrocadeasCheckmate.[6]Mattel releasedSnafufor theIntellivisionconsole in 1982.
The first known home computer version, Worm, was programmed by Peter Trefonas for the TRS-80 and published by CLOAD magazine in 1978.[2] Versions followed from the same author for the PET and Apple II. An authorized version of the Hustle arcade game, itself a clone of Blockade, was published by Milton Bradley for the TI-99/4A in 1980.[7]
The single-playerSnake Bytewas published in 1982 for Atari 8-bit computers, Apple II, and VIC-20; a snake eats apples to complete a level, growing longer in the process. InSnakefor theBBC Micro(1982), by Dave Bresnen, the snake is controlled using the left and right arrow keys relative to the direction it is heading in. The snake increases in speed as it gets longer, and there is only one life.
Nibbler(1982) is a single-player arcade game where the snake fits tightly into a maze, and the gameplay is faster than most snake designs. Another single-player version is part of the 1982Tronarcade game, themed with light cycles. It reinvigorated the snake concept, and many subsequent games borrowed the light cycle theme.
Starting in 1991,Nibbleswas included withMS-DOSfor a period of time as aQBasicsample program. In 1992,Rattler Racewas released as part of the secondMicrosoft Entertainment Pack. It adds enemy snakes to the familiar apple-eating gameplay.
In 1998, themobile gameSnakewas released forNokia 6110.[8]The game was popular, andNokiareleased a series of reiterations, includingSnake II,Snake EX,Snake EX2,Snake III,Snakes,Snake XensiaandSnakes Subsonic. As the game graphics and gameplay evolved, it became less popular.[9]
In 2002,Snakewas made available for download toPocket PCthrough Peter's GameBox.[10]In 2004,TIMmadeSnakeavailable for download through the Tim Wap Fast system.[11]On 28 March 2013,NimbleBitreleasedNimble Quest, an action RPG snake game.[12]In 2015, Armanto released aspiritual successortoSnakein partnership with Rumilus Design calledSnake Rewind.[13]In 2019, scientists tested the touch sensibility of the GLASSES screen cellphones playingSnake.[14]In 2020,Zanco Tiny T2was launched withSnakeinstalled.[15]On 2 March 2020, OrangePixel releasedSnake Corewithshooterelements.[16]On 8 September 2020, Tree Man Games releasedPAKO Caravan, a snake game featuring cars.[17]In December 2020, Retro Widget releasedSnake IIforiPhoneandiPadhome screen andApple Watch.[18]In 2023, users recreatedSnakeusingGPT-4.[19]In 2023,SpotifyaddedSnakeas a downloadable game inside ofplaylistswith more than 20 songs.[20]In 2024,Nothinglaunched aSnakewidgetfor their cellphones.[21]On 21 March 2024, Pictoline releasedQuetzi, a snake game where the player controlsQuetzalcoatl.[22]On 8 April 2025, Tidepool Games releasedMageTrain, aroguelikesnake game.[23]
A series of onlineSnakegames were made. In 2016, Steve Howse launchedSlither.ioas a way to mimic the success ofAgar.io.[24]In 2016, Kooapps releasedSnake.ioand was later launched onApple Arcadein 2023.[25]It was the first and only snake game on Apple Arcade.Snake.iowas also released onNetflixandNintendo Switchin 2024.[26][27]On 4 February 2025,Appxplore (iCandy)releasedSnaky Cat, an.iobattle royalesnake game.[28]
Google has incorporated Snake games into its applications. In 2010,YouTubeaddedSnakeas a hidden game inside of theirvideo player.[29]In 2013,GooglelaunchedSnakedoodleas aneaster eggforweb browsers.[30]In 2019, Google addedSnakeinsideGoogle Mapsas anApril Fools' Dayprank.[31]In 2019,Google ChromelaunchedSnake Gamefor web browsers.[32]On 29 January 2025, Google celebrated the Year of theSnakeof theChinese New Yearwith the relaunch ofDoodle Snake.[33]
In 1996,Next Generationranked it number 41 on their "Top 100 Games of All Time", citing the need for both quick reactions and forethought. In lieu of a title for a specific version, they listed it as "Snake game" in quotes.[34]
On November 29, 2012, theMuseum of Modern Artin New York City announced that the Nokia port of Snake was one of 40 games that the curators wished to add to the museum's collection in the future.[35]
|
https://en.wikipedia.org/wiki/Snake_(video_game_genre)
|
Instatistical mechanics,universalityis the observation that there are properties for a large class of systems that are independent of thedynamicaldetails of the system. Systems display universality in a scaling limit, when a large number of interacting parts come together. The modern meaning of the term was introduced byLeo Kadanoffin the 1960s,[citation needed]but a simpler version of the concept was already implicit in thevan der Waals equationand in the earlierLandau theoryof phase transitions, which did not incorporate scaling correctly.[citation needed]
The term is slowly gaining a broader usage in several fields of mathematics, includingcombinatoricsandprobability theory, whenever the quantitative features of a structure (such as asymptotic behaviour) can be deduced from a few global parameters appearing in the definition, without requiring knowledge of the details of the system.
Therenormalization groupprovides an intuitively appealing, albeit mathematically non-rigorous, explanation of universality. It classifies operators in astatistical field theoryinto relevant and irrelevant. Relevant operators are those responsible for perturbations to the free energy, theimaginary time Lagrangian, that will affect thecontinuum limit, and can be seen at long distances. Irrelevant operators are those that only change the short-distance details. The collection of scale-invariant statistical theories define theuniversality classes, and the finite-dimensional list of coefficients of relevant operators parametrize the near-critical behavior.
The notion of universality originated in the study ofphase transitionsin statistical mechanics.[citation needed]A phase transition occurs when a material changes its properties in a dramatic way: water, as it is heated boils and turns into vapor; or a magnet, when heated, loses its magnetism. Phase transitions are characterized by anorder parameter, such as the density or the magnetization, that changes as a function of a parameter of the system, such as the temperature. The special value of the parameter at which the system changes its phase is the system'scritical point. For systems that exhibit universality, the closer the parameter is to itscritical value, the less sensitively the order parameter depends on the details of the system.
If the parameter β is critical at the value β_c, then the order parameter a will be well approximated by[citation needed]
a ∝ |β − β_c|^α.
The exponent α is a critical exponent of the system. The remarkable discovery made in the second half of the twentieth century was that very different systems had the same critical exponents.[citation needed]
In 1975,Mitchell Feigenbaumdiscovered universality in iterated maps.[1][2][3]
Universality gets its name because it is seen in a large variety of physical systems. Examples of universality include:
One of the important developments in materials science in the 1970s and the 1980s was the realization that statistical field theory, similar to quantum field theory, could be used to provide a microscopic theory of universality.[citation needed] The core observation was that, for all of the different systems, the behaviour at a phase transition is described by a continuum field, and that the same statistical field theory will describe different systems. The scaling exponents in all of these systems can be derived from the field theory alone, and are known as critical exponents.
The key observation is that near a phase transition or critical point, disturbances occur at all size scales, and thus one should look for an explicitly scale-invariant theory to describe the phenomena, as seems to have been put in a formal theoretical framework first by Pokrovsky and Patashinsky in 1965.[4][citation needed] Universality is a by-product of the fact that there are relatively few scale-invariant theories. For any one specific physical system, the detailed description may have many scale-dependent parameters and aspects. However, as the phase transition is approached, the scale-dependent parameters play less and less of an important role, and the scale-invariant parts of the physical description dominate. Thus, a simplified, and often exactly solvable, model can be used to approximate the behaviour of these systems near the critical point.
Percolation may be modeled by a random electrical resistor network, with electricity flowing from one side of the network to the other. The overall resistance of the network is seen to be described by the average connectivity of the resistors in the network.[citation needed]
The formation of tears and cracks may be modeled by a random network of electrical fuses. As the electric current flow through the network is increased, some fuses may pop, but on the whole, the current is shunted around the problem areas, and uniformly distributed. However, at a certain point (at the phase transition) a cascade failure may occur, where the excess current from one popped fuse overloads the next fuse in turn, until the two sides of the net are completely disconnected and no more current flows.[citation needed]
To perform the analysis of such random-network systems, one considers the stochastic space of all possible networks (that is, the canonical ensemble), and performs a summation (integration) over all possible network configurations. As in the previous discussion, each given random configuration is understood to be drawn from the pool of all configurations with some given probability distribution; the role of temperature in the distribution is typically replaced by the average connectivity of the network.[citation needed]
The expectation values of operators, such as the rate of flow, theheat capacity, and so on, are obtained by integrating over all possible configurations. This act of integration over all possible configurations is the point of commonality between systems instatistical mechanicsandquantum field theory. In particular, the language of therenormalization groupmay be applied to the discussion of the random network models. In the 1990s and 2000s, stronger connections between the statistical models andconformal field theorywere uncovered. The study of universality remains a vital area of research.
Like other concepts fromstatistical mechanics(such asentropyandmaster equations), universality has proven a useful construct for characterizing distributed systems at a higher level, such asmulti-agent systems. The term has been applied[5]to multi-agent simulations, where the system-level behavior exhibited by the system is independent of the degree of complexity of the individual agents, being driven almost entirely by the nature of the constraints governing their interactions. Innetwork dynamics, universality refers to the fact that despite the diversity of nonlinear dynamic models, which differ in many details, the observed behavior of many different systems adheres to a set of universal laws. These laws are independent of the specific details of each system.[6]
|
https://en.wikipedia.org/wiki/Universality_(dynamical_systems)
|
Inmathematical analysis, aspace-filling curveis acurvewhoserangereaches every point in a higher dimensional region, typically theunit square(or more generally ann-dimensional unithypercube). BecauseGiuseppe Peano(1858–1932) was the first to discover one, space-filling curves in the2-dimensional planeare sometimes calledPeano curves, but that phrase also refers to thePeano curve, the specific example of a space-filling curve found by Peano.
The closely related FASS curves (approximately space-Filling, self-Avoiding, Simple, and Self-similar curves) can be thought of as finite approximations of a certain type of space-filling curves.[1][2][3][4][5][6]
Intuitively, a curve in two or three (or higher) dimensions can be thought of as the path of a continuously moving point. To eliminate the inherent vagueness of this notion,Jordanin 1887 introduced the following rigorous definition, which has since been adopted as the precise description of the notion of acurve:
In the most general form, the range of such a function may lie in an arbitrarytopological space, but in the most commonly studied cases, the range will lie in aEuclidean spacesuch as the 2-dimensional plane (aplanar curve) or the 3-dimensional space (space curve).
Sometimes, the curve is identified with theimageof the function (the set of all possible values of the function), instead of the function itself. It is also possible to define curves without endpoints to be a continuous function on thereal line(or on the open unit interval(0, 1)).
In 1890,Giuseppe Peanodiscovered a continuous curve, now called thePeano curve, that passes through every point of the unit square.[7]His purpose was to construct acontinuous mappingfrom theunit intervalonto theunit square. Peano was motivated byGeorg Cantor's earlier counterintuitive result that the infinite number of points in a unit interval is the samecardinalityas the infinite number of points in any finite-dimensionalmanifold, such as the unit square. The problem Peano solved was whether such a mapping could be continuous; i.e., a curve that fills a space. Peano's solution does not set up a continuousone-to-one correspondencebetween the unit interval and the unit square, and indeed such a correspondence does not exist (see§ Propertiesbelow).
It was common to associate the vague notions ofthinnessand 1-dimensionality to curves; all normally encountered curves werepiecewisedifferentiable (that is, have piecewise continuous derivatives), and such curves cannot fill up the entire unit square. Therefore, Peano's space-filling curve was found to be highly counterintuitive.
From Peano's example, it was easy to deduce continuous curves whose ranges contained then-dimensionalhypercube(for any positive integern). It was also easy to extend Peano's example to continuous curves without endpoints, which filled the entiren-dimensional Euclidean space (wherenis 2, 3, or any other positive integer).
Most well-known space-filling curves are constructed iteratively as the limit of a sequence ofpiecewise linearcontinuous curves, each one more closely approximating the space-filling limit.
Peano's ground-breaking article contained no illustrations of his construction, which is defined in terms ofternary expansionsand amirroring operator. But the graphical construction was perfectly clear to him—he made an ornamental tiling showing a picture of the curve in his home in Turin. Peano's article also ends by observing that the technique can be obviously extended to other odd bases besides base 3. His choice to avoid any appeal tographical visualizationwas motivated by a desire for a completely rigorous proof owing nothing to pictures. At that time (the beginning of the foundation of general topology), graphical arguments were still included in proofs, yet were becoming a hindrance to understanding often counterintuitive results.
A year later,David Hilbertpublished in the same journal a variation of Peano's construction.[8]Hilbert's article was the first to include a picture helping to visualize the construction technique, essentially the same as illustrated here. The analytic form of theHilbert curve, however, is more complicated than Peano's.
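A common way to work with the Hilbert curve in practice is through its finite approximations on a 2^k × 2^k grid. The following Python sketch uses the standard bit-manipulation mapping from a one-dimensional index to grid coordinates; the function name is an illustrative assumption.

```python
def hilbert_d2xy(order, d):
    """Map index d (0 <= d < 4**order) to (x, y) on the 2**order x 2**order grid
    following the order-`order` approximation of the Hilbert curve."""
    x = y = 0
    s = 1
    t = d
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate the current quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

pts = [hilbert_d2xy(3, d) for d in range(64)]        # third approximation, 8 x 8 grid
# consecutive points are unit steps apart, so the polyline never jumps
print(all(abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1 for a, b in zip(pts, pts[1:])))
print(len(set(pts)) == 64)                           # visits every cell exactly once
```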
Let C denote the Cantor space 2^ℕ.
We start with a continuous function h from the Cantor space C onto the entire unit interval [0, 1]. (The restriction of the Cantor function to the Cantor set is an example of such a function.) From it, we get a continuous function H from the topological product C × C onto the entire unit square [0, 1] × [0, 1] by setting
H(x, y) = (h(x), h(y)).
Since the Cantor set C is homeomorphic to its cartesian product with itself C × C, there is a continuous bijection g from the Cantor set onto C × C. The composition f of H and g is a continuous function mapping the Cantor set onto the entire unit square. (Alternatively, we could use the theorem that every compact metric space is a continuous image of the Cantor set to get the function f.)
Finally, one can extend f to a continuous function F whose domain is the entire unit interval [0, 1]. This can be done either by using the Tietze extension theorem on each of the components of f, or by simply extending f "linearly" (that is, on each deleted open interval (a, b) in the construction of the Cantor set, we define the extension part of F on (a, b) to be the line segment within the unit square joining the values f(a) and f(b)).
If a curve is not injective, then one can find two intersectingsubcurvesof the curve, each obtained by considering the images of two disjoint segments from the curve's domain (the unit line segment). The two subcurves intersect if theintersectionof the two images isnon-empty. One might be tempted to think that the meaning ofcurves intersectingis that they necessarily cross each other, like the intersection point of two non-parallel lines, from one side to the other. However, two curves (or two subcurves of one curve) may contact one another without crossing, as, for example, a line tangent to a circle does.
A non-self-intersecting continuous curve cannot fill the unit square because that will make the curve ahomeomorphismfrom the unit interval onto the unit square (any continuousbijectionfrom acompact spaceonto aHausdorff spaceis a homeomorphism). But a unit square has nocut-point, and so cannot be homeomorphic to the unit interval, in which all points except the endpoints are cut-points. There exist non-self-intersecting curves of nonzero area, theOsgood curves, but byNetto's theoremthey are not space-filling.[9]
For the classic Peano and Hilbert space-filling curves, where two subcurves intersect (in the technical sense), there is self-contact without self-crossing. A space-filling curve can be (everywhere) self-crossing if its approximation curves are self-crossing. A space-filling curve's approximations can be self-avoiding, as the figures above illustrate. In 3 dimensions, self-avoiding approximation curves can even containknots. Approximation curves remain within a bounded portion ofn-dimensional space, but their lengths increase without bound.
Space-filling curves are special cases offractal curves. No differentiable space-filling curve can exist. Roughly speaking, differentiability puts a bound on how fast the curve can turn. Michał Morayne proved that thecontinuum hypothesisis equivalent to the existence of a Peano curve such that at each point of the real line at least one of its components is differentiable.[10]
TheHahn–Mazurkiewicztheorem is the following characterization of spaces that are the continuous image of curves:
Spaces that are the continuous image of a unit interval are sometimes calledPeano spaces.
In many formulations of the Hahn–Mazurkiewicz theorem,second-countableis replaced bymetrizable. These two formulations are equivalent. In one direction a compact Hausdorff space is anormal spaceand, by theUrysohnmetrization theorem, second-countable then implies metrizable. Conversely, a compact metric space is second-countable.
There are many natural examples of space-filling, or rather sphere-filling, curves in the theory of doubly degenerateKleinian groups. For example,Cannon & Thurston (2007)showed that the circle at infinity of theuniversal coverof a fiber of amapping torusof apseudo-Anosov mapis a sphere-filling curve. (Here the sphere is the sphere at infinity ofhyperbolic 3-space.)
Wienerpointed out inThe Fourier Integral and Certain of its Applicationsthat space-filling curves could be used to reduceLebesgue integrationin higher dimensions to Lebesgue integration in one dimension.
|
https://en.wikipedia.org/wiki/Space-filling_curves
|
Instatistics, theDickey–Fuller testtests thenull hypothesisthat aunit rootis present in anautoregressive(AR) time series model. Thealternative hypothesisis different depending on which version of the test is used, but is usuallystationarityortrend-stationarity. The test is named after thestatisticiansDavid DickeyandWayne Fuller, who developed it in 1979.[1]
A simple AR model is
y_t = ρ y_{t−1} + u_t,
where y_t is the variable of interest, t is the time index, ρ is a coefficient, and u_t is the error term (assumed to be white noise). A unit root is present if ρ = 1. The model would be non-stationary in this case.
The regression model can be written as
Δy_t = (ρ − 1) y_{t−1} + u_t = δ y_{t−1} + u_t,
where Δ is the first difference operator and δ ≡ ρ − 1. This model can be estimated, and testing for a unit root is equivalent to testing δ = 0. Because under the null hypothesis of a unit root the regressor y_{t−1} is non-stationary, the standard t-distribution cannot be used to provide critical values. Therefore, this statistic t has a specific distribution simply known as the Dickey–Fuller table.
There are three main versions of the test:
1. Test for a unit root: Δy_t = δ y_{t−1} + u_t
2. Test for a unit root with constant: Δy_t = a_0 + δ y_{t−1} + u_t
3. Test for a unit root with constant and deterministic time trend: Δy_t = a_0 + a_1 t + δ y_{t−1} + u_t
Each version of the test has its own critical value which depends on the size of the sample. In each case, the null hypothesis is that there is a unit root, δ = 0. The tests have low statistical power in that they often cannot distinguish between true unit-root processes (δ = 0) and near unit-root processes (δ close to zero). This is called the "near observation equivalence" problem.
The intuition behind the test is as follows. If the seriesy{\displaystyle y}isstationary(ortrend-stationary), then it has a tendency to return to a constant (or deterministically trending) mean. Therefore, large values will tend to be followed by smaller values (negative changes), and small values by larger values (positive changes). Accordingly, the level of the series will be a significant predictor of next period's change, and will have a negative coefficient. If, on the other hand, the series is integrated, then positive changes and negative changes will occur with probabilities that do not depend on the current level of the series; in arandom walk, where you are now does not affect which way you will go next.
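To make this intuition concrete, a small numpy sketch (variable names are illustrative) simulates a pure random walk and a stationary AR(1) series and estimates δ in the regression Δy_t = δ y_{t−1} + u_t by least squares: the estimate sits near zero for the random walk and is clearly negative for the stationary series. Proper inference would compare the t-ratio with Dickey–Fuller critical values, not normal ones.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500

def df_delta(y):
    """OLS estimate of delta in  dy_t = delta * y_{t-1} + u_t  (no constant)."""
    dy, ylag = np.diff(y), y[:-1]
    return float(ylag @ dy / (ylag @ ylag))

random_walk = np.cumsum(rng.normal(size=T))             # rho = 1, unit root
ar1 = np.zeros(T)
for t in range(1, T):
    ar1[t] = 0.5 * ar1[t - 1] + rng.normal()            # rho = 0.5, stationary

print(round(df_delta(random_walk), 3))    # close to 0
print(round(df_delta(ar1), 3))            # close to 0.5 - 1 = -0.5
```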
It is notable that the random walk with drift
y_t = a_0 + y_{t−1} + u_t
may be rewritten as
y_t = y_0 + a_0 t + Σ_{i=1}^{t} u_i,
with a deterministic trend coming from a_0 t and a stochastic intercept term coming from y_0 + Σ_{i=1}^{t} u_i, resulting in what is referred to as a stochastic trend.[2]
There is also an extension of the Dickey–Fuller (DF) test called theaugmented Dickey–Fuller test(ADF), which removes all the structural effects (autocorrelation) in the time series and then tests using the same procedure.
Which of the three main versions of the test should be used is not a minor issue. The decision is important for the size of the unit root test (the probability of rejecting the null hypothesis of a unit root when there is one) and the power of the unit root test (the probability of rejecting the null hypothesis of a unit root when there is not one). Inappropriate exclusion of the intercept or deterministic time trend term leads tobiasin the coefficient estimate forδ, leading to the actual size for the unit root test not matching the reported one. If the time trend term is inappropriately excluded with thea0{\displaystyle a_{0}}term estimated, then the power of the unit root test can be substantially reduced as a trend may be captured through therandom walk with driftmodel.[3]On the other hand, inappropriate inclusion of the intercept or time trend term reduces the power of the unit root test, and sometimes that reduced power can be substantial.
Use of prior knowledge about whether the intercept and deterministic time trend should be included is of course ideal but not always possible. When such prior knowledge is unavailable, various testing strategies (series of ordered tests) have been suggested, e.g. by Dolado, Jenkinson, and Sosvilla-Rivero (1990)[4]and by Enders (2004), often with the ADF extension to remove autocorrelation. Elder and Kennedy (2001) present a simple testing strategy that avoids double and triple testing for the unit root that can occur with other testing strategies, and discuss how to use prior knowledge about the existence or not of long-run growth (or shrinkage) iny.[5]Hacker and Hatemi-J (2010) providesimulationresults on these matters,[6]including simulations covering the Enders (2004) and Elder and Kennedy (2001) unit-root testing strategies. Simulation results are presented in Hacker (2010) which indicate that using aninformation criterionsuch as theSchwarz information criterionmay be useful in determining unit root and trend status within a Dickey–Fuller framework.[7]
|
https://en.wikipedia.org/wiki/Dickey%E2%80%93Fuller_test
|
Instatistics, anaugmented Dickey–Fuller test(ADF) tests thenull hypothesisthat aunit rootis present in atime seriessample. Thealternative hypothesisdepends on which version of the test is used, but is usuallystationarityortrend-stationarity. It is an augmented version of theDickey–Fuller testfor a larger and more complicated set of time series models.
The augmented Dickey–Fuller (ADF) statistic, used in the test, is a negative number. The more negative it is, the stronger the rejection of the hypothesis that there is a unit root at some level of confidence.[1]
The procedure for the ADF test is the same as for the Dickey–Fuller test but it is applied to the model
Δy_t = α + β t + γ y_{t−1} + δ_1 Δy_{t−1} + ⋯ + δ_{p−1} Δy_{t−p+1} + ε_t,
where α is a constant, β the coefficient on a time trend and p the lag order of the autoregressive process. Imposing the constraints α = 0 and β = 0 corresponds to modelling a random walk, and using the constraint β = 0 corresponds to modelling a random walk with a drift. Consequently, there are three main versions of the test, analogous to those of the Dickey–Fuller test. (See that article for a discussion on dealing with uncertainty about including the intercept and deterministic time trend terms in the test equation.)
By including lags of the orderp, the ADF formulation allows for higher-order autoregressive processes. This means that the lag lengthpmust be determined in order to use the test. One approach to doing this is to test down from high orders and examine thet-valueson coefficients. An alternative approach is to examine information criteria such as theAkaike information criterion,Bayesian information criterionor theHannan–Quinn information criterion.
The unit root test is then carried out under the null hypothesis γ = 0 against the alternative hypothesis of γ < 0. Once a value for the test statistic
DF_τ = γ̂ / SE(γ̂)
is computed, it can be compared to the relevant critical value for the Dickey–Fuller test. As this test is asymmetric, we are only concerned with negative values of our test statistic DF_τ. If the calculated test statistic is less (more negative) than the critical value, then the null hypothesis of γ = 0 is rejected and no unit root is present.
The intuition behind the test is that if the series is characterised by a unit root process, then the lagged level of the series (y_{t−1}) will provide no relevant information in predicting the change in y_t beyond that contained in the lagged changes (Δy_{t−k}). In this case the γ = 0 null hypothesis is not rejected. In contrast, when the process has no unit root, it is stationary and hence exhibits reversion to the mean, so the lagged level will provide relevant information in predicting the change of the series and the null hypothesis of a unit root will be rejected.
A model that includes a constant and a time trend is estimated using a sample of 50 observations and yields the DF_τ statistic of −4.57. This is more negative than the tabulated critical value of −3.50, so at the 95% level the null hypothesis of a unit root will be rejected.
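In practice the test is usually run with a statistics package; statsmodels, for instance, ships an implementation. A minimal sketch, where the simulated series is an illustrative assumption:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=300)) + 0.05 * np.arange(300)   # random walk with drift

# regression="ct": include a constant and a linear time trend in the test equation
stat, pvalue, usedlag, nobs, crit, icbest = adfuller(y, regression="ct", autolag="AIC")
print(round(stat, 2), round(pvalue, 3))
print(crit)    # critical values at the 1%, 5% and 10% levels
```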
There are alternativeunit root testssuch as thePhillips–Perron test(PP) or theADF-GLS testprocedure (ERS) developed by Elliott, Rothenberg and Stock (1996).[3]
|
https://en.wikipedia.org/wiki/Augmented_Dickey%E2%80%93Fuller_test
|
Instatisticsandeconometrics, theADF-GLS test(orDF-GLS test) is a test for aunit rootin an economictime seriessample. It was developed by Elliott, Rothenberg and Stock (ERS) in 1992 as a modification of theaugmented Dickey–Fuller test(ADF).[1]
A unit root test determines whether a time series variable is non-stationary using an autoregressive model. For series featuring deterministic components in the form of a constant or a linear trend, ERS developed an asymptotically point-optimal test to detect a unit root. This testing procedure dominates other existing unit root tests in terms of power. It locally de-trends (de-means) the data series to efficiently estimate the deterministic parameters of the series, and uses the transformed data to perform a usual ADF unit root test. This procedure helps to remove the means and linear trends for series that are not far from the non-stationary region.[2]
Consider a simple time series model y_t = d_t + u_t with u_t = ρ u_{t−1} + e_t, where d_t is the deterministic part and u_t is the stochastic part of y_t. When the true value of ρ is close to 1, estimation of the model, i.e. of d_t, will pose efficiency problems because y_t will be close to nonstationary. In this setting, testing for the stationarity features of the given time series will also be subject to general statistical problems. To overcome such problems ERS suggested to locally difference the time series.
Consider the case where closeness to 1 for the autoregressive parameter is modelled as ρ = 1 − c/T, where T is the number of observations. Now consider filtering the series with the quasi-differencing operator 1 − (1 − c̄/T)L, with L being the standard lag operator, i.e. ȳ_t = y_t − (1 − c̄/T) y_{t−1}. Working with ȳ_t would result in power gain, as ERS show, when testing the stationarity features of y_t using the augmented Dickey–Fuller test. This is a point optimal test for which c̄ is set in such a way that the test would have 50 percent power when the alternative is characterized by ρ = 1 − c/T for c = c̄. Depending on the specification of d_t, c̄ will take different values.
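A hedged numpy sketch of the quasi-differencing step only; the value c̄ = 13.5 (commonly quoted for the linear-trend case in this parametrization), the variable names, and the simulated series are assumptions for illustration. A full DF-GLS implementation would additionally GLS-estimate and remove the deterministic terms from the quasi-differenced data before running the ADF regression.

```python
import numpy as np

def quasi_difference(y, c_bar=13.5):
    """Quasi-difference a series with rho_bar = 1 - c_bar/T (local-to-unity filter).

    c_bar = 13.5 is an assumed value for the linear-trend case; the first
    observation is kept as-is, as is conventional for this transformation.
    """
    T = len(y)
    rho_bar = 1.0 - c_bar / T
    yq = np.empty(T)
    yq[0] = y[0]
    yq[1:] = y[1:] - rho_bar * y[:-1]
    return yq, rho_bar

rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(size=200)) + 0.1 * np.arange(200)
yq, rho_bar = quasi_difference(y)
print(round(rho_bar, 4), yq[:3].round(3))
```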
|
https://en.wikipedia.org/wiki/ADF-GLS_test
|
Instatistics, aunit root testtests whether atime seriesvariable is non-stationary and possesses aunit root. The null hypothesis is generally defined as the presence of a unit root and the alternative hypothesis is eitherstationarity,trend stationarityor explosive root depending on the test used.
In general, the approach to unit root testing implicitly assumes that the time series to be tested, [y_t]_{t=1}^{T}, can be written as
y_t = D_t + z_t + ε_t,
where D_t is the deterministic component (linear trend, seasonal component, etc.), z_t is the stochastic component, and ε_t is a stationary error process.
The task of the test is to determine whether the stochastic component contains a unit root or is stationary.[1]
Other popular tests include:
Unit root tests are closely linked toserial correlationtests. However, while all processes with a unit root will exhibit serial correlation, not all serially correlated time series will have a unit root. Popular serial correlation tests include:
|
https://en.wikipedia.org/wiki/Unit_root_test
|
In statistics, the Phillips–Perron test (named after Peter C. B. Phillips and Pierre Perron) is a unit root test.[1] That is, it is used in time series analysis to test the null hypothesis that a time series is integrated of order 1. It builds on the Dickey–Fuller test of the null hypothesis ρ = 1 in Δy_t = (ρ − 1) y_{t−1} + u_t, where Δ is the first difference operator. Like the augmented Dickey–Fuller test, the Phillips–Perron test addresses the issue that the process generating data for y_t might have a higher order of autocorrelation than is admitted in the test equation, making y_{t−1} endogenous and thus invalidating the Dickey–Fuller t-test. Whilst the augmented Dickey–Fuller test addresses this issue by introducing lags of Δy_t as regressors in the test equation, the Phillips–Perron test makes a non-parametric correction to the t-test statistic. The test is robust with respect to unspecified autocorrelation and heteroscedasticity in the disturbance process of the test equation.
Davidson and MacKinnon (2004) report that the Phillips–Perron test performs worse in finite samples than the augmented Dickey–Fuller test.[2]
|
https://en.wikipedia.org/wiki/Phillips%E2%80%93Perron_test
|
Ineconometrics,cointegrationis astatisticalproperty describing a long-term, stable relationship between two or moretime seriesvariables, even if those variables themselves are individuallynon-stationary(i.e., they have trends). This means that despite their individual fluctuations, the variables move together in the long run, anchored by an underlying equilibrium relationship.
More formally, if several time series are individuallyintegrated of orderd(meaning they requireddifferencesto become stationary) but alinear combinationof them is integrated of a lower order, then those time series are said to be cointegrated. That is, if (X,Y,Z) are each integrated of orderd, and there exist coefficientsa,b,csuch thataX+bY+cZis integrated of order less than d, thenX,Y, andZare cointegrated.
Cointegration is a crucial concept in time series analysis, particularly when dealing with variables that exhibit trends, such asmacroeconomicdata. In an influential paper,[1]Charles Nelson andCharles Plosser(1982) provided statistical evidence that many US macroeconomic time series (like GNP, wages, employment, etc.) have stochastic trends.
If two or more series are individuallyintegrated(in the time series sense) but somelinear combinationof them has a lowerorder of integration, then the series are said to be cointegrated. A common example is where the individual series are first-order integrated (I(1){\displaystyle I(1)}) but some (cointegrating) vector of coefficients exists to form astationarylinear combination of them.
The first to introduce and analyse the concept of spurious—or nonsense—regression wasUdny Yulein 1926.[2]Before the 1980s, many economists usedlinear regressionson non-stationary time series data, which Nobel laureateClive GrangerandPaul Newboldshowed to be a dangerous approach that could producespurious correlation,[3]since standard detrending techniques can result in data that are still non-stationary.[4]Granger's 1987 paper withRobert Engleformalized the cointegrating vector approach, and coined the term.[5]
For integratedI(1){\displaystyle I(1)}processes, Granger and Newbold showed that de-trending does not work to eliminate the problem of spurious correlation, and that the superior alternative is to check for co-integration. Two series withI(1){\displaystyle I(1)}trends can be co-integrated only if there is a genuine relationship between the two. Thus the standard current methodology for time series regressions is to check all-time series involved for integration. If there areI(1){\displaystyle I(1)}series on both sides of the regression relationship, then it is possible for regressions to give misleading results.
The possible presence of cointegration must be taken into account when choosing a technique to test hypotheses concerning the relationship between two variables havingunit roots(i.e. integrated of at least order one).[3]The usual procedure for testing hypotheses concerning the relationship between non-stationary variables was to runordinary least squares(OLS) regressions on data which had been differenced. This method is biased if the non-stationary variables are cointegrated.
For example, regressing the consumption series for any country (e.g. Fiji) against the GNP for a randomly selected dissimilar country (e.g. Afghanistan) might give a highR-squaredrelationship (suggesting high explanatory power on Fiji's consumption from Afghanistan'sGNP). This is calledspurious regression: two integratedI(1){\displaystyle I(1)}series which are not directly causally related may nonetheless show a significant correlation.
The six main methods for testing for cointegration are:
If x_t and y_t both have order of integration d = 1 and are cointegrated, then a linear combination of them must be stationary for some value of β and u_t. In other words:
y_t − β x_t = u_t,
where u_t is stationary.
If β is known, we can test u_t for stationarity with an Augmented Dickey–Fuller test or Phillips–Perron test. If β is unknown, we must first estimate it. This is typically done by using ordinary least squares (by regressing y_t on x_t and an intercept). Then, we can run an ADF test on u_t. However, when β is estimated, the critical values of this ADF test are non-standard, and increase in absolute value as more regressors are included.[6]
If the variables are found to be cointegrated, a second-stage regression is conducted. This is a regression of Δy_t on the lagged regressors Δx_t and the lagged residuals from the first stage, û_{t−1}. The second stage regression is given as: Δy_t = Δx_t b + α û_{t−1} + ε_t
If the variables are not cointegrated (if we cannot reject the null of no cointegration when testing u_t), then α = 0 and we estimate a differences model: Δy_t = Δx_t b + ε_t
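A minimal Python sketch of the two-step Engle–Granger procedure on simulated data; the series, names, and coefficients are illustrative assumptions. statsmodels also provides a ready-made residual-based cointegration test (statsmodels.tsa.stattools.coint) that uses the appropriate non-standard critical values.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
T = 500
x = np.cumsum(rng.normal(size=T))                    # I(1) driver
y = 2.0 + 0.7 * x + rng.normal(scale=0.5, size=T)    # cointegrated with x

# Step 1: estimate beta by OLS of y on x (with an intercept) and take residuals
X = np.column_stack([np.ones(T), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
u = y - X @ beta

# Step 2: ADF-type test on the residuals; because beta is estimated, proper
# inference would use Engle-Granger critical values rather than the plain ADF table
stat = adfuller(u)[0]
print(beta.round(3), round(stat, 2))
```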
TheJohansen testis a test for cointegration that allows for more than one cointegrating relationship, unlike the Engle–Granger method, but this test is subject to asymptotic properties, i.e. large samples. If the sample size is too small then the results will not be reliable and one should use Auto Regressive Distributed Lags (ARDL).[7][8]
Peter C. B. PhillipsandSam Ouliaris(1990) show that residual-based unit root tests applied to the estimated cointegrating residuals do not have the usual Dickey–Fuller distributions under the null hypothesis of no-cointegration.[9]Because of the spurious regression phenomenon under the null hypothesis, these tests have asymptotic distributions that depend on (1) the number of deterministic trend terms and (2) the number of variables with which co-integration is being tested. These distributions are known as Phillips–Ouliaris distributions and critical values have been tabulated. In finite samples, a superior alternative to the use of these asymptotic critical values is to generate critical values from simulations.
In practice, cointegration is often used for twoI(1){\displaystyle I(1)}series, but it is more generally applicable and can be used for variables integrated of higher order (to detect correlated accelerations or other second-difference effects).Multicointegrationextends the cointegration technique beyond two variables, and occasionally to variables integrated at different orders.
Tests for cointegration assume that the cointegrating vector is constant during the period of study. In reality, it is possible that the long-run relationship between the underlying variables changes (shifts in the cointegrating vector can occur). The reason for this might be technological progress, economic crises, changes in people's preferences and behaviour, policy or regime alterations, and organizational or institutional developments. This is especially likely to be the case if the sample period is long. To take this issue into account, tests have been introduced for cointegration with one unknownstructural break,[10]and tests for cointegration with two unknown breaks are also available.[11]
SeveralBayesian methodshave been proposed to compute the posterior distribution of the number of cointegrating relationships and the cointegrating linear combinations.[12]
|
https://en.wikipedia.org/wiki/Cointegration
|
Ineconometrics,Kwiatkowski–Phillips–Schmidt–Shin (KPSS) testsare used for testing anull hypothesisthat an observabletime seriesisstationaryaround a deterministic trend (i.e.trend-stationary) against the alternative of aunit root.[1]
Contrary to mostunit root tests, the presence of a unit root is not the null hypothesis but the alternative. Additionally, in the KPSS test, the absence of a unit root is not a proof of stationarity but, by design, of trend-stationarity. This is an important distinction since it is possible for a time series to be non-stationary, have nounit rootyet betrend-stationary. In both unit root and trend-stationary processes, the mean can be growing or decreasing over time; however, in the presence of a shock, trend-stationary processes are mean-reverting (i.e. transitory, the time series will converge again towards the growing mean, which was not affected by the shock) while in unit-root processes the shock has a permanent impact on the mean (i.e. no convergence over time).[2]
Denis Kwiatkowski,Peter C. B. Phillips, Peter Schmidt andYongcheol Shin(1992) proposed a test of the null hypothesis that an observable series istrend-stationary(stationary around a deterministic trend). The series is expressed as the sum of deterministic trend,random walk, and stationary error, and the test is theLagrange multiplier testof the hypothesis that the random walk has zero
variance. KPSS-type tests are intended to complementunit root tests, such as theDickey–Fuller tests. By testing both the unit root hypothesis and the stationarity hypothesis, one can distinguish series that appear to be stationary, series that appear to have a unit root, and series for which the data (or the tests) are not sufficiently informative to be sure whether they are stationary or integrated.
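A hedged sketch of this complementary use of the two tests, using statsmodels (the simulated series are assumptions for illustration):

```python
# Illustrative sketch: combine the KPSS test (null: trend-stationary) with the
# ADF test (null: unit root) to classify a series, as described above.
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

rng = np.random.default_rng(2)
t = np.arange(500)
trend_stationary = 0.05 * t + rng.normal(size=500)   # stationary around a trend
unit_root = np.cumsum(rng.normal(size=500))           # random walk

for name, series in [("trend-stationary", trend_stationary), ("unit root", unit_root)]:
    adf_p = adfuller(series)[1]
    kpss_p = kpss(series, regression="ct", nlags="auto")[1]
    print(f"{name:>16}: ADF p={adf_p:.3f}  KPSS p={kpss_p:.3f}")
# When the two tests agree (one rejects, the other does not), the classification
# is informative; conflicting or inconclusive results suggest the data are not
# sufficiently informative to decide between stationarity and integration.
```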
|
https://en.wikipedia.org/wiki/KPSS_tests
|
Clusteringcan refer to the following:
Incomputing:
Ineconomics:
Ingraph theory:
|
https://en.wikipedia.org/wiki/Clustering_(disambiguation)
|
Inprobability theory, theChinese restaurant processis adiscrete-timestochastic process, analogous to seating customers at tables in a restaurant.
Imagine a restaurant with an infinite number of circular tables, each with infinite capacity. Customer 1 sits at the first table. The next customer either sits at the same table as customer 1, or the next table. This continues, with each customer choosing to either sit at an occupied table with a probability proportional to the number of customers already there (i.e., they are more likely to sit at a table with many customers than few), or an unoccupied table. At timen, thencustomers have beenpartitionedamongm≤ntables (or blocks of the partition). The results of this process areexchangeable, meaning the order in which the customers sit does not affect the probability of the finaldistribution. This property greatly simplifies a number of problems inpopulation genetics,linguistic analysis, andimage recognition.
The restaurant analogy first appeared in a 1985 write-up byDavid Aldous,[1]where it was attributed toJim Pitman(who additionally creditsLester Dubins).[2]
An equivalent partition process was published a year earlier byFred Hoppe,[3]using an "urn scheme" akin toPólya's urn. In comparison with Hoppe's urn model, the Chinese restaurant process has the advantage that it naturally lends itself to describing random permutations via their cycle structure, in addition to describing random partitions.
For any positive integern{\displaystyle n}, letPn{\displaystyle {\mathcal {P}}_{n}}denote the set of all partitions of the set{1,2,3,...,n}≜[n]{\displaystyle \{1,2,3,...,n\}\triangleq [n]}. The Chinese restaurant process takes values in the infinite Cartesian product∏n≥1Pn{\displaystyle \prod _{n\geq 1}{\mathcal {P}}_{n}}.
The value of the process at timen{\displaystyle n}is a partitionBn{\displaystyle B_{n}}of the set[n]{\displaystyle [n]}, whose probability distribution is determined as follows. At timen=1{\displaystyle n=1}, the trivial partitionB1={{1}}{\displaystyle B_{1}=\{\{1\}\}}is obtained (with probability one). At timen+1{\displaystyle n+1}the element "n+1{\displaystyle n+1}" is either: added to one of the existing blocksb{\displaystyle b}of the partitionBn{\displaystyle B_{n}}, where each block is chosen with probability|b|n+1{\displaystyle {\frac {|b|}{n+1}}}; or added to the partition as a new singleton block, with probability1n+1{\displaystyle {\frac {1}{n+1}}}.
The random partition so generated has some special properties. It isexchangeablein the sense that relabeling{1,...,n}{\displaystyle \{1,...,n\}}does not change the distribution of the partition, and it isconsistentin the sense that the law of the partition of[n−1]{\displaystyle [n-1]}obtained by removing the elementn{\displaystyle n}from the random partitionBn{\displaystyle B_{n}}is the same as the law of the random partitionBn−1{\displaystyle B_{n-1}}.
The probability assigned to any particular partition (ignoring the order in which customers sit around any particular table) isPr(Bn=B)=1n!∏b∈B(|b|−1)!{\displaystyle \Pr(B_{n}=B)={\frac {1}{n!}}\prod _{b\in B}(|b|-1)!}
whereb{\displaystyle b}is a block in the partitionB{\displaystyle B}and|b|{\displaystyle |b|}is the size ofb{\displaystyle b}.
The definition can be generalized by introducing a parameterθ>0{\displaystyle \theta >0}which modifies the probability of the new customer sitting at a new table toθn+θ{\displaystyle {\frac {\theta }{n+\theta }}}and correspondingly modifies the probability of them sitting at a table of size|b|{\displaystyle |b|}to|b|n+θ{\displaystyle {\frac {|b|}{n+\theta }}}. The vanilla process introduced above can be recovered by settingθ=1{\displaystyle \theta =1}. Intuitively,θ{\displaystyle \theta }can be interpreted as the effective number of customers sitting at the first empty table.
An equivalent, but subtly different way to define the Chinese restaurant process, is to let new customers choose companions rather than tables.[4]Customern+1{\displaystyle n+1}chooses to sit at the same table as any one of then{\displaystyle n}seated customers with probability1n+θ{\displaystyle {\frac {1}{n+\theta }}}, or chooses to sit at a new, unoccupied table with probabilityθn+θ{\displaystyle {\frac {\theta }{n+\theta }}}. Notice that in this formulation, the customer chooses a table without having to count table occupancies---we don't need|b|{\displaystyle |b|}.
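A minimal simulation of the one-parameter process, seating each customer according to the probabilities above, may help fix ideas (an illustrative sketch, not part of the original description; parameter values are assumptions):

```python
# Each new customer joins an occupied table b with probability |b|/(n + theta)
# or opens a new table with probability theta/(n + theta).
import random

def chinese_restaurant_process(n_customers, theta=1.0, seed=0):
    rng = random.Random(seed)
    tables = []                                  # tables[k] = number of customers at table k
    assignment = []                              # assignment[i] = table index of customer i
    for n in range(n_customers):
        weights = tables + [theta]               # occupied tables, then a new table
        r = rng.random() * (n + theta)
        cum, choice = 0.0, len(tables)
        for k, w in enumerate(weights):
            cum += w
            if r < cum:
                choice = k
                break
        if choice == len(tables):
            tables.append(1)                     # open a new table
        else:
            tables[choice] += 1
        assignment.append(choice)
    return tables, assignment

sizes, seating = chinese_restaurant_process(100, theta=3.0)
print("table sizes:", sorted(sizes, reverse=True))
```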
TheChinese restaurant table distribution(CRT) is theprobability distributionon the number of tables in the Chinese restaurant process.[5]It can be understood as the sum ofn{\displaystyle n}independentBernoullirandom variables, each with a different parameter:K=∑i=1nbi,bi∼Bernoulli(θθ+i−1){\displaystyle K=\sum _{i=1}^{n}b_{i},\qquad b_{i}\sim \operatorname {Bernoulli} \left({\frac {\theta }{\theta +i-1}}\right)}
The probability mass function ofK{\displaystyle K}is given by[6]P(K=k)=Γ(θ)Γ(n+θ)|s(n,k)|θk{\displaystyle P(K=k)={\frac {\Gamma (\theta )}{\Gamma (n+\theta )}}|s(n,k)|\,\theta ^{k}}
wheres{\displaystyle s}denotesStirling numbers of the first kind.
This construction can be generalized to a model with two parameters,θ{\displaystyle \theta }&α{\displaystyle \alpha },[2][7]commonly called thestrength(orconcentration) anddiscountparameters respectively. At timen+1{\displaystyle n+1}, the next customer to arrive finds|B|{\displaystyle |B|}occupied tables and decides to sit at an empty table with probabilityθ+|B|αn+θ{\displaystyle {\frac {\theta +|B|\alpha }{n+\theta }}}
or at an occupied tableb{\displaystyle b}of size|b|{\displaystyle |b|}with probability|b|−αn+θ{\displaystyle {\frac {|b|-\alpha }{n+\theta }}}.
In order for the construction to define a validprobability measureit is necessary to suppose that eitherα<0{\displaystyle \alpha <0}andθ=−Lα{\displaystyle \theta =-L\alpha }for someL∈{1,2,...}{\displaystyle L\in \{1,2,...\}}; or that0≤α<1{\displaystyle 0\leq \alpha <1}andθ>−α{\displaystyle \theta >-\alpha }.
Under this model the probability assigned to any particular partitionB{\displaystyle B}of[n]{\displaystyle [n]}, can be expressed in the general case (for any values ofθ,α{\displaystyle \theta ,\alpha }that satisfy the above-mentioned constraints) in terms of thePochhammer k-symbol, as
where, the Pochhammer k-symbol is defined as follows: by convention,(a)0,k=1{\displaystyle (a)_{0,k}=1}, and form>0{\displaystyle m>0}
wherexm¯=∏i=0m−1(x+i){\displaystyle x^{\overline {m}}=\prod _{i=0}^{m-1}(x+i)}is therising factorialandxm_=∏i=0m−1(x−i){\displaystyle x^{\underline {m}}=\prod _{i=0}^{m-1}(x-i)}is thefalling factorial. It is worth noting that for the parameter setting whereα<0{\displaystyle \alpha <0}andθ=−Lα{\displaystyle \theta =-L\alpha }, then(θ+α)|B|−1,α=(|α|(L−1))|B|−1,α{\displaystyle (\theta +\alpha )_{|B|-1,\alpha }=(|\alpha |(L-1))_{|B|-1,\alpha }}, which evaluates to zero whenever|B|>L{\displaystyle |B|>L}, so thatL{\displaystyle L}is an upper bound on the number of blocks in the partition; see the subsection on theDirichlet-categorical modelbelow for more details.
For the case whenθ>0{\displaystyle \theta >0}and0<α<1{\displaystyle 0<\alpha <1}, the partition probability can be rewritten in terms of theGamma functionas
In the one-parameter case, whereα{\displaystyle \alpha }is zero, andθ>0{\displaystyle \theta >0}this simplifies to
Or, whenθ{\displaystyle \theta }is zero, and0<α<1{\displaystyle 0<\alpha <1}
As before, the probability assigned to any particular partition depends only on the block sizes, so as before the random partition is exchangeable in the sense described above. The consistency property still holds, as before, by construction.
Ifα=0{\displaystyle \alpha =0}, the probability distribution of the randompartition of the integern{\displaystyle n}thus generated is theEwens distributionwith parameterθ{\displaystyle \theta }, used inpopulation geneticsand theunified neutral theory of biodiversity.
Here is one way to derive this partition probability. LetCi{\displaystyle C_{i}}be the random block into which the numberi{\displaystyle i}is added, fori=1,2,3,...{\displaystyle i=1,2,3,...}. Then
The probability thatBn{\displaystyle B_{n}}is any particular partition of the set{1,...,n}{\displaystyle \{1,...,n\}}is the product of these probabilities asi{\displaystyle i}runs from1{\displaystyle 1}ton{\displaystyle n}. Now consider the size of blockb{\displaystyle b}: it increases by one each time we add one element into it. When the last element in blockb{\displaystyle b}is to be added in, the block size is|b|−1{\displaystyle |b|-1}. For example, consider this sequence of choices: (generate a new blockb{\displaystyle b})(joinb{\displaystyle b})(joinb{\displaystyle b})(joinb{\displaystyle b}). In the end, blockb{\displaystyle b}has 4 elements and the product of the numerators in the above equation getsθ⋅1⋅2⋅3{\displaystyle \theta \cdot 1\cdot 2\cdot 3}. Following this logic, we obtainPr(Bn=B){\displaystyle \Pr(B_{n}=B)}as above.
For the one-parameter case, withα=0{\displaystyle \alpha =0}and0<θ<∞{\displaystyle 0<\theta <\infty }, the number of tables is distributed according to theChinese restaurant table distribution. The expected value of this random variable, given that there aren{\displaystyle n}seated customers, is[9]θ(Ψ(θ+n)−Ψ(θ)){\displaystyle \theta {\bigl (}\Psi (\theta +n)-\Psi (\theta ){\bigr )}}
whereΨ(θ){\displaystyle \Psi (\theta )}is thedigamma function. For the two-parameter case, forα≠0{\displaystyle \alpha \neq 0}, the expected number of occupied tables is[7]
wherexm¯{\displaystyle x^{\overline {m}}}is the rising factorial (as defined above).
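The one-parameter expectation can be checked numerically. The sketch below is illustrative only; it uses the representation of the table count as a sum of independent Bernoulli variables with success probabilities θ/(θ+i−1), and the parameter values are assumptions.

```python
# Compare the digamma formula for the expected number of occupied tables,
# E[K] = theta * (psi(theta + n) - psi(theta)), with a Monte Carlo estimate.
import numpy as np
from scipy.special import digamma

def num_tables(n, theta, rng):
    # a new table is opened at step i (i = 0..n-1) with probability theta / (i + theta)
    return int(np.sum(rng.random(n) < theta / (np.arange(n) + theta)))

theta, n = 3.0, 100
expected = theta * (digamma(theta + n) - digamma(theta))
rng = np.random.default_rng(0)
simulated = np.mean([num_tables(n, theta, rng) for _ in range(5000)])
print(f"analytical E[K] = {expected:.2f}, simulated mean = {simulated:.2f}")
```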
For the parameter choiceα<0{\displaystyle \alpha <0}andθ=−Lα{\displaystyle \theta =-L\alpha }, whereL∈{1,2,3,…}{\displaystyle L\in \{1,2,3,\ldots \}}, the two-parameter Chinese restaurant process is equivalent to theDirichlet-categorical model, which is a hierarchical model that can be defined as follows. Notice that for this parameter setting, the probability of occupying a new table, when there are alreadyL{\displaystyle L}occupied tables, is zero; so that the number of occupied tables is upper bounded byL{\displaystyle L}. If we choose to identify tables withlabelsthat take values in{1,2,…,L}{\displaystyle \{1,2,\ldots ,L\}}, then to generate a random partition of the set[n]={1,2,…,n}{\displaystyle [n]=\{1,2,\ldots ,n\}}, the hierarchical model first draws acategorical label distribution,p=(p1,p2,…,pL){\displaystyle \mathbf {p} =(p_{1},p_{2},\ldots ,p_{L})}from the symmetricDirichlet distribution, with concentration parameterγ=−α>0{\displaystyle \gamma =-\alpha >0}. Then, independently for each of then{\displaystyle n}customers, the table label is drawn from the categoricalp{\displaystyle \mathbf {p} }. Since the Dirichlet distribution isconjugateto the categorical, the hidden variablep{\displaystyle \mathbf {p} }can be marginalized out to obtain theposterior predictive distributionfor the next label state,ℓn+1{\displaystyle \ell _{n+1}}, givenn{\displaystyle n}previous labels
where|bi|≥0{\displaystyle \left|{b_{i}}\right|\geq 0}is the number of customers that are already seated at tablei{\displaystyle i}. Withα=−γ{\displaystyle \alpha =-\gamma }andθ=Lγ{\displaystyle \theta =L\gamma }, this agrees with the above general formula,|bi|−αn+θ{\displaystyle {\frac {|b_{i}|-\alpha }{n+\theta }}}, for the probability of sitting at an occupied table when|bi|≥1{\displaystyle |b_{i}|\geq 1}. The probability for sitting at any of theL−|B|{\displaystyle L-|B|}unoccupied tables, also agrees with the general formula and is given by
The marginal probability for the labels is given by
whereP(ℓ1)=1L{\displaystyle P(\ell _{1})={\frac {1}{L}}}andxm¯=∏i=0m−1(x+i){\displaystyle x^{\overline {m}}=\prod _{i=0}^{m-1}(x+i)}is therising factorial. In general, there are however multiple label states that all correspond to thesamepartition. For a given partition,B{\displaystyle B}, which has|B|≤L{\displaystyle \left|B\right|\leq L}blocks, the number of label states that all correspond to this partition is given by thefalling factorial,L|B|_=∏i=0|B|−1(L−i){\displaystyle L^{\underline {\left|B\right|}}=\prod _{i=0}^{\left|B\right|-1}(L-i)}. Taking this into account, the probability for the partition is
which can be verified to agree with the general version of the partition probability that is given above in terms of the Pochhammer k-symbol. Notice again, that ifB{\displaystyle B}is outside of the support, i.e.|B|>L{\displaystyle |B|>L}, the falling factorial,L|B|_{\displaystyle L^{\underline {|B|}}}evaluates to zero as it should. (Practical implementations that evaluate the log probability for partitions vialogL|B|_=log|Γ(L+1)|−log|Γ(L+1−|B|)|{\displaystyle \log L^{\underline {|B|}}=\log \left|\Gamma (L+1)\right|-\log \left|\Gamma (L+1-|B|)\right|}will return−∞{\displaystyle -\infty }, whenever|B|>L{\displaystyle |B|>L}, as required.)
Consider on the one hand, the one-parameter Chinese restaurant process, withα=0{\displaystyle \alpha =0}andθ>0{\displaystyle \theta >0}, which we denoteCRP(α=0,θ){\displaystyle {\text{CRP}}(\alpha =0,\theta )}; and on the other hand the Dirichlet-categorical model withL{\displaystyle L}a positive integer and where we chooseγ=θL{\displaystyle \gamma ={\frac {\theta }{L}}}, which as shown above, is equivalent toCRP(α=−θL,θ){\displaystyle {\text{CRP}}(\alpha =-{\frac {\theta }{L}},\theta )}. This shows that the Dirichlet-categorical model can be made arbitrarily close toCRP(0,θ){\displaystyle {\text{CRP}}(0,\theta )}, by makingL{\displaystyle L}large.
The two-parameter Chinese restaurant process can equivalently be defined in terms of astick-breaking process.[10]For the case where0≤α<1{\displaystyle 0\leq \alpha <1}andθ>−α{\displaystyle \theta >-\alpha }, the stick breaking process can be described as a hierarchical model, much like the aboveDirichlet-categorical model, except that there is an infinite number of label states. The table labels are drawn independently from the infinite categorical distributionp=(p1,p2,…){\displaystyle \mathbf {p} =(p_{1},p_{2},\ldots )}, the components of which are sampled usingstick breaking: start with a stick of length 1 and randomly break it in two, the length of the left half isp1{\displaystyle p_{1}}and the right half is broken again recursively to givep2,p3,…{\displaystyle p_{2},p_{3},\ldots }. More precisely, the left fraction,fk{\displaystyle f_{k}}, of thek{\displaystyle k}-th break is sampled from thebeta distribution:fk∼Beta(1−α,θ+kα){\displaystyle f_{k}\sim \operatorname {Beta} (1-\alpha ,\theta +k\alpha )}
The categorical probabilities are:pk=fk∏i=1k−1(1−fi){\displaystyle p_{k}=f_{k}\prod _{i=1}^{k-1}(1-f_{i})}
For the parameter settingsα<0{\displaystyle \alpha <0}andθ=−αL{\displaystyle \theta =-\alpha L}, whereL{\displaystyle L}is a positive integer, and where the categorical is finite:p=(p1,…,pL){\displaystyle \mathbf {p} =(p_{1},\ldots ,p_{L})}, we can samplep{\displaystyle \mathbf {p} }from an ordinary Dirchlet distribution as explainedabove, but it can also be sampled with atruncatedstick-breaking recipe, where the formula for sampling the fractions is modified to:
andfL=1{\displaystyle f_{L}=1}.
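An illustrative sketch of the two-parameter stick-breaking construction for 0 ≤ α < 1 and θ > −α, truncated at a finite number of sticks for practicality (the truncation level and parameter values are assumptions for the example):

```python
# Break off fractions f_k ~ Beta(1 - alpha, theta + k*alpha) and set
# p_k = f_k * prod_{i<k} (1 - f_i); then draw table labels from the categorical p.
import numpy as np

def stick_breaking(theta, alpha, n_sticks, rng):
    fractions = rng.beta(1.0 - alpha, theta + alpha * np.arange(1, n_sticks + 1))
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - fractions[:-1])])
    return fractions * remaining            # categorical probabilities p_1, p_2, ...

rng = np.random.default_rng(0)
p = stick_breaking(theta=2.0, alpha=0.5, n_sticks=1000, rng=rng)
labels = rng.choice(len(p), size=200, p=p / p.sum())   # table labels for 200 customers
print("occupied tables:", len(np.unique(labels)))
```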
It is possible to adapt the model such that each data point is no longer uniquely associated with a class (i.e., we are no longer constructing a partition), but may be associated with any combination of the classes. This strains the restaurant-tables analogy and so is instead likened to a process in which a series of diners samples from some subset of an infinite selection of dishes on offer at a buffet. The probability that a particular diner samples a particular dish is proportional to the popularity of the dish among diners so far, and in addition the diner may sample from the untested dishes. This has been named theIndian buffet processand can be used to infer latent features in data.[11]
The Chinese restaurant process is closely connected toDirichlet processesandPólya's urn scheme, and therefore useful in applications ofBayesian statisticsincludingnonparametricBayesian methods. The Generalized Chinese Restaurant Process is closely related toPitman–Yor process. These processes have been used in many applications, including modeling text, clustering biologicalmicroarraydata,[12]biodiversity modelling, and image reconstruction[13][14]
|
https://en.wikipedia.org/wiki/Chinese_Restaurant_Process
|
Instatistics,cluster analysisis the algorithmic grouping of objects into homogeneous
groups based on numerical measurements.Model-based clustering[1]bases this on a statistical model for the data, usually amixture model. This has several advantages, including a principledstatisticalbasis for clustering,
and ways to choose the number of clusters, to choose the best clustering model, to assess the uncertainty of the clustering, and to identifyoutliersthat do not belong to any group.
Suppose that for each ofn{\displaystyle n}observations we have data ond{\displaystyle d}variables, denoted byyi=(yi,1,…,yi,d){\displaystyle y_{i}=(y_{i,1},\ldots ,y_{i,d})}for observationi{\displaystyle i}. Then
model-based clustering expresses theprobability density functionofyi{\displaystyle y_{i}}as a finite mixture, or weighted average ofG{\displaystyle G}componentprobability density functions:p(yi)=∑g=1Gτgfg(yi;θg){\displaystyle p(y_{i})=\sum _{g=1}^{G}\tau _{g}f_{g}(y_{i};\theta _{g})}
wherefg{\displaystyle f_{g}}is a probability density function with
parameterθg{\displaystyle \theta _{g}},τg{\displaystyle \tau _{g}}is the corresponding
mixture probability where∑g=1Gτg=1{\displaystyle \sum _{g=1}^{G}\tau _{g}=1}.
Then in its simplest form, model-based clustering views each component
of the mixture model as a cluster, estimates the model parameters, and assigns
each observation to the cluster corresponding to its most likely mixture component.
The most common model for continuous data is thatfg{\displaystyle f_{g}}is amultivariate normal distributionwith mean vectorμg{\displaystyle \mu _{g}}and covariance matrixΣg{\displaystyle \Sigma _{g}}, so thatθg=(μg,Σg){\displaystyle \theta _{g}=(\mu _{g},\Sigma _{g})}.
This defines aGaussian mixture model. The parameters of the model,τg{\displaystyle \tau _{g}}andθg{\displaystyle \theta _{g}}forg=1,…,G{\displaystyle g=1,\ldots ,G},
are typically estimated bymaximum likelihood estimationusing theexpectation-maximization algorithm(EM); see alsoEM algorithm and GMM model.
Bayesian inferenceis also often used for inference about finite
mixture models.[2]The Bayesian approach also allows for the case where the number of components,G{\displaystyle G}, is infinite, using aDirichlet processprior, yielding a Dirichlet process mixture model for clustering.[3]
An advantage of model-based clustering is that it provides statistically
principled ways to choose the number of clusters. Each different choice of the number of groupsG{\displaystyle G}corresponds to a different mixture model. Then standard statisticalmodel selectioncriteria such as theBayesian information criterion(BIC) can be used to chooseG{\displaystyle G}.[4]The integrated completed likelihood (ICL)[5]is a different criterion designed to choose the number of clusters rather than the number of mixture components in the model; these will often be different if highly non-Gaussian clusters are present.
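As an illustration of BIC-based selection, the following sketch uses scikit-learn's GaussianMixture rather than the R packages discussed later; its covariance types ("full", "tied", "diag", "spherical") are only a coarse analogue of the parsimonious models described below, and the simulated data are an assumption for the example.

```python
# Fit Gaussian mixtures for several numbers of components and covariance
# constraints, then keep the combination with the best (lowest) BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([
    rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 0.5]], size=150),
    rng.multivariate_normal([4, 4], [[0.5, 0.0], [0.0, 2.0]], size=150),
    rng.multivariate_normal([0, 5], [[1.5, -0.4], [-0.4, 1.0]], size=150),
])

best = None
for g in range(1, 7):
    for cov in ["full", "tied", "diag", "spherical"]:
        gmm = GaussianMixture(n_components=g, covariance_type=cov, random_state=0).fit(X)
        bic = gmm.bic(X)                     # scikit-learn's BIC: smaller is better
        if best is None or bic < best[0]:
            best = (bic, g, cov, gmm)

bic, g, cov, gmm = best
print(f"Selected G={g}, covariance='{cov}', BIC={bic:.1f}")
clusters = gmm.predict(X)                    # each observation assigned to its most likely component
```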
For data with high dimension,d{\displaystyle d}, using a full covariance matrix for each mixture component requires estimation of many parameters, which can result in a loss of precision, generalizability and interpretability. Thus it is common to use more parsimonious component covariance matrices exploiting their geometric interpretation. Gaussian clusters are ellipsoidal, with their volume, shape and orientation determined by the covariance matrix. Consider theeigendecompositionof the component covariance matrixΣg=λgDgAgDgT{\displaystyle \Sigma _{g}=\lambda _{g}D_{g}A_{g}D_{g}^{T}}
whereDg{\displaystyle D_{g}}is the matrix of eigenvectors ofΣg{\displaystyle \Sigma _{g}},Ag=diag{A1,g,…,Ad,g}{\displaystyle A_{g}={\mbox{diag}}\{A_{1,g},\ldots ,A_{d,g}\}}is a diagonal matrix whose elements are proportional to
the eigenvalues ofΣg{\displaystyle \Sigma _{g}}in descending order,
andλg{\displaystyle \lambda _{g}}is the associated constant of proportionality.
Thenλg{\displaystyle \lambda _{g}}controls the volume of the ellipsoid,Ag{\displaystyle A_{g}}its shape, andDg{\displaystyle D_{g}}its orientation.[6][7]
Each of the volume, shape and orientation of the clusters can be
constrained to be equal (E) or allowed to vary (V); the orientation can
also be spherical, with identical eigenvalues (I). This yields 14 possible clustering models, shown in this table:
It can be seen that many of these models are more parsimonious, with far fewer
parameters than the unconstrained model that has 90 parameters whenG=4{\displaystyle G=4}andd=9{\displaystyle d=9}.
Several of these models correspond to well-known heuristic clustering methods.
For example,k-means clusteringis equivalent to estimation of the
EII clustering model using the classification EM algorithm.[8]TheBayesian information criterion(BIC)
can be used to choose the best clustering model as well as the number of clusters. It can also be used as the basis for a method to choose the variables
in the clustering model, eliminating variables that are not useful for clustering.[9][10]
Different Gaussian model-based clustering methods have been developed with
an eye to handling high-dimensional data. These include the pgmm method,[11]which is based on the mixture of
factor analyzers model, and the HDclassif method, based on the idea of subspace clustering.[12]
The mixture-of-experts framework extends model-based clustering to include covariates.[13][14]
We illustrate the method with a dataset consisting of three measurements
(glucose, insulin, sspg) on 145 subjects for the purpose of diagnosing
diabetes and the type of diabetes present.[15]The subjects were clinically classified into three groups: normal,
chemical diabetes and overt diabetes, but we use this information only
for evaluating clustering methods, not for classifying subjects.
The BIC plot shows the BIC values for each combination of the number of
clusters,G{\displaystyle G}, and the clustering model from the Table.
Each curve corresponds to a different clustering model.
The BIC favors 3 groups, which corresponds to the clinical assessment.
It also favors the unconstrained covariance model, VVV.
This fits the data well, because the normal patients have low values of
both sspg and insulin, while the distributions of the chemical and
overt diabetes groups are elongated, but in different directions.
Thus the volumes, shapes and orientations of the three groups are clearly
different, and so the unconstrained model is appropriate, as selected
by the model-based clustering method.
The classification plot shows the classification of the subjects by model-based
clustering. The classification was quite accurate, with a 12% error rate
as defined by the clinical classification.
Other well-known clustering methods performed worse with higher
error rates, such assingle-linkage clusteringwith 46%,
average link clustering with 30%,complete-linkage clusteringalso with 30%, andk-means clusteringwith 28%.
Anoutlierin clustering is a data point that does not belong to any of
the clusters. One way of modeling outliers in model-based clustering is
to include an additional mixture component that is very dispersed, with
for example a uniform distribution.[6][16]Another approach is to replace the multivariate
normal densities byt{\displaystyle t}-distributions,[17]with the idea that the long tails of thet{\displaystyle t}-distribution would ensure robustness to outliers.
However, this is not breakdown-robust.[18]A third approach is the "tclust" or data trimming approach[19]which excludes observations identified as
outliers when estimating the model parameters.
Sometimes one or more clusters deviate strongly from the Gaussian assumption.
If a Gaussian mixture is fitted to such data, a strongly non-Gaussian
cluster will often be represented by several mixture components rather than
a single one. In that case, cluster merging can be used to find a better
clustering.[20]A different approach is to use mixtures
of complex component densities to represent non-Gaussian clusters.[21][22]
Clustering multivariate categorical data is most often done using thelatent class model. This assumes that the data arise from a finite
mixture model, where within each cluster the variables are independent.
These arise when variables are of different types, such
as continuous, categorical or ordinal data. Alatent class modelfor
mixed data assumes local independence between the variables.[23]The location model relaxes the local independence
assumption.[24]The clustMD approach assumes that
the observed variables are manifestations of underlying continuous Gaussian
latent variables.[25]
The simplest model-based clustering approach for multivariate
count data is based on finite mixtures with locally independent Poisson
distributions, similar to thelatent class model.
More realistic approaches allow for dependence and overdispersion in the
counts.[26]These include methods based on the multivariate Poisson distribution,
the multivariate Poisson-log normal distribution, the integer-valued
autoregressive (INAR) model and the Gaussian Cox model.
These consist of sequences of categorical values from a finite set of
possibilities, such as life course trajectories.
Model-based clustering approaches include group-based trajectory and
growth mixture models[27]and a distance-based
mixture model.[28]
These arise when individuals rank objects in order of preference. The data
are then ordered lists of objects, arising in voting, education, marketing
and other areas. Model-based clustering methods for rank data include
mixtures ofPlackett-Luce modelsand mixtures of Benter models,[29][30]and mixtures of Mallows models.[31]
These consist of the presence, absence or strength of connections between
individuals or nodes, and are widespread in the social sciences and biology.
The stochastic blockmodel carries out model-based clustering of the nodes
in a network by assuming that there is a latent clustering and that
connections are formed independently given the clustering.[32]The latent position cluster model
assumes that each node occupies a position in an unobserved latent space,
that these positions arise from a mixture of Gaussian distributions,
and that presence or absence of a connection is associated with distance
in the latent space.[33]
Much of the model-based clustering software is in the form of a publicly
and freely availableR package. Many of these are listed in the
CRAN Task View on Cluster Analysis and Finite Mixture Models.[34]The most used such package ismclust,[35][36]which is used to cluster continuous data and has been downloaded over
8 million times.[37]
ThepoLCApackage[38]clusters
categorical data using thelatent class model.
TheclustMDpackage[25]clusters
mixed data, including continuous, binary, ordinal and nominal variables.
Theflexmixpackage[39]does model-based clustering for a range of component distributions.
Themixtoolspackage[40]can cluster
different data types. Bothflexmixandmixtoolsimplement model-based clustering with covariates.
Model-based clustering was first invented in 1950 byPaul Lazarsfeldfor clustering multivariate discrete data, in the form of thelatent class model.[41]
In 1959, Lazarsfeld gave a lecture on latent structure analysis
at the University of California-Berkeley, whereJohn H. Wolfewas an M.A. student.
This led Wolfe to think about how to do the same thing for continuous
data, and in 1965 he did so, proposing the Gaussian mixture model for
clustering.[42][43]He also produced the first software for estimating it, called NORMIX.
Day (1969), working independently, was the first to publish a journal
article on the approach.[44]However, Wolfe deserves credit as the inventor of model-based clustering
for continuous data.
Murtagh and Raftery (1984) developed a model-based clustering method
based on the eigenvalue decomposition of the component covariance matrices.[45]McLachlan and Basford (1988) was the first book on the approach,
advancing methodology and sparking interest.[46]Banfield and Raftery (1993) coined the term "model-based clustering",
introduced the family of parsimonious models,
described an information criterion for
choosing the number of clusters, proposed the uniform model for outliers,
and introduced themclustsoftware.[6]Celeux and Govaert (1995) showed how to perform maximum likelihood estimation
for the models.[7]Thus, by 1995 the core components of the methodology were in place,
laying the groundwork for extensive development since then.
|
https://en.wikipedia.org/wiki/Model-based_clustering
|
Generative topographic map(GTM) is amachine learningmethod that is a probabilistic counterpart of theself-organizing map(SOM), is provably convergent and does not require a shrinkingneighborhoodor a decreasing step size. It is agenerative model: the data is assumed to arise by first probabilistically picking a point in a low-dimensional space, mapping the point to the observed high-dimensional input space (via a smooth function), then adding noise in that space. The parameters of the low-dimensional probability distribution, the smooth map and the noise are all learned from the training data using theexpectation–maximization (EM) algorithm. GTM was introduced in 1996 in a paper byChristopher Bishop, Markus Svensen, and Christopher K. I. Williams.
The approach is strongly related todensity networkswhich useimportance samplingand amulti-layer perceptronto form a non-linearlatent variable model. In the GTM the latent space is a discrete grid of points which is assumed to be non-linearly projected into data space. AGaussian noiseassumption is then made in data space so that the model becomes a constrainedmixture of Gaussians. Then the model's likelihood can be maximized by EM.
In theory, an arbitrary nonlinear parametric deformation could be used. The optimal parameters could be found by gradient descent, etc.
The suggested approach to the nonlinear mapping is to use aradial basis function network(RBF) to create a nonlinear mapping between the latent space and the data space. The nodes of the
RBF network then form afeature spaceand the nonlinear mapping can then be taken as alinear transformof this feature space. This approach has the advantage over the suggested density network approach that it can be optimised analytically.
In data analysis, GTMs are like a nonlinear version ofprincipal components analysis, which allows high-dimensional data to be modelled as resulting from Gaussian noise added to sources in lower-dimensional latent space. For example, to locate stocks in plottable 2D space based on their hi-D time-series shapes. Other applications may want to have fewer sources than data points, for example mixture models.
In generativedeformational modelling, the latent and data spaces have the same dimensions, for example, 2D images or 1D audio sound waves. Extra 'empty' dimensions are added to the source (known as the 'template' in this form of modelling), for example locating the 1D sound wave in 2D space. Further nonlinear dimensions are then added, produced by combining the original dimensions. The enlarged latent space is then projected back into the 1D data space. The probability of a given projection is, as before, given by the product of the likelihood of the data under the Gaussian noise model with the prior on the deformation parameter. Unlike conventional spring-based deformation modelling, this has the advantage of being analytically optimizable. The disadvantage is that it is a 'data-mining' approach, i.e. the shape of the deformation prior is unlikely to be meaningful as an explanation of the possible deformations, as it is based on a very high-dimensional, artificially and arbitrarily constructed nonlinear latent space. For this reason the prior is learned from data rather than created by a human expert, as is possible for spring-based models.
While nodes in theself-organizing map (SOM)can wander around at will, GTM nodes are constrained by the allowable transformations and their probabilities. If the deformations are well-behaved the topology of the latent space is preserved.
The SOM was created as a biological model of neurons and is a heuristic algorithm. By contrast, the GTM has nothing to do with neuroscience or cognition and is a probabilistically principled model. Thus, it has a number of advantages over SOM, namely:
GTM was introduced by Bishop, Svensen and Williams in their Technical Report in 1997 (Technical Report NCRG/96/015, Aston University, UK) published later in Neural Computation. It was also described in thePhDthesis of Markus Svensen (Aston, 1998).
|
https://en.wikipedia.org/wiki/Generative_topographic_map
|
Meta-learning[1][2]is a subfield ofmachine learningwhere automatic learning algorithms are applied tometadataabout machine learning experiments. As of 2017, the term had not found a standard interpretation, however the main goal is to use such metadata to understand how automatic learning can become flexible in solving learning problems, hence to improve the performance of existinglearning algorithmsor to learn (induce) the learning algorithm itself, hence the alternative termlearning to learn.[1]
Flexibility is important because each learning algorithm is based on a set of assumptions about the data, itsinductive bias.[3]This means that it will only learn well if the bias matches the learning problem. A learning algorithm may perform very well in one domain, but not on the next. This poses strong restrictions on the use ofmachine learningordata miningtechniques, since the relationship between the learning problem (often some kind ofdatabase) and the effectiveness of different learning algorithms is not yet understood.
By using different kinds of metadata, like properties of the learning problem, algorithm properties (like performance measures), or patterns previously derived from the data, it is possible to learn, select, alter or combine different learning algorithms to effectively solve a given learning problem. Critiques of meta-learning approaches bear a strong resemblance to the critique ofmetaheuristic, a possibly related problem. A good analogy to meta-learning, and the inspiration forJürgen Schmidhuber's early work (1987)[1]andYoshua Bengioet al.'s work (1991),[4]considers that genetic evolution learns the learning procedure encoded in genes and executed in each individual's brain. In an open-ended hierarchical meta-learning system[1]usinggenetic programming, better evolutionary methods can be learned by meta evolution, which itself can be improved by meta meta evolution, etc.[1]
A proposed definition[5]for a meta-learning system combines three requirements:
Biasrefers to the assumptions that influence the choice of explanatory hypotheses[6]and not the notion of bias represented in thebias-variance dilemma. Meta-learning is concerned with two aspects of learning bias.
There are three common approaches:[8]model-based, metric-based, and optimization-based methods.
Model-based meta-learning models update their parameters rapidly with a few training steps, which can be achieved by their internal architecture or controlled by another meta-learner model.[8]
A Memory-AugmentedNeural Network, or MANN for short, is claimed to be able to encode new information quickly and thus to adapt to new tasks after only a few examples.[9]
Meta Networks (MetaNet) learns a meta-level knowledge across tasks and shifts its inductive biases via fast parameterization for rapid generalization.[10]
The core idea in metric-based meta-learning is similar to that ofnearest neighborsalgorithms, in which the weights are generated by a kernel function. It aims to learn a metric or distance function over objects. The notion of a good metric is problem-dependent. It should represent the relationship between inputs in the task space and facilitate problem solving.[8]
Siamese neural networkis composed of two twin networks whose output is jointly trained. There is a function above to learn the relationship between input data sample pairs. The two networks are the same, sharing the same weight and network parameters.[11]
Matching Networks learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.[12]
The Relation Network (RN), is trained end-to-end from scratch. During meta-learning, it learns to learn a deep distance metric to compare a small number of images within episodes, each of which is designed to simulate the few-shot setting.[13]
Prototypical Networks learn ametric spacein which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve satisfactory results.[14]
Optimization-based meta-learning algorithms adjust theoptimization algorithmso that the model can learn well from only a few examples.[8]
An LSTM-based meta-learner learns the exactoptimization algorithmused to train another learnerneural networkclassifierin the few-shot regime. The parametrization allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner (classifier) network that allows for quick convergence of training.[15]
Model-Agnostic Meta-Learning (MAML) is a fairly generaloptimization algorithm, compatible with any model that learns through gradient descent.[16]
Reptile is a remarkably simple meta-learning optimization algorithm, given that both of its components rely onmeta-optimizationthrough gradient descent and both are model-agnostic.[17]
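A hedged sketch of the Reptile idea in PyTorch follows: run a few steps of ordinary SGD on a sampled task, then move the shared initialization toward the adapted weights. The sine-regression task, network size and learning rates are assumptions for illustration, not the published configuration.

```python
import copy
import math
import random

import torch
import torch.nn as nn

def sample_task(rng):
    # Each task: regress y = sin(x + phase) for a random phase (a common toy setup).
    phase = rng.uniform(0, math.pi)
    def batch(n=20):
        x = torch.rand(n, 1) * 2 * math.pi
        return x, torch.sin(x + phase)
    return batch

model = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
meta_lr, inner_lr, inner_steps = 0.1, 0.01, 5
rng = random.Random(0)

for iteration in range(1000):
    batch = sample_task(rng)
    fast = copy.deepcopy(model)                       # task-specific copy of the initialization
    opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
    for _ in range(inner_steps):                      # inner loop: ordinary SGD on the task
        x, y = batch()
        loss = nn.functional.mse_loss(fast(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():                             # Reptile meta-update: nudge the shared
        for p, q in zip(model.parameters(), fast.parameters()):
            p += meta_lr * (q - p)                    # initialization toward the adapted weights
```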
Some approaches which have been viewed as instances of meta-learning:
|
https://en.wikipedia.org/wiki/Meta-learning_(computer_science)
|
Multivariate statisticsis a subdivision ofstatisticsencompassing the simultaneous observation and analysis of more than oneoutcome variable, i.e.,multivariate random variables.
Multivariate statistics concerns understanding the different aims and background of each of the different forms of multivariate analysis, and how they relate to each other. The practical application of multivariate statistics to a particular problem may involve several types of univariate and multivariate analyses in order to understand the relationships between variables and their relevance to the problem being studied.
In addition, multivariate statistics is concerned with multivariateprobability distributions, in terms of both
Certain types of problems involving multivariate data, for examplesimple linear regressionandmultiple regression, arenotusually considered to be special cases of multivariate statistics because the analysis is dealt with by considering the (univariate) conditional distribution of a single outcome variable given the other variables.
Multivariate analysis(MVA) is based on the principles of multivariate statistics. Typically, MVA is used to address situations where multiple measurements are made on each experimental unit and the relations among these measurements and their structures are important.[1]A modern, overlapping categorization of MVA includes:[1]
Multivariate analysis can be complicated by the desire to include physics-based analysis to calculate the effects of variables for a hierarchical "system-of-systems". Often, studies that wish to use multivariate analysis are stalled by the dimensionality of the problem. These concerns are often eased through the use ofsurrogate models, highly accurate approximations of the physics-based code. Since surrogate models take the form of an equation, they can be evaluated very quickly. This becomes an enabler for large-scale MVA studies: while aMonte Carlo simulationacross the design space is difficult with physics-based codes, it becomes trivial when evaluating surrogate models, which often take the form ofresponse-surfaceequations.
Many different models are used in MVA, each with its own type of analysis:
It is very common that in an experimentally acquired set of data the values of some components of a given data point aremissing. Rather than discarding the whole data point, it is common to "fill in" values for the missing components, a process called "imputation".[6]
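For example, imputation might be sketched with scikit-learn as follows (the toy matrix is an assumption; SimpleImputer fills with column means, while IterativeImputer regresses each variable on the others):

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables IterativeImputer)
from sklearn.impute import IterativeImputer

X = np.array([[1.0, 2.0, np.nan],
              [3.0, np.nan, 6.0],
              [7.0, 8.0, 9.0],
              [np.nan, 5.0, 4.0]])

print(SimpleImputer(strategy="mean").fit_transform(X))      # column-mean filling
print(IterativeImputer(random_state=0).fit_transform(X))    # model-based, round-robin regression
```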
There is a set ofprobability distributionsused in multivariate analyses that play a similar role to the corresponding set of distributions that are used inunivariate analysiswhen thenormal distributionis appropriate to a dataset. These multivariate distributions are:
TheInverse-Wishart distributionis important inBayesian inference, for example inBayesian multivariate linear regression. Additionally,Hotelling's T-squared distributionis a multivariate distribution, generalisingStudent's t-distribution, that is used in multivariatehypothesis testing.
C.R. Raomade significant contributions to multivariate statistical theory throughout his career, particularly in the mid-20th century. One of his key works is the book titled "Advanced Statistical Methods in Biometric Research," published in 1952. This work laid the foundation for many concepts in multivariate statistics.[7]Anderson's 1958 textbook,An Introduction to Multivariate Statistical Analysis,[8]educated a generation of theorists and applied statisticians; Anderson's book emphasizeshypothesis testingvialikelihood ratio testsand the properties ofpower functions:admissibility,unbiasednessandmonotonicity.[9][10]
MVA was formerly discussed solely in the context of statistical theories, due to the size and complexity of underlying datasets and its high computational consumption. With the dramatic growth of computational power, MVA now plays an increasingly important role in data analysis and has wide application inOmicsfields.
There are an enormous number of software packages and other tools for multivariate analysis, including:
|
https://en.wikipedia.org/wiki/Multivariate_analysis
|
Weak supervision(also known assemi-supervised learning) is a paradigm inmachine learning, the relevance and notability of which increased with the advent oflarge language modelsdue to the large amount of data required to train them. It is characterized by using a combination of a small amount of human-labeled data (exclusively used in the more expensive and time-consumingsupervised learningparadigm), followed by a large amount of unlabeled data (used exclusively in theunsupervised learningparadigm). In other words, the desired output values are provided only for a subset of the training data. The remaining data is unlabeled or imprecisely labeled. Intuitively, the learning problem can be seen as an exam and the labeled data as sample problems that the teacher solves for the class as an aid in solving another set of problems. In thetransductivesetting, these unsolved problems act as exam questions. In theinductivesetting, they become practice problems of the sort that will make up the exam.
The acquisition of labeled data for a learning problem often requires a skilled human agent (e.g. to transcribe an audio segment) or a physical experiment (e.g. determining the 3D structure of a protein or determining whether there is oil at a particular location). The cost associated with the labeling process thus may render large, fully labeled training sets infeasible, whereas acquisition of unlabeled data is relatively inexpensive. In such situations, semi-supervised learning can be of great practical value. Semi-supervised learning is also of theoretical interest in machine learning and as a model for human learning.
More formally, semi-supervised learning assumes a set ofl{\displaystyle l}independently identically distributedexamplesx1,…,xl∈X{\displaystyle x_{1},\dots ,x_{l}\in X}with corresponding labelsy1,…,yl∈Y{\displaystyle y_{1},\dots ,y_{l}\in Y}andu{\displaystyle u}unlabeled examplesxl+1,…,xl+u∈X{\displaystyle x_{l+1},\dots ,x_{l+u}\in X}are processed. Semi-supervised learning combines this information to surpass theclassificationperformance that can be obtained either by discarding the unlabeled data and doing supervised learning or by discarding the labels and doing unsupervised learning.
Semi-supervised learning may refer to eithertransductive learningorinductive learning.[1]The goal of transductive learning is to infer the correct labels for the given unlabeled dataxl+1,…,xl+u{\displaystyle x_{l+1},\dots ,x_{l+u}}only. The goal of inductive learning is to infer the correct mapping fromX{\displaystyle X}toY{\displaystyle Y}.
It is unnecessary (and, according toVapnik's principle, imprudent) to perform transductive learning by way of inferring a classification rule over the entire input space; however, in practice, algorithms formally designed for transduction or induction are often used interchangeably.
In order to make any use of unlabeled data, some relationship to the underlying distribution of data must exist. Semi-supervised learning algorithms make use of at least one of the following assumptions:[2]
Points that are close to each other are more likely to share a label.This is also generally assumed in supervised learning and yields a preference for geometrically simpledecision boundaries. In the case of semi-supervised learning, the smoothness assumption additionally yields a preference for decision boundaries in low-density regions, so few points are close to each other but in different classes.[3]
The data tend to form discrete clusters, and points in the same cluster are more likely to share a label(although data that shares a label may spread across multiple clusters). This is a special case of the smoothness assumption and gives rise tofeature learningwith clustering algorithms.
The data lie approximately on amanifoldof much lower dimension than the input space.In this case learning the manifold using both the labeled and unlabeled data can avoid thecurse of dimensionality. Then learning can proceed using distances and densities defined on the manifold.
The manifold assumption is practical when high-dimensional data are generated by some process that may be hard to model directly, but which has only a few degrees of freedom. For instance, human voice is controlled by a few vocal folds,[4]and images of various facial expressions are controlled by a few muscles. In these cases, it is better to consider distances and smoothness in the natural space of the generating problem, rather than in the space of all possible acoustic waves or images, respectively.
The heuristic approach ofself-training(also known asself-learningorself-labeling) is historically the oldest approach to semi-supervised learning,[2]with examples of applications starting in the 1960s.[5]
The transductive learning framework was formally introduced byVladimir Vapnikin the 1970s.[6]Interest in inductive learning using generative models also began in the 1970s. Aprobably approximately correctlearningbound for semi-supervised learning of aGaussianmixture was demonstrated by Ratsaby and Venkatesh in 1995.[7]
Generative approaches to statistical learning first seek to estimatep(x|y){\displaystyle p(x|y)}, the distribution of data points belonging to each class. The probabilityp(y|x){\displaystyle p(y|x)}that a given pointx{\displaystyle x}has labely{\displaystyle y}is then proportional top(x|y)p(y){\displaystyle p(x|y)p(y)}byBayes' rule. Semi-supervised learning withgenerative modelscan be viewed either as an extension of supervised learning (classification plus information aboutp(x){\displaystyle p(x)}) or as an extension of unsupervised learning (clustering plus some labels).
Generative models assume that the distributions take some particular formp(x|y,θ){\displaystyle p(x|y,\theta )}parameterized by the vectorθ{\displaystyle \theta }. If these assumptions are incorrect, the unlabeled data may actually decrease the accuracy of the solution relative to what would have been obtained from labeled data alone.[8]However, if the assumptions are correct, then the unlabeled data necessarily improves performance.[7]
The unlabeled data are distributed according to a mixture of individual-class distributions. In order to learn the mixture distribution from the unlabeled data, it must be identifiable, that is, different parameters must yield different summed distributions. Gaussian mixture distributions are identifiable and commonly used for generative models.
The parameterizedjoint distributioncan be written asp(x,y|θ)=p(y|θ)p(x|y,θ){\displaystyle p(x,y|\theta )=p(y|\theta )p(x|y,\theta )}by using thechain rule. Each parameter vectorθ{\displaystyle \theta }is associated with a decision functionfθ(x)=argmaxyp(y|x,θ){\displaystyle f_{\theta }(x)={\underset {y}{\operatorname {argmax} }}\ p(y|x,\theta )}.
The parameter is then chosen based on fit to both the labeled and unlabeled data, weighted byλ{\displaystyle \lambda }:
Another major class of methods attempts to place boundaries in regions with few data points (labeled or unlabeled). One of the most commonly used algorithms is thetransductive support vector machine, or TSVM (which, despite its name, may be used for inductive learning as well). Whereassupport vector machinesfor supervised learning seek a decision boundary with maximalmarginover the labeled data, the goal of TSVM is a labeling of the unlabeled data such that the decision boundary has maximal margin over all of the data. In addition to the standardhinge loss(1−yf(x))+{\displaystyle (1-yf(x))_{+}}for labeled data, a loss function(1−|f(x)|)+{\displaystyle (1-|f(x)|)_{+}}is introduced over the unlabeled data by lettingy=signf(x){\displaystyle y=\operatorname {sign} {f(x)}}. TSVM then selectsf∗(x)=h∗(x)+b{\displaystyle f^{*}(x)=h^{*}(x)+b}from areproducing kernel Hilbert spaceH{\displaystyle {\mathcal {H}}}by minimizing theregularizedempirical risk:
An exact solution is intractable due to the non-convexterm(1−|f(x)|)+{\displaystyle (1-|f(x)|)_{+}}, so research focuses on useful approximations.[9]
Other approaches that implement low-density separation include Gaussian process models, information regularization, and entropy minimization (of which TSVM is a special case).
Laplacian regularization has historically been approached through the graph Laplacian.
Graph-based methods for semi-supervised learning use a graph representation of the data, with a node for each labeled and unlabeled example. The graph may be constructed using domain knowledge or similarity of examples; two common methods are to connect each data point to itsk{\displaystyle k}nearest neighbors or to examples within some distanceϵ{\displaystyle \epsilon }. The weightWij{\displaystyle W_{ij}}of an edge betweenxi{\displaystyle x_{i}}andxj{\displaystyle x_{j}}is then set toe−‖xi−xj‖2/ϵ2{\displaystyle e^{-\|x_{i}-x_{j}\|^{2}/\epsilon ^{2}}}.
Within the framework ofmanifold regularization,[10][11]the graph serves as a proxy for the manifold. A term is added to the standardTikhonov regularizationproblem to enforce smoothness of the solution relative to the manifold (in the intrinsic space of the problem) as well as relative to the ambient input space. The minimization problem becomes
whereH{\displaystyle {\mathcal {H}}}is a reproducing kernelHilbert spaceandM{\displaystyle {\mathcal {M}}}is the manifold on which the data lie. The regularization parametersλA{\displaystyle \lambda _{A}}andλI{\displaystyle \lambda _{I}}control smoothness in the ambient and intrinsic spaces respectively. The graph is used to approximate the intrinsic regularization term. Defining thegraph LaplacianL=D−W{\displaystyle L=D-W}whereDii=∑j=1l+uWij{\displaystyle D_{ii}=\sum _{j=1}^{l+u}W_{ij}}andf{\displaystyle \mathbf {f} }is the vector[f(x1)…f(xl+u)]{\displaystyle [f(x_{1})\dots f(x_{l+u})]}, we have
The graph-based approach to Laplacian regularization can be put in relation with thefinite difference method.
The Laplacian can also be used to extend the supervised learning algorithms:regularized least squaresand support vector machines (SVM) to semi-supervised versions Laplacian regularized least squares and Laplacian SVM.
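An illustrative sketch of the graph-based approach uses scikit-learn's LabelSpreading, which builds an RBF-weighted similarity graph like the edge weights described above and propagates the few known labels across it (the two-moons data and parameter values are assumptions for the example):

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.semi_supervised import LabelSpreading

X, y_true = make_moons(n_samples=300, noise=0.08, random_state=0)
y = np.full_like(y_true, -1)                              # -1 marks unlabeled points
labeled_idx = np.random.default_rng(0).choice(len(y), size=6, replace=False)
y[labeled_idx] = y_true[labeled_idx]                      # keep only a handful of labels

model = LabelSpreading(kernel="rbf", gamma=20).fit(X, y)  # propagate labels over the graph
accuracy = (model.transduction_ == y_true).mean()
print(f"transductive accuracy with 6 labels: {accuracy:.2f}")
```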
Some methods for semi-supervised learning are not intrinsically geared to learning from both unlabeled and labeled data, but instead make use of unlabeled data within a supervised learning framework. For instance, the labeled and unlabeled examplesx1,…,xl+u{\displaystyle x_{1},\dots ,x_{l+u}}may inform a choice of representation,distance metric, orkernelfor the data in an unsupervised first step. Then supervised learning proceeds from only the labeled examples. In this vein, some methods learn a low-dimensional representation using the supervised data and then apply either low-density separation or graph-based methods to the learned representation.[12][13]Iteratively refining the representation and then performing semi-supervised learning on said representation may further improve performance.
Self-trainingis a wrapper method for semi-supervised learning.[14]First a supervised learning algorithm is trained based on the labeled data only. This classifier is then applied to the unlabeled data to generate more labeled examples as input for the supervised learning algorithm. Generally only the labels the classifier is most confident in are added at each step.[15]In natural language processing, a common self-training algorithm is theYarowsky algorithmfor problems like word sense disambiguation, accent restoration, and spelling correction.[16]
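A minimal sketch of the self-training wrapper uses scikit-learn's SelfTrainingClassifier (the synthetic data, base classifier and confidence threshold are assumptions for illustration):

```python
# A base classifier is fit on the labeled points, then its most confident
# predictions on unlabeled points are added as pseudo-labels and it is refit.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.semi_supervised import SelfTrainingClassifier

X, y_true = make_classification(n_samples=400, n_informative=4, random_state=0)
y = y_true.copy()
rng = np.random.default_rng(0)
y[rng.random(len(y)) < 0.9] = -1              # hide 90% of the labels (-1 = unlabeled)

base = SVC(probability=True, gamma="auto")     # base learner must expose class probabilities
clf = SelfTrainingClassifier(base, threshold=0.9).fit(X, y)
print("accuracy on all points:", (clf.predict(X) == y_true).mean())
```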
Co-trainingis an extension of self-training in which multiple classifiers are trained on different (ideally disjoint) sets of features and generate labeled examples for one another.[17]
Human responses to formal semi-supervised learning problems have yielded varying conclusions about the degree of influence of the unlabeled data.[18]More natural learning problems may also be viewed as instances of semi-supervised learning. Much of humanconcept learninginvolves a small amount of direct instruction (e.g. parental labeling of objects during childhood) combined with large amounts of unlabeled experience (e.g. observation of objects without naming or counting them, or at least without feedback).
Human infants are sensitive to the structure of unlabeled natural categories such as images of dogs and cats or male and female faces.[19]Infants and children take into account not only unlabeled examples, but thesamplingprocess from which labeled examples arise.[20][21]
|
https://en.wikipedia.org/wiki/Weak_supervision
|
Automatic taxonomy construction(ATC) is the use of software programs to generate taxonomical classifications from a body of texts called acorpus. ATC is a branch ofnatural language processing, which in turn is a branch ofartificial intelligence.
Ataxonomy(or taxonomical classification) is ascheme of classification, especially, a hierarchical classification, in which things are organized into groups or types.[1][2][3][4][5][6]Among other things, a taxonomy can be used to organize and index knowledge (stored as documents, articles, videos, etc.), such as in the form of alibrary classification system, or asearch engine taxonomy, so that users can more easily find the information they are searching for. Many taxonomies arehierarchies(and thus, have an intrinsictree structure), but not all are.
Manually developing and maintaining a taxonomy is a labor-intensive task requiring significant time and resources, including familiarity with or expertise in the taxonomy's domain (scope, subject, or field), which drives the costs and limits the scope of such projects. Also, domain modelers have their own points of view which inevitably, even if unintentionally, work their way into the taxonomy. ATC uses artificial intelligence techniques to quickly and automatically generate a taxonomy for a domain in order to avoid these problems and remove these limitations.
There are several approaches to ATC. One approach is to use rules to detect patterns in the corpus and use those patterns to infer relations such ashyponymy. Other approaches usemachine learningtechniques such asBayesian inferencingandArtificial Neural Networks.[7]
One approach to building a taxonomy is to automatically gather the keywords from a domain usingkeyword extraction, then analyze the relationships between them (seeHyponymy, below), and then arrange them as a taxonomy based on those relationships.
In ATC programs, one of the most important tasks is the discovery of hypernym and hyponym relations among words. One way to do that from a body of text is to search for certain phrases like "is a" and "such as".
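A toy sketch of this pattern-based approach, matching the "such as" construction with a regular expression; the pattern, function name, and example sentence are illustrative only, and real systems use far richer patterns and parsing.

```python
import re

# Hearst-style pattern: "<hypernym> such as <hyponym>(, <hyponym>)* (and|or) <hyponym>"
PATTERN = re.compile(r"(\w+)\s+such as\s+((?:\w+,\s*)*\w+(?:\s+(?:and|or)\s+\w+)?)", re.IGNORECASE)

def extract_hyponyms(text):
    """Return (hypernym, hyponym) pairs found by the 'such as' pattern."""
    pairs = []
    for hypernym, tail in PATTERN.findall(text):
        hyponyms = re.split(r",\s*|\s+(?:and|or)\s+", tail)
        pairs.extend((hypernym, h) for h in hyponyms if h)
    return pairs

print(extract_hyponyms("The shop sells instruments such as guitars, violins and cellos."))
# [('instruments', 'guitars'), ('instruments', 'violins'), ('instruments', 'cellos')]
```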
In linguistics, is-a relations are called hyponymy. Words that describe categories are called hypernyms and words that are examples of categories are hyponyms. For example, dog is a hypernym and Fido is one of its hyponyms. A word can be both a hyponym and a hypernym. So, dog is a hyponym of mammal and also a hypernym of Fido.
Taxonomies are often represented as is-a hierarchies where each level is more specific than (in mathematical language "a subset of") the level above it. For example, a basic biology taxonomy would have concepts such as mammal, which is a subset of animal, and dogs and cats, which are subsets of mammal. This kind of taxonomy is called an is-a model because the specific objects are considered instances of a concept. For example, Fido is-a instance of the concept dog and Fluffy is-a cat.[8]
ATC can be used to buildtaxonomies for search engines, to improve search results.
ATC systems are a key component ofontology learning(also known as automatic ontology construction), and have been used to automatically generate largeontologiesfor domains such as insurance and finance. They have also been used to enhance existing large networks such asWordnetto make them more complete and consistent.[9][10][11]
Other names for automatic taxonomy construction include:
|
https://en.wikipedia.org/wiki/Automatic_taxonomy_construction
|
Ininformation science, anontologyencompasses a representation, formal naming, and definitions of the categories, properties, and relations between the concepts, data, or entities that pertain to one, many, or alldomains of discourse. More simply, an ontology is a way of showing the properties of a subject area and how they are related, by defining a set of terms and relational expressions that represent the entities in that subject area. The field which studies ontologies so conceived is sometimes referred to asapplied ontology.[1]
Everyacademic disciplineor field, in creating its terminology, thereby lays the groundwork for an ontology. Each uses ontological assumptions to frame explicit theories, research and applications. Improved ontologies may improve problem solving within that domain,interoperabilityof data systems, and discoverability of data. Translating research papers within every field is a problem made easier when experts from different countries maintain acontrolled vocabularyofjargonbetween each of their languages.[2]For instance, thedefinition and ontology of economicsis a primary concern inMarxist economics,[3]but also in othersubfields of economics.[4]An example of economics relying on information science occurs in cases where a simulation or model is intended to enable economic decisions, such as determining whatcapital assetsare at risk and by how much (seerisk management).
What ontologies in bothinformation scienceandphilosophyhave in common is the attempt to represent entities, including both objects and events, with all their interdependent properties and relations, according to a system of categories. In both fields, there is considerable work on problems ofontology engineering(e.g.,QuineandKripkein philosophy,SowaandGuarinoin information science),[5]and debates concerning to what extentnormativeontology is possible (e.g.,foundationalismandcoherentismin philosophy,BFOandCycin artificial intelligence).
Applied ontology is considered by some as a successor to prior work in philosophy. However, many current efforts are more concerned with establishing controlled vocabularies of narrow domains than with philosophical first principles, or with questions such as the mode of existence of fixed essences or whether enduring objects (e.g., perdurantism and endurantism) may be ontologically more primary than processes. Artificial intelligence has retained considerable attention regarding applied ontology in subfields like natural language processing within machine translation and knowledge representation, but ontology editors are now often used in a range of fields, including biomedical informatics[6] and industry.[7] Such efforts often use ontology editing tools such as Protégé.[8]
Ontologyis a branch ofphilosophyand intersects areas such asmetaphysics,epistemology, andphilosophy of language, as it considers how knowledge, language, and perception relate to the nature of reality.Metaphysicsdeals with questions like "what exists?" and "what is the nature of reality?". One of five traditional branches of philosophy, metaphysics is concerned with exploring existence through properties, entities and relations such as those betweenparticularsanduniversals,intrinsic and extrinsic properties, oressenceandexistence. Metaphysics has been an ongoing topic of discussion since recorded history.
Thecompoundwordontologycombinesonto-, from theGreekὄν,on(gen.ὄντος,ontos), i.e. "being; that which is", which is thepresentparticipleof theverbεἰμί,eimí, i.e. "to be, I am", and-λογία,-logia, i.e. "logical discourse", seeclassical compoundsfor this type of word formation.[9][10]
While theetymologyis Greek, the oldest extant record of the word itself, theNeo-Latinformontologia, appeared in 1606 in the workOgdoas ScholasticabyJacob Lorhard(Lorhardus) and in 1613 in theLexicon philosophicumbyRudolf Göckel(Goclenius).[11]
The first occurrence in English ofontologyas recorded by theOED(Oxford English Dictionary, online edition, 2008) came inArcheologia Philosophica NovaorNew Principles of PhilosophybyGideon Harvey.
Since the mid-1970s, researchers in the field ofartificial intelligence(AI) have recognized thatknowledge engineeringis the key to building large and powerful AI systems[citation needed]. AI researchers argued that they could create new ontologies ascomputational modelsthat enable certain kinds ofautomated reasoning, which was onlymarginally successful. In the 1980s, the AI community began to use the termontologyto refer to both a theory of a modeled world and a component ofknowledge-based systems. In particular, David Powers introduced the wordontologyto AI to refer to real world or robotic grounding,[12][13]publishing in 1990 literature reviews emphasizing grounded ontology in association with the call for papers for a AAAI Summer Symposium Machine Learning of Natural Language and Ontology, with an expanded version published in SIGART Bulletin and included as a preface to the proceedings.[14]Some researchers, drawing inspiration from philosophical ontologies, viewed computational ontology as a kind of applied philosophy.[15]
In 1993, the widely cited web page and paper "Toward Principles for the Design of Ontologies Used for Knowledge Sharing" byTom Gruber[16]usedontologyas a technical term incomputer scienceclosely related to earlier idea ofsemantic networksandtaxonomies. Gruber introduced the term asa specification of a conceptualization:
An ontology is a description (like a formal specification of a program) of the concepts and relationships that can formally exist for an agent or a community of agents. This definition is consistent with the usage of ontology as set of concept definitions, but more general. And it is a different sense of the word than its use in philosophy.[17]
Attempting to distance ontologies from taxonomies and similar efforts inknowledge modelingthat rely onclassesandinheritance, Gruber stated (1993):
Ontologies are often equated with taxonomic hierarchies of classes, class definitions, and the subsumption relation, but ontologies need not be limited to these forms. Ontologies are also not limited toconservative definitions, that is, definitions in the traditional logic sense that only introduce terminology and do not add any knowledge about the world (Enderton, 1972). To specify a conceptualization, one needs to state axioms thatdoconstrain the possible interpretations for the defined terms.[16]
Recent experimental ontology frameworks have also explored resonance-based AI-human co-evolution structures, such as IAMF (Illumination AI Matrix Framework). Though not yet widely adopted in academic discourse, such models propose phased approaches to ethical harmonization and structural emergence.[18]
As a refinement of Gruber's definition, Feilmayr and Wöß (2016) stated: "An ontology is a formal, explicit specification of a shared conceptualization that is characterized by high semantic expressiveness required for increased complexity."[19]
Contemporary ontologies share many structural similarities, regardless of the language in which they are expressed. Most ontologies describe individuals (instances), classes (concepts), attributes and relations.
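As a rough sketch of these building blocks, the following uses the rdflib library to assert a tiny class hierarchy with one individual and a label; the example.org namespace and the chosen terms are illustrative only.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/ontology#")   # hypothetical namespace for the example

g = Graph()
g.bind("ex", EX)

# A tiny class hierarchy (concepts) with one individual (instance) and an attribute.
g.add((EX.Mammal, RDF.type, RDFS.Class))
g.add((EX.Dog, RDF.type, RDFS.Class))
g.add((EX.Dog, RDFS.subClassOf, EX.Mammal))      # Dog is-a Mammal
g.add((EX.Fido, RDF.type, EX.Dog))               # Fido is an instance of Dog
g.add((EX.Fido, RDFS.label, Literal("Fido")))    # an attribute of the individual

print(g.serialize(format="turtle"))
```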
A domain ontology (or domain-specific ontology) represents concepts which belong to a realm of the world, such as biology or politics. Each domain ontology typically models domain-specific definitions of terms. For example, the wordcardhas many different meanings. An ontology about the domain ofpokerwould model the "playing card" meaning of the word, while an ontology about the domain ofcomputer hardwarewould model the "punched card" and "video card" meanings.
Since domain ontologies are written by different people, they represent concepts in very specific and unique ways, and are often incompatible within the same project. As systems that rely on domain ontologies expand, they often need to merge domain ontologies by hand-tuning each entity or using a combination of software merging and hand-tuning. This presents a challenge to the ontology designer. Different ontologies in the same domain arise due to different languages, different intended usage of the ontologies, and different perceptions of the domain (based on cultural background, education, ideology, etc.)[citation needed].
At present, merging ontologies that are not developed from a common upper ontology is a largely manual process and therefore time-consuming and expensive. Domain ontologies that use the same upper ontology to provide a set of basic elements with which to specify the meanings of the domain ontology entities can be merged with less effort. There are studies on generalized techniques for merging ontologies,[20] but this area of research is still ongoing, and it is only recently that the issue has been sidestepped by having multiple domain ontologies use the same upper ontology, as in the OBO Foundry.
An upper ontology (or foundation ontology) is a model of the commonly shared relations and objects that are generally applicable across a wide range of domain ontologies. It usually employs acore glossarythat overarches the terms and associated object descriptions as they are used in various relevant domain ontologies.
Standardized upper ontologies available for use includeBFO,BORO method,Dublin Core,GFO,Cyc,SUMO,UMBEL, andDOLCE.[21][22]WordNethas been considered an upper ontology by some and has been used as a linguistic tool for learning domain ontologies.[23]
TheGellishontology is an example of a combination of an upper and a domain ontology.
A survey of ontology visualization methods is presented by Katifori et al.[24]An updated survey of ontology visualization methods and tools was published by Dudás et al.[25]The most established ontology visualization methods, namely indented tree and graph visualization are evaluated by Fu et al.[26]A visual language for ontologies represented inOWLis specified by theVisual Notation for OWL Ontologies (VOWL).[27]
Ontology engineering (also called ontology building) is a set of tasks related to the development of ontologies for a particular domain.[28]It is a subfield ofknowledge engineeringthat studies the ontology development process, the ontology life cycle, the methods and methodologies for building ontologies, and the tools and languages that support them.[29][30]
Ontology engineering aims to make explicit the knowledge contained in software applications, and organizational procedures for a particular domain. Ontology engineering offers a direction for overcoming semantic obstacles, such as those related to the definitions of business terms and software classes. Known challenges with ontology engineering include:
Ontology editorsare applications designed to assist in the creation or manipulation of ontologies. It is common for ontology editors to use one or moreontology languages.
Aspects of ontology editors include: visual navigation possibilities within theknowledge model,inference enginesandinformation extraction; support for modules; the import and export of foreignknowledge representationlanguages forontology matching; and the support of meta-ontologies such asOWL-S,Dublin Core, etc.[31]
Ontology learning is the automatic or semi-automatic creation of ontologies, including extracting a domain's terms from natural language text. As building ontologies manually is extremely labor-intensive and time-consuming, there is great motivation to automate the process. Information extraction andtext mininghave been explored to automatically link ontologies to documents, for example in the context of the BioCreative challenges.[32]
Epistemological assumptions, which in research ask "What do you know?" or "How do you know it?", create the foundation researchers use when approaching a certain topic or area for potential research. As epistemology is directly linked to knowledge and how we come to accept certain truths, individuals conducting academic research must understand what allows them to begin theory building. Simply put, epistemological assumptions force researchers to question how they arrive at the knowledge they have.[citation needed]
Anontology languageis aformal languageused to encode an ontology. There are a number of such languages for ontologies, both proprietary and standards-based:
The W3CLinking Open Data community projectcoordinates attempts to converge different ontologies into worldwideSemantic Web.
The development of ontologies has led to the emergence of services providing lists or directories of ontologies called ontology libraries.
The following are libraries of human-selected ontologies.
The following are both directories and search engines.
In general, ontologies can be used beneficially in several fields.
|
https://en.wikipedia.org/wiki/Ontology_(information_science)#Domain_ontology
|
Natural language understanding(NLU) ornatural language interpretation(NLI)[1]is a subset ofnatural language processinginartificial intelligencethat deals with machinereading comprehension. NLU has been considered anAI-hardproblem.[2]
There is considerable commercial interest in the field because of its application toautomated reasoning,[3]machine translation,[4]question answering,[5]news-gathering,text categorization,voice-activation, archiving, and large-scalecontent analysis.
The programSTUDENT, written in 1964 byDaniel Bobrowfor his PhD dissertation atMIT, is one of the earliest known attempts at NLU by a computer.[6][7][8][9][10]Eight years afterJohn McCarthycoined the termartificial intelligence, Bobrow's dissertation (titledNatural Language Input for a Computer Problem Solving System) showed how a computer could understand simple natural language input to solve algebra word problems.
A year later, in 1965,Joseph Weizenbaumat MIT wroteELIZA, an interactive program that carried on a dialogue in English on any topic, the most popular being psychotherapy. ELIZA worked by simple parsing and substitution of key words into canned phrases and Weizenbaum sidestepped the problem of giving the program adatabaseof real-world knowledge or a richlexicon. Yet ELIZA gained surprising popularity as a toy project and can be seen as a very early precursor to current commercial systems such as those used byAsk.com.[11]
In 1969,Roger SchankatStanford Universityintroduced theconceptual dependency theoryfor NLU.[12]This model, partially influenced by the work ofSydney Lamb, was extensively used by Schank's students atYale University, such asRobert Wilensky,Wendy Lehnert, andJanet Kolodner.
In 1970,William A. Woodsintroduced theaugmented transition network(ATN) to represent natural language input.[13]Instead ofphrase structure rulesATNs used an equivalent set offinite-state automatathat were called recursively. ATNs and their more general format called "generalized ATNs" continued to be used for a number of years.
In 1971,Terry Winogradfinished writingSHRDLUfor his PhD thesis at MIT. SHRDLU could understand simple English sentences in a restricted world of children's blocks to direct a robotic arm to move items. The successful demonstration of SHRDLU provided significant momentum for continued research in the field.[14][15]Winograd continued to be a major influence in the field with the publication of his bookLanguage as a Cognitive Process.[16]At Stanford, Winograd would later adviseLarry Page, who co-foundedGoogle.
In the 1970s and 1980s, the natural language processing group atSRI Internationalcontinued research and development in the field. A number of commercial efforts based on the research were undertaken,e.g., in 1982Gary HendrixformedSymantec Corporationoriginally as a company for developing a natural language interface for database queries on personal computers. However, with the advent of mouse-drivengraphical user interfaces, Symantec changed direction. A number of other commercial efforts were started around the same time,e.g., Larry R. Harris at the Artificial Intelligence Corporation and Roger Schank and his students at Cognitive Systems Corp.[17][18]In 1983, Michael Dyer developed the BORIS system at Yale which bore similarities to the work of Roger Schank and W. G. Lehnert.[19]
The third millennium saw the introduction of systems using machine learning for text classification, such as the IBMWatson. However, experts debate how much "understanding" such systems demonstrate:e.g., according toJohn Searle, Watson did not even understand the questions.[20]
John Ball, cognitive scientist and inventor of thePatom Theory, supports this assessment. Natural language processing has made inroads for applications to support human productivity in service and e-commerce, but this has largely been made possible by narrowing the scope of the application. There are thousands of ways to request something in a human language that still defies conventional natural language processing.[citation needed]According to Wibe Wagemans, "To have a meaningful conversation with machines is only possible when we match every word to the correct meaning based on the meanings of the other words in the sentence – just like a 3-year-old does without guesswork."[21]
The umbrella term "natural language understanding" can be applied to a diverse set of computer applications, ranging from small, relatively simple tasks such as short commands issued torobots, to highly complex endeavors such as the full comprehension of newspaper articles or poetry passages. Many real-world applications fall between the two extremes, for instancetext classificationfor the automatic analysis of emails and their routing to a suitable department in a corporation does not require an in-depth understanding of the text,[22]but needs to deal with a much larger vocabulary and more diverse syntax than the management of simple queries to database tables with fixed schemata.
Throughout the years various attempts at processing natural language orEnglish-likesentences presented to computers have taken place at varying degrees of complexity. Some attempts have not resulted in systems with deep understanding, but have helped overall system usability. For example,Wayne Ratlifforiginally developed theVulcanprogram with an English-like syntax to mimic the English speaking computer inStar Trek. Vulcan later became thedBasesystem whose easy-to-use syntax effectively launched the personal computer database industry.[23][24]Systems with an easy to use or English-like syntax are, however, quite distinct from systems that use a richlexiconand include an internalrepresentation(often asfirst order logic) of the semantics of natural language sentences.
Hence the breadth and depth of "understanding" aimed at by a system determine both the complexity of the system (and the implied challenges) and the types of applications it can deal with. The "breadth" of a system is measured by the sizes of its vocabulary and grammar. The "depth" is measured by the degree to which its understanding approximates that of a fluent native speaker. At the narrowest and shallowest,English-likecommand interpreters require minimal complexity, but have a small range of applications. Narrow but deep systems explore and model mechanisms of understanding,[25]but they still have limited application. Systems that attempt to understand the contents of a document such as a news release beyond simple keyword matching and to judge its suitability for a user are broader and require significant complexity,[26]but they are still somewhat shallow. Systems that are both very broad and very deep are beyond the current state of the art.
Regardless of the approach used, most NLU systems share some common components. The system needs alexiconof the language and aparserandgrammarrules to break sentences into an internal representation. The construction of a rich lexicon with a suitableontologyrequires significant effort,e.g., theWordnetlexicon required many person-years of effort.[27]
The system also needs theory fromsemanticsto guide the comprehension. The interpretation capabilities of a language-understanding system depend on the semantic theory it uses. Competing semantic theories of language have specific trade-offs in their suitability as the basis of computer-automated semantic interpretation.[28]These range fromnaive semanticsorstochastic semantic analysisto the use ofpragmaticsto derive meaning from context.[29][30][31]Semantic parsersconvert natural-language texts into formal meaning representations.[32]
Advanced applications of NLU also attempt to incorporate logicalinferencewithin their framework. This is generally achieved by mapping the derived meaning into a set of assertions inpredicate logic, then usinglogical deductionto arrive at conclusions. Therefore, systems based on functional languages such asLispneed to include a subsystem to represent logical assertions, while logic-oriented systems such as those using the languageProloggenerally rely on an extension of the built-in logical representation framework.[33][34]
The management ofcontextin NLU can present special challenges. A large variety of examples and counter examples have resulted in multiple approaches to theformal modelingof context, each with specific strengths and weaknesses.[35][36]
|
https://en.wikipedia.org/wiki/Natural_language_understanding
|
Incomputer science,canonicalization(sometimesstandardizationornormalization) is a process for convertingdatathat has more than one possible representation into a "standard", "normal", orcanonical form. This can be done to compare different representations for equivalence, to count the number of distinct data structures, to improve the efficiency of variousalgorithmsby eliminating repeated calculations, or to make it possible to impose a meaningfulsortingorder.
Files infile systemsmay in most cases be accessed through multiplefilenames. For instance inUnix-like systems, the string "/./" can be replaced by "/". In theC standard library, the functionrealpath()performs this task. Other operations performed by this function to canonicalize filenames are the handling of/..components referring to parent directories, simplification of sequences of multiple slashes, removal of trailing slashes, and the resolution ofsymbolic links.
Canonicalization of filenames is important for computer security. For example, a web server may have a restriction that only files under the cgi directory C:\inetpub\wwwroot\cgi-bin may be executed. This rule is enforced by checking that the path starts with C:\inetpub\wwwroot\cgi-bin\ and only then executing it. While the file C:\inetpub\wwwroot\cgi-bin\..\..\..\Windows\System32\cmd.exe initially appears to be in the cgi directory, it exploits the .. path specifier to traverse back up the directory hierarchy in an attempt to execute a file outside of cgi-bin. Permitting cmd.exe to execute would be an error caused by a failure to canonicalize the filename to the simplest representation, C:\Windows\System32\cmd.exe, and is called a directory traversal vulnerability. With the path canonicalized, it is clear the file should not be executed.
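A minimal sketch of the canonicalize-then-check pattern, using Python's os.path.realpath; the restricted directory here is hypothetical.

```python
import os

ALLOWED_DIR = os.path.realpath("/srv/www/cgi-bin")   # hypothetical restricted directory

def is_allowed(requested_path):
    """Canonicalize first, then check the prefix, to avoid directory traversal."""
    canonical = os.path.realpath(requested_path)      # collapses ".", ".." and extra slashes
    return canonical.startswith(ALLOWED_DIR + os.sep)

# A naive prefix check on the raw string would accept this path; the
# canonicalized form "/etc/passwd" is correctly rejected.
print(is_allowed("/srv/www/cgi-bin/../../../etc/passwd"))   # False
```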
In Unicode, many accented letters can be represented in more than one way. For example, é can be represented in Unicode as the Unicode character U+0065 (LATIN SMALL LETTER E) followed by the character U+0301 (COMBINING ACUTE ACCENT), but it can also be represented as the precomposed character U+00E9 (LATIN SMALL LETTER E WITH ACUTE). This makes string comparison more complicated, since every possible representation of a string containing such glyphs must be considered. To deal with this, Unicode provides the mechanism of canonical equivalence. In this context, canonicalization is Unicode normalization.
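A short illustration using Python's standard unicodedata module:

```python
import unicodedata

decomposed = "e\u0301"          # 'e' followed by COMBINING ACUTE ACCENT
precomposed = "\u00e9"          # 'é' as a single precomposed character

print(decomposed == precomposed)                                # False
print(unicodedata.normalize("NFC", decomposed) == precomposed)  # True
print(unicodedata.normalize("NFD", precomposed) == decomposed)  # True
```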
Variable-width encodingsin the Unicode standard, in particularUTF-8, may cause an additional need for canonicalization in some situations. Namely, by the standard, in UTF-8 there is only one valid byte sequence for any Unicode character,[1]but some byte sequences are invalid, i.e., they cannot be obtained by encoding any string of Unicode characters into UTF-8. Some sloppy decoder implementations may accept invalid byte sequences as input and produce a valid Unicode character as output for such a sequence. If one uses such a decoder, some Unicode characters effectively have more than one corresponding byte sequence: the valid one and some invalid ones. This could lead to security issues similar to the one described in the previous section. Therefore, if one wants to apply some filter (e.g., a regular expression written in UTF-8) to UTF-8 strings that will later be passed to a decoder that allows invalid byte sequences, one should canonicalize the strings before passing them to the filter. In this context, canonicalization is the process of translating every string character to its single valid byte sequence. An alternative to canonicalization is to reject any strings containing invalid byte sequences.
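A short illustration of strict decoding in Python, which rejects such invalid byte sequences outright:

```python
# "é" has exactly one valid UTF-8 encoding; overlong or malformed sequences are invalid.
valid = "é".encode("utf-8")      # b'\xc3\xa9'
invalid = b"\xc0\xa9"            # overlong two-byte sequence, not valid UTF-8

print(valid.decode("utf-8"))     # é
try:
    invalid.decode("utf-8")      # a strict decoder refuses the invalid sequence
except UnicodeDecodeError as exc:
    print("rejected:", exc.reason)
```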
Acanonical URLis aURLfor defining thesingle source of truthforduplicate content.
A canonical URL is the URL of the page that Google thinks is most representative from a set of duplicate pages on your site. For example, if you have URLs for the same page, such ashttps://example.com/?dress=1234andhttps://example.com/dresses/1234, Google chooses one as canonical. Note that the pages do not need to be absolutely identical; minor changes in sorting or filtering of list pages do not make the page unique (for example, sorting by price or filtering by item color).
The canonical can be in a different domain than a duplicate.[2]
With the help of canonical URLs, a search engine knows which link should be provided in a query result.
Acanonical link elementcan get used to define a canonical URL.
In intranets, manual searching for information is predominant. In this case, canonical URLs can also be defined in a non-machine-readable form, for example in a guideline.
Canonical URLs are usually the URLs that are used for the share action.
Since the canonical URL is the one shown in search engine results, it is in most cases a landing page.
In web search andsearch engine optimization(SEO),URL canonicalizationdeals with web content that has more than one possible URL. Having multiple URLs for the same web content can cause problems for search engines - specifically in determining which URL should be shown in search results.[3]Most search engines support theCanonical link elementas a hint to which URL should be treated as the true version. As indicated by John Mueller of Google, having other directives in a page, like therobots noindexelement can give search engines conflicting signals about how to handle canonicalization[4]
Example:
All of these URLs point to the homepage of Wikipedia, but a search engine will only consider one of them to be the canonical form of the URL.
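A rough sketch of URL canonicalization as a crawler might perform it; the normalization rules chosen here (lower-casing the host, dropping fragments and default ports, trimming trailing slashes) are illustrative assumptions rather than any search engine's documented behaviour.

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_url(url):
    """Reduce common variants of a URL to a single canonical form (illustrative rules only)."""
    parts = urlsplit(url)
    scheme = parts.scheme.lower()
    host = (parts.hostname or "").lower()
    if parts.port and (scheme, parts.port) not in {("http", 80), ("https", 443)}:
        host = f"{host}:{parts.port}"          # keep only non-default ports
    path = parts.path.rstrip("/") or "/"       # treat "/foo/" and "/foo" alike
    return urlunsplit((scheme, host, path, parts.query, ""))   # drop the fragment

for u in ("https://EN.Wikipedia.org:443/", "https://en.wikipedia.org", "https://en.wikipedia.org/#top"):
    print(canonical_url(u))                    # all three print https://en.wikipedia.org/
```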
ACanonical XMLdocument is by definition an XML document that is in XML Canonical form, defined byThe Canonical XML specification. Briefly, canonicalization removes whitespace within tags, uses particular character encodings, sorts namespace references and eliminates redundant ones, removes XML and DOCTYPE declarations, and transforms relative URIs into absolute URIs.
A simple example would be the following two snippets of XML:
The first example contains extra spaces in the closing tag of the first node. The second example, which has been canonicalized, has had these spaces removed. Note that only the spaces within the tags are removed under W3C canonicalization, not those between tags.
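As an illustration (assuming the third-party lxml library, whose etree.tostring supports W3C canonical XML via method="c14n"), the extra whitespace inside tags disappears after canonicalization while the text content is untouched:

```python
from lxml import etree

# Extra whitespace inside the tags, as in the first snippet described above.
doc = etree.fromstring("<root><child >text</child ></root>")

print(etree.tostring(doc, method="c14n").decode())
# <root><child>text</child></root>  -- spaces inside tags are gone, text between tags is kept
```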
A full summary of canonicalization changes is listed below:
In morphology and lexicography, a lemma is the canonical form of a set of words. In English, for example, run, runs, ran, and running are forms of the same lexeme, so we can select one of them, e.g. run, to represent all the forms. Lexical databases such as Unitex use this kind of representation.
Lemmatisation is the process of converting a word to its canonical form.
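A small sketch of lemmatisation with NLTK's WordNet lemmatizer, assuming NLTK and its WordNet data are available:

```python
import nltk
from nltk.stem import WordNetLemmatizer

nltk.download("wordnet", quiet=True)    # one-time download of the WordNet data
lemmatizer = WordNetLemmatizer()

for form in ["runs", "ran", "running"]:
    print(form, "->", lemmatizer.lemmatize(form, pos="v"))
# each form lemmatizes to the canonical form "run"
```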
|
https://en.wikipedia.org/wiki/Canonicalization
|
Inmathematics, acanonical basisis a basis of analgebraic structurethat is canonical in a sense that depends on the precise context:
The canonical basis for the irreducible representations of a quantized enveloping algebra of type ADE, and also for the plus part of that algebra, was introduced by Lusztig[2] by two methods: an algebraic one (using a braid group action and PBW bases) and a topological one (using intersection cohomology). Specializing the parameter q to q = 1 yields a canonical basis for the irreducible representations of the corresponding simple Lie algebra, which was not known earlier. Specializing the parameter q to q = 0 yields something like a shadow of a basis. This shadow (but not the basis itself) for the case of irreducible representations was considered independently by Kashiwara;[3] it is sometimes called the crystal basis.
The definition of the canonical basis was extended to the Kac-Moody setting by Kashiwara[4](by an algebraic method) and by Lusztig[5](by a topological method).
There is a general concept underlying these bases:
Consider the ring of integral Laurent polynomials 𝒵 := ℤ[v, v⁻¹], with its two subrings 𝒵⁺ := ℤ[v] and 𝒵⁻ := ℤ[v⁻¹], and the bar automorphism defined by v̄ := v⁻¹.
A precanonical structure on a free 𝒵-module F consists of
If a precanonical structure is given, then one can define the 𝒵^± submodule F^± := Σ_j 𝒵^± t_j of F.
A canonical basis of the precanonical structure is then a 𝒵-basis (c_i)_{i∈I} of F that satisfies:
for all i ∈ I.
One can show that there exists at most one canonical basis for each precanonical structure.[6] A sufficient condition for existence is that the polynomials r_{ij} ∈ 𝒵 defined by t̄_j = Σ_i r_{ij} t_i satisfy r_{ii} = 1 and r_{ij} ≠ 0 ⟹ i ≤ j.
A canonical basis induces an isomorphism from F⁺ ∩ F̄⁺ = Σ_i ℤ c_i to F⁺ / vF⁺.
Let (W, S) be a Coxeter group. The corresponding Iwahori–Hecke algebra H has the standard basis (T_w)_{w∈W}; the group is partially ordered by the Bruhat order, which is interval finite, and has a dualization operation defined by T̄_w := (T_{w⁻¹})⁻¹. This is a precanonical structure on H that satisfies the sufficient condition above, and the corresponding canonical basis of H is the Kazhdan–Lusztig basis
with P_{y,w} being the Kazhdan–Lusztig polynomials.
If we are given an n×n matrix A and wish to find a matrix J in Jordan normal form, similar to A, we are interested only in sets of linearly independent generalized eigenvectors. A matrix in Jordan normal form is an "almost diagonal matrix," that is, as close to diagonal as possible. A diagonal matrix D is a special case of a matrix in Jordan normal form. An ordinary eigenvector is a special case of a generalized eigenvector.
Every n×n matrix A possesses n linearly independent generalized eigenvectors. Generalized eigenvectors corresponding to distinct eigenvalues are linearly independent. If λ is an eigenvalue of A of algebraic multiplicity μ, then A will have μ linearly independent generalized eigenvectors corresponding to λ.
For any given n×n matrix A, there are infinitely many ways to pick the n linearly independent generalized eigenvectors. If they are chosen in a particularly judicious manner, we can use these vectors to show that A is similar to a matrix in Jordan normal form. In particular,
Definition: A set of n linearly independent generalized eigenvectors is a canonical basis if it is composed entirely of Jordan chains.
Thus, once we have determined that a generalized eigenvector of rank m is in a canonical basis, it follows that the m − 1 vectors x_{m−1}, x_{m−2}, …, x_1 that are in the Jordan chain generated by x_m are also in the canonical basis.[7]
Let λ_i be an eigenvalue of A of algebraic multiplicity μ_i. First, find the ranks (matrix ranks) of the matrices (A − λ_i I), (A − λ_i I)², …, (A − λ_i I)^{m_i}. The integer m_i is determined to be the first integer for which (A − λ_i I)^{m_i} has rank n − μ_i (n being the number of rows or columns of A, that is, A is n×n).
Now define

ρ_k = rank(A − λ_i I)^{k−1} − rank(A − λ_i I)^k,    for k = 1, 2, …, m_i.

The variable ρ_k designates the number of linearly independent generalized eigenvectors of rank k (generalized eigenvector rank; see generalized eigenvector) corresponding to the eigenvalue λ_i that will appear in a canonical basis for A. Note that rank(A − λ_i I)⁰ = rank(I) = n.
Once we have determined the number of generalized eigenvectors of each rank that a canonical basis has, we can obtain the vectors explicitly (see generalized eigenvector).[8]
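A numeric sketch of this rank computation using NumPy; the 3×3 matrix is an arbitrary illustrative example, not the 6×6 matrix analysed in the example below.

```python
import numpy as np

def rank_profile(A, lam, mu):
    """Return m_i and the counts rho_k of rank-k generalized eigenvectors for eigenvalue lam."""
    n = A.shape[0]
    B = A - lam * np.eye(n)
    ranks = [n]                                   # rank of B^0 = I is n
    k = 0
    while ranks[-1] > n - mu:                     # stop once rank(B^k) = n - mu
        k += 1
        ranks.append(np.linalg.matrix_rank(np.linalg.matrix_power(B, k)))
    rho = [ranks[j - 1] - ranks[j] for j in range(1, len(ranks))]
    return k, rho                                 # m_i and [rho_1, ..., rho_m]

A = np.array([[4.0, 1.0, 0.0],
              [0.0, 4.0, 0.0],
              [0.0, 0.0, 5.0]])
print(rank_profile(A, 4.0, 2))   # (2, [1, 1]): one Jordan chain of length 2 for lambda = 4
```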
This example illustrates a canonical basis with two Jordan chains. Unfortunately, it is a little difficult to construct an interesting example of low order.[9]The matrix
has eigenvalues λ_1 = 4 and λ_2 = 5 with algebraic multiplicities μ_1 = 4 and μ_2 = 2, but geometric multiplicities γ_1 = 1 and γ_2 = 1.
For λ_1 = 4, we have n − μ_1 = 6 − 4 = 2,
Therefore m_1 = 4.
Thus, a canonical basis for A will have, corresponding to λ_1 = 4, one generalized eigenvector each of ranks 4, 3, 2 and 1.
For λ_2 = 5, we have n − μ_2 = 6 − 2 = 4,
Therefore m_2 = 2.
Thus, a canonical basis for A will have, corresponding to λ_2 = 5, one generalized eigenvector each of ranks 2 and 1.
A canonical basis for A is
x_1 is the ordinary eigenvector associated with λ_1. x_2, x_3 and x_4 are generalized eigenvectors associated with λ_1. y_1 is the ordinary eigenvector associated with λ_2. y_2 is a generalized eigenvector associated with λ_2.
A matrix J in Jordan normal form, similar to A, is obtained as follows:
where the matrix M is a generalized modal matrix for A and AM = MJ.[10]
|
https://en.wikipedia.org/wiki/Canonical_basis
|
In mathematics, the canonical bundle of a non-singular algebraic variety V of dimension n over a field is the line bundle Ω^n = ω, which is the nth exterior power of the cotangent bundle Ω on V.
Over the complex numbers, it is the determinant bundle of the holomorphic cotangent bundle T*V. Equivalently, it is the line bundle of holomorphic n-forms on V.
This is the dualising object for Serre duality on V. It may equally well be considered as an invertible sheaf.
The canonical class is the divisor class of a Cartier divisor K on V giving rise to the canonical bundle; it is an equivalence class for linear equivalence on V, and any divisor in it may be called a canonical divisor. An anticanonical divisor is any divisor −K with K canonical.
The anticanonical bundle is the corresponding inverse bundle ω⁻¹. When the anticanonical bundle of V is ample, V is called a Fano variety.
Suppose that X is a smooth variety and that D is a smooth divisor on X. The adjunction formula relates the canonical bundles of X and D. It is a natural isomorphism

ω_D = i*(ω_X ⊗ O_X(D)),

where i : D → X is the inclusion. In terms of canonical classes, it is

K_D = (K_X + D)|_D.
This formula is one of the most powerful formulas in algebraic geometry. An important tool of modern birational geometry is inversion of adjunction, which allows one to deduce results about the singularities of X from the singularities of D.
Let X be a normal surface. A genus g fibration f : X → B of X is a proper flat morphism f to a smooth curve such that f_*O_X ≅ O_B and all fibers of f have arithmetic genus g. If X is a smooth projective surface and the fibers of f do not contain rational curves of self-intersection −1, then the fibration is called minimal. For example, if X admits a (minimal) genus 0 fibration, then X is birationally ruled, that is, birational to P¹ × B.
For a minimal genus 1 fibration (also called an elliptic fibration) f : X → B, all but finitely many fibers of f are geometrically integral and all fibers are geometrically connected (by Zariski's connectedness theorem). In particular, for a fiber F = Σ_{i=1}^{n} a_i E_i of f, we have that F.E_i = K_X.E_i = 0, where K_X is a canonical divisor of X; so, for m = gcd(a_i), F is geometrically integral if m = 1 and is a multiple fiber otherwise.
Consider a minimal genus 1 fibration f : X → B. Let F_1, …, F_r be the finitely many fibers that are not geometrically integral and write F_i = m_i F_i′, where m_i > 1 is the greatest common divisor of the coefficients of the expansion of F_i into integral components; these are called multiple fibers. By cohomology and base change one has that R¹f_*O_X = L ⊕ T, where L is an invertible sheaf and T is a torsion sheaf (T is supported on the points b ∈ B such that h⁰(X_b, O_{X_b}) > 1). Then, one has that
where 0 ≤ a_i < m_i for each i and deg(L⁻¹) = χ(O_X) + length(T).[1] One notes that
For example, for the minimal genus 1 fibration of a (quasi-)bielliptic surface induced by the Albanese morphism, the canonical bundle formula gives that this fibration has no multiple fibers. A similar deduction can be made for any minimal genus 1 fibration of a K3 surface. On the other hand, a minimal genus one fibration of an Enriques surface will always admit multiple fibers, and so such a surface will not admit a section.
On a singular variety X, there are several ways to define the canonical divisor. If the variety is normal, it is smooth in codimension one. In particular, we can define the canonical divisor on the smooth locus. This gives us a unique Weil divisor class on X. It is this class, denoted by K_X, that is referred to as the canonical divisor on X.
Alternately, again on a normal variety X, one can consider h^{−d}(ω_X^•), the (−d)-th cohomology of the normalized dualizing complex of X. This sheaf corresponds to a Weil divisor class, which is equal to the divisor class K_X defined above. In the absence of the normality hypothesis, the same result holds if X is S2 and Gorenstein in dimension one.
If the canonical class iseffective, then it determines arational mapfromVinto projective space. This map is called thecanonical map. The rational map determined by thenth multiple of the canonical class is then-canonical map. Then-canonical map sendsVinto a projective space of dimension one less than the dimension of the global sections of thenth multiple of the canonical class.n-canonical maps may have base points, meaning that they are not defined everywhere (i.e., they may not be a morphism of varieties). They may have positive dimensional fibers, and even if they have zero-dimensional fibers, they need not be local analytic isomorphisms.
The best studied case is that of curves. Here, the canonical bundle is the same as the (holomorphic)cotangent bundle. A global section of the canonical bundle is therefore the same as an everywhere-regular differential form. Classically, these were calleddifferentials of the first kind. The degree of the canonical class is 2g− 2 for a curve of genusg.[2]
Suppose thatCis a smooth algebraic curve of genusg. Ifgis zero, thenCisP1, and the canonical class is the class of −2P, wherePis any point ofC. This follows from the calculus formulad(1/t) = −dt/t2, for example, a meromorphic differential with double pole at the origin on theRiemann sphere. In particular,KCand its multiples are not effective. Ifgis one, thenCis anelliptic curve, andKCis the trivial bundle. The global sections of the trivial bundle form a one-dimensional vector space, so then-canonical map for anynis the map to a point.
If C has genus two or more, then the canonical class is big, so the image of any n-canonical map is a curve. The image of the 1-canonical map is called a canonical curve. A canonical curve of genus g always sits in a projective space of dimension g − 1.[3] When C is a hyperelliptic curve, the canonical curve is a rational normal curve, and C a double cover of its canonical curve. For example, if P is a polynomial of degree 6 (without repeated roots) then

y² = P(x)
is an affine curve representation of a genus 2 curve, necessarily hyperelliptic, and a basis of the differentials of the first kind is given in the same notation by

dx/y,  x·dx/y.
This means that the canonical map is given byhomogeneous coordinates[1:x] as a morphism to the projective line. The rational normal curve for higher genus hyperelliptic curves arises in the same way with higher power monomials inx.
Otherwise, for non-hyperellipticCwhich meansgis at least 3, the morphism is an isomorphism ofCwith its image, which has degree 2g− 2. Thus forg= 3 the canonical curves (non-hyperelliptic case) arequartic plane curves. All non-singular plane quartics arise in this way. There is explicit information for the caseg= 4, when a canonical curve is an intersection of aquadricand acubic surface; and forg= 5 when it is an intersection of three quadrics.[3]There is a converse, which is a corollary to theRiemann–Roch theorem: a non-singular curveCof genusgembedded in projective space of dimensiong− 1 as alinearly normalcurve of degree 2g− 2 is a canonical curve, provided its linear span is the whole space. In fact the relationship between canonical curvesC(in the non-hyperelliptic case ofgat least 3), Riemann-Roch, and the theory ofspecial divisorsis rather close. Effective divisorsDonCconsisting of distinct points have a linear span in the canonical embedding with dimension directly related to that of the linear system in which they move; and with some more discussion this applies also to the case of points with multiplicities.[4][5]
More refined information is available, for larger values ofg, but in these cases canonical curves are not generallycomplete intersections, and the description requires more consideration ofcommutative algebra. The field started withMax Noether's theorem: the dimension of the space of quadrics passing throughCas embedded as canonical curve is (g− 2)(g− 3)/2.[6]Petri's theorem, often cited under this name and published in 1923 by Karl Petri (1881–1955), states that forgat least 4 the homogeneous ideal defining the canonical curve is generated by its elements of degree 2, except for the cases of (a)trigonal curvesand (b) non-singular plane quintics wheng= 6. In the exceptional cases, the ideal is generated by the elements of degrees 2 and 3. Historically speaking, this result was largely known before Petri, and has been called the theorem of Babbage-Chisini-Enriques (for Dennis Babbage who completed the proof,Oscar ChisiniandFederigo Enriques). The terminology is confused, since the result is also called theNoether–Enriques theorem. Outside the hyperelliptic cases, Noether proved that (in modern language) the canonical bundle isnormally generated: thesymmetric powersof the space of sections of the canonical bundle map onto the sections of its tensor powers.[7][8]This implies for instance the generation of thequadratic differentialson such curves by the differentials of the first kind; and this has consequences for thelocal Torelli theorem.[9]Petri's work actually provided explicit quadratic and cubic generators of the ideal, showing that apart from the exceptions the cubics could be expressed in terms of the quadratics. In the exceptional cases the intersection of the quadrics through the canonical curve is respectively aruled surfaceand aVeronese surface.
These classical results were proved over the complex numbers, but modern discussion shows that the techniques work over fields of any characteristic.[10]
The canonical ring of V is the graded ring

R(V, K_V) = ⊕_{d≥0} H⁰(V, dK_V).
If the canonical class ofVis anample line bundle, then the canonical ring is thehomogeneous coordinate ringof the image of the canonical map. This can be true even when the canonical class ofVis not ample. For instance, ifVis a hyperelliptic curve, then the canonical ring is again the homogeneous coordinate ring of the image of the canonical map. In general, if the ring above is finitely generated, then it is elementary to see that it is the homogeneous coordinate ring of the image of ak-canonical map, wherekis any sufficiently divisible positive integer.
The minimal model program proposed that the canonical ring of every smooth or mildly singular projective variety was finitely generated. In particular, this was known to imply the existence of a canonical model, a particular birational model of V with mild singularities that could be constructed by blowing down V. When the canonical ring is finitely generated, the canonical model is Proj of the canonical ring. If the canonical ring is not finitely generated, then Proj R is not a variety, and so it cannot be birational to V; in particular, V admits no canonical model. One can show that if the canonical divisor K of V is a nef divisor and the self-intersection of K is greater than zero, then V will admit a canonical model (more generally, this is true for normal complete Gorenstein algebraic spaces[11]).[12]
A fundamental theorem of Birkar–Cascini–Hacon–McKernan from 2006[13]is that the canonical ring of a smooth or mildly singular projective algebraic variety is finitely generated.
TheKodaira dimensionofVis the dimension of the canonical ring minus one. Here the dimension of the canonical ring may be taken to meanKrull dimensionortranscendence degree.
|
https://en.wikipedia.org/wiki/Canonical_class
|
Normalization or normalisation refers to a process that makes something more normal or regular.
|
https://en.wikipedia.org/wiki/Normalization_(disambiguation)
|
Standardization(American English) orstandardisation(British English) is the process of implementing and developingtechnical standardsbased on the consensus of different parties that include firms, users, interest groups, standards organizations and governments.[1]Standardization can help maximizecompatibility,interoperability,safety,repeatability,efficiency, andquality. It can also facilitate a normalization of formerly custom processes.
Insocial sciences, includingeconomics,[2]the idea ofstandardizationis close to the solution for acoordination problem, a situation in which all parties can realize mutual gains, but only by making mutually consistent decisions. Divergent national standards impose costs on consumers and can be a form ofnon-tariff trade barrier.[3]
Standard weights and measures were developed by theIndus Valley civilization.[4]The centralized weight and measure system served the commercial interest of Indus merchants as smaller weight measures were used to measure luxury goods while larger weights were employed for buying bulkier items, such as food grains etc.[5]Weights existed in multiples of a standard weight and in categories.[5]Technical standardisationenabled gauging devices to be effectively used inangular measurementand measurement for construction.[6]Uniform units of length were used in the planning of towns such asLothal,Surkotada,Kalibangan,Dolavira,Harappa, andMohenjo-daro.[4]The weights and measures of the Indus civilization also reachedPersiaandCentral Asia, where they were further modified.[7]Shigeo Iwata describes the excavated weights unearthed from the Indus civilization:
A total of 558 weights were excavated from Mohenjodaro, Harappa, andChanhu-daro, not including defective weights. They did not find statistically significant differences between weights that were excavated from five different layers, each measuring about 1.5 m in depth. This was evidence that strong control existed for at least a 500-year period. The 13.7-g weight seems to be one of the units used in the Indus valley. The notation was based on thebinaryanddecimalsystems. 83% of the weights which were excavated from the above three cities were cubic, and 68% were made ofchert.[4]
The implementation of standards in industry and commerce became highly important with the onset of theIndustrial Revolutionand the need for high-precisionmachine toolsandinterchangeable parts.
Henry Maudslaydeveloped the first industrially practicalscrew-cutting lathein 1800. This allowed for the standardization ofscrew threadsizes for the first time and paved the way for the practical application ofinterchangeability(an idea that was already taking hold) tonutsandbolts.[8]
Before this, screw threads were usually made by chipping and filing (that is, with skilled freehand use ofchiselsandfiles).Nutswere rare; metal screws, when made at all, were usually for use in wood. Metal bolts passing through wood framing to a metal fastening on the other side were usually fastened in non-threaded ways (such as clinching or upsetting against a washer). Maudslay standardized the screw threads used in his workshop and produced sets oftaps and diesthat would make nuts and bolts consistently to those standards, so that any bolt of the appropriate size would fit any nut of the same size. This was a major advance in workshop technology.[9]
Maudslay's work, as well as the contributions of other engineers, accomplished a modest amount of industry standardization; some companies' in-house standards spread a bit within their industries.
Joseph Whitworth's screw thread measurements were adopted as the first (unofficial) national standard by companies around the country in 1841. It came to be known as theBritish Standard Whitworth, and was widely adopted in other countries.[10][11]
This new standard specified a 55° thread angle and a thread depth of 0.640327p and a radius of 0.137329p, where p is the pitch. The thread pitch increased with diameter in steps specified on a chart. An example of the use of the Whitworth thread is the Royal Navy's Crimean War gunboats. These were the first instance of "mass-production" techniques being applied to marine engineering.[8]
With the adoption of BSW by Britishrailwaylines, many of which had previously used their own standard both for threads and for bolt head and nut profiles, and improving manufacturing techniques, it came to dominate British manufacturing.
American Unified Coarse was originally based on almost the same imperial fractions. The Unified thread angle is 60° and has flattened crests (Whitworth crests are rounded). Thread pitch is the same in both systems except that the thread pitch for the 1⁄2 in. (inch) bolt is 12 threads per inch (tpi) in BSW versus 13 tpi in the UNC.
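For illustration, a few lines of arithmetic apply the Whitworth proportions quoted above to a 12 threads-per-inch bolt; the helper function name is ours, not part of any standard.

```python
def whitworth_profile(threads_per_inch):
    """Thread pitch, depth and crest radius (in inches) from the Whitworth proportions."""
    pitch = 1.0 / threads_per_inch
    return {"pitch": pitch, "depth": 0.640327 * pitch, "radius": 0.137329 * pitch}

print(whitworth_profile(12))
# pitch is roughly 0.0833 in, depth roughly 0.0534 in, radius roughly 0.0114 in
```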
By the end of the 19th century, differences in standards between companies were making trade increasingly difficult and strained. For instance, an iron and steel dealer recorded his displeasure inThe Times: "Architects and engineers generally specify such unnecessarily diverse types of sectional material for given work that anything like economical and continuous manufacture becomes impossible. In this country no two professional men are agreed upon the size and weight of a girder to employ for given work."
TheEngineering Standards Committeewas established in London in 1901 as the world's first national standards body.[12][13]It subsequently extended its standardization work and became the British Engineering Standards Association in 1918, adopting the name British Standards Institution in 1931 after receiving its Royal Charter in 1929. The national standards were adopted universally throughout the country, and enabled the markets to act more rationally and efficiently, with an increased level of cooperation.
After theFirst World War, similar national bodies were established in other countries. TheDeutsches Institut für Normungwas set up in Germany in 1917, followed by its counterparts, the AmericanNational Standards Instituteand the FrenchCommission Permanente de Standardisation, both in 1918.[8]
At a regional level (e.g. Europe, the Americas, Africa, etc.) or at subregional level (e.g. Mercosur, Andean Community, South East Asia, South East Africa, etc.), several Regional Standardization Organizations exist (see alsoStandards Organization).
The three regional standards organizations in Europe – European Standardization Organizations (ESOs), recognised by the EU Regulation on Standardization (Regulation (EU) 1025/2012)[14]– areCEN,CENELECandETSI. CEN develops standards for numerous kinds of products, materials, services and processes. Some sectors covered by CEN include transport equipment and services, chemicals, construction, consumer products, defence and security, energy, food and feed, health and safety, healthcare, digital sector, machinery or services.[15]The European Committee for Electrotechnical Standardization (CENELEC) is the European Standardization organization developing standards in the electrotechnical area and corresponding to the International Electrotechnical Commission (IEC) in Europe.[16]
The first modernInternational Organization(Intergovernmental Organization), the International Telegraph Union (nowInternational Telecommunication Union), was created in 1865[17]to set international standards in order to connect national telegraph networks, as a merger of two predecessor organizations (Bern and Paris treaties) that had similar objectives, but in more limited territories.[18][19]With the advent of radiocommunication soon after its creation, the work of the ITU quickly expanded from the standardization of telegraph communications to developing standards for telecommunications in general.
By the mid to late 19th century, efforts were being made to standardize electrical measurement.Lord Kelvinwas an important figure in this process, introducing accurate methods and apparatus for measuring electricity. In 1857, he introduced a series of effective instruments, including the quadrant electrometer, which covered the entire field of electrostatic measurement. He invented thecurrent balance, also known as theKelvin balanceorAmpere balance, for theprecisespecification of theampere, thestandardunitofelectric current.[20]
R. E. B. Cromptonbecame concerned by the large range of different standards and systems used by electrical engineering companies and scientists in the early 20th century. Many companies had entered the market in the 1890s and all chose their own settings forvoltage,frequency,currentand even the symbols used on circuit diagrams. Adjacent buildings would have totally incompatible electrical systems simply because they had been fitted out by different companies. Crompton could see the lack of efficiency in this system and began to consider proposals for an international standard for electric engineering.[21]
In 1904, Crompton represented Britain at theInternational Electrical Congress, held in connection withLouisiana Purchase ExpositioninSaint Louisas part of a delegation by theInstitute of Electrical Engineers. He presented a paper on standardisation, which was so well received that he was asked to look into the formation of a commission to oversee the process.[22]By 1906 his work was complete and he drew up a permanent constitution for theInternational Electrotechnical Commission.[23]The body held its first meeting that year in London, with representatives from 14 countries. In honour of his contribution to electrical standardisation, Lord Kelvin was elected as the body's first President.[24]
TheInternational Federation of the National Standardizing Associations(ISA) was founded in 1926 with a broader remit to enhance international cooperation for all technical standards and specifications. The body was suspended in 1942 duringWorld War II.
After the war, ISA was approached by the recently formed United Nations Standards Coordinating Committee (UNSCC) with a proposal to form a new global standards body. In October 1946, ISA and UNSCC delegates from 25 countries met inLondonand agreed to join forces to create the newInternational Organization for Standardization(ISO); the new organization officially began operations in February 1947.[25]
In general, each country or economy has a single recognized National Standards Body (NSB). Examples includeABNT,AENOR (now called UNE,Spanish Association for Standardization),AFNOR,ANSI,BSI,DGN,DIN,IRAM,JISC,KATS,SABS,SAC,SCC,SIS. An NSB is likely the sole member from that economy in ISO.
NSBs may be either public or private sector organizations, or combinations of the two. For example, the three NSBs of Canada, Mexico and the United States are respectively the Standards Council of Canada (SCC), the General Bureau of Standards (Dirección General de Normas, DGN), and theAmerican National Standards Institute(ANSI). SCC is a CanadianCrown Corporation, DGN is a governmental agency within the Mexican Ministry of Economy, and ANSI is a501(c)(3)non-profit organization with members from both the private and public sectors. The determinants of whether an NSB for a particular economy is a public or private sector body may include the historical and traditional roles that the private sector fills in public affairs in that economy or the development stage of that economy.
Standards can be:
The existence of a published standard does not necessarily imply that it is useful or correct. Just because an item is stamped with a standard number does not, by itself, indicate that the item is fit for any particular use. The people who use the item or service (engineers, trade unions, etc.) or specify it (building codes, government, industry, etc.) have the responsibility to consider the available standards, specify the correct one, enforce compliance, and use the item correctly:validation and verification.
To avoid the proliferation of industry standards, also referred to asprivate standards, regulators in the United States are instructed by their government offices to adopt "voluntary consensus standards" before relying upon "industry standards" or developing "government standards".[26]Regulatory authorities can reference voluntary consensus standards to translate internationally accepted criteria intopublic policy.[27][28]
In the context of information exchange, standardization refers to the process of developing standards for specific business processes using specificformal languages. These standards are usually developed in voluntary consensus standards bodies such as the United Nations Center for Trade Facilitation and Electronic Business (UN/CEFACT), the World Wide Web Consortium (W3C), theTelecommunications Industry Association(TIA), and the Organization for the Advancement of Structured Information Standards (OASIS).
There are manyspecificationsthat govern the operation and interaction of devices and software on theInternet, which do not use the term "standard" in their names. TheW3C, for example, publishes "Recommendations", and theIETFpublishes "Requests for Comments" (RFCs). Nevertheless, these publications are often referred to as "standards", because they are the products of regular standardization processes.
Standardized product certificationssuch as oforganic food,buildingsorpossibly sustainable seafoodas well as standardized product safety evaluation and dis/approval procedures (e.g.regulation of chemicals,cosmeticsandfood safety) can protect the environment.[29][30][31]This effect may depend on associated modifiedconsumer choices, strategic product support/obstruction, requirements and bans as well as their accordance with a scientific basis, the robustness and applicability of a scientific basis, whether adoption of the certifications is voluntary, and the socioeconomic context (systems ofgovernanceand theeconomy); most certifications may so far have remained largely ineffective.[32][additional citation(s) needed]
Moreover, standardized scientific frameworks can enable evaluation of levels of environmental protection, such as ofmarine protected areas, and serve as, potentially evolving, guides for improving, planning and monitoring the protection-quality, -scopes and -extents.[33]
Moreover, technical standards could decreaseelectronic waste[34][35][36]and reduce resource-needs such as by thereby requiring (or enabling) products to beinteroperable, compatible (with other products, infrastructures, environments, etc),durable,energy-efficient,modular,[37]upgradeable/repairable[38]andrecyclableand conform to versatile, optimal standards and protocols.
Such standardization is not limited to the domain of electronic devices like smartphones and phone chargers but could also be applied to e.g. the energy infrastructure.Policy-makers could developpolicies "fostering standard design and interfaces, and promoting the re-use of modules and components across plants to develop more sustainableenergy infrastructure".[39]Computers and the Internet are some of the tools that could be used to increase practicability and reduce suboptimal results, detrimental standards andbureaucracy, which is often associated with traditional processes and results of standardization.[40]Taxes and subsidies, and funding of research and development could be used complementarily.[41]Standardized measurement is used in monitoring, reporting and verification frameworks of environmental impacts, usually of companies, for example to prevent underreporting of greenhouse gas emissions by firms.[42]
In routineproduct testingandproduct analysisresults can be reported using official or informal standards. It can be done to increaseconsumer protection, to ensure safety or healthiness or efficiency or performance or sustainability of products. It can be carried out by the manufacturer, an independent laboratory, a government agency, a magazine or others on a voluntary or commissioned/mandated basis.[43][44][additional citation(s) needed]
Estimating theenvironmental impacts of food productsin a standardized way – as has been done witha datasetof >57,000 foodproductsin supermarkets – could e.g. be used to inform consumers or inpolicy.[45][46]For example, such may be useful for approaches usingpersonal carbon allowances(or similar quota) or fortargeted alteration of (ultimate overall) costs.
Public informationsymbols(e.g.hazard symbols), especially when related to safety, are often standardized, sometimeson the international level.[47]
Standardization is also used to ensure safe design and operation of laboratories and similar potentially dangerous workplaces, e.g. to ensurebiosafety levels.[48]There is research into microbiology safety standards used in clinical and research laboratories.[49]
In the context of defense, standardization has been defined byNATOasThe development and implementation of concepts, doctrines, procedures and designs to achieve and maintain the required levels ofcompatibility,interchangeabilityorcommonalityin the operational, procedural, material, technical and administrative fields to attain interoperability.[50]
In some cases, standards are being used in the design and operation ofworkplacesand products that can impact consumers' health. Some such standards seek to ensureoccupational safety and healthandergonomics. For example,chairs[47][51][52][53](see e.g.active sittingandsteps of research) could potentially be designed and chosen using standards that may or may not be based on adequate scientific data. Standards could reduce the variety of products and lead to convergence on fewer broad designs – which can often be efficiently mass-produced via common shared automated procedures and instruments – or formulations deemed to be the most healthy, most efficient or best compromise between healthiness and other factors. Standardization is sometimes, or could also be, used to ensure or enable consumer health protection beyond the workplace and ergonomics, such as standards in food, food production, hygiene products, tap water, cosmetics, drugs/medicine,[54]drink and dietary supplements,[55][56]especially in cases where there is robust scientific data that suggests detrimental impacts on health (e.g. of ingredients) despite being substitutable and not necessarily of consumer interest.[additional citation(s) needed]
In the context of assessment, standardization may define how a measuring instrument or procedure is administered in the same way to every subject or patient.[57]: 399[58]: 71For example, an educational psychologist may adopt astructured interviewto systematically interview the people in concern. By delivering the same procedures, all subjects are evaluated using the same criteria, minimising anyconfounding variablethat would reduce thevalidity.[58]: 72Other examples include themental status examinationandpersonality test.
In the context of social criticism andsocial science, standardization often means the process of establishing standards of various kinds and improving efficiency to handle people, their interactions, cases, and so forth. Examples include formalization of judicial procedure in court, and establishing uniform criteria for diagnosing mental disease. Standardization in this sense is often discussed along with (or synonymously to) such large-scale social changes as modernization, bureaucratization, homogenization, and centralization of society.
In the context ofcustomer service, standardization refers to the process of developing an international standard that enables organizations to focus on customer service, while at the same time providing recognition of success[clarification needed]through a third party organization, such as theBritish Standards Institution. An international standard has been developed byThe International Customer Service Institute.
In the context ofsupply chain managementandmaterials management, standardization covers the process of specification and use of any item the company must buy in or make, allowable substitutions, andbuild or buydecisions.
The process of standardization can itself be standardized. There are at least four levels of standardization: compatibility,interchangeability,commonalityandreference. These standardization processes create compatibility, similarity, measurement, and symbol standards.
There are typically four different techniques for standardization
Types of standardization process:
Standardization has a variety of benefits and drawbacks for firms and consumers participating in the market, and on technology and innovation.
The primary effect of standardization on firms is that the basis of competition is shifted from integrated systems to individual components within the system. Prior to standardization a company's product must span the entire system because individual components from different competitors are incompatible, but after standardization each company can focus on providing an individual component of the system.[60]When the shift toward competition based on individual components takes place, firms selling tightly integrated systems must quickly shift to a modular approach, supplying other companies with subsystems or components.[61]
Standardization has a variety of benefits for consumers, but one of the greatest benefits is enhanced network effects. Standards increase compatibility and interoperability between products, allowing information to be shared within a larger network and attracting more consumers to use the new technology, further enhancing network effects.[62]Other benefits of standardization to consumers are reduced uncertainty, because consumers can be more certain that they are not choosing the wrong product, and reduced lock-in, because the standard makes it more likely that there will be competing products in the space.[63]Consumers may also get the benefit of being able to mix and match components of a system to align with their specific preferences.[64]Once these initial benefits of standardization are realized, further benefits that accrue to consumers as a result of using the standard are driven mostly by the quality of the technologies underlying that standard.[65]
Probably the greatest downside of standardization for consumers is lack of variety. There is no guarantee that the chosen standard will meet all consumers' needs or even that the standard is the best available option.[64]Another downside is that if a standard is agreed upon before products are available in the market, then consumers are deprived of the penetration pricing that often results when rivals are competing to rapidly increase market share in an attempt to increase the likelihood that their product will become the standard.[64]It is also possible that a consumer will choose a product based upon a standard that fails to become dominant.[66]In this case, the consumer will have spent resources on a product that is ultimately less useful to him or her as the result of the standardization process.
Much like the effect on consumers, the effect of standardization on technology and innovation is mixed.[67]Meanwhile, the various links between research and standardization have been identified,[68]also as a platform of knowledge transfer[69]and translated into policy measures (e.g.WIPANO).
Increased adoption of a new technology as a result of standardization is important because rival and incompatible approaches competing in the marketplace can slow or even kill the growth of the technology (a state known asmarket fragmentation).[70]The shift to a modularized architecture as a result of standardization brings increased flexibility, rapid introduction of new products, and the ability to more closely meet individual customer's needs.[71]
The negative effects of standardization on technology have to do with its tendency to restrict new technology and innovation. Standards shift competition from features to price because the features are defined by the standard. The degree to which this is true depends on the specificity of the standard.[72]Standardization in an area also rules out alternative technologies as options while encouraging others.[73]
|
https://en.wikipedia.org/wiki/Standardization
|
In mathematics, aneigenvalue perturbationproblem is that of finding theeigenvectors and eigenvaluesof a systemAx=λx{\displaystyle Ax=\lambda x}that isperturbedfrom one with known eigenvectors and eigenvaluesA0x0=λ0x0{\displaystyle A_{0}x_{0}=\lambda _{0}x_{0}}. This is useful for studying how sensitive the original system's eigenvectors and eigenvaluesx0i,λ0i,i=1,…n{\displaystyle x_{0i},\lambda _{0i},i=1,\dots n}are to changes in the system.
This type of analysis was popularized byLord Rayleigh, in his investigation of harmonic vibrations of a string perturbed by small inhomogeneities.[1]
The derivations in this article are essentially self-contained and can be found in many texts on numerical linear algebra or numericalfunctional analysis.
This article is focused on the case of the perturbation of a simple eigenvalue (see inmultiplicity of eigenvalues).
In the entryapplications of eigenvalues and eigenvectorswe find numerous scientific fields in which eigenvalues are used to obtain solutions.Generalized eigenvalue problemsare less widespread but are a key in the study ofvibrations.
They are useful when we use theGalerkin methodorRayleigh-Ritz methodto find approximate
solutions of partial differential equations modeling vibrations of structures such as strings and plates; the paper of Courant (1943)[2]is fundamental. TheFinite element methodis a widespread particular case.
In classical mechanics, generalized eigenvalues may crop up when we look for vibrations ofmultiple degrees of freedomsystems close to equilibrium; the kinetic energy provides the mass matrixM{\displaystyle M}, the potential strain energy provides the rigidity matrixK{\displaystyle K}.
For further details, see the first section of the article by Weinstein (1941, in French).[3]
With both methods, we obtain a system of differential equations orMatrix differential equationMx¨+Bx˙+Kx=0{\displaystyle M{\ddot {x}}+B{\dot {x}}+Kx=0}with the mass matrixM{\displaystyle M}, the damping matrixB{\displaystyle B}and the rigidity matrixK{\displaystyle K}. If we neglect the damping effect, we setB=0{\displaystyle B=0}and can look for a solution of the formx=eiωtu{\displaystyle x=e^{i\omega t}u}; we obtain thatu{\displaystyle u}andω2{\displaystyle \omega ^{2}}are solutions of the generalized eigenvalue problem−ω2Mu+Ku=0{\displaystyle -\omega ^{2}Mu+Ku=0}.
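A minimal numerical sketch of this generalized eigenvalue problem, using SciPy and assumed illustrative mass and stiffness matrices (not taken from the references):

```python
import numpy as np
from scipy.linalg import eigh

# Assumed two-degree-of-freedom example: mass matrix M and rigidity matrix K.
M = np.diag([2.0, 1.0])
K = np.array([[6.0, -2.0],
              [-2.0, 4.0]])

# Solve K u = omega^2 M u (the pencil is symmetric positive definite).
omega2, U = eigh(K, M)
omega = np.sqrt(omega2)      # natural angular frequencies
print(omega)
# Each column U[:, i] is a mode shape: x(t) = exp(i*omega_i*t) * U[:, i]
# solves M x'' + K x = 0 when the damping B is neglected.
```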
Suppose we have solutions to thegeneralized eigenvalue problem,
whereK0{\displaystyle \mathbf {K} _{0}}andM0{\displaystyle \mathbf {M} _{0}}are matrices. That is, we know the eigenvaluesλ0iand eigenvectorsx0ifori= 1, ...,N. It is also required thatthe eigenvalues are distinct.
Now suppose we want to change the matrices by a small amount. That is, we want to find the eigenvalues and eigenvectors of
where
with the perturbationsδK{\displaystyle \delta \mathbf {K} }andδM{\displaystyle \delta \mathbf {M} }much smaller thanK{\displaystyle \mathbf {K} }andM{\displaystyle \mathbf {M} }respectively. Then we expect the new eigenvalues and eigenvectors to be similar to the original, plus small perturbations:
We assume that the matrices aresymmetricandpositive definite, and assume we have scaled the eigenvectors such that
whereδijis theKronecker delta.
Now we want to solve the equation
In this article we restrict the study to first order perturbation.
Substituting in (1), we get
which expands to
Canceling from (0) (K0x0i=λ0iM0x0i{\displaystyle \mathbf {K} _{0}\mathbf {x} _{0i}=\lambda _{0i}\mathbf {M} _{0}\mathbf {x} _{0i}}) leaves
Removing the higher-order terms, this simplifies to
As the matrix is symmetric, the unperturbed eigenvectors areM{\displaystyle M}orthogonal and so we use them as a basis for the perturbed eigenvectors.
That is, we want to construct
where theεijare small constants that are to be determined.
In the same way, substituting in (2), and removing higher order terms, we getδxjM0x0i+x0jM0δxi+x0jδM0x0i=0(5){\displaystyle \delta \mathbf {x} _{j}\mathbf {M} _{0}\mathbf {x} _{0i}+\mathbf {x} _{0j}\mathbf {M} _{0}\delta \mathbf {x} _{i}+\mathbf {x} _{0j}\delta \mathbf {M} _{0}\mathbf {x} _{0i}=0\quad {(5)}}
The derivation can proceed along either of two paths.
we left multiply withx0iT{\displaystyle \mathbf {x} _{0i}^{T}}and use (2) as well as its first order variation (5); we get
or
We notice that it is the first order perturbation of the generalizedRayleigh quotientwith fixedx0i{\displaystyle x_{0i}}:R(K,M;x0i)=x0iTKx0i/x0iTMx0i,withx0iTMx0i=1{\displaystyle R(K,M;x_{0i})=x_{0i}^{T}Kx_{0i}/x_{0i}^{T}Mx_{0i},{\text{ with }}x_{0i}^{T}Mx_{0i}=1}
Moreover, forM=I{\displaystyle M=I}, the formulaδλi=x0iTδKx0i{\displaystyle \delta \lambda _{i}=x_{0i}^{T}\delta Kx_{0i}}should be compared withBauer-Fiketheorem which provides a bound for eigenvalue perturbation.
We left multiply (3) withx0jT{\displaystyle x_{0j}^{T}}forj≠i{\displaystyle j\neq i}and get
We usex0jTK=λ0jx0jTMandx0jTM0x0i=0,{\displaystyle \mathbf {x} _{0j}^{T}K=\lambda _{0j}\mathbf {x} _{0j}^{T}M{\text{ and }}\mathbf {x} _{0j}^{T}\mathbf {M} _{0}\mathbf {x} _{0i}=0,}forj≠i{\displaystyle j\neq i}.
or
As the eigenvalues are assumed to be simple, forj≠i{\displaystyle j\neq i}
Moreover (5) (the first order variation of (2) ) yields2ϵii=2x0iTM0δxi=−x0iTδMx0i.{\displaystyle 2\epsilon _{ii}=2\mathbf {x} _{0i}^{T}\mathbf {M} _{0}\delta x_{i}=-\mathbf {x} _{0i}^{T}\delta M\mathbf {x} _{0i}.}We have obtained all the components ofδxi{\displaystyle \delta x_{i}}.
Substituting (4) into (3) and rearranging gives
Because the eigenvectors areM0-orthogonal whenM0is positive definite, we can remove the summations by left-multiplying byx0i⊤{\displaystyle \mathbf {x} _{0i}^{\top }}:
By use of equation (1) again:
The two terms containingεiiare equal because left-multiplying (1) byx0i⊤{\displaystyle \mathbf {x} _{0i}^{\top }}gives
Canceling those terms in (6) leaves
Rearranging gives
But by (2), this denominator is equal to 1. Thus
Then, asλi≠λk{\displaystyle \lambda _{i}\neq \lambda _{k}}fori≠k{\displaystyle i\neq k}(assumption simple eigenvalues) by left-multiplying equation (5) byx0k⊤{\displaystyle \mathbf {x} _{0k}^{\top }}:
Or by changing the name of the indices:
To findεii, use the fact that:
implies:
In the case whereall the matrices are Hermitian positive definite and all the eigenvalues are distinct,
for infinitesimalδK{\displaystyle \delta \mathbf {K} }andδM{\displaystyle \delta \mathbf {M} }(the higher order terms in (3) being neglected).
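A small numerical check of these first-order formulas is straightforward; the sketch below uses SciPy's generalized symmetric eigensolver (which returns M0-normalized eigenvectors) and randomly generated positive definite matrices as assumed test data:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

def random_spd(n):
    """Random symmetric positive definite matrix (illustration only)."""
    X = rng.standard_normal((n, n))
    return X @ X.T + n * np.eye(n)

n = 5
K0, M0 = random_spd(n), random_spd(n)
S1 = rng.standard_normal((n, n)); dK = 1e-5 * (S1 + S1.T)
S2 = rng.standard_normal((n, n)); dM = 1e-5 * (S2 + S2.T)

lam0, X0 = eigh(K0, M0)          # eigenvectors satisfy x^T M0 x = 1
lam, _ = eigh(K0 + dK, M0 + dM)  # "exact" perturbed eigenvalues

# First-order prediction: dlam_i = x0i^T dK x0i - lam0i * x0i^T dM x0i
dlam = np.array([X0[:, i] @ dK @ X0[:, i] - lam0[i] * (X0[:, i] @ dM @ X0[:, i])
                 for i in range(n)])
print(np.max(np.abs(lam - (lam0 + dlam))))   # small: second order in the perturbation
```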
So far, we have not proved that these higher order terms may be neglected. This point may be derived using the implicit function theorem; in the next section, we summarize the use of this theorem in order to obtain a first order expansion.
In the next paragraph, we shall use theImplicit function theorem(Statement of the theorem); we notice that for a continuously differentiable functionf:Rn+m→Rm,f:(x,y)↦f(x,y){\displaystyle f:\mathbb {R} ^{n+m}\to \mathbb {R} ^{m},\;f:(x,y)\mapsto f(x,y)}, with an invertible Jacobian matrixJf,b(x0,y0){\displaystyle J_{f,b}(x_{0},y_{0})}, from a point(x0,y0){\displaystyle (x_{0},y_{0})}solution off(x0,y0)=0{\displaystyle f(x_{0},y_{0})=0}, we get solutions off(x,y)=0{\displaystyle f(x,y)=0}withx{\displaystyle x}close tox0{\displaystyle x_{0}}in the formy=g(x){\displaystyle y=g(x)}whereg{\displaystyle g}is a continuously differentiable function; moreover the Jacobian matrix ofg{\displaystyle g}is provided by the linear system
As soon as the hypothesis of the theorem is satisfied, the Jacobian matrix ofg{\displaystyle g}may be computed with a first order expansion off(x0+δx,y0+δy)=0{\displaystyle f(x_{0}+\delta x,y_{0}+\delta y)=0}, we get
Jf,x(x,g(x))δx+Jf,y(x,g(x))δy=0{\displaystyle J_{f,x}(x,g(x))\delta x+J_{f,y}(x,g(x))\delta y=0}; asδy=Jg,x(x)δx{\displaystyle \delta y=J_{g,x}(x)\delta x}, it is equivalent to equation(6){\displaystyle (6)}.
We use the previous paragraph (Perturbation of an implicit function) with somewhat different notations suited to eigenvalue perturbation; we introducef~:R2n2×Rn+1→Rn+1{\displaystyle {\tilde {f}}:\mathbb {R} ^{2n^{2}}\times \mathbb {R} ^{n+1}\to \mathbb {R} ^{n+1}}, with
f(K,M,λ,x)=Kx−λx,fn+1(M,x)=xTMx−1{\displaystyle f(K,M,\lambda ,x)=Kx-\lambda x,f_{n+1}(M,x)=x^{T}Mx-1}. In order to use theImplicit function theorem, we study the invertibility of the JacobianJf~;λ,x(K,M;λ0i,x0i){\displaystyle J_{{\tilde {f}};\lambda ,x}(K,M;\lambda _{0i},x_{0i})}with
Jf~;λ,x(K,M;λi,xi)(δλ,δx)=(−Mxi0)δλ+(K−λM2xiTM)δxi{\displaystyle J_{{\tilde {f}};\lambda ,x}(K,M;\lambda _{i},x_{i})(\delta \lambda ,\delta x)={\binom {-Mx_{i}}{0}}\delta \lambda +{\binom {K-\lambda M}{2x_{i}^{T}M}}\delta x_{i}}. Indeed, the solution of
Jf~;λ0i,x0i(K,M;λ0i,x0i)(δλi,δxi)={\displaystyle J_{{\tilde {f}};\lambda _{0i},x_{0i}}(K,M;\lambda _{0i},x_{0i})(\delta \lambda _{i},\delta x_{i})=}(yyn+1){\displaystyle {\binom {y}{y_{n+1}}}}may be derived with computations similar to the derivation of the expansion.
δλi=−x0iTy,and(λ0i−λ0j)x0jTMδxi=xjTy,j=1,…,n,j≠i;{\displaystyle \delta \lambda _{i}=-x_{0i}^{T}y,\;{\text{ and }}(\lambda _{0i}-\lambda _{0j})x_{0j}^{T}M\delta x_{i}=x_{j}^{T}y,j=1,\dots ,n,j\neq i\;;}orx0jTMδxi=xjTy/(λ0i−λ0j),and2x0iTMδxi=yn+1{\displaystyle {\text{ or }}x_{0j}^{T}M\delta x_{i}=x_{j}^{T}y/(\lambda _{0i}-\lambda _{0j}),{\text{ and }}\;2x_{0i}^{T}M\delta x_{i}=y_{n+1}}
Whenλi{\displaystyle \lambda _{i}}is a simple eigenvalue, as the eigenvectorsx0j,j=1,…,n{\displaystyle x_{0j},j=1,\dots ,n}form an orthonormal basis, for any right-hand side, we have obtained one solution therefore, the Jacobian is invertible.
Theimplicit function theoremprovides a continuously differentiable function(K,M)↦(λi(K,M),xi(K,M)){\displaystyle (K,M)\mapsto (\lambda _{i}(K,M),x_{i}(K,M))}hence the expansion withlittle o notation:λi=λ0i+δλi+o(‖δK‖+‖δM‖){\displaystyle \lambda _{i}=\lambda _{0i}+\delta \lambda _{i}+o(\|\delta K\|+\|\delta M\|)}xi=x0i+δxi+o(‖δK‖+‖δM‖){\displaystyle x_{i}=x_{0i}+\delta x_{i}+o(\|\delta K\|+\|\delta M\|)}.
with
δλi=x0iTδKx0i−λ0ix0iTδMx0i;{\displaystyle \delta \lambda _{i}=\mathbf {x} _{0i}^{T}\delta \mathbf {K} \mathbf {x} _{0i}-\lambda _{0i}\mathbf {x} _{0i}^{T}\delta \mathbf {M} \mathrm {x} _{0i};}δxi=x0jTM0δxix0jwith{\displaystyle \delta x_{i}=\mathbf {x} _{0j}^{T}\mathbf {M} _{0}\delta \mathbf {x} _{i}\mathbf {x} _{0j}{\text{ with}}}x0jTM0δxi=−x0jTδKx0i+λ0ix0jTδMx0i(λ0j−λ0i),i=1,…n;j=1,…n;j≠i.{\displaystyle \mathbf {x} _{0j}^{T}\mathbf {M} _{0}\delta \mathbf {x} _{i}={\frac {-\mathbf {x} _{0j}^{T}\delta \mathbf {K} \mathbf {x} _{0i}+\lambda _{0i}\mathbf {x} _{0j}^{T}\delta \mathbf {M} \mathrm {x} _{0i}}{(\lambda _{0j}-\lambda _{0i})}},i=1,\dots n;j=1,\dots n;j\neq i.}This is the first order expansion of the perturbed eigenvalues and eigenvectors, which is what was to be proved.
This means it is possible to efficiently do asensitivity analysisonλias a function of changes in the entries of the matrices. (Recall that the matrices are symmetric and so changingKkℓwill also changeKℓk, hence the(2 −δkℓ)term.)
Similarly
A simple case isK=[2bb0]{\displaystyle K={\begin{bmatrix}2&b\\b&0\end{bmatrix}}}; its eigenvalues and eigenvectors can be computed by hand or with online tools such as[1](see introduction in WikipediaWIMS) or usingSageMath. The smallest eigenvalue isλ=1−b2+1{\displaystyle \lambda =1-{\sqrt {b^{2}+1}}}, and an explicit computation gives∂λ∂b=−bb2+1{\displaystyle {\frac {\partial \lambda }{\partial b}}={\frac {-b}{\sqrt {b^{2}+1}}}}; moreover, an associated eigenvector isx~0=[b,−(b2+1+1)]T{\displaystyle {\tilde {x}}_{0}=[b,-({\sqrt {b^{2}+1}}+1)]^{T}}; it is not a unit vector, sox01x02=x~01x~02/‖x~0‖2{\displaystyle x_{01}x_{02}={\tilde {x}}_{01}{\tilde {x}}_{02}/\|{\tilde {x}}_{0}\|^{2}}; we get‖x~0‖2=2b2+1(b2+1+1){\displaystyle \|{\tilde {x}}_{0}\|^{2}=2{\sqrt {b^{2}+1}}({\sqrt {b^{2}+1}}+1)}andx~01x~02=−b(b2+1+1){\displaystyle {\tilde {x}}_{01}{\tilde {x}}_{02}=-b({\sqrt {b^{2}+1}}+1)}; hencex01x02=−b2b2+1{\displaystyle x_{01}x_{02}=-{\frac {b}{2{\sqrt {b^{2}+1}}}}}; for this example, we have checked that∂λ∂b=2x01x02{\displaystyle {\frac {\partial \lambda }{\partial b}}=2x_{01}x_{02}}, orδλ=2x01x02δb{\displaystyle \delta \lambda =2x_{01}x_{02}\delta b}.
Note that in the above example we assumed that both the unperturbed and the perturbed systems involvedsymmetric matrices, which guaranteed the existence ofN{\displaystyle N}linearly independent eigenvectors. An eigenvalue problem involving non-symmetric matrices is not guaranteed to haveN{\displaystyle N}linearly independent eigenvectors, though a sufficient condition is thatK{\displaystyle \mathbf {K} }andM{\displaystyle \mathbf {M} }besimultaneously diagonalizable.
A technical report by Rellich[4]on perturbation of eigenvalue problems provides several examples. The elementary examples are in chapter 2. The report may be downloaded fromarchive.org. We reproduce an example in which the eigenvectors behave badly.
Consider the following matrixB(ϵ)=ϵ[cos(2/ϵ)sin(2/ϵ)sin(2/ϵ)−cos(2/ϵ)]{\displaystyle B(\epsilon )=\epsilon {\begin{bmatrix}\cos(2/\epsilon )&\sin(2/\epsilon )\\\sin(2/\epsilon )&-\cos(2/\epsilon )\end{bmatrix}}}andA(ϵ)=I−e−1/ϵ2B;{\displaystyle A(\epsilon )=I-e^{-1/\epsilon ^{2}}B;}A(0)=I.{\displaystyle A(0)=I.}Forϵ≠0{\displaystyle \epsilon \neq 0}, the matrixA(ϵ){\displaystyle A(\epsilon )}has eigenvectorsΦ1=[cos(1/ϵ),−sin(1/ϵ)]T;Φ2=[sin(1/ϵ),−cos(1/ϵ)]T{\displaystyle \Phi ^{1}=[\cos(1/\epsilon ),-\sin(1/\epsilon )]^{T};\Phi ^{2}=[\sin(1/\epsilon ),-\cos(1/\epsilon )]^{T}}belonging to eigenvaluesλ1=1−e−1/ϵ2,λ2=1+e−1/ϵ2{\displaystyle \lambda _{1}=1-e^{-1/\epsilon ^{2}},\lambda _{2}=1+e^{-1/\epsilon ^{2}}}.
Sinceλ1≠λ2{\displaystyle \lambda _{1}\neq \lambda _{2}}forϵ≠0{\displaystyle \epsilon \neq 0}ifuj(ϵ),j=1,2,{\displaystyle u^{j}(\epsilon ),j=1,2,}are any normalized eigenvectors belonging toλj(ϵ),j=1,2{\displaystyle \lambda _{j}(\epsilon ),j=1,2}respectively
thenuj=eαj(ϵ)Φj(ϵ){\displaystyle u^{j}=e^{\alpha _{j}(\epsilon )}\Phi ^{j}(\epsilon )}whereαj,j=1,2{\displaystyle \alpha _{j},j=1,2}are real forϵ≠0.{\displaystyle \epsilon \neq 0.}It is obviously impossible to defineα1(ϵ){\displaystyle \alpha _{1}(\epsilon )}, say, in such a way thatu1(ϵ){\displaystyle u^{1}(\epsilon )}tends to a limit asϵ→0,{\displaystyle \epsilon \rightarrow 0,}because|u1(ϵ)|=|cos(1/ϵ)|{\displaystyle |u^{1}(\epsilon )|=|\cos(1/\epsilon )|}has no limit asϵ→0.{\displaystyle \epsilon \rightarrow 0.}
Note in this example thatAjk(ϵ){\displaystyle A_{jk}(\epsilon )}is not only continuous but also has continuous derivatives of all orders.
Rellich draws the following important consequence.
"Since in general the individual eigenvectors do not depend continuously on the perturbation parameter even though the operatorA(ϵ){\displaystyle A(\epsilon )}does, it is necessary to work, not with an eigenvector, but rather with the space spanned by all the eigenvectors belonging to the same eigenvalue."
This example is less nasty than the previous one. Suppose[K0]{\displaystyle [K_{0}]}is the 2×2 identity matrix; any vector is then an eigenvector, andu0=[1,1]T/2{\displaystyle u_{0}=[1,1]^{T}/{\sqrt {2}}}is one possible eigenvector. But if one makes a small perturbation, such as
[K]=[K0]+[ϵ000]{\displaystyle [K]=[K_{0}]+{\begin{bmatrix}\epsilon &0\\0&0\end{bmatrix}}}
Then the eigenvectors arev1=[1,0]T{\displaystyle v_{1}=[1,0]^{T}}andv2=[0,1]T{\displaystyle v_{2}=[0,1]^{T}}; they are constant with respect toϵ{\displaystyle \epsilon }so that‖u0−v1‖{\displaystyle \|u_{0}-v_{1}\|}is constant and does not go to zero.
|
https://en.wikipedia.org/wiki/Eigenvalue_perturbation
|
Inmatrix theory, theFrobenius covariantsof asquare matrixAare special polynomials of it, namelyprojectionmatricesAiassociated with theeigenvalues and eigenvectorsofA.[1]: pp.403, 437–8They are named after the mathematicianFerdinand Frobenius.
Each covariant is aprojectionon theeigenspaceassociated with the eigenvalueλi.
Frobenius covariants are the coefficients ofSylvester's formula, which expresses afunction of a matrixf(A)as a matrix polynomial, namely a linear combination
of that function's values on the eigenvalues ofA.
LetAbe adiagonalizable matrixwith eigenvaluesλ1, ...,λk.
The Frobenius covariantAi, fori= 1,...,k, is the matrix
It is essentially theLagrange polynomialwith matrix argument. If the eigenvalueλiis simple, then as an idempotent projection matrix to a one-dimensional subspace,Aihas a unittrace.
The Frobenius covariants of a matrixAcan be obtained from anyeigendecompositionA=SDS−1, whereSis non-singular andDis diagonal withDi,i=λi.
IfAhas no multiple eigenvalues, then letcibe theith right eigenvector ofA, that is, theith column ofS; and letribe theith left eigenvector ofA, namely theith row ofS−1. ThenAi=ciri.
IfAhas an eigenvalueλiappearing multiple times, thenAi= Σjcjrj, where the sum is over all rows and columns associated with the eigenvalueλi.[1]: p.521
Consider the two-by-two matrix:
This matrix has two eigenvalues, 5 and −2; hence(A− 5)(A+ 2) = 0.
The corresponding eigen decomposition is
Hence the Frobenius covariants, manifestly projections, are
with
NotetrA1= trA2= 1, as required.
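As a hands-on check, the covariants can be computed from an eigendecomposition as described above (A_i = c_i r_i). The matrix below is an assumed example, chosen only because it has the eigenvalues 5 and −2 quoted in the text:

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [4.0, 2.0]])            # assumed example with eigenvalues 5 and -2

lam, S = np.linalg.eig(A)             # A = S diag(lam) S^{-1}
Sinv = np.linalg.inv(S)

# Frobenius covariant A_i = c_i r_i (i-th column of S times i-th row of S^{-1})
covariants = [np.outer(S[:, i], Sinv[i, :]) for i in range(len(lam))]

for lam_i, A_i in zip(lam, covariants):
    print(lam_i, np.trace(A_i),           # trace 1 for a simple eigenvalue
          np.allclose(A_i @ A_i, A_i))    # idempotent: a projection

# Reconstruction: A = sum_i lam_i * A_i
print(np.allclose(sum(l * P for l, P in zip(lam, covariants)), A))
```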
|
https://en.wikipedia.org/wiki/Frobenius_covariant
|
Inlinear algebra, aHouseholder transformation(also known as aHouseholder reflectionorelementary reflector) is alinear transformationthat describes areflectionabout aplaneorhyperplanecontaining the origin. The Householder transformation was used in a 1958 paper byAlston Scott Householder.[1]
TheHouseholderoperator[2]may be defined over any finite-dimensionalinner product spaceV{\displaystyle V}withinner product⟨⋅,⋅⟩{\displaystyle \langle \cdot ,\cdot \rangle }andunit vectoru∈V{\displaystyle u\in V}as
It is also common to choose a non-unit vectorq∈V{\displaystyle q\in V}, and normalize it directly in the Householder operator's expression:[4]
Such an operator islinearandself-adjoint.
IfV=Cn{\displaystyle V=\mathbb {C} ^{n}}, note that the reflection hyperplane can be defined by itsnormal vector, aunit vectorv→∈V{\textstyle {\vec {v}}\in V}(a vector with length1{\textstyle 1}) that isorthogonalto the hyperplane. The reflection of apointx{\textstyle x}about this hyperplane is theHouseholdertransformation:
wherex→{\displaystyle {\vec {x}}}is the vector from the origin to the pointx{\displaystyle x}, andv→∗{\textstyle {\vec {v}}^{*}}is theconjugate transposeofv→{\textstyle {\vec {v}}}.
The matrix constructed from this transformation can be expressed in terms of anouter productasP=I−2v→v→∗{\textstyle P=I-2{\vec {v}}{\vec {v}}^{*}}, which is known as theHouseholder matrix, whereI{\textstyle I}is theidentity matrix.
The Householder matrix has the following properties:
As an example, consider the normalization of a vector of ones:
v→=12[11]{\displaystyle {\vec {v}}={\frac {1}{\sqrt {2}}}{\begin{bmatrix}1\\1\end{bmatrix}}}
Then the Householder matrix corresponding to this vector is
Pv=[1001]−2(12[11])(12[11]){\displaystyle P_{v}={\begin{bmatrix}1&0\\0&1\end{bmatrix}}-2({\frac {1}{\sqrt {2}}}{\begin{bmatrix}1\\1\end{bmatrix}})({\frac {1}{\sqrt {2}}}{\begin{bmatrix}1&1\end{bmatrix}})}
=[1001]−[11][11]{\displaystyle ={\begin{bmatrix}1&0\\0&1\end{bmatrix}}-{\begin{bmatrix}1\\1\end{bmatrix}}{\begin{bmatrix}1&1\end{bmatrix}}}
=[1001]−[1111]{\displaystyle ={\begin{bmatrix}1&0\\0&1\end{bmatrix}}-{\begin{bmatrix}1&1\\1&1\end{bmatrix}}}
=[0−1−10]{\displaystyle ={\begin{bmatrix}0&-1\\-1&0\end{bmatrix}}}
Note that if we have a vector representing a coordinate in the 2D plane
[xy]{\displaystyle {\begin{bmatrix}x\\y\end{bmatrix}}}
Then in this casePv{\displaystyle P_{v}}swaps and negates the x and y coordinates; in other words,
Pv[xy]=[−y−x]{\displaystyle P_{v}{\begin{bmatrix}x\\y\end{bmatrix}}={\begin{bmatrix}-y\\-x\end{bmatrix}}}
This corresponds to reflecting the vector across the liney=−x{\displaystyle y=-x}, to which our original vectorv{\displaystyle v}is normal.
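A short sketch reproducing this example numerically (the test point (3, 5) is an arbitrary assumption):

```python
import numpy as np

v = np.array([1.0, 1.0]) / np.sqrt(2)   # the normalized vector of ones
P = np.eye(2) - 2 * np.outer(v, v)      # Householder matrix I - 2 v v^T
print(P)                                # [[ 0., -1.], [-1.,  0.]]

p = np.array([3.0, 5.0])                # an arbitrary point (x, y)
print(P @ p)                            # [-5., -3.]: reflection across y = -x
```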
In geometric optics,specular reflectioncan be expressed in terms of the Householder matrix (seeSpecular reflection § Vector formulation).
Householder transformations are widely used innumerical linear algebra, for example, to annihilate the entries below the main diagonal of a matrix,[5]to performQR decompositionsand in the first step of theQR algorithm. They are also widely used for transforming to aHessenbergform. For symmetric orHermitianmatrices, the symmetry can be preserved, resulting intridiagonalization.[6]Because they involve only a rank-one update and make use of low-levelBLAS-1operations, they can be quite efficient.
Householder transformations can be used to calculate aQR decomposition. Consider a matrix already triangularized up to columni{\displaystyle i}; our goal is then to construct Householder matrices that act upon the trailing principal submatrices of the given matrix
[a11a12⋯a1n0a22⋯a1n⋮⋱⋮0⋯0x1=aii⋯ain0⋯0⋮⋮0⋯0xn=ani⋯ann]{\displaystyle {\begin{bmatrix}a_{11}&a_{12}&\cdots &&&a_{1n}\\0&a_{22}&\cdots &&&a_{1n}\\\vdots &&\ddots &&&\vdots \\0&\cdots &0&x_{1}=a_{ii}&\cdots &a_{in}\\0&\cdots &0&\vdots &&\vdots \\0&\cdots &0&x_{n}=a_{ni}&\cdots &a_{nn}\end{bmatrix}}}
via the matrix
[Ii−100Pv]{\displaystyle {\begin{bmatrix}I_{i-1}&0\\0&P_{v}\end{bmatrix}}}.
(note that we already established before that Householder transformations are unitary matrices, and since the multiplication of unitary matrices is itself a unitary matrix, this gives us the unitary matrix of the QR decomposition)
If we can find av→{\displaystyle {\vec {v}}}so that
Pvx→=e→1{\displaystyle P_{v}{\vec {x}}={\vec {e}}_{1}}
we could accomplish this. Thinking geometrically, we are looking for a plane so that the reflection about this plane happens to land directly on the basis vector. In other words,
for some constantα{\displaystyle \alpha }. However, for this to happen, we must have
v→∝x→−αe→1{\displaystyle {\vec {v}}\propto {\vec {x}}-\alpha {\vec {e}}_{1}}.
And sincev→{\displaystyle {\vec {v}}}is a unit vector, this means that we must have
Now if we apply equation (2) back into equation (1), we get
x→−αe→1=2⟨x→,x→−αe→1‖x→−αe→1‖2⟩x→−αe→1‖x→−αe→1‖2{\displaystyle {\vec {x}}-\alpha {\vec {e}}_{1}=2\left\langle {\vec {x}},{\frac {{\vec {x}}-\alpha {\vec {e}}_{1}}{\|{\vec {x}}-\alpha {\vec {e}}_{1}\|_{2}}}\right\rangle {\frac {{\vec {x}}-\alpha {\vec {e}}_{1}}{\|{\vec {x}}-\alpha {\vec {e}}_{1}\|_{2}}}}
Or, in other words, by comparing the scalars in front of the vectorx→−αe→1{\displaystyle {\vec {x}}-\alpha {\vec {e}}_{1}}we must have
‖x→−αe→1‖22=2⟨x→,x→−αe1⟩{\displaystyle \|{\vec {x}}-\alpha {\vec {e}}_{1}\|_{2}^{2}=2\langle {\vec {x}},{\vec {x}}-\alpha e_{1}\rangle }.
Or
2(‖x→‖22−αx1)=‖x→‖22−2αx1+α2{\displaystyle 2(\|{\vec {x}}\|_{2}^{2}-\alpha x_{1})=\|{\vec {x}}\|_{2}^{2}-2\alpha x_{1}+\alpha ^{2}}
Which means that we can solve forα{\displaystyle \alpha }as
α=±‖x→‖2{\displaystyle \alpha =\pm \|{\vec {x}}\|_{2}}
This completes the construction; however, in practice we want to avoidcatastrophic cancellationin equation (2). To do so, we choose the sign ofα{\displaystyle \alpha }as
α=−sign(Re(x1))‖x→‖2{\displaystyle \alpha =-sign(Re(x_{1}))\|{\vec {x}}\|_{2}}[7]
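A compact sketch of this construction for a real vector, with the sign of α chosen as above to avoid cancellation (the helper name is ours, not from a library):

```python
import numpy as np

def householder_vector(x):
    """Unit vector v and scalar alpha with (I - 2 v v^T) x = alpha * e1.
    The sign of alpha is chosen opposite to x[0] so that forming
    v = x - alpha*e1 involves no catastrophic cancellation (x nonzero)."""
    x = np.asarray(x, dtype=float)
    sign = 1.0 if x[0] >= 0 else -1.0
    alpha = -sign * np.linalg.norm(x)
    v = x.copy()
    v[0] -= alpha
    return v / np.linalg.norm(v), alpha

x = np.array([3.0, 4.0, 0.0])
v, alpha = householder_vector(x)
P = np.eye(3) - 2.0 * np.outer(v, v)
print(P @ x)    # approximately [-5, 0, 0] = alpha * e1
```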
This procedure is presented in Numerical Analysis by Burden and Faires, and works when the matrix is symmetric. In the non-symmetric case, it is still useful as a similar procedure can result in a Hessenberg matrix.
It uses a slightly alteredsgn{\displaystyle \operatorname {sgn} }function withsgn(0)=1{\displaystyle \operatorname {sgn} (0)=1}.[8]To form the Householder matrix in each step, we first need to determineα{\textstyle \alpha }andr{\textstyle r}, which are:
Fromα{\textstyle \alpha }andr{\textstyle r}, construct vectorv{\textstyle v}:
wherev1=0{\textstyle v_{1}=0},v2=a21−α2r{\textstyle v_{2}={\frac {a_{21}-\alpha }{2r}}}, and
Then compute:
Having foundP1{\textstyle P^{1}}and computedA(2){\textstyle A^{(2)}}the process is repeated fork=2,3,…,n−2{\textstyle k=2,3,\ldots ,n-2}as follows:
Continuing in this manner, the tridiagonal and symmetric matrix is formed.
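The following is a compact sketch of the overall idea (successive Householder similarity transforms), not the exact Burden–Faires bookkeeping; the 4×4 symmetric input is an assumed illustrative matrix:

```python
import numpy as np

def householder_tridiagonalize(A):
    """Reduce a real symmetric matrix to tridiagonal form by successive
    Householder similarity transforms (a sketch, not optimized)."""
    T = np.array(A, dtype=float)
    n = T.shape[0]
    for k in range(n - 2):
        x = T[k + 1:, k].copy()
        nx = np.linalg.norm(x)
        if nx == 0.0:
            continue
        alpha = -nx if x[0] >= 0 else nx       # sign chosen to avoid cancellation
        v = x.copy()
        v[0] -= alpha
        v /= np.linalg.norm(v)
        P = np.eye(n)
        P[k + 1:, k + 1:] -= 2.0 * np.outer(v, v)
        T = P @ T @ P                          # similarity: spectrum preserved
    return T

A = np.array([[ 4.0, 1.0, -2.0,  2.0],
              [ 1.0, 2.0,  0.0,  1.0],
              [-2.0, 0.0,  3.0, -2.0],
              [ 2.0, 1.0, -2.0, -1.0]])
T = householder_tridiagonalize(A)
print(np.round(T, 6))                                             # tridiagonal
print(np.allclose(np.linalg.eigvalsh(A), np.linalg.eigvalsh(T)))  # same eigenvalues
```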
In this example, also from Burden and Faires,[8]the given matrix is transformed to the similar tridiagonal matrix A3by using the Householder method.
Following those steps in the Householder method, we have:
The first Householder matrix:
We then useA2{\textstyle A_{2}}to form
As we can see, the final result is a tridiagonal symmetric matrix which is similar to the original one. The process is finished after two steps.
Because unitary matrices are useful in quantum computation and Householder transformations are unitary, they are very useful in quantum computing. One of the central algorithms where they appear is Grover's algorithm, in which theoracle functionis represented by what turns out to be a Householder transformation:
{Uω|x⟩=−|x⟩forx=ω, that is,f(x)=1,Uω|x⟩=|x⟩forx≠ω, that is,f(x)=0.{\displaystyle {\begin{cases}U_{\omega }|x\rangle =-|x\rangle &{\text{for }}x=\omega {\text{, that is, }}f(x)=1,\\U_{\omega }|x\rangle =|x\rangle &{\text{for }}x\neq \omega {\text{, that is, }}f(x)=0.\end{cases}}}
(here the|x⟩{\displaystyle |x\rangle }is part of thebra-ket notationand is analogous tox→{\displaystyle {\vec {x}}}which we were using previously)
This is done via an algorithm that iterates via the oracle functionUω{\displaystyle U_{\omega }}and another operatorUs{\displaystyle U_{s}}known as theGrover diffusion operatordefined by
|s⟩=1N∑x=0N−1|x⟩.{\displaystyle |s\rangle ={\frac {1}{\sqrt {N}}}\sum _{x=0}^{N-1}|x\rangle .}andUs=2|s⟩⟨s|−I{\displaystyle U_{s}=2\left|s\right\rangle \!\!\left\langle s\right|-I}.
The Householder transformation is a reflection about a hyperplane with unit normal vectorv{\textstyle v}, as stated earlier. AnN{\textstyle N}-by-N{\textstyle N}unitary transformationU{\textstyle U}satisfiesUU∗=I{\textstyle UU^{*}=I}. Taking the determinant (N{\textstyle N}-th power of the geometric mean) and trace (proportional to arithmetic mean) of a unitary matrix reveals that its eigenvaluesλi{\textstyle \lambda _{i}}have unit modulus. This can be seen directly and swiftly:
Since arithmetic and geometric means are equal if the variables are constant (seeinequality of arithmetic and geometric means), we establish the claim of unit modulus.
For the case of real valued unitary matrices we obtainorthogonal matrices,UUT=I{\textstyle UU^{\textsf {T}}=I}. It follows rather readily (seeOrthogonal matrix) that any orthogonal matrix can bedecomposedinto a product of 2-by-2 rotations, calledGivens rotations, and Householder reflections. This is appealing intuitively since multiplication of a vector by an orthogonal matrix preserves the length of that vector, and rotations and reflections exhaust the set of (real valued) geometric operations that render invariant a vector's length.
The Householder transformation was shown to have a one-to-one relationship with the canonical coset decomposition of unitary matrices defined in group theory, which can be used to parametrize unitary operators in a very efficient manner.[9]
Finally we note that a single Householder transform, unlike a solitary Givens transform, can act on all columns of a matrix, and as such exhibits the lowest computational cost for QR decomposition and tridiagonalization. The penalty for this "computational optimality" is, of course, that Householder operations cannot be as deeply or efficiently parallelized. As such Householder is preferred for dense matrices on sequential machines, whilst Givens is preferred on sparse matrices, and/or parallel machines.
|
https://en.wikipedia.org/wiki/Householder_transformation
|
This article lists some important classes ofmatricesused inmathematics,scienceandengineering. Amatrix(plural matrices, or less commonly matrixes) is a rectangulararrayofnumberscalledentries. Matrices have a long history of both study and application, leading to diverse ways of classifying matrices. A first group is matrices satisfying concrete conditions of the entries, including constant matrices. Important examples include theidentity matrixgiven by
and thezero matrixof dimensionm×n{\displaystyle m\times n}. For example:
Further ways of classifying matrices are according to theireigenvalues, or by imposing conditions on theproductof the matrix with other matrices. Finally, many domains, both in mathematics and other sciences includingphysicsandchemistry, have particular matrices that are applied chiefly in these areas.
The list below comprises matrices whose elements are constant for any given dimension (size) of matrix. The matrix entries will be denotedaij. The table below uses theKronecker deltaδijfor two integersiandj, which is 1 ifi=jand 0 otherwise.
The following lists matrices whose entries are subject to certain conditions. Many of them apply tosquare matricesonly, that is matrices with the same number of columns and rows. Themain diagonalof a square matrix is thediagonaljoining the upper left corner and the lower right one or equivalently the entriesai,i. The other diagonal is called anti-diagonal (or counter-diagonal).
A number of matrix-related notions is about properties of products or inverses of the given matrix. Thematrix productof am-by-nmatrixAand an-by-kmatrixBis them-by-kmatrixCgiven by
This matrix product is denotedAB. Unlike the product of numbers, matrix products are notcommutative, that is to sayABneed not be equal toBA.[2]A number of notions are concerned with the failure of this commutativity. Aninverseof square matrixAis a matrixB(necessarily of the same dimension asA) such thatAB=I. Equivalently,BA=I. An inverse need not exist. If it exists,Bis uniquely determined, and is also calledtheinverse ofA, denotedA−1.
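A small illustration of the product, its non-commutativity, and the inverse, using NumPy (the matrices are arbitrary assumed examples):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

print(A @ B)             # [[2, 1], [4, 3]]
print(B @ A)             # [[3, 4], [1, 2]]  -- AB != BA in general
print(np.linalg.inv(A))  # the unique inverse A^{-1}, since det(A) = -2 != 0
```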
The following matrices find their main application instatisticsandprobability theory.
The following matrices find their main application ingraphandnetwork theory.
|
https://en.wikipedia.org/wiki/List_of_matrices
|
Inmatrix theory,Sylvester's formulaorSylvester's matrix theorem(named afterJ. J. Sylvester) orLagrange−Sylvester interpolationexpresses an analyticfunctionf(A)of amatrixAas a polynomial inA, in terms of theeigenvalues and eigenvectorsofA.[1][2]It states that[3]
where theλiare the eigenvalues ofA, and the matrices
are the correspondingFrobenius covariantsofA, which are (projection) matrixLagrange polynomialsofA.
Sylvester's formula applies for anydiagonalizable matrixAwithkdistinct eigenvalues,λ1, ...,λk, and any functionfdefined on some subset of thecomplex numberssuch thatf(A)is well defined. The last condition means that every eigenvalueλiis in the domain off, and that every eigenvalueλiwith multiplicitymi> 1 is in the interior of the domain, withfbeing (mi- 1) times differentiable atλi.[1]: Def.6.4
Consider the two-by-two matrix:
This matrix has two eigenvalues, 5 and −2. Its Frobenius covariants are
Sylvester's formula then amounts to
For instance, iffis defined byf(x) =x−1, then Sylvester's formula expresses the matrix inversef(A) =A−1as
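A quick numerical check of this use of Sylvester's formula; the matrix below is an assumed example whose eigenvalues are 5 and −2, matching the ones quoted above:

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [4.0, 2.0]])            # assumed example with eigenvalues 5 and -2
lam, S = np.linalg.eig(A)
Sinv = np.linalg.inv(S)
covariants = [np.outer(S[:, i], Sinv[i, :]) for i in range(len(lam))]

# Sylvester's formula: f(A) = sum_i f(lam_i) * A_i, here with f(x) = 1/x.
A_inv = sum((1.0 / l) * P for l, P in zip(lam, covariants))
print(np.allclose(A_inv, np.linalg.inv(A)))   # True
```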
Sylvester's formula is only valid fordiagonalizable matrices; an extension due toArthur Buchheim, based onHermite interpolating polynomials, covers the general case:[4]
whereϕi(t):=f(t)/∏j≠i(t−λj)nj{\displaystyle \phi _{i}(t):=f(t)/\prod _{j\neq i}\left(t-\lambda _{j}\right)^{n_{j}}}.
A concise form is further given byHans Schwerdtfeger,[5]
whereAiare the correspondingFrobenius covariantsofA
If a matrixAis bothHermitianandunitary, then it can only have eigenvalues of±1{\displaystyle \pm 1}, and thereforeA=A+−A−{\displaystyle A=A_{+}-A_{-}}, whereA+{\displaystyle A_{+}}is the projector onto the subspace with eigenvalue +1, andA−{\displaystyle A_{-}}is the projector onto the subspace with eigenvalue−1{\displaystyle -1}; By the completeness of the eigenbasis,A++A−=I{\displaystyle A_{+}+A_{-}=I}. Therefore, for any analytic functionf,
In particular,eiθA=(cosθ)I+(isinθ)A{\displaystyle e^{i\theta A}=(\cos \theta )I+(i\sin \theta )A}andA=eiπ2(I−A)=e−iπ2(I−A){\displaystyle A=e^{i{\frac {\pi }{2}}(I-A)}=e^{-i{\frac {\pi }{2}}(I-A)}}.
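For a concrete check, the Pauli X matrix is a familiar Hermitian unitary matrix (an assumed example; any Hermitian unitary matrix works):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # Pauli X: Hermitian, unitary, eigenvalues +1 and -1
theta = 0.7
lhs = expm(1j * theta * A)
rhs = np.cos(theta) * np.eye(2) + 1j * np.sin(theta) * A
print(np.allclose(lhs, rhs))  # True: e^{i*theta*A} = cos(theta) I + i sin(theta) A
```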
|
https://en.wikipedia.org/wiki/Sylvester%27s_formula
|
Inlinear algebra, anorthogonal matrix, ororthonormal matrix, is a realsquare matrixwhose columns and rows areorthonormalvectors.
One way to express this isQTQ=QQT=I,{\displaystyle Q^{\mathrm {T} }Q=QQ^{\mathrm {T} }=I,}whereQTis thetransposeofQandIis theidentity matrix.
This leads to the equivalent characterization: a matrixQis orthogonal if its transpose is equal to itsinverse:QT=Q−1,{\displaystyle Q^{\mathrm {T} }=Q^{-1},}whereQ−1is the inverse ofQ.
An orthogonal matrixQis necessarily invertible (with inverseQ−1=QT),unitary(Q−1=Q∗), whereQ∗is theHermitian adjoint(conjugate transpose) ofQ, and thereforenormal(Q∗Q=QQ∗) over thereal numbers. Thedeterminantof any orthogonal matrix is either +1 or −1. As alinear transformation, an orthogonal matrix preserves theinner productof vectors, and therefore acts as anisometryofEuclidean space, such as arotation,reflectionorrotoreflection. In other words, it is aunitary transformation.
The set ofn×northogonal matrices, under multiplication, forms thegroupO(n), known as theorthogonal group. ThesubgroupSO(n)consisting of orthogonal matrices with determinant +1 is called thespecial orthogonal group, and each of its elements is a special orthogonal matrix. As a linear transformation, every special orthogonal matrix acts as a rotation.
An orthogonal matrix is the real specialization of a unitary matrix, and thus always anormal matrix. Although we consider only real matrices here, the definition can be used for matrices with entries from anyfield. However, orthogonal matrices arise naturally fromdot products, and for matrices of complex numbers that leads instead to the unitary requirement. Orthogonal matrices preserve the dot product,[1]so, for vectorsuandvin ann-dimensional realEuclidean spaceu⋅v=(Qu)⋅(Qv){\displaystyle {\mathbf {u} }\cdot {\mathbf {v} }=\left(Q{\mathbf {u} }\right)\cdot \left(Q{\mathbf {v} }\right)}whereQis an orthogonal matrix. To see the inner product connection, consider a vectorvin ann-dimensional realEuclidean space. Written with respect to an orthonormal basis, the squared length ofvisvTv. If a linear transformation, in matrix formQv, preserves vector lengths, thenvTv=(Qv)T(Qv)=vTQTQv.{\displaystyle {\mathbf {v} }^{\mathrm {T} }{\mathbf {v} }=(Q{\mathbf {v} })^{\mathrm {T} }(Q{\mathbf {v} })={\mathbf {v} }^{\mathrm {T} }Q^{\mathrm {T} }Q{\mathbf {v} }.}
Thusfinite-dimensionallinear isometries—rotations, reflections, and their combinations—produce orthogonal matrices. The converse is also true: orthogonal matrices imply orthogonal transformations. However, linear algebra includes orthogonal transformations between spaces which may be neither finite-dimensional nor of the same dimension, and these have no orthogonal matrix equivalent.
Orthogonal matrices are important for a number of reasons, both theoretical and practical. Then×northogonal matrices form agroupunder matrix multiplication, theorthogonal groupdenoted byO(n), which—with its subgroups—is widely used in mathematics and the physical sciences. For example, thepoint groupof a molecule is a subgroup of O(3). Because floating point versions of orthogonal matrices have advantageous properties, they are key to many algorithms in numerical linear algebra, such asQRdecomposition. As another example, with appropriate normalization thediscrete cosine transform(used inMP3compression) is represented by an orthogonal matrix.
Below are a few examples of small orthogonal matrices and possible interpretations.
The simplest orthogonal matrices are the1 × 1matrices [1] and [−1], which we can interpret as the identity and a reflection of the real line across the origin.
The2 × 2matrices have the form[ptqu],{\displaystyle {\begin{bmatrix}p&t\\q&u\end{bmatrix}},}which orthogonality demands satisfy the three equations1=p2+t2,1=q2+u2,0=pq+tu.{\displaystyle {\begin{aligned}1&=p^{2}+t^{2},\\1&=q^{2}+u^{2},\\0&=pq+tu.\end{aligned}}}
In consideration of the first equation, without loss of generality letp= cosθ,q= sinθ; then eithert= −q,u=port=q,u= −p. We can interpret the first case as a rotation byθ(whereθ= 0is the identity), and the second as a reflection across a line at an angle ofθ/2.
[cosθ−sinθsinθcosθ](rotation),[cosθsinθsinθ−cosθ](reflection){\displaystyle {\begin{bmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \\\end{bmatrix}}{\text{ (rotation), }}\qquad {\begin{bmatrix}\cos \theta &\sin \theta \\\sin \theta &-\cos \theta \\\end{bmatrix}}{\text{ (reflection)}}}
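A short sketch constructing both families and checking orthogonality and the determinant (the angle 0.3 is an arbitrary assumption):

```python
import numpy as np

def rotation(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def reflection(theta):
    return np.array([[np.cos(theta),  np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

for name, Q in [("rotation", rotation(0.3)), ("reflection", reflection(0.3))]:
    print(name,
          np.allclose(Q.T @ Q, np.eye(2)),    # orthogonal: Q^T Q = I
          round(np.linalg.det(Q)))            # +1 for rotation, -1 for reflection
```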
The special case of the reflection matrix withθ= 90°generates a reflection about the line at 45° given byy=xand therefore exchangesxandy; it is apermutation matrix, with a single 1 in each column and row (and otherwise 0):[0110].{\displaystyle {\begin{bmatrix}0&1\\1&0\end{bmatrix}}.}
The identity is also a permutation matrix.
A reflection isits own inverse, which implies that a reflection matrix issymmetric(equal to its transpose) as well as orthogonal. The product of two rotation matrices is arotation matrix, and the product of two reflection matrices is also a rotation matrix.
Regardless of the dimension, it is always possible to classify orthogonal matrices as purely rotational or not, but for3 × 3matrices and larger the non-rotational matrices can be more complicated than reflections. For example,[−1000−1000−1]and[0−1010000−1]{\displaystyle {\begin{bmatrix}-1&0&0\\0&-1&0\\0&0&-1\end{bmatrix}}{\text{ and }}{\begin{bmatrix}0&-1&0\\1&0&0\\0&0&-1\end{bmatrix}}}
represent aninversionthrough the origin and arotoinversion, respectively, about thez-axis.
Rotations become more complicated in higher dimensions; they can no longer be completely characterized by one angle, and may affect more than one planar subspace. It is common to describe a3 × 3rotation matrix in terms of anaxis and angle, but this only works in three dimensions. Above three dimensions two or more angles are needed, each associated with aplane of rotation.
However, we have elementary building blocks for permutations, reflections, and rotations that apply in general.
The most elementary permutation is a transposition, obtained from the identity matrix by exchanging two rows. Anyn×npermutation matrix can be constructed as a product of no more thann− 1transpositions.
AHouseholder reflectionis constructed from a non-null vectorvasQ=I−2vvTvTv.{\displaystyle Q=I-2{\frac {{\mathbf {v} }{\mathbf {v} }^{\mathrm {T} }}{{\mathbf {v} }^{\mathrm {T} }{\mathbf {v} }}}.}
Here the numerator is a symmetric matrix while the denominator is a number, the squared magnitude ofv. This is a reflection in the hyperplane perpendicular tov(negating any vector component parallel tov). Ifvis a unit vector, thenQ=I− 2vvTsuffices. A Householder reflection is typically used to simultaneously zero the lower part of a column. Any orthogonal matrix of sizen×ncan be constructed as a product of at mostnsuch reflections.
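A minimal sketch of this construction, assuming NumPy; the example vector and the usual sign choice for v (to avoid cancellation) are illustrative, not part of the article:

import numpy as np

def householder(v):
    # Q = I - 2 v v^T / (v^T v), the reflection in the hyperplane perpendicular to v
    v = np.asarray(v, dtype=float).reshape(-1, 1)
    return np.eye(len(v)) - 2.0 * (v @ v.T) / (v.T @ v)

# Reflect a column x onto a multiple of e1, zeroing its lower entries.
x = np.array([3.0, 1.0, 2.0])
v = x.copy()
v[0] += np.sign(x[0]) * np.linalg.norm(x)    # standard sign choice avoiding cancellation
Q = householder(v)

assert np.allclose(Q @ Q.T, np.eye(3))       # orthogonal
assert np.allclose(Q, Q.T)                   # symmetric (its own inverse)
print(Q @ x)                                 # approximately [-norm(x), 0, 0]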
AGivens rotationacts on a two-dimensional (planar) subspace spanned by two coordinate axes, rotating by a chosen angle. It is typically used to zero a single subdiagonal entry. Any rotation matrix of sizen×ncan be constructed as a product of at mostn(n− 1)/2such rotations. In the case of3 × 3matrices, three such rotations suffice; and by fixing the sequence we can thus describe all3 × 3rotation matrices (though not uniquely) in terms of the three angles used, often calledEuler angles.
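The following sketch (again assuming NumPy; the matrix and index choices are illustrative) applies a Givens rotation to zero a single subdiagonal entry, touching only the two rows involved:

import numpy as np

def apply_givens(A, i, j):
    # Rotate rows i and j of A in the plane spanned by coordinate axes i and j
    # so that A[j, i] becomes zero; only those two rows change.
    a, b = A[i, i], A[j, i]
    r = np.hypot(a, b)
    c, s = (1.0, 0.0) if r == 0 else (a / r, b / r)
    Ai, Aj = A[i, :].copy(), A[j, :].copy()
    A[i, :] = c * Ai + s * Aj
    A[j, :] = -s * Ai + c * Aj

A = np.array([[6., 5., 0.],
              [5., 1., 4.],
              [0., 4., 3.]])
apply_givens(A, 0, 1)        # zero the subdiagonal entry A[1, 0]
print(np.round(A, 3))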
AJacobi rotationhas the same form as a Givens rotation, but is used to zero both off-diagonal entries of a2 × 2symmetric submatrix.
A real square matrix is orthogonalif and only ifits columns form anorthonormal basisof theEuclidean spaceRnwith the ordinary Euclideandot product, which is the case if and only if its rows form an orthonormal basis ofRn. It might be tempting to suppose a matrix with orthogonal (not orthonormal) columns would be called an orthogonal matrix, but such matrices have no special interest and no special name; they only satisfyMTM=D, withDadiagonal matrix.
Thedeterminantof any orthogonal matrix is +1 or −1. This follows from basic facts about determinants, as follows:1=det(I)=det(QTQ)=det(QT)det(Q)=(det(Q))2.{\displaystyle 1=\det(I)=\det \left(Q^{\mathrm {T} }Q\right)=\det \left(Q^{\mathrm {T} }\right)\det(Q)={\bigl (}\det(Q){\bigr )}^{2}.}
The converse is not true; having a determinant of ±1 is no guarantee of orthogonality, even with orthogonal columns, as shown by the following counterexample.[20012]{\displaystyle {\begin{bmatrix}2&0\\0&{\frac {1}{2}}\end{bmatrix}}}
With permutation matrices the determinant matches thesignature, being +1 or −1 as the parity of the permutation is even or odd, for the determinant is an alternating function of the rows.
Stronger than the determinant restriction is the fact that an orthogonal matrix can always bediagonalizedover thecomplex numbersto exhibit a full set ofeigenvalues, all of which must have (complex)modulus1.
The inverse of every orthogonal matrix is again orthogonal, as is the matrix product of two orthogonal matrices. In fact, the set of alln×northogonal matrices satisfies all the axioms of agroup. It is acompactLie groupof dimensionn(n− 1)/2, called theorthogonal groupand denoted byO(n).
The orthogonal matrices whose determinant is +1 form apath-connectednormal subgroupofO(n)ofindex2, thespecial orthogonal groupSO(n)of rotations. Thequotient groupO(n)/SO(n)is isomorphic toO(1), with the projection map choosing [+1] or [−1] according to the determinant. Orthogonal matrices with determinant −1 do not include the identity, and so do not form a subgroup but only acoset; it is also (separately) connected. Thus each orthogonal group falls into two pieces; and because the projection mapsplits,O(n)is asemidirect productofSO(n)byO(1). In practical terms, a comparable statement is that any orthogonal matrix can be produced by taking a rotation matrix and possibly negating one of its columns, as we saw with2 × 2matrices. Ifnis odd, then the semidirect product is in fact adirect product, and any orthogonal matrix can be produced by taking a rotation matrix and possibly negating all of its columns. This follows from the property of determinants that negating a column negates the determinant, and thus negating an odd (but not even) number of columns negates the determinant.
Now consider(n+ 1) × (n+ 1)orthogonal matrices with bottom right entry equal to 1. The remainder of the last column (and last row) must be zeros, and the product of any two such matrices has the same form. The rest of the matrix is ann×northogonal matrix; thusO(n)is a subgroup ofO(n+ 1)(and of all higher groups).
[0O(n)⋮00⋯01]{\displaystyle {\begin{bmatrix}&&&0\\&\mathrm {O} (n)&&\vdots \\&&&0\\0&\cdots &0&1\end{bmatrix}}}
Since an elementary reflection in the form of aHouseholder matrixcan reduce any orthogonal matrix to this constrained form, a series of such reflections can bring any orthogonal matrix to the identity; thus an orthogonal group is areflection group. The last column can be fixed to any unit vector, and each choice gives a different copy ofO(n)inO(n+ 1); in this wayO(n+ 1)is abundleover the unit sphereSnwith fiberO(n).
Similarly,SO(n)is a subgroup ofSO(n+ 1); and any special orthogonal matrix can be generated byGivens plane rotationsusing an analogous procedure. The bundle structure persists:SO(n) ↪ SO(n+ 1) →Sn. A single rotation can produce a zero in the first row of the last column, and series ofn− 1rotations will zero all but the last row of the last column of ann×nrotation matrix. Since the planes are fixed, each rotation has only one degree of freedom, its angle. By induction,SO(n)therefore has(n−1)+(n−2)+⋯+1=n(n−1)2{\displaystyle (n-1)+(n-2)+\cdots +1={\frac {n(n-1)}{2}}}degrees of freedom, and so doesO(n).
Permutation matrices are simpler still; they form, not a Lie group, but only a finite group, the ordern!symmetric groupSn. By the same kind of argument,Snis a subgroup ofSn+ 1. The even permutations produce the subgroup of permutation matrices of determinant +1, the ordern!/2alternating group.
More broadly, the effect of any orthogonal matrix separates into independent actions on orthogonal two-dimensional subspaces. That is, ifQis special orthogonal then one can always find an orthogonal matrixP, a (rotational)change of basis, that bringsQinto block diagonal form:
PTQP=[R1⋱Rk](neven),PTQP=[R1⋱Rk1](nodd).{\displaystyle P^{\mathrm {T} }QP={\begin{bmatrix}R_{1}&&\\&\ddots &\\&&R_{k}\end{bmatrix}}\ (n{\text{ even}}),\ P^{\mathrm {T} }QP={\begin{bmatrix}R_{1}&&&\\&\ddots &&\\&&R_{k}&\\&&&1\end{bmatrix}}\ (n{\text{ odd}}).}
where the matricesR1, ...,Rkare2 × 2rotation matrices, and with the remaining entries zero. Exceptionally, a rotation block may be diagonal,±I. Thus, negating one column if necessary, and noting that a2 × 2reflection diagonalizes to a +1 and −1, any orthogonal matrix can be brought to the formPTQP=[R1⋱Rk00±1⋱±1],{\displaystyle P^{\mathrm {T} }QP={\begin{bmatrix}{\begin{matrix}R_{1}&&\\&\ddots &\\&&R_{k}\end{matrix}}&0\\0&{\begin{matrix}\pm 1&&\\&\ddots &\\&&\pm 1\end{matrix}}\\\end{bmatrix}},}
The matricesR1, ...,Rkgive conjugate pairs of eigenvalues lying on the unit circle in thecomplex plane; so this decomposition confirms that alleigenvalueshaveabsolute value1. Ifnis odd, there is at least one real eigenvalue, +1 or −1; for a3 × 3rotation, the eigenvector associated with +1 is the rotation axis.
Suppose the entries ofQare differentiable functions oft, and thatt= 0givesQ=I. Differentiating the orthogonality conditionQTQ=I{\displaystyle Q^{\mathrm {T} }Q=I}yieldsQ˙TQ+QTQ˙=0{\displaystyle {\dot {Q}}^{\mathrm {T} }Q+Q^{\mathrm {T} }{\dot {Q}}=0}
Evaluation att= 0(Q=I) then impliesQ˙T=−Q˙.{\displaystyle {\dot {Q}}^{\mathrm {T} }=-{\dot {Q}}.}
In Lie group terms, this means that theLie algebraof an orthogonal matrix group consists ofskew-symmetric matrices. Going the other direction, thematrix exponentialof any skew-symmetric matrix is an orthogonal matrix (in fact, special orthogonal).
For example, the three-dimensional object physics callsangular velocityis a differential rotation, thus a vector in the Lie algebraso(3){\displaystyle {\mathfrak {so}}(3)}tangent toSO(3). Givenω= (xθ,yθ,zθ), withv= (x,y,z)being a unit vector, the correct skew-symmetric matrix form ofωisΩ=[0−zθyθzθ0−xθ−yθxθ0].{\displaystyle \Omega ={\begin{bmatrix}0&-z\theta &y\theta \\z\theta &0&-x\theta \\-y\theta &x\theta &0\end{bmatrix}}.}
The exponential of this is the orthogonal matrix for rotation around axisvby angleθ; settingc= cosθ/2,s= sinθ/2,exp(Ω)=[1−2s2+2x2s22xys2−2zsc2xzs2+2ysc2xys2+2zsc1−2s2+2y2s22yzs2−2xsc2xzs2−2ysc2yzs2+2xsc1−2s2+2z2s2].{\displaystyle \exp(\Omega )={\begin{bmatrix}1-2s^{2}+2x^{2}s^{2}&2xys^{2}-2zsc&2xzs^{2}+2ysc\\2xys^{2}+2zsc&1-2s^{2}+2y^{2}s^{2}&2yzs^{2}-2xsc\\2xzs^{2}-2ysc&2yzs^{2}+2xsc&1-2s^{2}+2z^{2}s^{2}\end{bmatrix}}.}
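As a check of this correspondence, assuming SciPy's expm is available, one can exponentiate the skew-symmetric matrix Ω and verify that the result is a special orthogonal matrix that fixes the axis v and rotates by the angle θ (the axis and angle below are arbitrary examples):

import numpy as np
from scipy.linalg import expm

v = np.array([1.0, 2.0, 2.0]) / 3.0          # unit axis vector (x, y, z)
x, y, z = v
theta = 0.9

Omega = theta * np.array([[0., -z,  y],
                          [ z,  0., -x],
                          [-y,  x,  0.]])    # skew-symmetric: Omega.T == -Omega

R = expm(Omega)                               # matrix exponential
assert np.allclose(R.T @ R, np.eye(3))        # orthogonal
assert np.allclose(np.linalg.det(R), 1.0)     # special orthogonal
assert np.allclose(R @ v, v)                  # the rotation axis is fixed
assert np.isclose(np.trace(R), 1 + 2*np.cos(theta))  # rotation angle is theta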
Numerical analysistakes advantage of many of the properties of orthogonal matrices for numerical linear algebra, and they arise naturally. For example, it is often desirable to compute an orthonormal basis for a space, or an orthogonal change of bases; both take the form of orthogonal matrices. Having determinant ±1 and all eigenvalues of magnitude 1 is of great benefit fornumeric stability. One implication is that thecondition numberis 1 (which is the minimum), so errors are not magnified when multiplying with an orthogonal matrix. Many algorithms use orthogonal matrices like Householder reflections andGivens rotationsfor this reason. It is also helpful that, not only is an orthogonal matrix invertible, but its inverse is available essentially free, by exchanging indices.
Permutations are essential to the success of many algorithms, including the workhorseGaussian eliminationwithpartial pivoting(where permutations do the pivoting). However, they rarely appear explicitly as matrices; their special form allows more efficient representation, such as a list ofnindices.
Likewise, algorithms using Householder and Givens matrices typically use specialized methods of multiplication and storage. For example, a Givens rotation affects only two rows of a matrix it multiplies, changing a fullmultiplicationof ordern3to a much more efficient ordern. When uses of these reflections and rotations introduce zeros in a matrix, the space vacated is enough to store sufficient data to reproduce the transform, and to do so robustly. (FollowingStewart (1976), we donotstore a rotation angle, which is both expensive and badly behaved.)
A number of importantmatrix decompositions(Golub & Van Loan 1996) involve orthogonal matrices, including especially:
Consider anoverdetermined system of linear equations, as might occur with repeated measurements of a physical phenomenon to compensate for experimental errors. WriteAx=b, whereAism×n,m>n.
AQRdecomposition reducesAto upper triangularR. For example, ifAis5 × 3thenRhas the formR=[⋅⋅⋅0⋅⋅00⋅000000].{\displaystyle R={\begin{bmatrix}\cdot &\cdot &\cdot \\0&\cdot &\cdot \\0&0&\cdot \\0&0&0\\0&0&0\end{bmatrix}}.}
Thelinear least squaresproblem is to find thexthat minimizes‖Ax−b‖, which is equivalent to projectingbto the subspace spanned by the columns ofA. Assuming the columns ofA(and henceR) are independent, the projection solution is found fromATAx=ATb. NowATAis square (n×n) and invertible, and also equal toRTR. But the lower rows of zeros inRare superfluous in the product, which is thus already in lower-triangular upper-triangular factored form, as inGaussian elimination(Cholesky decomposition). Here orthogonality is important not only for reducingATA= (RTQT)QRtoRTR, but also for allowing solution without magnifying numerical problems.
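A minimal sketch of this procedure with NumPy (random data standing in for the repeated measurements): the reduced QR factorization is computed and Rx = QTb is solved by back-substitution, which agrees with the normal-equations minimizer:

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))          # overdetermined: m = 5 > n = 3
b = rng.normal(size=5)

Q, R = np.linalg.qr(A)               # reduced QR: Q is 5x3 with orthonormal columns, R is 3x3
x = np.linalg.solve(R, Q.T @ b)      # solve R x = Q^T b

# Same minimizer as the normal equations A^T A x = A^T b, but without forming A^T A.
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])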
In the case of a linear system which is underdetermined, or an otherwise non-invertible matrix, singular value decomposition (SVD) is equally useful. WithAfactored asUΣVT, a satisfactory solution uses the Moore-Penrosepseudoinverse,VΣ+UT, whereΣ+merely replaces each non-zero diagonal entry with its reciprocal. SetxtoVΣ+UTb.
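A corresponding sketch for the underdetermined case, assuming NumPy and a full-row-rank A (otherwise singular values below a tolerance would have to be dropped before taking reciprocals):

import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 5))                  # underdetermined: more unknowns than equations
b = rng.normal(size=3)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
Sigma_plus = np.diag(1.0 / s)                # reciprocal of each non-zero singular value
x = Vt.T @ Sigma_plus @ U.T @ b              # x = V Sigma^+ U^T b

assert np.allclose(A @ x, b)                 # a solution of the system
assert np.allclose(x, np.linalg.pinv(A) @ b) # agrees with the Moore-Penrose pseudoinverse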
The case of a square invertible matrix also holds interest. Suppose, for example, thatAis a3 × 3rotation matrix which has been computed as the composition of numerous twists and turns. Floating point does not match the mathematical ideal of real numbers, soAhas gradually lost its true orthogonality. AGram–Schmidt processcouldorthogonalizethe columns, but it is not the most reliable, nor the most efficient, nor the most invariant method. Thepolar decompositionfactors a matrix into a pair, one of which is the uniqueclosestorthogonal matrix to the given matrix, or one of the closest if the given matrix is singular. (Closeness can be measured by anymatrix norminvariant under an orthogonal change of basis, such as the spectral norm or the Frobenius norm.) For a near-orthogonal matrix, rapid convergence to the orthogonal factor can be achieved by a "Newton's method" approach due toHigham (1986)(1990), repeatedly averaging the matrix with its inverse transpose.Dubrulle (1999)has published an accelerated method with a convenient convergence test.
For example, consider a non-orthogonal matrix for which the simple averaging algorithm takes seven steps[3175]→[1.81250.06253.43752.6875]→⋯→[0.8−0.60.60.8]{\displaystyle {\begin{bmatrix}3&1\\7&5\end{bmatrix}}\rightarrow {\begin{bmatrix}1.8125&0.0625\\3.4375&2.6875\end{bmatrix}}\rightarrow \cdots \rightarrow {\begin{bmatrix}0.8&-0.6\\0.6&0.8\end{bmatrix}}}and which acceleration trims to two steps (withγ= 0.353553, 0.565685).
[3175]→[1.41421−1.060661.060661.41421]→[0.8−0.60.60.8]{\displaystyle {\begin{bmatrix}3&1\\7&5\end{bmatrix}}\rightarrow {\begin{bmatrix}1.41421&-1.06066\\1.06066&1.41421\end{bmatrix}}\rightarrow {\begin{bmatrix}0.8&-0.6\\0.6&0.8\end{bmatrix}}}
Gram-Schmidt yields an inferior solution, shown by a Frobenius distance of 8.28659 instead of the minimum 8.12404.
[3175]→[0.393919−0.9191450.9191450.393919]{\displaystyle {\begin{bmatrix}3&1\\7&5\end{bmatrix}}\rightarrow {\begin{bmatrix}0.393919&-0.919145\\0.919145&0.393919\end{bmatrix}}}
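For comparison, the following sketch (assuming NumPy; the tolerance and iteration cap are arbitrary) runs the simple averaging iteration on the same matrix and checks that it reaches the orthogonal polar factor obtained from the SVD, as discussed in the next paragraph:

import numpy as np

def nearest_orthogonal_by_averaging(M, tol=1e-12, max_iter=100):
    # Repeatedly average the matrix with its inverse transpose; for a
    # non-singular M this converges to the nearest orthogonal matrix.
    Q = M.astype(float)
    for _ in range(max_iter):
        Q_next = 0.5 * (Q + np.linalg.inv(Q).T)
        if np.linalg.norm(Q_next - Q) < tol:
            return Q_next
        Q = Q_next
    return Q

M = np.array([[3., 1.],
              [7., 5.]])
Q = nearest_orthogonal_by_averaging(M)
print(np.round(Q, 6))                        # approximately [[0.8, -0.6], [0.6, 0.8]]

# The same orthogonal factor is obtained from the SVD by replacing singular values with ones.
U, _, Vt = np.linalg.svd(M)
assert np.allclose(Q, U @ Vt)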
Some numerical applications, such asMonte Carlo methodsand exploration of high-dimensional data spaces, require generation ofuniformly distributedrandom orthogonal matrices. In this context, "uniform" is defined in terms ofHaar measure, which essentially requires that the distribution not change if multiplied by any freely chosen orthogonal matrix. Orthogonalizing matrices withindependentuniformly distributed random entries does not result in uniformly distributed orthogonal matrices[citation needed], but theQRdecompositionof independentnormally distributedrandom entries does, as long as the diagonal ofRcontains only positive entries (Mezzadri 2006).Stewart (1980)replaced this with a more efficient idea thatDiaconis & Shahshahani (1987)later generalized as the "subgroup algorithm" (in which form it works just as well for permutations and rotations). To generate an(n+ 1) × (n+ 1)orthogonal matrix, take ann×none and a uniformly distributed unit vector of dimensionn+ 1. Construct a Householder reflection from the vector, then apply it to the smaller matrix (embedded in the larger size with a 1 at the bottom right corner).
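A minimal sketch of this recipe, assuming NumPy (the seed and size are arbitrary):

import numpy as np

def haar_orthogonal(n, rng):
    # QR decomposition of a matrix of independent standard normal entries,
    # with column signs fixed so that the diagonal of R is positive (Mezzadri 2006).
    Z = rng.normal(size=(n, n))
    Q, R = np.linalg.qr(Z)
    return Q * np.sign(np.diag(R))           # flip columns where diag(R) < 0

rng = np.random.default_rng(42)
Q = haar_orthogonal(4, rng)
assert np.allclose(Q.T @ Q, np.eye(4))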
The problem of finding the orthogonal matrixQnearest a given matrixMis related to theOrthogonal Procrustes problem. There are several different ways to get the unique solution, the simplest of which is taking thesingular value decompositionofMand replacing the singular values with ones. Another method expresses the orthogonal factor Q explicitly but requires the use of amatrix square root:[2]Q=M(MTM)−12{\displaystyle Q=M\left(M^{\mathrm {T} }M\right)^{-{\frac {1}{2}}}}
This may be combined with the Babylonian method for extracting the square root of a matrix to give a recurrence which converges to an orthogonal matrix quadratically:Qn+1=2M(Qn−1M+MTQn)−1{\displaystyle Q_{n+1}=2M\left(Q_{n}^{-1}M+M^{\mathrm {T} }Q_{n}\right)^{-1}}whereQ0=M.
These iterations are stable provided thecondition numberofMis less than three.[3]
Using a first-order approximation of the inverse and the same initialization results in the modified iteration:
Nn=QnTQn{\displaystyle N_{n}=Q_{n}^{\mathrm {T} }Q_{n}}Pn=12QnNn{\displaystyle P_{n}={\frac {1}{2}}Q_{n}N_{n}}Qn+1=2Qn+PnNn−3Pn{\displaystyle Q_{n+1}=2Q_{n}+P_{n}N_{n}-3P_{n}}
A subtle technical problem afflicts some uses of orthogonal matrices. Not only are the group components with determinant +1 and −1 notconnectedto each other, even the +1 component,SO(n), is notsimply connected(except for SO(1), which is trivial). Thus it is sometimes advantageous, or even necessary, to work with acovering groupof SO(n), thespin group,Spin(n). Likewise,O(n)has covering groups, thepin groups, Pin(n). Forn> 2,Spin(n)is simply connected and thus the universal covering group forSO(n). By far the most famous example of a spin group isSpin(3), which is nothing butSU(2), or the group of unitquaternions.
The Pin and Spin groups are found withinClifford algebras, which themselves can be built from orthogonal matrices.
IfQis not a square matrix, then the conditionsQTQ=IandQQT=Iare not equivalent. The conditionQTQ=Isays that the columns ofQare orthonormal. This can only happen ifQis anm×nmatrix withn≤m(due to linear dependence). Similarly,QQT=Isays that the rows ofQare orthonormal, which requiresn≥m.
There is no standard terminology for these matrices. They are variously called "semi-orthogonal matrices", "orthonormal matrices", "orthogonal matrices", and sometimes simply "matrices with orthonormal rows/columns".
For the casen≤m, matrices with orthonormal columns may be referred to asorthogonal k-framesand they are elements of theStiefel manifold.
|
https://en.wikipedia.org/wiki/Orthogonal_matrix
|
Source separation,blind signal separation(BSS) orblind source separation, is the separation of a set of sourcesignalsfrom a set of mixed signals, without the aid of information (or with very little information) about the source signals or the mixing process. It is most commonly applied indigital signal processingand involves the analysis of mixtures ofsignals; the objective is to recover the original component signals from a mixture signal. The classical example of a source separation problem is thecocktail party problem, where a number of people are talking simultaneously in a room (for example, at acocktail party), and a listener is trying to follow one of the discussions. The human brain can handle this sort of auditory source separation problem, but it is a difficult problem in digital signal processing.
This problem is in general highlyunderdetermined, but useful solutions can be derived under a surprising variety of conditions. Much of the early literature in this field focuses on the separation of temporal signals such as audio. However, blind signal separation is now routinely performed onmultidimensional data, such asimagesandtensors, which may involve no time dimension whatsoever.
Several approaches have been proposed for the solution of this problem but development is currently still very much in progress. Some of the more successful approaches areprincipal components analysisandindependent component analysis, which work well when there are no delays or echoes present; that is, the problem is simplified a great deal. The field ofcomputational auditory scene analysisattempts to achieve auditory source separation using an approach that is based on human hearing.
The human brain must also solve this problem in real time. In human perception this ability is commonly referred to asauditory scene analysisor thecocktail party effect.
At a cocktail party, there is a group of people talking at the same time. Multiple microphones pick up the mixed signals, but the goal is to isolate the speech of a single person. BSS can be used to separate the individual sources from the mixed signals. In the presence of noise, dedicated optimization criteria need to be used.
Figure 2 shows the basic concept of BSS. The individual source signals are shown as well as the mixed signals, which are the received signals. BSS is used to separate the mixed signals knowing only the mixtures, and nothing about the original signals or how they were mixed. The separated signals are only approximations of the source signals. The separated images were obtained using Python and the Shogun toolbox with the Joint Approximate Diagonalization of Eigen-matrices (JADE) algorithm, which is based on independent component analysis (ICA).[1] This toolbox method can be used with multiple dimensions, but images (2-D) were used for easy visual comparison.
One of the practical applications being researched in this area ismedical imagingof the brain withmagnetoencephalography(MEG). This kind of imaging involves careful measurements ofmagnetic fieldsoutside the head which yield an accurate 3D-picture of the interior of the head. However, external sources ofelectromagnetic fields, such as a wristwatch on the subject's arm, will significantly degrade the accuracy of the measurement. Applying source separation techniques on the measured signals can help remove undesired artifacts from the signal.
Inelectroencephalogram(EEG) andmagnetoencephalography(MEG), the interference from muscle activity masks the desired signal from brain activity. BSS, however, can be used to separate the two so an accurate representation of brain activity may be achieved.[2][3]
Another application is the separation ofmusicalsignals. For a stereo mix of relatively simple signals it is now possible to make a fairly accurate separation, although someartifactsremain.
Other applications:[2]
The set of individual source signals,s(t)=(s1(t),…,sn(t))T{\displaystyle s(t)=(s_{1}(t),\dots ,s_{n}(t))^{T}}, is 'mixed' using a matrix,A=[aij]∈Rm×n{\displaystyle A=[a_{ij}]\in \mathbb {R} ^{m\times n}}, to produce a set of 'mixed' signals,x(t)=(x1(t),…,xm(t))T{\displaystyle x(t)=(x_{1}(t),\dots ,x_{m}(t))^{T}}, as follows. Usually,n{\displaystyle n}is equal tom{\displaystyle m}. Ifm>n{\displaystyle m>n}, then the system of equations is overdetermined and thus can be unmixed using a conventional linear method. Ifn>m{\displaystyle n>m}, the system is underdetermined and a non-linear method must be employed to recover the unmixed signals. The signals themselves can be multidimensional.
x(t)=A⋅s(t){\displaystyle x(t)=A\cdot s(t)}
The above equation is effectively 'inverted' as follows. Blind source separation separates the set of mixed signals,x(t){\displaystyle x(t)}, through the determination of an 'unmixing' matrix,B=[Bij]∈Rn×m{\displaystyle B=[B_{ij}]\in \mathbb {R} ^{n\times m}}, to 'recover' an approximation of the original signals,y(t)=(y1(t),…,yn(t))T{\displaystyle y(t)=(y_{1}(t),\dots ,y_{n}(t))^{T}}.[4][5][2]
y(t)=B⋅x(t){\displaystyle y(t)=B\cdot x(t)}
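As an illustration of this model, the following sketch mixes two synthetic sources and unmixes them with scikit-learn's FastICA, an independent-component-analysis algorithm (a different ICA implementation than the JADE toolbox mentioned above); the signals, mixing matrix, and library choice are assumptions for the example, and the recovered sources are only determined up to permutation and scaling:

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
s = np.c_[np.sin(2 * t),                       # source 1: sinusoid
          np.sign(np.sin(3 * t))].T            # source 2: square wave; shape (n, T)

A = np.array([[1.0, 0.5],
              [0.6, 1.0]])                     # "unknown" mixing matrix (n = m = 2)
x = A @ s                                      # observed mixtures, x(t) = A s(t)

ica = FastICA(n_components=2, random_state=0)
y = ica.fit_transform(x.T).T                   # estimated sources, y(t) = B x(t)

# y approximates s up to permutation and scaling, the inherent ambiguity of BSS.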
Since the chief difficulty of the problem is its underdetermination, methods for blind source separation generally seek to narrow the set of possible solutions in a way that is unlikely to exclude the desired solution. In one approach, exemplified byprincipalandindependentcomponent analysis, one seeks source signals that are minimallycorrelatedor maximallyindependentin a probabilistic orinformation-theoreticsense. A second approach, exemplified bynonnegative matrix factorization, is to impose structural constraints on the source signals. These structural constraints may be derived from a generative model of the signal, but are more commonly heuristics justified by good empirical performance. A common theme in the second approach is to impose some kind of low-complexity constraint on the signal, such assparsityin somebasisfor the signal space. This approach can be particularly effective if one requires not the whole signal, but merely its most salient features.
There are different methods of blind signal separation:
|
https://en.wikipedia.org/wiki/Signal_separation
|
In statistics, a varimax rotation is used to simplify the expression of a particular sub-space in terms of just a few major items each. The actual coordinate system is unchanged; it is the orthogonal basis that is being rotated to align with those coordinates. The sub-space found with principal component analysis or factor analysis is expressed as a dense basis with many non-zero weights, which makes it hard to interpret. Varimax is so called because it maximizes the sum of the variances of the squared loadings (squared correlations between variables and factors). Preserving orthogonality requires that it is a rotation that leaves the sub-space invariant. Intuitively, this is achieved if (a) any given variable has a high loading on a single factor but near-zero loadings on the remaining factors, and (b) any given factor is constituted by only a few variables with very high loadings on this factor while the remaining variables have near-zero loadings on this factor. If these conditions hold, the factor loading matrix is said to have "simple structure," and varimax rotation brings the loading matrix closer to such simple structure (as much as the data allow). From the perspective of individuals measured on the variables, varimax seeks a basis that most economically represents each individual, in the sense that each individual can be well described by a linear combination of only a few basis functions.
One way of expressing the varimax criterion formally is this:
{\displaystyle R_{\mathrm {VARIMAX} }=\operatorname {arg} \max _{R}\left({\frac {1}{p}}\sum _{j=1}^{k}\sum _{i=1}^{p}(\Lambda R)_{ij}^{4}-\sum _{j=1}^{k}\left({\frac {1}{p}}\sum _{i=1}^{p}(\Lambda R)_{ij}^{2}\right)^{2}\right)}
where Λ is the p × k matrix of loadings and R ranges over orthogonal k × k rotation matrices.
Suggested byHenry Felix Kaiserin 1958,[1]it is a popular scheme for orthogonal rotation (where all factors remain uncorrelated with one another).
A summary of the use of varimax rotation and of other types of factor rotation is presented inthis article on factor analysis.
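For illustration, the following is a common SVD-based iterative implementation of the varimax criterion (not described in the article itself), assuming NumPy; the small loading matrix at the end is a made-up example:

import numpy as np

def varimax(Lambda, gamma=1.0, max_iter=100, tol=1e-8):
    # Iteratively find the orthogonal rotation R maximizing the varimax criterion
    # via the classical SVD-based update; returns the rotated loading matrix Lambda @ R.
    p, k = Lambda.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        L = Lambda @ R
        # SVD of the gradient-like term of the (generalized) varimax objective
        U, s, Vt = np.linalg.svd(
            Lambda.T @ (L**3 - (gamma / p) * L @ np.diag(np.sum(L**2, axis=0))))
        R = U @ Vt
        new_var = np.sum(s)
        if new_var - var < tol:
            break
        var = new_var
    return Lambda @ R

Lambda = np.array([[0.7, 0.3],
                   [0.8, 0.2],
                   [0.3, 0.7],
                   [0.2, 0.8]])
print(np.round(varimax(Lambda), 3))   # loadings concentrated on one factor per variable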
|
https://en.wikipedia.org/wiki/Varimax_rotation
|
In themathematicalfield ofFourier analysis, theconjugate Fourier seriesarises by realizing the Fourier series formally as the boundary values of thereal partof aholomorphic functionon theunit disc. Theimaginary partof that function then defines the conjugate series.Zygmund (1968)studied the delicate questions of convergence of this series, and its relationship with theHilbert transform.
In detail, consider a trigonometric series of the form
{\displaystyle f(\theta )={\tfrac {1}{2}}a_{0}+\sum _{n=1}^{\infty }\left(a_{n}\cos n\theta +b_{n}\sin n\theta \right)}
in which the coefficients an and bn are real numbers. This series is the real part of the power series
{\displaystyle F(z)={\tfrac {1}{2}}a_{0}+\sum _{n=1}^{\infty }\left(a_{n}-ib_{n}\right)z^{n}}
along the unit circle with z=e^{iθ}{\displaystyle z=e^{i\theta }}. The imaginary part of F(z) is called the conjugate series of f, and is denoted
{\displaystyle {\tilde {f}}(\theta )=\sum _{n=1}^{\infty }\left(a_{n}\sin n\theta -b_{n}\cos n\theta \right).}
|
https://en.wikipedia.org/wiki/Conjugate_Fourier_series
|
Ageneralized Fourier seriesis the expansion of asquare integrablefunction into a sum of square integrableorthogonal basis functions. The standardFourier seriesuses anorthonormal basisoftrigonometric functions, and the series expansion is applied to periodic functions. In contrast, a generalized Fourier series uses any set of orthogonal basis functions and can apply to anysquare integrable function.[1][2]
Consider a setΦ={ϕn:[a,b]→C}n=0∞{\displaystyle \Phi =\{\phi _{n}:[a,b]\to \mathbb {C} \}_{n=0}^{\infty }}ofsquare-integrablecomplex valued functions defined on the closed interval[a,b]{\displaystyle [a,b]}that are pairwiseorthogonalunder the weightedinner product:
⟨f,g⟩w=∫abf(x)g(x)¯w(x)dx,{\displaystyle \langle f,g\rangle _{w}=\int _{a}^{b}f(x){\overline {g(x)}}w(x)dx,}
wherew(x){\displaystyle w(x)}is aweight functionandg¯{\displaystyle {\overline {g}}}is thecomplex conjugateofg{\displaystyle g}. Then, thegeneralized Fourier seriesof a functionf{\displaystyle f}is:f(x)=∑n=0∞cnϕn(x),{\displaystyle f(x)=\sum _{n=0}^{\infty }c_{n}\phi _{n}(x),}where the coefficients are given by:cn=⟨f,ϕn⟩w‖ϕn‖w2.{\displaystyle c_{n}={\langle f,\phi _{n}\rangle _{w} \over \|\phi _{n}\|_{w}^{2}}.}
Given the spaceL2(a,b){\displaystyle L^{2}(a,b)}of square integrable functions defined on a given interval, one can find orthogonal bases by considering a class of boundary value problems on the interval[a,b]{\displaystyle [a,b]}calledregular Sturm-Liouville problems. These are defined as follows,(rf′)′+pf+λwf=0{\displaystyle (rf')'+pf+\lambda wf=0}B1(f)=B2(f)=0{\displaystyle B_{1}(f)=B_{2}(f)=0}wherer,r′{\displaystyle r,r'}andp{\displaystyle p}are real and continuous on[a,b]{\displaystyle [a,b]}andr>0{\displaystyle r>0}on[a,b]{\displaystyle [a,b]},B1{\displaystyle B_{1}}andB2{\displaystyle B_{2}}areself-adjointboundary conditions, andw{\displaystyle w}is a positive continuous functions on[a,b]{\displaystyle [a,b]}.
Given a regular Sturm-Liouville problem as defined above, the set{ϕn}1∞{\displaystyle \{\phi _{n}\}_{1}^{\infty }}ofeigenfunctionscorresponding to the distincteigenvaluesolutions to the problem form an orthogonal basis forL2(a,b){\displaystyle L^{2}(a,b)}with respect to the weighted inner product⟨⋅,⋅⟩w{\displaystyle \langle \cdot ,\cdot \rangle _{w}}.[3]We also have that for a functionf∈L2(a,b){\displaystyle f\in L^{2}(a,b)}that satisfies the boundary conditions of this Sturm-Liouville problem, the series∑n=1∞⟨f,ϕn⟩ϕn{\displaystyle \sum _{n=1}^{\infty }\langle f,\phi _{n}\rangle \phi _{n}}converges uniformlytof{\displaystyle f}.[4]
A functionf(x){\displaystyle f(x)}defined on the entire number line is calledperiodicwith periodT{\displaystyle T}if a numberT>0{\displaystyle T>0}exists such that, for any real numberx{\displaystyle x}, the equalityf(x+T)=f(x){\displaystyle f(x+T)=f(x)}holds.
If a function is periodic with periodT{\displaystyle T}, then it is also periodic with periods2T{\displaystyle 2T},3T{\displaystyle 3T}, and so on. Usually, the period of a function is understood as the smallest such numberT{\displaystyle T}. However, for some functions, arbitrarily small values ofT{\displaystyle T}exist.
The sequence of functions1,cos(x),sin(x),cos(2x),sin(2x),...,cos(nx),sin(nx),...{\displaystyle 1,\cos(x),\sin(x),\cos(2x),\sin(2x),...,\cos(nx),\sin(nx),...}is known as the trigonometric system. Anylinear combinationof functions of a trigonometric system, including an infinite combination (that is, a converginginfinite series), is a periodic function with a period of 2π.
On any segment of length 2π (such as the segments [−π,π] and [0,2π]) the trigonometric system is anorthogonal system. This means that for any two functions of the trigonometric system, the integral of their product over a segment of length 2π is equal to zero. This integral can be treated as ascalar productin the space of functions that are integrable on a given segment of length 2π.
Let the functionf(x){\displaystyle f(x)}be defined on the segment [−π, π]. Given appropriate smoothness and differentiability conditions,f(x){\displaystyle f(x)}may be represented on this segment as a linear combination of functions of the trigonometric system, also referred to as theexpansionof the functionf(x){\displaystyle f(x)}into a trigonometric Fourier series.
The Legendre polynomials Pn(x) are solutions to the Sturm–Liouville eigenvalue problem
{\displaystyle \left(\left(1-x^{2}\right)P_{n}'(x)\right)'+n(n+1)P_{n}(x)=0.}
As a consequence of Sturm–Liouville theory, these polynomials are orthogonal eigenfunctions with respect to the inner product with unit weight. This can be written as a generalized Fourier series (known in this case as a Fourier–Legendre series) involving the Legendre polynomials, so that
{\displaystyle f(x)\sim \sum _{n=0}^{\infty }c_{n}P_{n}(x),\qquad c_{n}={\frac {2n+1}{2}}\int _{-1}^{1}f(x)P_{n}(x)\,dx.}
As an example, the Fourier–Legendre series may be calculated forf(x)=cosx{\displaystyle f(x)=\cos x}over[−1,1]{\displaystyle [-1,1]}. Then
and a truncated series involving only these terms would be
which differs fromcosx{\displaystyle \cos x}by approximately 0.003. In computational applications it may be advantageous to use such Fourier–Legendre series rather than Fourier series since the basis functions for the series expansion are all polynomials and hence the integrals and thus the coefficients may be easier to calculate.
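A numerical sketch of this example, assuming NumPy and SciPy are available; it computes the first few Fourier–Legendre coefficients of cos x by quadrature and evaluates the truncation error of the series through P2:

import numpy as np
from numpy.polynomial import legendre
from scipy.integrate import quad

def fourier_legendre_coeff(f, n):
    # c_n = (2n + 1)/2 * integral_{-1}^{1} f(x) P_n(x) dx  (unit weight, ||P_n||^2 = 2/(2n+1))
    Pn = legendre.Legendre.basis(n)
    val, _ = quad(lambda x: f(x) * Pn(x), -1.0, 1.0)
    return (2 * n + 1) / 2.0 * val

coeffs = [fourier_legendre_coeff(np.cos, n) for n in range(3)]
print(np.round(coeffs, 4))        # roughly [0.8415, 0.0, -0.3102]; odd coefficients vanish

# Truncated series through P_2 and its deviation from cos(x) at x = 0
approx = legendre.Legendre(coeffs)
print(abs(approx(0.0) - 1.0))     # about 0.003, consistent with the statement above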
Some theorems on the series' coefficientscn{\displaystyle c_{n}}include:
Bessel's inequality is a statement about the coefficients of an element x in a Hilbert space with respect to an orthonormal sequence. The inequality was derived by F.W. Bessel in 1828:[5]
{\displaystyle \sum _{n=0}^{\infty }\left|\left\langle x,e_{n}\right\rangle \right|^{2}\leq \left\|x\right\|^{2}.}
Parseval's theoremusually refers to the result that theFourier transformisunitary; loosely, that the sum (or integral) of the square of a function is equal to the sum (or integral) of the square of its transform.[6]
If Φ is a complete basis, then:
{\displaystyle \sum _{n=0}^{\infty }|c_{n}|^{2}\,\|\phi _{n}\|_{w}^{2}=\|f\|_{w}^{2}.}
|
https://en.wikipedia.org/wiki/Generalized_Fourier_series
|
Inmathematics,Fourier–Bessel seriesis a particular kind ofgeneralized Fourier series(aninfinite seriesexpansion on a finite interval) based onBessel functions.
Fourier–Bessel series are used in the solution topartial differential equations, particularly incylindrical coordinatesystems.
The Fourier–Bessel series of a functionf(x)with adomainof[0,b]satisfyingf(b) = 0
f:[0,b]→R{\displaystyle f:[0,b]\to \mathbb {R} }is the representation of that function as alinear combinationof manyorthogonalversions of the sameBessel function of the first kindJα, where the argument to each versionnis differently scaled, according to[1][2](Jα)n(x):=Jα(uα,nbx){\displaystyle (J_{\alpha })_{n}(x):=J_{\alpha }\left({\frac {u_{\alpha ,n}}{b}}x\right)}whereuα,nis aroot, numberednassociated with the Bessel functionJαandcnare the assigned coefficients:[3]f(x)∼∑n=1∞cnJα(uα,nbx).{\displaystyle f(x)\sim \sum _{n=1}^{\infty }c_{n}J_{\alpha }\left({\frac {u_{\alpha ,n}}{b}}x\right).}
The Fourier–Bessel series may be thought of as a Fourier expansion in the ρ coordinate ofcylindrical coordinates. Just as theFourier seriesis defined for a finite interval and has a counterpart, thecontinuous Fourier transformover an infinite interval, so the Fourier–Bessel series has a counterpart over an infinite interval, namely theHankel transform.
As said, differently scaled Bessel Functions are orthogonal with respect to theinner product
⟨f,g⟩=∫0bxf(x)g(x)dx{\displaystyle \langle f,g\rangle =\int _{0}^{b}xf(x)g(x)\,dx}
according to
∫0bxJα(xuα,nb)Jα(xuα,mb)dx=b22δmn[Jα+1(uα,n)]2,{\displaystyle \int _{0}^{b}xJ_{\alpha }\left({\frac {xu_{\alpha ,n}}{b}}\right)\,J_{\alpha }\left({\frac {xu_{\alpha ,m}}{b}}\right)\,dx={\frac {b^{2}}{2}}\delta _{mn}[J_{\alpha +1}(u_{\alpha ,n})]^{2},}
(where:δmn{\displaystyle \delta _{mn}}is the Kronecker delta). The coefficients can be obtained fromprojectingthe functionf(x)onto the respective Bessel functions:
cn=⟨f,(Jα)n⟩⟨(Jα)n,(Jα)n⟩=∫0bxf(x)(Jα)n(x)dx12(bJα±1(uα,n))2{\displaystyle c_{n}={\frac {\langle f,(J_{\alpha })_{n}\rangle }{\langle (J_{\alpha })_{n},(J_{\alpha })_{n}\rangle }}={\frac {\int _{0}^{b}xf(x)(J_{\alpha })_{n}(x)\,dx}{{\frac {1}{2}}(bJ_{\alpha \pm 1}(u_{\alpha ,n}))^{2}}}}
where the plus or minus sign is equally valid.
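The following sketch (assuming SciPy for the Bessel functions and quadrature; the test function f(x) = 1 − x² with b = 1 is an arbitrary choice satisfying f(b) = 0) computes the coefficients by the projection above and checks the truncated series against f:

import numpy as np
from scipy.special import jv, jn_zeros
from scipy.integrate import quad

b, alpha, N = 1.0, 0, 10
roots = jn_zeros(alpha, N)                      # u_{alpha,1}, ..., u_{alpha,N}

def fb_coeff(f, n):
    # c_n = <f, (J_alpha)_n> / <(J_alpha)_n, (J_alpha)_n> with weight x on [0, b]
    u = roots[n - 1]
    num, _ = quad(lambda x: x * f(x) * jv(alpha, u * x / b), 0.0, b)
    den = 0.5 * (b * jv(alpha + 1, u))**2
    return num / den

f = lambda x: 1.0 - x**2                        # example function with f(b) = 0
c = np.array([fb_coeff(f, n) for n in range(1, N + 1)])

# Reconstruct f from the truncated series and compare at a few interior points
xs = np.linspace(0.01, 0.95, 5)
series = sum(c[n - 1] * jv(alpha, roots[n - 1] * xs / b) for n in range(1, N + 1))
print(np.max(np.abs(series - f(xs))))           # small truncation error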
For the inverse transform, one makes use of the following representation of theDirac delta function[4]
2xαy1−αb2∑k=1∞Jα(xuα,kb)Jα(yuα,kb)Jα+12(uα,k)=δ(x−y).{\displaystyle {\frac {2x^{\alpha }y^{1-\alpha }}{b^{2}}}\sum _{k=1}^{\infty }{\frac {J_{\alpha }\left({\frac {xu_{\alpha ,k}}{b}}\right)\,J_{\alpha }\left({\frac {yu_{\alpha ,k}}{b}}\right)}{J_{\alpha +1}^{2}(u_{\alpha ,k})}}=\delta (x-y).}
Fourier–Bessel series coefficients are unique for a given signal, and there is one-to-one mapping between continuous frequency (Fn{\displaystyle F_{n}}) and order index(n){\displaystyle (n)}which can be expressed as follows:
un=2πFnLFs{\displaystyle u_{n}={\frac {2\pi F_{n}L}{F_{s}}}}
Since un=un−1+π≈nπ{\displaystyle u_{n}=u_{n-1}+\pi \approx n\pi }, the above equation can be rewritten as follows:
Fn=Fsn2L{\displaystyle F_{n}={\frac {F_{s}n}{2L}}}
whereL{\displaystyle L}is the length of the signal andFs{\displaystyle F_{s}}is the sampling frequency of the signal.
For an imagef(x,y){\displaystyle f(x,y)}of size M×N, the synthesis equations for order-0 2D-Fourier–Bessel series expansion is as follows:
f(x,y)=∑m=1M∑n=1NF(m,n)J0(u0,nyN)J0(u0,mxM){\displaystyle f(x,y)=\sum _{m=1}^{M}\sum _{n=1}^{N}F(m,n)J_{0}{\bigg (}{\frac {u_{0,n}y}{N}}{\bigg )}J_{0}{\bigg (}{\frac {u_{0,m}x}{M}}{\bigg )}}
WhereF(m,n){\displaystyle F(m,n)}is 2D-Fourier–Bessel series expansion coefficients whose mathematical expressions are as follows:
F(m,n)=4α1∑x=0M−1∑y=0N−1xyf(x,y)J0(u0,nyN)J0(u0,mxM){\displaystyle F(m,n)={\frac {4}{\alpha _{1}}}\sum _{x=0}^{M-1}\sum _{y=0}^{N-1}xyf(x,y)J_{0}{\bigg (}{\frac {u_{0,n}y}{N}}{\bigg )}J_{0}{\bigg (}{\frac {u_{0,m}x}{M}}{\bigg )}}
where,α1=(NM)2(J1(u0,m)J1(u0,n))2{\displaystyle \alpha _{1}=(NM)^{2}(J_{1}(u_{0,m})J_{1}(u_{0,n}))^{2}}
For a signal of length b{\displaystyle b}, Fourier-Bessel based spectral entropies such as the Shannon spectral entropy (HSSE{\displaystyle H_{\text{SSE}}}), the log energy entropy (HLE{\displaystyle H_{\text{LE}}}), and the Wiener entropy (HWE{\displaystyle H_{\text{WE}}}) are defined as follows:
HSSE=−∑n=1bP(n)log2(P(n)){\displaystyle H_{\text{SSE}}=-\sum _{n=1}^{b}P(n)~{\text{log}}_{2}\left(P(n)\right)}HWE=b∏n=1bEn∑n=1bEn{\displaystyle H_{\text{WE}}=b{\frac {\sqrt {\displaystyle \prod _{n=1}^{b}E_{n}}}{\displaystyle \sum _{n=1}^{b}E_{n}}}}HLE=−∑n=1blog2(P(n)){\displaystyle H_{\text{LE}}=-\sum _{n=1}^{b}~{\text{log}}_{2}\left(P(n)\right)}
wherePn{\displaystyle P_{n}}is the normalized energy distribution which is mathematically defined as follows:
P(n)=En∑n=1bEn{\displaystyle P(n)={\frac {E_{n}}{\displaystyle \sum _{n=1}^{b}E_{n}}}}
En{\displaystyle E_{n}}is energy spectrum which is mathematically defined as follows:
En=cn2b2[J1(u1,n)]22{\displaystyle E_{n}={\frac {c_{n}^{2}b^{2}[J_{1}(u_{1,n})]^{2}}{2}}}
The empirical wavelet transform (EWT) is a multi-scale signal processing approach for the decomposition of a multi-component signal into intrinsic mode functions (IMFs).[5] The EWT is based on the design of an empirical wavelet filter bank obtained by segmenting the Fourier spectrum of the multi-component signal. The segmentation of the Fourier spectrum is performed by detecting peaks and then evaluating the boundary points.[5] For non-stationary signals, the Fourier–Bessel series expansion (FBSE) is a natural choice as it uses Bessel functions as the basis for analysis and synthesis of the signal. The FBSE spectrum produces a number of frequency bins equal to the length of the signal over the frequency range [0, Fs2{\displaystyle {\frac {F_{s}}{2}}}]. Therefore, in FBSE-EWT, the boundary points are detected using the FBSE-based spectrum of the non-stationary signal. Once the boundary points are obtained, the empirical wavelet filter bank is designed in the Fourier domain of the multi-component signal to evaluate the IMFs. The FBSE-based method used in FBSE-EWT produces a higher number of boundary points as compared to the FFT-based segmentation in the original EWT. The features extracted from the IMFs of EEG and ECG signals obtained using the FBSE-EWT approach have shown better performance for the automated detection of neurological and cardiac ailments.
For a discrete time signal, x(n), the FBSE domain discrete Stockwell transform (FBSE-DST) is evaluated as follows:T(n,l)=∑m=1LY(m+l)g(m,l)J0(λlNn){\displaystyle T(n,l)=\sum _{m=1}^{L}Y{\Big (}m+l{\Big )}g(m,l)J_{0}{\Big (}{\frac {\lambda _{l}}{N}}n{\Big )}}where Y(l) are the FBSE coefficients and these coefficients are calculated using the following expression as
Y(l)=2N2[J1(λl)]2∑n=0N−1nx(n)J0(λlNn){\displaystyle Y(l)={\frac {2}{N^{2}[J_{1}(\lambda _{l})]^{2}}}\sum _{n=0}^{N-1}nx(n)J_{0}{\Big (}{\frac {\lambda _{l}}{N}}n{\Big )}}
Theλl{\displaystyle \lambda _{l}}is termed as thelth{\displaystyle l^{th}}root of the Bessel function, and it is evaluated in an iterative manner based on the solution ofJ0(λl)=0{\displaystyle J_{0}(\lambda _{l})=0}using theNewton-Raphson method. Similarly, the g(m,l) is the FBSE domain Gaussian window and it is given as follows :
g(m,l)=e−2π2λm2λl2,{l,m=1,2,...L}{\displaystyle g(m,l)={\text{e}}^{-{\frac {2\pi ^{2}\lambda _{m}^{2}}{\lambda _{l}^{2}}}},~{\{l,m=1,2,...L}\}}
For multicomponent amplitude- and frequency-modulated (AM-FM) signals, the discrete energy separation algorithm (DESA) together with Gabor filtering is a traditional approach to estimate the amplitude envelope (AE) and the instantaneous frequency (IF) functions.[6] It has been observed that the filtering operation distorts the amplitude and phase modulations of the separated monocomponent signals.
The Fourier–Bessel series expansion does not require the use of a window function in order to obtain the spectrum of a signal. It represents a real signal in terms of real Bessel basis functions, and it represents real signals in terms of positive frequencies only. The basis functions are aperiodic and decaying, and they include amplitude modulation in the representation. The Fourier–Bessel series expansion spectrum provides a number of frequency points equal to the signal length.
The Fourier–Bessel series expansion employs aperiodic and decaying Bessel functions as the basis. The Fourier–Bessel series expansion has been successfully applied in diversified areas such as Gear fault diagnosis,[7]discrimination of odorants in a turbulent ambient,[8]postural stability analysis, detection of voice onset time, glottal closure instants (epoch) detection, separation of speech formants, speech enhancement,[9]and speaker identification.[10]The Fourier–Bessel series expansion has also been used to reduce cross terms in the Wigner–Ville distribution.
A second Fourier–Bessel series, also known asDini series, is associated with theRobin boundary conditionbf′(b)+cf(b)=0,{\displaystyle bf'(b)+cf(b)=0,}wherec{\displaystyle c}is an arbitrary constant.
The Dini series can be defined byf(x)∼∑n=1∞bnJα(γnx/b),{\displaystyle f(x)\sim \sum _{n=1}^{\infty }b_{n}J_{\alpha }(\gamma _{n}x/b),}
whereγn{\displaystyle \gamma _{n}}is then-th zero ofxJα′(x)+cJα(x){\displaystyle xJ'_{\alpha }(x)+cJ_{\alpha }(x)}.
The coefficientsbn{\displaystyle b_{n}}are given bybn=2γn2b2(c2+γn2−α2)Jα2(γn)∫0bJα(γnx/b)f(x)xdx.{\displaystyle b_{n}={\frac {2\gamma _{n}^{2}}{b^{2}(c^{2}+\gamma _{n}^{2}-\alpha ^{2})J_{\alpha }^{2}(\gamma _{n})}}\int _{0}^{b}J_{\alpha }(\gamma _{n}x/b)\,f(x)\,x\,dx.}
|
https://en.wikipedia.org/wiki/Fourier%E2%80%93Bessel_series
|
Inmathematics, theLaplace transform, named afterPierre-Simon Laplace(/ləˈplɑːs/), is anintegral transformthat converts afunctionof arealvariable(usuallyt{\displaystyle t}, in thetime domain) to a function of acomplexvariables{\displaystyle s}(in the complex-valuedfrequency domain, also known ass-domain, ors-plane).
The transform is useful for convertingdifferentiationandintegrationin the time domain into much easiermultiplicationanddivisionin the Laplace domain (analogous to howlogarithmsare useful for simplifying multiplication and division into addition and subtraction). This gives the transform many applications inscienceandengineering, mostly as a tool for solving lineardifferential equations[1]anddynamical systemsby simplifyingordinary differential equationsandintegral equationsintoalgebraic polynomial equations, and by simplifyingconvolutionintomultiplication.[2][3]Once solved, the inverse Laplace transform reverts to the original domain.
The Laplace transform is defined (for suitable functionsf{\displaystyle f}) by theintegralL{f}(s)=∫0∞f(t)e−stdt,{\displaystyle {\mathcal {L}}\{f\}(s)=\int _{0}^{\infty }f(t)e^{-st}\,dt,}wheresis acomplex number. It is related to many other transforms, most notably theFourier transformand theMellin transform.Formally, the Laplace transform is converted into a Fourier transform by the substitutions=iω{\displaystyle s=i\omega }whereω{\displaystyle \omega }is real. However, unlike the Fourier transform, which gives the decomposition of a function into its components in each frequency, the Laplace transform of a function with suitable decay is ananalytic function, and so has a convergentpower series, the coefficients of which give the decomposition of a function into itsmoments. Also unlike the Fourier transform, when regarded in this way as an analytic function, the techniques ofcomplex analysis, and especiallycontour integrals, can be used for calculations.
The Laplace transform is named aftermathematicianandastronomerPierre-Simon, Marquis de Laplace, who used a similar transform in his work onprobability theory.[4]Laplace wrote extensively about the use ofgenerating functions(1814), and the integral form of the Laplace transform evolved naturally as a result.[5]
Laplace's use of generating functions was similar to what is now known as thez-transform, and he gave little attention to thecontinuous variablecase which was discussed byNiels Henrik Abel.[6]
From 1744,Leonhard Eulerinvestigated integrals of the formz=∫X(x)eaxdxandz=∫X(x)xAdx{\displaystyle z=\int X(x)e^{ax}\,dx\quad {\text{ and }}\quad z=\int X(x)x^{A}\,dx}as solutions of differential equations, introducing in particular thegamma function.[7]Joseph-Louis Lagrangewas an admirer of Euler and, in his work on integratingprobability density functions, investigated expressions of the form∫X(x)e−axaxdx,{\displaystyle \int X(x)e^{-ax}a^{x}\,dx,}which resembles a Laplace transform.[8][9]
These types of integrals seem first to have attracted Laplace's attention in 1782, where he was following in the spirit of Euler in using the integrals themselves as solutions of equations.[10]However, in 1785, Laplace took the critical step forward when, rather than simply looking for a solution in the form of an integral, he started to apply the transforms in the sense that was later to become popular. He used an integral of the form∫xsφ(x)dx,{\displaystyle \int x^{s}\varphi (x)\,dx,}akin to aMellin transform, to transform the whole of adifference equation, in order to look for solutions of the transformed equation. He then went on to apply the Laplace transform in the same way and started to derive some of its properties, beginning to appreciate its potential power.[11]
Laplace also recognised thatJoseph Fourier's method ofFourier seriesfor solving thediffusion equationcould only apply to a limited region of space, because those solutions wereperiodic. In 1809, Laplace applied his transform to find solutions that diffused indefinitely in space.[12]In 1821,Cauchydeveloped anoperational calculusfor the Laplace transform that could be used to study linear differential equations in much the same way the transform is now used in basic engineering. This method was popularized, and perhaps rediscovered, byOliver Heavisidearound the turn of the century.[13]
Bernhard Riemann used the Laplace transform in his 1859 paper On the number of primes less than a given magnitude, in which he also developed the inversion theorem. Riemann used the Laplace transform to develop the functional equation of the Riemann zeta function, and this method is still used to relate the modular transformation law of the Jacobi theta function to the functional equation.
Hjalmar Mellinwas among the first to study the Laplace transform, rigorously in theKarl Weierstrassschool of analysis, and apply it to the study ofdifferential equationsandspecial functions, at the turn of the 20th century.[14]At around the same time, Heaviside was busy with his operational calculus.Thomas Joannes Stieltjesconsidered a generalization of the Laplace transform connected to hiswork on moments. Other contributors in this time period includedMathias Lerch,[15]Oliver Heaviside, andThomas Bromwich.[16]
In 1929,Vannevar BushandNorbert WienerpublishedOperational Circuit Analysisas a text for engineering analysis of electrical circuits, applying both Fourier transforms and operational calculus, and in which they included one of the first predecessors of the modern table of Laplace transforms.
In 1934,Raymond PaleyandNorbert Wienerpublished the important workFourier transforms in the complex domain, about what is now called the Laplace transform (see below). Also during the 30s, the Laplace transform was instrumental inG H HardyandJohn Edensor Littlewood's study oftauberian theorems, and this application was later expounded on by Widder (1941), who developed other aspects of the theory such as a new method for inversion.Edward Charles Titchmarshwrote the influentialIntroduction to the theory of the Fourier integral(1937).
The current widespread use of the transform (mainly in engineering) came about during and soon afterWorld War II,[17]replacing the earlier Heavisideoperational calculus. The advantages of the Laplace transform had been emphasized byGustav Doetsch,[18]to whom the name Laplace transform is apparently due.
The Laplace transform of afunctionf(t), defined for allreal numberst≥ 0, is the functionF(s), which is a unilateral transform defined by
F(s)=∫0∞f(t)e−stdt,{\displaystyle F(s)=\int _{0}^{\infty }f(t)e^{-st}\,dt,}(Eq. 1)
wheresis acomplexfrequency-domain parameters=σ+iω{\displaystyle s=\sigma +i\omega }with real numbersσandω.
An alternate notation for the Laplace transform isL{f}{\displaystyle {\mathcal {L}}\{f\}}instead ofF,[3]often written asF(s)=L{f(t)}{\displaystyle F(s)={\mathcal {L}}\{f(t)\}}in anabuse of notation.
The meaning of the integral depends on types of functions of interest. A necessary condition for existence of the integral is thatfmust belocally integrableon[0, ∞). For locally integrable functions that decay at infinity or are ofexponential type(|f(t)|≤AeB|t|{\displaystyle |f(t)|\leq Ae^{B|t|}}), the integral can be understood to be a (proper)Lebesgue integral. However, for many applications it is necessary to regard it as aconditionally convergentimproper integralat∞. Still more generally, the integral can be understood in aweak sense, and this is dealt with below.
One can define the Laplace transform of a finiteBorel measureμby the Lebesgue integral[19]L{μ}(s)=∫[0,∞)e−stdμ(t).{\displaystyle {\mathcal {L}}\{\mu \}(s)=\int _{[0,\infty )}e^{-st}\,d\mu (t).}
An important special case is whereμis aprobability measure, for example, theDirac delta function. Inoperational calculus, the Laplace transform of a measure is often treated as though the measure came from a probability density functionf. In that case, to avoid potential confusion, one often writesL{f}(s)=∫0−∞f(t)e−stdt,{\displaystyle {\mathcal {L}}\{f\}(s)=\int _{0^{-}}^{\infty }f(t)e^{-st}\,dt,}where the lower limit of0−is shorthand notation forlimε→0+∫−ε∞.{\displaystyle \lim _{\varepsilon \to 0^{+}}\int _{-\varepsilon }^{\infty }.}
This limit emphasizes that any point mass located at0is entirely captured by the Laplace transform. Although with the Lebesgue integral, it is not necessary to take such a limit, it does appear more naturally in connection with theLaplace–Stieltjes transform.
When one says "the Laplace transform" without qualification, the unilateral or one-sided transform is usually intended. The Laplace transform can be alternatively defined as thebilateral Laplace transform, ortwo-sided Laplace transform, by extending the limits of integration to be the entire real axis. If that is done, the common unilateral transform simply becomes a special case of the bilateral transform, where the definition of the function being transformed is multiplied by theHeaviside step function.
The bilateral Laplace transformF(s)is defined as follows:
F(s)=∫−∞∞e−stf(t)dt.{\displaystyle F(s)=\int _{-\infty }^{\infty }e^{-st}f(t)\,dt.}(Eq. 2)
An alternate notation for the bilateral Laplace transform isB{f}{\displaystyle {\mathcal {B}}\{f\}}, instead ofF.
Two integrable functions have the same Laplace transform only if they differ on a set ofLebesgue measurezero. This means that, on the range of the transform, there is an inverse transform. In fact, besides integrable functions, the Laplace transform is aone-to-one mappingfrom one function space into another in many other function spaces as well, although there is usually no easy characterization of the range.
Typical function spaces in which this is true include the spaces of bounded continuous functions, the spaceL∞(0, ∞), or more generallytempered distributionson(0, ∞). The Laplace transform is also defined and injective for suitable spaces of tempered distributions.
In these cases, the image of the Laplace transform lives in a space ofanalytic functionsin theregion of convergence. Theinverse Laplace transformis given by the following complex integral, which is known by various names (theBromwich integral, theFourier–Mellin integral, andMellin's inverse formula):
f(t)=L−1{F}(t)=12πilimT→∞∫γ−iTγ+iTestF(s)ds,{\displaystyle f(t)={\mathcal {L}}^{-1}\{F\}(t)={\frac {1}{2\pi i}}\lim _{T\to \infty }\int _{\gamma -iT}^{\gamma +iT}e^{st}F(s)\,ds,}(Eq. 3)
whereγis a real number so that the contour path of integration is in the region of convergence ofF(s). In most applications, the contour can be closed, allowing the use of theresidue theorem. An alternative formula for the inverse Laplace transform is given byPost's inversion formula. The limit here is interpreted in theweak-* topology.
In practice, it is typically more convenient to decompose a Laplace transform into known transforms of functions obtained from a table and construct the inverse by inspection.
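As a small worked example, assuming SymPy is available (the particular function is an arbitrary choice), the forward transform can be computed symbolically and then inverted again:

import sympy as sp

t, s = sp.symbols('t s', positive=True)
a = sp.symbols('a', positive=True)

# Forward transform of f(t) = e^{-a t} cos(t)
F = sp.laplace_transform(sp.exp(-a*t) * sp.cos(t), t, s, noconds=True)
print(sp.simplify(F))                      # (s + a) / ((s + a)**2 + 1)

# Inverse transform, recovering the original function (times Heaviside(t))
f = sp.inverse_laplace_transform(F, s, t)
print(sp.simplify(f))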
Inpureandapplied probability, the Laplace transform is defined as anexpected value. IfXis arandom variablewith probability density functionf, then the Laplace transform offis given by the expectationL{f}(s)=E[e−sX],{\displaystyle {\mathcal {L}}\{f\}(s)=\operatorname {E} \left[e^{-sX}\right],}whereE[r]{\displaystyle \operatorname {E} [r]}is theexpectationofrandom variabler{\displaystyle r}.
Byconvention, this is referred to as the Laplace transform of the random variableXitself. Here, replacingsby−tgives themoment generating functionofX. The Laplace transform has applications throughout probability theory, includingfirst passage timesofstochastic processessuch asMarkov chains, andrenewal theory.
Of particular use is the ability to recover thecumulative distribution functionof a continuous random variableXby means of the Laplace transform as follows:[20]FX(x)=L−1{1sE[e−sX]}(x)=L−1{1sL{f}(s)}(x).{\displaystyle F_{X}(x)={\mathcal {L}}^{-1}\left\{{\frac {1}{s}}\operatorname {E} \left[e^{-sX}\right]\right\}(x)={\mathcal {L}}^{-1}\left\{{\frac {1}{s}}{\mathcal {L}}\{f\}(s)\right\}(x).}
The Laplace transform can be alternatively defined in a purely algebraic manner by applying afield of fractionsconstruction to the convolutionringof functions on the positive half-line. The resultingspace of abstract operatorsis exactly equivalent to Laplace space, but in this construction the forward and reverse transforms never need to be explicitly defined (avoiding the related difficulties with proving convergence).[21]
Iffis a locally integrable function (or more generally a Borel measure locally of bounded variation), then the Laplace transformF(s)offconverges provided that the limitlimR→∞∫0Rf(t)e−stdt{\displaystyle \lim _{R\to \infty }\int _{0}^{R}f(t)e^{-st}\,dt}exists.
The Laplace transformconverges absolutelyif the integral∫0∞|f(t)e−st|dt{\displaystyle \int _{0}^{\infty }\left|f(t)e^{-st}\right|\,dt}exists as a proper Lebesgue integral. The Laplace transform is usually understood asconditionally convergent, meaning that it converges in the former but not in the latter sense.
The set of values for whichF(s)converges absolutely is either of the formRe(s) >aorRe(s) ≥a, whereais anextended real constantwith−∞ ≤a≤ ∞(a consequence of thedominated convergence theorem). The constantais known as the abscissa of absolute convergence, and depends on the growth behavior off(t).[22]Analogously, the two-sided transform converges absolutely in a strip of the forma< Re(s) <b, and possibly including the linesRe(s) =aorRe(s) =b.[23]The subset of values ofsfor which the Laplace transform converges absolutely is called the region of absolute convergence, or the domain of absolute convergence. In the two-sided case, it is sometimes called the strip of absolute convergence. The Laplace transform is analytic in the region of absolute convergence: this is a consequence ofFubini's theoremandMorera's theorem.
Similarly, the set of values for whichF(s)converges (conditionally or absolutely) is known as the region of conditional convergence, or simply theregion of convergence(ROC). If the Laplace transform converges (conditionally) ats=s0, then it automatically converges for allswithRe(s) > Re(s0). Therefore, the region of convergence is a half-plane of the formRe(s) >a, possibly including some points of the boundary lineRe(s) =a.
In the region of convergenceRe(s) > Re(s0), the Laplace transform offcan be expressed byintegrating by partsas the integralF(s)=(s−s0)∫0∞e−(s−s0)tβ(t)dt,β(u)=∫0ue−s0tf(t)dt.{\displaystyle F(s)=(s-s_{0})\int _{0}^{\infty }e^{-(s-s_{0})t}\beta (t)\,dt,\quad \beta (u)=\int _{0}^{u}e^{-s_{0}t}f(t)\,dt.}
That is,F(s)can effectively be expressed, in the region of convergence, as the absolutely convergent Laplace transform of some other function. In particular, it is analytic.
There are severalPaley–Wiener theoremsconcerning the relationship between the decay properties off, and the properties of the Laplace transform within the region of convergence.
In engineering applications, a function corresponding to alinear time-invariant (LTI) systemisstableif every bounded input produces a bounded output. This is equivalent to the absolute convergence of the Laplace transform of the impulse response function in the regionRe(s) ≥ 0. As a result, LTI systems are stable, provided that the poles of the Laplace transform of the impulse response function have negative real part.
The ROC is used to determine the causality and stability of a system.
The Laplace transform's key property is that it convertsdifferentiationandintegrationin the time domain into multiplication and division bysin the Laplace domain. Thus, the Laplace variablesis also known as anoperator variablein the Laplace domain: either thederivative operatoror (fors−1)theintegration operator.
Given the functionsf(t)andg(t), and their respective Laplace transformsF(s)andG(s),f(t)=L−1{F(s)},g(t)=L−1{G(s)},{\displaystyle {\begin{aligned}f(t)&={\mathcal {L}}^{-1}\{F(s)\},\\g(t)&={\mathcal {L}}^{-1}\{G(s)\},\end{aligned}}}
the following table is a list of properties of the unilateral Laplace transform:[24]
Two representative entries from that table are the shifted function f(t)u(t−a){\displaystyle f(t)u(t-a)\ }, whose transform is e−asL{f(t+a)}{\displaystyle e^{-as}{\mathcal {L}}\{f(t+a)\}}, and the alternating sum fP(t)=∑n=0∞(−1)nf(t−Tn){\displaystyle f_{P}(t)=\sum _{n=0}^{\infty }(-1)^{n}f(t-Tn)}, whose transform is FP(s)=11+e−TsF(s){\displaystyle F_{P}(s)={\frac {1}{1+e^{-Ts}}}F(s)}.
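The shifting entry above can also be checked numerically; the following sketch (with f(t) = t, a = 2 and a real value s = 1.5, all chosen arbitrarily) compares a direct quadrature of L{f(t)u(t−a)} against e^{−as}L{f(t+a)}:

```python
import numpy as np
from scipy.integrate import quad

a, s = 2.0, 1.5          # illustrative shift and (real) transform variable
f = lambda t: t          # f(t) = t, so f(t + a) = t + a and L{f(t+a)} = 1/s**2 + a/s

# Left-hand side: integral of f(t) u(t - a) e^{-s t} over [a, infinity)
lhs, _ = quad(lambda t: f(t) * np.exp(-s * t), a, np.inf)

# Right-hand side: e^{-a s} * L{f(t + a)}(s)
rhs = np.exp(-a * s) * (1.0 / s**2 + a / s)

print(lhs, rhs)          # the two values agree to quadrature accuracy
```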
The Laplace transform can be viewed as acontinuousanalogue of apower series.[26]Ifa(n)is a discrete function of a positive integern, then the power series associated toa(n)is the series∑n=0∞a(n)xn{\displaystyle \sum _{n=0}^{\infty }a(n)x^{n}}wherexis a real variable (seeZ-transform). Replacing summation overnwith integration overt, a continuous version of the power series becomes∫0∞f(t)xtdt{\displaystyle \int _{0}^{\infty }f(t)x^{t}\,dt}where the discrete functiona(n)is replaced by the continuous onef(t).
Changing the base of the power fromxtoegives∫0∞f(t)(elnx)tdt{\displaystyle \int _{0}^{\infty }f(t)\left(e^{\ln {x}}\right)^{t}\,dt}
For this to converge for, say, all bounded functionsf, it is necessary to require thatlnx< 0. Making the substitution−s= lnxgives just the Laplace transform:∫0∞f(t)e−stdt{\displaystyle \int _{0}^{\infty }f(t)e^{-st}\,dt}
In other words, the Laplace transform is a continuous analog of a power series, in which the discrete parameternis replaced by the continuous parametert, andxis replaced bye−s.
The quantitiesμn=∫0∞tnf(t)dt{\displaystyle \mu _{n}=\int _{0}^{\infty }t^{n}f(t)\,dt}
are themomentsof the functionf. If the firstnmoments offconverge absolutely, then by repeateddifferentiation under the integral,(−1)n(Lf)(n)(0)=μn.{\displaystyle (-1)^{n}({\mathcal {L}}f)^{(n)}(0)=\mu _{n}.}This is of special significance in probability theory, where the moments of a random variableXare given by the expectation valuesμn=E[Xn]{\displaystyle \mu _{n}=\operatorname {E} [X^{n}]}. Then, the relation holdsμn=(−1)ndndsnE[e−sX](0).{\displaystyle \mu _{n}=(-1)^{n}{\frac {d^{n}}{ds^{n}}}\operatorname {E} \left[e^{-sX}\right](0).}
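A quick symbolic check of this moment formula for f(t) = e^{−t}, whose moments are μ_n = n!, can be done with SymPy (the order n = 4 is chosen arbitrarily):

```python
import sympy as sp

s = sp.symbols('s')
n = 4                                  # illustrative moment order

F = 1 / (1 + s)                        # Laplace transform of f(t) = exp(-t)
moment = (-1)**n * sp.diff(F, s, n).subs(s, 0)
print(moment)                          # 24, i.e. 4! -- the 4th moment of exp(-t)
```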
It is often convenient to use the differentiation property of the Laplace transform to find the transform of a function's derivative. This can be derived from the basic expression for a Laplace transform as follows:L{f(t)}=∫0−∞e−stf(t)dt=[f(t)e−st−s]0−∞−∫0−∞e−st−sf′(t)dt(by parts)=[−f(0−)−s]+1sL{f′(t)},{\displaystyle {\begin{aligned}{\mathcal {L}}\left\{f(t)\right\}&=\int _{0^{-}}^{\infty }e^{-st}f(t)\,dt\\[6pt]&=\left[{\frac {f(t)e^{-st}}{-s}}\right]_{0^{-}}^{\infty }-\int _{0^{-}}^{\infty }{\frac {e^{-st}}{-s}}f'(t)\,dt\quad {\text{(by parts)}}\\[6pt]&=\left[-{\frac {f(0^{-})}{-s}}\right]+{\frac {1}{s}}{\mathcal {L}}\left\{f'(t)\right\},\end{aligned}}}yieldingL{f′(t)}=s⋅L{f(t)}−f(0−),{\displaystyle {\mathcal {L}}\{f'(t)\}=s\cdot {\mathcal {L}}\{f(t)\}-f(0^{-}),}and in the bilateral case,L{f′(t)}=s∫−∞∞e−stf(t)dt=s⋅L{f(t)}.{\displaystyle {\mathcal {L}}\{f'(t)\}=s\int _{-\infty }^{\infty }e^{-st}f(t)\,dt=s\cdot {\mathcal {L}}\{f(t)\}.}
The general resultL{f(n)(t)}=sn⋅L{f(t)}−sn−1f(0−)−⋯−f(n−1)(0−),{\displaystyle {\mathcal {L}}\left\{f^{(n)}(t)\right\}=s^{n}\cdot {\mathcal {L}}\{f(t)\}-s^{n-1}f(0^{-})-\cdots -f^{(n-1)}(0^{-}),}wheref(n){\displaystyle f^{(n)}}denotes thenth derivative off, can then be established with an inductive argument.
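The first-order rule can be verified symbolically; the sketch below (assuming SymPy) checks L{f′(t)} = sL{f(t)} − f(0) for the arbitrary choice f(t) = cos t:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')

f = sp.cos(t)
lhs = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)          # L{f'(t)} = L{-sin t}
rhs = s * sp.laplace_transform(f, t, s, noconds=True) - f.subs(t, 0)   # s L{f} - f(0)
print(sp.simplify(lhs - rhs))   # 0
```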
A useful property of the Laplace transform is the following:∫0∞f(x)g(x)dx=∫0∞(Lf)(s)⋅(L−1g)(s)ds{\displaystyle \int _{0}^{\infty }f(x)g(x)\,dx=\int _{0}^{\infty }({\mathcal {L}}f)(s)\cdot ({\mathcal {L}}^{-1}g)(s)\,ds}under suitable assumptions on the behaviour off,g{\displaystyle f,g}in a right neighbourhood of0{\displaystyle 0}and on the decay rate off,g{\displaystyle f,g}in a left neighbourhood of∞{\displaystyle \infty }. The above formula is a variation of integration by parts, with the operatorsddx{\displaystyle {\frac {d}{dx}}}and∫dx{\displaystyle \int \,dx}being replaced byL{\displaystyle {\mathcal {L}}}andL−1{\displaystyle {\mathcal {L}}^{-1}}. Let us prove the equivalent formulation:∫0∞(Lf)(x)g(x)dx=∫0∞f(s)(Lg)(s)ds.{\displaystyle \int _{0}^{\infty }({\mathcal {L}}f)(x)g(x)\,dx=\int _{0}^{\infty }f(s)({\mathcal {L}}g)(s)\,ds.}
By plugging in(Lf)(x)=∫0∞f(s)e−sxds{\displaystyle ({\mathcal {L}}f)(x)=\int _{0}^{\infty }f(s)e^{-sx}\,ds}the left-hand side turns into:∫0∞∫0∞f(s)g(x)e−sxdsdx,{\displaystyle \int _{0}^{\infty }\int _{0}^{\infty }f(s)g(x)e^{-sx}\,ds\,dx,}but assuming Fubini's theorem holds, by reversing the order of integration we get the wanted right-hand side.
This method can be used to compute integrals that would otherwise be difficult to compute using elementary methods of real calculus. For example,∫0∞sinxxdx=∫0∞L(1)(x)sinxdx=∫0∞1⋅L(sin)(x)dx=∫0∞dxx2+1=π2.{\displaystyle \int _{0}^{\infty }{\frac {\sin x}{x}}dx=\int _{0}^{\infty }{\mathcal {L}}(1)(x)\sin xdx=\int _{0}^{\infty }1\cdot {\mathcal {L}}(\sin )(x)dx=\int _{0}^{\infty }{\frac {dx}{x^{2}+1}}={\frac {\pi }{2}}.}
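The underlying identity ∫₀^∞ (Lf)(x)g(x) dx = ∫₀^∞ f(s)(Lg)(s) ds can also be checked numerically for a concrete pair; below f(x) = e^{−x} and g(x) = e^{−2x} are chosen arbitrarily, so that (Lf)(x) = 1/(1+x) and (Lg)(s) = 1/(2+s):

```python
import numpy as np
from scipy.integrate import quad

# f(x) = exp(-x)   ->  (Lf)(x) = 1/(1 + x)
# g(x) = exp(-2x)  ->  (Lg)(s) = 1/(2 + s)
lhs, _ = quad(lambda x: (1.0 / (1.0 + x)) * np.exp(-2.0 * x), 0, np.inf)
rhs, _ = quad(lambda s: np.exp(-s) * (1.0 / (2.0 + s)), 0, np.inf)
print(lhs, rhs)   # both integrals agree, illustrating  integral (Lf) g = integral f (Lg)
```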
The (unilateral) Laplace–Stieltjes transform of a functiong: ℝ → ℝis defined by theLebesgue–Stieltjes integral
{L∗g}(s)=∫0∞e−stdg(t).{\displaystyle \{{\mathcal {L}}^{*}g\}(s)=\int _{0}^{\infty }e^{-st}\,d\,g(t)~.}
The functiongis assumed to be ofbounded variation. Ifgis theantiderivativeoff:
g(x)=∫0xf(t)dt{\displaystyle g(x)=\int _{0}^{x}f(t)\,d\,t}
then the Laplace–Stieltjes transform ofgand the Laplace transform offcoincide. In general, the Laplace–Stieltjes transform is the Laplace transform of theStieltjes measureassociated tog. So in practice, the only distinction between the two transforms is that the Laplace transform is thought of as operating on the density function of the measure, whereas the Laplace–Stieltjes transform is thought of as operating on itscumulative distribution function.[27]
Letf{\displaystyle f}be a complex-valued Lebesgue integrable function supported on[0,∞){\displaystyle [0,\infty )}, and letF(s)=Lf(s){\displaystyle F(s)={\mathcal {L}}f(s)}be its Laplace transform. Then, within the region of convergence, we haveF(s)=F(σ+iτ)=∫0∞f(t)e−σte−iτtdt,{\displaystyle F(s)=F(\sigma +i\tau )=\int _{0}^{\infty }f(t)e^{-\sigma t}e^{-i\tau t}\,dt,}
which is the Fourier transform of the functionf(t)e−σt{\displaystyle f(t)e^{-\sigma t}}.[28]
Indeed, theFourier transformis a special case (under certain conditions) of the bilateral Laplace transform. The main difference is that the Fourier transform of a function is a complex function of arealvariable (frequency), the Laplace transform of a function is a complex function of acomplexvariable. The Laplace transform is usually restricted to transformation of functions oftwitht≥ 0. A consequence of this restriction is that the Laplace transform of a function is aholomorphic functionof the variables. Unlike the Fourier transform, the Laplace transform of adistributionis generally awell-behavedfunction. Techniques of complex variables can also be used to directly study Laplace transforms. As a holomorphic function, the Laplace transform has apower seriesrepresentation. This power series expresses a function as a linear superposition ofmomentsof the function. This perspective has applications in probability theory.
Formally, the Fourier transform is equivalent to evaluating the bilateral Laplace transform with imaginary arguments=iω[29][30]when the condition explained below is fulfilled,
f^(ω)=F{f(t)}=L{f(t)}|s=iω=F(s)|s=iω=∫−∞∞e−iωtf(t)dt.{\displaystyle {\begin{aligned}{\hat {f}}(\omega )&={\mathcal {F}}\{f(t)\}\\[4pt]&={\mathcal {L}}\{f(t)\}|_{s=i\omega }=F(s)|_{s=i\omega }\\[4pt]&=\int _{-\infty }^{\infty }e^{-i\omega t}f(t)\,dt~.\end{aligned}}}
This convention of the Fourier transform (f^3(ω){\displaystyle {\hat {f}}_{3}(\omega )}inFourier transform § Other conventions) requires a factor of1/2πon the inverse Fourier transform. This relationship between the Laplace and Fourier transforms is often used to determine thefrequency spectrumof asignalor dynamical system.
The above relation is valid as statedif and only ifthe region of convergence (ROC) ofF(s)contains the imaginary axis,σ= 0.
For example, the functionf(t) = cos(ω0t)has a Laplace transformF(s) =s/(s2+ω02)whose ROC isRe(s) > 0. Ass=iω0is a pole ofF(s), substitutings=iωinF(s)does not yield the Fourier transform off(t)u(t), which contains terms proportional to theDirac delta functionsδ(ω±ω0).
However, a relation of the formlimσ→0+F(σ+iω)=f^(ω){\displaystyle \lim _{\sigma \to 0^{+}}F(\sigma +i\omega )={\hat {f}}(\omega )}holds under much weaker conditions. For instance, this holds for the above example provided that the limit is understood as aweak limitof measures (seevague topology). General conditions relating the limit of the Laplace transform of a function on the boundary to the Fourier transform take the form ofPaley–Wiener theorems.
The Mellin transform and its inverse are related to the two-sided Laplace transform by a simple change of variables.
If in the Mellin transformG(s)=M{g(θ)}=∫0∞θsg(θ)dθθ{\displaystyle G(s)={\mathcal {M}}\{g(\theta )\}=\int _{0}^{\infty }\theta ^{s}g(\theta )\,{\frac {d\theta }{\theta }}}we setθ=e−twe get a two-sided Laplace transform.
The unilateral or one-sided Z-transform is simply the Laplace transform of an ideally sampled signal with the substitution ofz=defesT,{\displaystyle z{\stackrel {\mathrm {def} }{{}={}}}e^{sT},}whereT= 1/fsis thesampling interval(in units of time e.g., seconds) andfsis thesampling rate(insamples per secondorhertz).
LetΔT(t)=def∑n=0∞δ(t−nT){\displaystyle \Delta _{T}(t)\ {\stackrel {\mathrm {def} }{=}}\ \sum _{n=0}^{\infty }\delta (t-nT)}be a sampling impulse train (also called aDirac comb) andxq(t)=defx(t)ΔT(t)=x(t)∑n=0∞δ(t−nT)=∑n=0∞x(nT)δ(t−nT)=∑n=0∞x[n]δ(t−nT){\displaystyle {\begin{aligned}x_{q}(t)&{\stackrel {\mathrm {def} }{{}={}}}x(t)\Delta _{T}(t)=x(t)\sum _{n=0}^{\infty }\delta (t-nT)\\&=\sum _{n=0}^{\infty }x(nT)\delta (t-nT)=\sum _{n=0}^{\infty }x[n]\delta (t-nT)\end{aligned}}}be the sampled representation of the continuous-timex(t)x[n]=defx(nT).{\displaystyle x[n]{\stackrel {\mathrm {def} }{{}={}}}x(nT)~.}
The Laplace transform of the sampled signalxq(t)isXq(s)=∫0−∞xq(t)e−stdt=∫0−∞∑n=0∞x[n]δ(t−nT)e−stdt=∑n=0∞x[n]∫0−∞δ(t−nT)e−stdt=∑n=0∞x[n]e−nsT.{\displaystyle {\begin{aligned}X_{q}(s)&=\int _{0^{-}}^{\infty }x_{q}(t)e^{-st}\,dt\\&=\int _{0^{-}}^{\infty }\sum _{n=0}^{\infty }x[n]\delta (t-nT)e^{-st}\,dt\\&=\sum _{n=0}^{\infty }x[n]\int _{0^{-}}^{\infty }\delta (t-nT)e^{-st}\,dt\\&=\sum _{n=0}^{\infty }x[n]e^{-nsT}~.\end{aligned}}}
This is the precise definition of the unilateral Z-transform of the discrete functionx[n]
X(z)=∑n=0∞x[n]z−n{\displaystyle X(z)=\sum _{n=0}^{\infty }x[n]z^{-n}}with the substitution ofz→esT.
Comparing the last two equations, we find the relationship between the unilateral Z-transform and the Laplace transform of the sampled signal,Xq(s)=X(z)|z=esT.{\displaystyle X_{q}(s)=X(z){\Big |}_{z=e^{sT}}.}
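This relationship can be checked numerically; the sketch below samples x(t) = e^{−t} with an arbitrarily chosen period T, sums the series Σ x[n]e^{−nsT} directly, and compares it with the closed-form Z-transform 1/(1 − e^{−T}z^{−1}) evaluated at z = e^{sT}:

```python
import numpy as np

T = 0.1                      # illustrative sampling interval
s = 0.7 + 0.3j               # illustrative point in the s-plane
n = np.arange(0, 5000)       # enough terms for the geometric series to converge

x = np.exp(-n * T)           # samples x[n] = x(nT) of x(t) = exp(-t)

# Laplace transform of the sampled signal: sum of x[n] e^{-n s T}
Xq = np.sum(x * np.exp(-n * s * T))

# Z-transform X(z) = 1 / (1 - e^{-T} z^{-1}) evaluated at z = e^{sT}
z = np.exp(s * T)
Xz = 1.0 / (1.0 - np.exp(-T) / z)

print(Xq, Xz)                # the two agree, i.e. Xq(s) = X(z) at z = e^{sT}
```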
The similarity between the Z- and Laplace transforms is expanded upon in the theory oftime scale calculus.
The integral form of theBorel transformF(s)=∫0∞f(z)e−szdz{\displaystyle F(s)=\int _{0}^{\infty }f(z)e^{-sz}\,dz}is a special case of the Laplace transform forfanentire functionof exponential type, meaning that|f(z)|≤AeB|z|{\displaystyle |f(z)|\leq Ae^{B|z|}}for some constantsAandB. The generalized Borel transform allows a different weighting function to be used, rather than the exponential function, to transform functions not of exponential type.Nachbin's theoremgives necessary and sufficient conditions for the Borel transform to be well defined.
Since an ordinary Laplace transform can be written as a special case of a two-sided transform, and since the two-sided transform can be written as the sum of two one-sided transforms, the theories of the Laplace, Fourier, Mellin, and Z-transforms are at bottom the same subject. However, a different point of view and different characteristic problems are associated with each of these four major integral transforms.
The following table provides Laplace transforms for many common functions of a single variable.[31][32]For definitions and explanations, see theExplanatory Notesat the end of the table.
Because the Laplace transform is a linear operator, the transform of a sum is the sum of the transforms of each term,L{f(t)+g(t)}=L{f(t)}+L{g(t)}{\displaystyle {\mathcal {L}}\{f(t)+g(t)\}={\mathcal {L}}\{f(t)\}+{\mathcal {L}}\{g(t)\}}, and the transform of a scalar multiple of a function is that scalar multiple times the transform,L{af(t)}=aL{f(t)}{\displaystyle {\mathcal {L}}\{af(t)\}=a{\mathcal {L}}\{f(t)\}}.
Using this linearity, and varioustrigonometric,hyperbolic, and complex number (etc.) properties and/or identities, some Laplace transforms can be obtained from others more quickly than by using the definition directly.
The unilateral Laplace transform takes as input a function whose time domain is thenon-negativereals, which is why all of the time domain functions in the table below are multiples of the Heaviside step function,u(t).
The entries of the table that involve a time delayτare required to becausal(meaning thatτ> 0). A causal system is a system where theimpulse responseh(t)is zero for all timetprior tot= 0. In general, the region of convergence for causal systems is not the same as that ofanticausal systems.
The Laplace transform is often used incircuit analysis, and simple conversions to thes-domain of circuit elements can be made. Circuit elements can be transformed intoimpedances, very similar tophasorimpedances.
Here is a summary of equivalents:
Note that the resistor is exactly the same in the time domain and thes-domain. The sources are put in if there are initial conditions on the circuit elements. For example, if a capacitor has an initial voltage across it, or if the inductor has an initial current through it, the sources inserted in thes-domain account for that.
The equivalents for current and voltage sources are simply derived from the transformations in the table above.
The Laplace transform is used frequently inengineeringandphysics; the output of alinear time-invariant systemcan be calculated by convolving its unit impulse response with the input signal. Performing this calculation in Laplace space turns the convolution into a multiplication; the latter being easier to solve because of its algebraic form. For more information, seecontrol theory. The Laplace transform is invertible on a large class of functions. Given a simple mathematical or functional description of an input or output to asystem, the Laplace transform provides an alternative functional description that often simplifies the process of analyzing the behavior of the system, or in synthesizing a new system based on a set of specifications.[38]
The Laplace transform can also be used to solve differential equations and is used extensively inmechanical engineeringandelectrical engineering. The Laplace transform reduces a linear differential equation to an algebraic equation, which can then be solved by the formal rules of algebra. The original differential equation can then be solved by applying the inverse Laplace transform. English electrical engineerOliver Heavisidefirst proposed a similar scheme, although without using the Laplace transform; and the resulting operational calculus is credited as the Heaviside calculus.
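As a small illustration of that procedure (an arbitrary initial value problem y″ + 3y′ + 2y = 0, y(0) = 1, y′(0) = 0, worked with SymPy), the transformed equation is solved algebraically for Y(s) and then inverted:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')

# Transform y'' + 3y' + 2y = 0 with y(0) = 1, y'(0) = 0:
#   (s^2 Y - s*y0 - y0p) + 3 (s Y - y0) + 2 Y = 0
y0, y0p = 1, 0
Y = sp.symbols('Y')
eq = sp.Eq((s**2 * Y - s * y0 - y0p) + 3 * (s * Y - y0) + 2 * Y, 0)
Y_of_s = sp.solve(eq, Y)[0]                    # purely algebraic step in the s-domain

y = sp.inverse_laplace_transform(Y_of_s, s, t)
print(sp.simplify(y))                          # 2*exp(-t) - exp(-2*t)  (times Heaviside(t))
```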
LetL{f(t)}=F(s){\displaystyle {\mathcal {L}}\left\{f(t)\right\}=F(s)}. Then (see the table above)
∂sL{f(t)t}=∂s∫0∞f(t)te−stdt=−∫0∞f(t)e−stdt=−F(s){\displaystyle \partial _{s}{\mathcal {L}}\left\{{\frac {f(t)}{t}}\right\}=\partial _{s}\int _{0}^{\infty }{\frac {f(t)}{t}}e^{-st}\,dt=-\int _{0}^{\infty }f(t)e^{-st}dt=-F(s)}
From which one gets:
L{f(t)t}=∫s∞F(p)dp.{\displaystyle {\mathcal {L}}\left\{{\frac {f(t)}{t}}\right\}=\int _{s}^{\infty }F(p)\,dp.}
In the limits→0{\displaystyle s\rightarrow 0}, one gets∫0∞f(t)tdt=∫0∞F(p)dp,{\displaystyle \int _{0}^{\infty }{\frac {f(t)}{t}}\,dt=\int _{0}^{\infty }F(p)\,dp,}provided that the interchange of limits can be justified. This is often possible as a consequence of thefinal value theorem. Even when the interchange cannot be justified the calculation can be suggestive. For example, witha≠ 0 ≠b, proceeding formally one has∫0∞cos(at)−cos(bt)tdt=∫0∞(pp2+a2−pp2+b2)dp=[12lnp2+a2p2+b2]0∞=12lnb2a2=ln|ba|.{\displaystyle {\begin{aligned}\int _{0}^{\infty }{\frac {\cos(at)-\cos(bt)}{t}}\,dt&=\int _{0}^{\infty }\left({\frac {p}{p^{2}+a^{2}}}-{\frac {p}{p^{2}+b^{2}}}\right)\,dp\\[6pt]&=\left[{\frac {1}{2}}\ln {\frac {p^{2}+a^{2}}{p^{2}+b^{2}}}\right]_{0}^{\infty }={\frac {1}{2}}\ln {\frac {b^{2}}{a^{2}}}=\ln \left|{\frac {b}{a}}\right|.\end{aligned}}}
The validity of this identity can be proved by other means. It is an example of aFrullani integral.
Another example isDirichlet integral.
In the theory ofelectrical circuits, the current flow in acapacitoris proportional to the capacitance and rate of change in the electrical potential (with equations as for theSIunit system). Symbolically, this is expressed by the differential equationi=Cdvdt,{\displaystyle i=C{dv \over dt},}whereCis the capacitance of the capacitor,i=i(t)is theelectric currentthrough the capacitor as a function of time, andv=v(t)is thevoltageacross the terminals of the capacitor, also as a function of time.
Taking the Laplace transform of this equation, we obtainI(s)=C(sV(s)−V0),{\displaystyle I(s)=C(sV(s)-V_{0}),}whereI(s)=L{i(t)},V(s)=L{v(t)},{\displaystyle {\begin{aligned}I(s)&={\mathcal {L}}\{i(t)\},\\V(s)&={\mathcal {L}}\{v(t)\},\end{aligned}}}andV0=v(0).{\displaystyle V_{0}=v(0).}
Solving forV(s)we haveV(s)=I(s)sC+V0s.{\displaystyle V(s)={I(s) \over sC}+{V_{0} \over s}.}
The definition of the complex impedanceZ(inohms) is the ratio of the complex voltageVdivided by the complex currentIwhile holding the initial stateV0at zero:Z(s)=V(s)I(s)|V0=0.{\displaystyle Z(s)=\left.{V(s) \over I(s)}\right|_{V_{0}=0}.}
Using this definition and the previous equation, we find:Z(s)=1sC,{\displaystyle Z(s)={\frac {1}{sC}},}which is the correct expression for the complex impedance of a capacitor. In addition, the Laplace transform has large applications in control theory.
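A minimal sketch of this use in circuit analysis (an RC discharge with arbitrarily chosen values R = 1 kΩ, C = 1 µF, V₀ = 5 V, worked with SymPy): the differential equation is transformed, solved algebraically for V(s), and inverted.

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')

R, C, V0 = sp.Integer(1000), sp.Rational(1, 10**6), sp.Integer(5)  # 1 kOhm, 1 uF, 5 V

# Capacitor discharging through R:  C dv/dt = -v/R,  v(0) = V0.
# Transforming:  C (s V(s) - V0) = -V(s)/R   =>   V(s) = V0 / (s + 1/(R C))
V = V0 / (s + 1 / (R * C))

v = sp.inverse_laplace_transform(V, s, t)
print(v)    # 5*exp(-1000*t)  (times Heaviside(t)): the familiar RC discharge
```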
Consider a linear time-invariant system withtransfer functionH(s)=1(s+α)(s+β).{\displaystyle H(s)={\frac {1}{(s+\alpha )(s+\beta )}}.}
Theimpulse responseis simply the inverse Laplace transform of this transfer function:h(t)=L−1{H(s)}.{\displaystyle h(t)={\mathcal {L}}^{-1}\{H(s)\}.}
To evaluate this inverse transform, we begin by expandingH(s)using the method of partial fraction expansion,1(s+α)(s+β)=Ps+α+Rs+β.{\displaystyle {\frac {1}{(s+\alpha )(s+\beta )}}={P \over s+\alpha }+{R \over s+\beta }.}
The unknown constantsPandRare theresidueslocated at the corresponding poles of the transfer function. Each residue represents the relative contribution of thatsingularityto the transfer function's overall shape.
By theresidue theorem, the inverse Laplace transform depends only upon the poles and their residues. To find the residueP, we multiply both sides of the equation bys+αto get1s+β=P+R(s+α)s+β.{\displaystyle {\frac {1}{s+\beta }}=P+{R(s+\alpha ) \over s+\beta }.}
Then by lettings= −α, the contribution fromRvanishes and all that is left isP=1s+β|s=−α=1β−α.{\displaystyle P=\left.{1 \over s+\beta }\right|_{s=-\alpha }={1 \over \beta -\alpha }.}
Similarly, the residueRis given byR=1s+α|s=−β=1α−β.{\displaystyle R=\left.{1 \over s+\alpha }\right|_{s=-\beta }={1 \over \alpha -\beta }.}
Note thatR=−1β−α=−P{\displaystyle R={-1 \over \beta -\alpha }=-P}and so the substitution ofRandPinto the expanded expression forH(s)givesH(s)=(1β−α)⋅(1s+α−1s+β).{\displaystyle H(s)=\left({\frac {1}{\beta -\alpha }}\right)\cdot \left({1 \over s+\alpha }-{1 \over s+\beta }\right).}
Finally, using the linearity property and the known transform for exponential decay (seeItem#3in theTable of Laplace Transforms, above), we can take the inverse Laplace transform ofH(s)to obtainh(t)=L−1{H(s)}=1β−α(e−αt−e−βt),{\displaystyle h(t)={\mathcal {L}}^{-1}\{H(s)\}={\frac {1}{\beta -\alpha }}\left(e^{-\alpha t}-e^{-\beta t}\right),}which is the impulse response of the system.
The same result can be achieved using theconvolution propertyas if the system is a series of filters with transfer functions1/(s+α)and1/(s+β). That is, the inverse ofH(s)=1(s+α)(s+β)=1s+α⋅1s+β{\displaystyle H(s)={\frac {1}{(s+\alpha )(s+\beta )}}={\frac {1}{s+\alpha }}\cdot {\frac {1}{s+\beta }}}isL−1{1s+α}∗L−1{1s+β}=e−αt∗e−βt=∫0te−αxe−β(t−x)dx=e−αt−e−βtβ−α.{\displaystyle {\mathcal {L}}^{-1}\!\left\{{\frac {1}{s+\alpha }}\right\}*{\mathcal {L}}^{-1}\!\left\{{\frac {1}{s+\beta }}\right\}=e^{-\alpha t}*e^{-\beta t}=\int _{0}^{t}e^{-\alpha x}e^{-\beta (t-x)}\,dx={\frac {e^{-\alpha t}-e^{-\beta t}}{\beta -\alpha }}.}
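The same partial-fraction computation can be reproduced with SymPy; the pole locations α = 1 and β = 3 below are chosen arbitrarily:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')
alpha, beta = 1, 3      # illustrative pole locations

H = 1 / ((s + alpha) * (s + beta))
print(sp.apart(H, s))                 # 1/(2*(s + 1)) - 1/(2*(s + 3))
h = sp.inverse_laplace_transform(H, s, t)
print(sp.simplify(h))                 # (exp(-t) - exp(-3*t))/2, times Heaviside(t)
```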
Starting with the Laplace transform,X(s)=ssin(φ)+ωcos(φ)s2+ω2{\displaystyle X(s)={\frac {s\sin(\varphi )+\omega \cos(\varphi )}{s^{2}+\omega ^{2}}}}we find the inverse by first rearranging terms in the fraction:X(s)=ssin(φ)s2+ω2+ωcos(φ)s2+ω2=sin(φ)(ss2+ω2)+cos(φ)(ωs2+ω2).{\displaystyle {\begin{aligned}X(s)&={\frac {s\sin(\varphi )}{s^{2}+\omega ^{2}}}+{\frac {\omega \cos(\varphi )}{s^{2}+\omega ^{2}}}\\&=\sin(\varphi )\left({\frac {s}{s^{2}+\omega ^{2}}}\right)+\cos(\varphi )\left({\frac {\omega }{s^{2}+\omega ^{2}}}\right).\end{aligned}}}
We are now able to take the inverse Laplace transform of our terms:x(t)=sin(φ)L−1{ss2+ω2}+cos(φ)L−1{ωs2+ω2}=sin(φ)cos(ωt)+cos(φ)sin(ωt).{\displaystyle {\begin{aligned}x(t)&=\sin(\varphi ){\mathcal {L}}^{-1}\left\{{\frac {s}{s^{2}+\omega ^{2}}}\right\}+\cos(\varphi ){\mathcal {L}}^{-1}\left\{{\frac {\omega }{s^{2}+\omega ^{2}}}\right\}\\&=\sin(\varphi )\cos(\omega t)+\cos(\varphi )\sin(\omega t).\end{aligned}}}
This is just thesine of the sumof the arguments, yielding:x(t)=sin(ωt+φ).{\displaystyle x(t)=\sin(\omega t+\varphi ).}
We can apply similar logic to find thatL−1{scosφ−ωsinφs2+ω2}=cos(ωt+φ).{\displaystyle {\mathcal {L}}^{-1}\left\{{\frac {s\cos \varphi -\omega \sin \varphi }{s^{2}+\omega ^{2}}}\right\}=\cos {(\omega t+\varphi )}.}
Instatistical mechanics, the Laplace transform of the density of statesg(E){\displaystyle g(E)}defines thepartition function.[39]That is, the canonical partition functionZ(β){\displaystyle Z(\beta )}is given byZ(β)=∫0∞e−βEg(E)dE{\displaystyle Z(\beta )=\int _{0}^{\infty }e^{-\beta E}g(E)\,dE}and the inverse is given byg(E)=12πi∫β0−i∞β0+i∞eβEZ(β)dβ{\displaystyle g(E)={\frac {1}{2\pi i}}\int _{\beta _{0}-i\infty }^{\beta _{0}+i\infty }e^{\beta E}Z(\beta )\,d\beta }
The wide and general applicability of the Laplace transform and its inverse is illustrated by an application in astronomy which provides some information on thespatial distributionof matter of anastronomicalsource ofradiofrequencythermal radiationtoo distant toresolveas more than a point, given itsflux densityspectrum, rather than relating thetimedomain with the spectrum (frequency domain).
Assuming certain properties of the object, e.g. spherical shape and constant temperature, calculations based on carrying out an inverse Laplace transformation on the spectrum of the object can produce the only possiblemodelof the distribution of matter in it (density as a function of distance from the center) consistent with the spectrum.[40]When independent information on the structure of an object is available, the inverse Laplace transform method has been found to be in good agreement.
Consider arandom walk, with steps{+1,−1}{\displaystyle \{+1,-1\}}occurring with probabilitiesp,q=1−p{\displaystyle p,q=1-p}.[41]Suppose also that the steps occur at the arrival times of aPoisson processwith parameterλ{\displaystyle \lambda }, and consider the probability of the walk being at the lattice pointn{\displaystyle n}at timet{\displaystyle t}.
This leads to a system ofintegral equations(or equivalently a system of differential equations). However, because it is a system of convolution equations, the Laplace transform converts it into a system of linear algebraic equations for the transformed probabilities, which may then be solved by standard methods.
The Laplace transform of the measureμ{\displaystyle \mu }on[0,∞){\displaystyle [0,\infty )}is given by(Lμ)(s)=∫0∞e−stdμ(t).{\displaystyle ({\mathcal {L}}\mu )(s)=\int _{0}^{\infty }e^{-st}\,d\mu (t).}
It is intuitively clear that, for smalls>0{\displaystyle s>0}, the exponentially decaying integrand will become more sensitive to the concentration of the measureμ{\displaystyle \mu }on larger subsets of the domain. To make this more precise, introduce the distribution function:M(t)=μ([0,t]).{\displaystyle M(t)=\mu ([0,t]).}
Formally, we expect a limit of the following kind:lims→0+(Lμ)(s)=limt→∞M(t).{\displaystyle \lim _{s\to 0^{+}}({\mathcal {L}}\mu )(s)=\lim _{t\to \infty }M(t).}
Tauberian theoremsare theorems relating the asymptotics of the Laplace transform, ass→0+{\displaystyle s\to 0^{+}}, to those of the distribution ofμ{\displaystyle \mu }ast→∞{\displaystyle t\to \infty }. They are thus of importance in asymptotic formulae ofprobabilityandstatistics, where often the spectral side has asymptotics that are simpler to infer.[42]
Two Tauberian theorems of note are theHardy–Littlewood Tauberian theoremandWiener's Tauberian theorem. The Wiener theorem generalizes theIkehara Tauberian theorem, which is the following statement:
LetA(x) be a non-negative,monotonicnondecreasing function ofx, defined for 0 ≤x< ∞. Suppose thatf(s)=∫0∞A(x)e−xsdx{\displaystyle f(s)=\int _{0}^{\infty }A(x)e^{-xs}\,dx}
converges for ℜ(s) > 1 to the functionƒ(s) and that, for some non-negative numberc,f(s)−cs−1{\displaystyle f(s)-{\frac {c}{s-1}}}
has an extension as acontinuous functionfor ℜ(s) ≥ 1.
Then thelimitasxgoes to infinity ofe−xA(x) is equal to c.
This statement can be applied in particular to thelogarithmic derivativeofRiemann zeta function, and thus provides an extremely short way to prove theprime number theorem.[43]
|
https://en.wikipedia.org/wiki/Laplace_transform
|
Inmathematics, thetwo-sided Laplace transformorbilateral Laplace transformis anintegral transformequivalent toprobability'smoment-generating function. Two-sided Laplace transforms are closely related to theFourier transform, theMellin transform, theZ-transformand the ordinary or one-sidedLaplace transform. Iff(t) is a real- or complex-valued function of the real variabletdefined for all real numbers, then the two-sided Laplace transform is defined by the integralB{f}(s)=F(s)=∫−∞∞e−stf(t)dt.{\displaystyle {\mathcal {B}}\{f\}(s)=F(s)=\int _{-\infty }^{\infty }e^{-st}f(t)\,dt.}
The integral is most commonly understood as animproper integral, which convergesif and only ifboth integrals∫0∞e−stf(t)dtand∫−∞0e−stf(t)dt{\displaystyle \int _{0}^{\infty }e^{-st}f(t)\,dt\quad {\text{and}}\quad \int _{-\infty }^{0}e^{-st}f(t)\,dt}
exist. There seems to be no generally accepted notation for the two-sided transform; theB{\displaystyle {\mathcal {B}}}used here recalls "bilateral". The two-sided transform used by some authors isT{f}(s)=sB{f}(s)=s∫−∞∞e−stf(t)dt.{\displaystyle {\mathcal {T}}\{f\}(s)=s{\mathcal {B}}\{f\}(s)=s\int _{-\infty }^{\infty }e^{-st}f(t)\,dt.}
In pure mathematics the argumenttcan be any variable, and Laplace transforms are used to study howdifferential operatorstransform the function.
Inscienceandengineeringapplications, the argumenttoften represents time (in seconds), and the functionf(t) often represents asignalor waveform that varies with time. In these cases, the signals are transformed byfilters, which act like a mathematical operator, but with a restriction: they have to be causal, which means that the output at a given timetcannot depend on the input at any later time.
In population ecology, the argumenttoften represents spatial displacement in a dispersal kernel.
When working with functions of time,f(t) is called thetime domainrepresentation of the signal, whileF(s) is called thes-domain(orLaplace domain) representation. The inverse transformation then represents asynthesisof the signal as the sum of its frequency components taken over all frequencies, whereas the forward transformation represents theanalysisof the signal into its frequency components.
TheFourier transformcan be defined in terms of the two-sided Laplace transform:
Note that definitions of the Fourier transform differ, and in particular
is often used instead. In terms of the Fourier transform, we may also obtain the two-sided Laplace transform, as
The Fourier transform is normally defined so that it exists for real values; the above definition defines the image in a stripa<ℑ(s)<b{\displaystyle a<\Im (s)<b}which may not include the real axis where the Fourier transform is supposed to converge.
This is why Laplace transforms retain their value in control theory and signal processing: the convergence of a Fourier transform integral within its domain only means that a linear, shift-invariant system described by it is stable or critical. The Laplace transform, on the other hand, converges somewhere for every impulse response that grows at most exponentially, because it involves an extra factor that can be taken as an exponential regulator. Since there are no superexponentially growing linear feedback networks, Laplace-transform-based analysis and solution of linear, shift-invariant systems takes its most general form in the context of Laplace, not Fourier, transforms.
At the same time, Laplace transform theory nowadays falls within the ambit of more generalintegral transforms, or even generalharmonic analysis. In that framework and nomenclature, Laplace transforms are simply another form of Fourier analysis, even if, in hindsight, a more general one.
Ifuis theHeaviside step function, equal to zero when its argument is less than zero, to one-half when its argument equals zero, and to one when its argument is greater than zero, then the Laplace transformL{\displaystyle {\mathcal {L}}}may be defined in terms of the two-sided Laplace transform byL{f}(s)=B{fu}(s).{\displaystyle {\mathcal {L}}\{f\}(s)={\mathcal {B}}\{fu\}(s).}
On the other hand, we also haveB{f}(s)=L{f}(s)+L{f∘m}(−s),{\displaystyle {\mathcal {B}}\{f\}(s)={\mathcal {L}}\{f\}(s)+{\mathcal {L}}\{f\circ m\}(-s),}wherem:R→R{\displaystyle m:\mathbb {R} \to \mathbb {R} }is the function that multiplies by minus one (m(x)=−x{\displaystyle m(x)=-x}), so either version of the Laplace transform can be defined in terms of the other.
TheMellin transformmay be defined in terms of the two-sided Laplace transform byM{f}(s)=B{f(e−x)}(s)=B{f∘exp∘m}(s),{\displaystyle {\mathcal {M}}\{f\}(s)={\mathcal {B}}\{f(e^{-x})\}(s)={\mathcal {B}}\{f\circ \exp \circ m\}(s),}withm{\displaystyle m}as above, and conversely we can get the two-sided transform from the Mellin transform byB{f}(s)=M{f(−ln⁡x)}(s).{\displaystyle {\mathcal {B}}\{f\}(s)={\mathcal {M}}\{f(-\ln x)\}(s).}
Themoment-generating functionof a continuousprobability density functionƒ(x) can be expressed asB{f}(−s){\displaystyle {\mathcal {B}}\{f\}(-s)}.
The following properties can be found inBracewell (2000)andOppenheim & Willsky (1997)
Most properties of the bilateral Laplace transform are very similar to properties of the unilateral Laplace transform,
but there are some important differences:
Letf1(t){\displaystyle f_{1}(t)}andf2(t){\displaystyle f_{2}(t)}be functions with bilateral Laplace transformsF1(s){\displaystyle F_{1}(s)}andF2(s){\displaystyle F_{2}(s)}in the strips of convergenceα1,2<ℜs<β1,2{\displaystyle \alpha _{1,2}<\Re s<\beta _{1,2}}.
Letc∈R{\displaystyle c\in \mathbb {R} }withmax(−β1,α2)<c<min(−α1,β2){\displaystyle \max(-\beta _{1},\alpha _{2})<c<\min(-\alpha _{1},\beta _{2})}.
ThenParseval's theoremholds:[1]
This theorem is proved by applying the inverse Laplace transform on the convolution theorem in form of the cross-correlation.
Letf(t){\displaystyle f(t)}be a function with bilateral Laplace transformF(s){\displaystyle F(s)}in the strip of convergenceα<ℜs<β{\displaystyle \alpha <\Re s<\beta }.
Letc∈R{\displaystyle c\in \mathbb {R} }withα<c<β{\displaystyle \alpha <c<\beta }.
Then thePlancherel theoremholds:[2]
For any two functionsf,g{\textstyle f,g}for which the two-sided Laplace transformsT{f},T{g}{\textstyle {\mathcal {T}}\{f\},{\mathcal {T}}\{g\}}exist, ifT{f}=T{g},{\textstyle {\mathcal {T}}\{f\}={\mathcal {T}}\{g\},}i.e.T{f}(s)=T{g}(s){\textstyle {\mathcal {T}}\{f\}(s)={\mathcal {T}}\{g\}(s)}for every value ofs∈R,{\textstyle s\in \mathbb {R} ,}thenf=g{\textstyle f=g}almost everywhere.
The requirements for convergence of the bilateral transform are more stringent than for the unilateral transform, and the region of convergence is normally smaller.
Iffis alocally integrablefunction (or more generally aBorel measurelocally ofbounded variation), then the Laplace transformF(s) offconverges provided that the limitlimR→∞∫0Rf(t)e−stdt{\displaystyle \lim _{R\to \infty }\int _{0}^{R}f(t)e^{-st}\,dt}
exists. The Laplace transform converges absolutely if the integral∫0∞|f(t)e−st|dt{\displaystyle \int _{0}^{\infty }\left|f(t)e^{-st}\right|\,dt}
exists (as a properLebesgue integral). The Laplace transform is usually understood as conditionally convergent, meaning that it converges in the former but not necessarily in the latter sense.
The set of values for whichF(s) converges absolutely is either of the form Re(s) >aor else Re(s) ≥a, whereais anextended real constant, −∞ ≤a≤ ∞. (This follows from thedominated convergence theorem.) The constantais known as the abscissa ofabsolute convergence, and depends on the growth behavior off(t).[3]Analogously, the two-sided transform converges absolutely in a strip of the forma< Re(s) <b, and possibly including the lines Re(s) =aor Re(s) =b.[4]The subset of values ofsfor which the Laplace transform converges absolutely is called the region of absolute convergence or the domain of absolute convergence. In the two-sided case, it is sometimes called the strip of absolute convergence. The Laplace transform isanalyticin the region of absolute convergence.
Similarly, the set of values for whichF(s) converges (conditionally or absolutely) is known as the region of conditional convergence, or simply theregion of convergence(ROC). If the Laplace transform converges (conditionally) ats=s0, then it automatically converges for allswith Re(s) > Re(s0). Therefore, the region of convergence is a half-plane of the form Re(s) >a, possibly including some points of the boundary line Re(s) =a. In the region of convergence Re(s) > Re(s0), the Laplace transform offcan be expressed byintegrating by partsas the integralF(s)=(s−s0)∫0∞e−(s−s0)tβ(t)dt,β(u)=∫0ue−s0tf(t)dt.{\displaystyle F(s)=(s-s_{0})\int _{0}^{\infty }e^{-(s-s_{0})t}\beta (t)\,dt,\quad \beta (u)=\int _{0}^{u}e^{-s_{0}t}f(t)\,dt.}
That is, in the region of convergenceF(s) can effectively be expressed as the absolutely convergent Laplace transform of some other function. In particular, it is analytic.
There are severalPaley–Wiener theoremsconcerning the relationship between the decay properties offand the properties of the Laplace transform within the region of convergence.
In engineering applications, a function corresponding to alinear time-invariant (LTI) systemisstableif every bounded input produces a bounded output.
Bilateral transforms do not respectcausality. They make sense when applied over generic functions but when working with functions of time (signals) unilateral transforms are preferred.
The following list of interesting examples for the bilateral Laplace transform can be deduced from the corresponding Fourier or unilateral Laplace transformations (see alsoBracewell (2000)):
|
https://en.wikipedia.org/wiki/Two-sided_Laplace_transform
|
Inmathematics, theMellin transformis anintegral transformthat may be regarded as themultiplicativeversion of thetwo-sided Laplace transform. This integral transform is closely connected to the theory ofDirichlet series, and is
often used innumber theory,mathematical statistics, and the theory ofasymptotic expansions; it is closely related to theLaplace transformand theFourier transform, and the theory of thegamma functionand alliedspecial functions.
The Mellin transform of a complex-valued functionfdefined onR+×=(0,∞){\displaystyle \mathbf {R} _{+}^{\times }=(0,\infty )}is the functionMf{\displaystyle {\mathcal {M}}f}of complex variables{\displaystyle s}given (where it exists, seeFundamental stripbelow) byM{f}(s)=φ(s)=∫0∞xs−1f(x)dx=∫R+×f(x)xsdxx.{\displaystyle {\mathcal {M}}\left\{f\right\}(s)=\varphi (s)=\int _{0}^{\infty }x^{s-1}f(x)\,dx=\int _{\mathbf {R} _{+}^{\times }}f(x)x^{s}{\frac {dx}{x}}.}Notice thatdx/x{\displaystyle dx/x}is aHaar measureon the multiplicative groupR+×{\displaystyle \mathbf {R} _{+}^{\times }}andx↦xs{\displaystyle x\mapsto x^{s}}is a (in general non-unitary)multiplicative character.
The inverse transform isM−1{φ}(x)=f(x)=12πi∫c−i∞c+i∞x−sφ(s)ds.{\displaystyle {\mathcal {M}}^{-1}\left\{\varphi \right\}(x)=f(x)={\frac {1}{2\pi i}}\int _{c-i\infty }^{c+i\infty }x^{-s}\varphi (s)\,ds.}The notation implies this is aline integraltaken over a vertical line in the complex plane, whose real partcneed only satisfy a mild lower bound. Conditions under which this inversion is valid are given in theMellin inversion theorem.
The transform is named after theFinnishmathematicianHjalmar Mellin, who introduced it in a paper published 1897 inActa Societatis Scientiarum Fennicæ.[1]
Thetwo-sided Laplace transformmay be defined in terms of the Mellin transform byB{f}(s)=M{f(−lnx)}(s){\displaystyle {\mathcal {B}}\left\{f\right\}(s)={\mathcal {M}}\left\{f(-\ln x)\right\}(s)}and conversely we can get the Mellin transform from the two-sided Laplace transform byM{f}(s)=B{f(e−x)}(s).{\displaystyle {\mathcal {M}}\left\{f\right\}(s)={\mathcal {B}}\left\{f(e^{-x})\right\}(s).}
The Mellin transform may be thought of as integrating using a kernelxswith respect to the multiplicativeHaar measure,dxx{\textstyle {\frac {dx}{x}}}, which is invariant under dilationx↦ax{\displaystyle x\mapsto ax}, so thatd(ax)ax=dxx;{\textstyle {\frac {d(ax)}{ax}}={\frac {dx}{x}};}the two-sided Laplace transform integrates with respect to the additive Haar measuredx{\displaystyle dx}, which is translation invariant, so thatd(x+a)=dx.{\displaystyle d(x+a)=dx\,.}
We also may define theFourier transformin terms of the Mellin transform and vice versa; in terms of the Mellin transform and of the two-sided Laplace transform defined above{Ff}(−s)={Bf}(−is)={Mf(−lnx)}(−is).{\displaystyle \left\{{\mathcal {F}}f\right\}(-s)=\left\{{\mathcal {B}}f\right\}(-is)=\left\{{\mathcal {M}}f(-\ln x)\right\}(-is)\ .}We may also reverse the process and obtain{Mf}(s)={Bf(e−x)}(s)={Ff(e−x)}(−is).{\displaystyle \left\{{\mathcal {M}}f\right\}(s)=\left\{{\mathcal {B}}f(e^{-x})\right\}(s)=\left\{{\mathcal {F}}f(e^{-x})\right\}(-is)\ .}
The Mellin transform also connects theNewton seriesorbinomial transformtogether with thePoisson generating function, by means of thePoisson–Mellin–Newton cycle.
The Mellin transform may also be viewed as theGelfand transformfor theconvolution algebraof thelocally compact abelian groupof positive real numbers with multiplication.
The Mellin transform of the functionf(x)=e−x{\displaystyle f(x)=e^{-x}}isΓ(s)=∫0∞xs−1e−xdx{\displaystyle \Gamma (s)=\int _{0}^{\infty }x^{s-1}e^{-x}dx}whereΓ(s){\displaystyle \Gamma (s)}is thegamma function.Γ(s){\displaystyle \Gamma (s)}is ameromorphic functionwith simplepolesatz=0,−1,−2,…{\displaystyle z=0,-1,-2,\dots }.[2]Therefore,Γ(s){\displaystyle \Gamma (s)}is analytic forℜ(s)>0{\displaystyle \Re (s)>0}. Thus, lettingc>0{\displaystyle c>0}andz−s{\displaystyle z^{-s}}on theprincipal branch, the inverse transform givese−z=12πi∫c−i∞c+i∞Γ(s)z−sds.{\displaystyle e^{-z}={\frac {1}{2\pi i}}\int _{c-i\infty }^{c+i\infty }\Gamma (s)z^{-s}\;ds.}
This integral is known as the Cahen–Mellin integral.[3]
Since∫0∞xadx{\textstyle \int _{0}^{\infty }x^{a}dx}is not convergent for any value ofa∈R{\displaystyle a\in \mathbb {R} }, the Mellin transform is not defined for polynomial functions defined on the whole positive real axis. However, by defining it to be zero on different sections of the real axis, it is possible to take the Mellin transform. For example, iff(x)={xax<1,0x>1,{\displaystyle f(x)={\begin{cases}x^{a}&x<1,\\0&x>1,\end{cases}}}thenMf(s)=∫01xs−1xadx=∫01xs+a−1dx=1s+a.{\displaystyle {\mathcal {M}}f(s)=\int _{0}^{1}x^{s-1}x^{a}dx=\int _{0}^{1}x^{s+a-1}dx={\frac {1}{s+a}}.}
ThusMf(s){\displaystyle {\mathcal {M}}f(s)}has a simple pole ats=−a{\displaystyle s=-a}and is thus defined forℜ(s)>−a{\displaystyle \Re (s)>-a}. Similarly, iff(x)={0x<1,xbx>1,{\displaystyle f(x)={\begin{cases}0&x<1,\\x^{b}&x>1,\end{cases}}}thenMf(s)=∫1∞xs−1xbdx=∫1∞xs+b−1dx=−1s+b.{\displaystyle {\mathcal {M}}f(s)=\int _{1}^{\infty }x^{s-1}x^{b}dx=\int _{1}^{\infty }x^{s+b-1}dx=-{\frac {1}{s+b}}.}ThusMf(s){\displaystyle {\mathcal {M}}f(s)}has a simple pole ats=−b{\displaystyle s=-b}and is thus defined forℜ(s)<−b{\displaystyle \Re (s)<-b}.
Forp>0{\displaystyle p>0}, letf(x)=e−px{\displaystyle f(x)=e^{-px}}. ThenMf(s)=∫0∞xse−pxdxx=∫0∞(up)se−uduu=1ps∫0∞use−uduu=1psΓ(s).{\displaystyle {\mathcal {M}}f(s)=\int _{0}^{\infty }x^{s}e^{-px}{\frac {dx}{x}}=\int _{0}^{\infty }\left({\frac {u}{p}}\right)^{s}e^{-u}{\frac {du}{u}}={\frac {1}{p^{s}}}\int _{0}^{\infty }u^{s}e^{-u}{\frac {du}{u}}={\frac {1}{p^{s}}}\Gamma (s).}
It is possible to use the Mellin transform to produce one of the fundamental formulas for theRiemann zeta function,ζ(s){\displaystyle \zeta (s)}. Letf(x)=1ex−1{\textstyle f(x)={\frac {1}{e^{x}-1}}}. ThenMf(s)=∫0∞xs−11ex−1dx=∫0∞xs−1e−x1−e−xdx=∫0∞xs−1∑n=1∞e−nxdx=∑n=1∞∫0∞xse−nxdxx=∑n=1∞1nsΓ(s)=Γ(s)ζ(s).{\displaystyle {\begin{alignedat}{3}{\mathcal {M}}f(s)&=\int _{0}^{\infty }x^{s-1}{\frac {1}{e^{x}-1}}dx&&=\int _{0}^{\infty }x^{s-1}{\frac {e^{-x}}{1-e^{-x}}}dx\\&=\int _{0}^{\infty }x^{s-1}\sum _{n=1}^{\infty }e^{-nx}dx&&=\sum _{n=1}^{\infty }\int _{0}^{\infty }x^{s}e^{-nx}{\frac {dx}{x}}\\&=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}}\Gamma (s)=\Gamma (s)\zeta (s).\end{alignedat}}}Thus,ζ(s)=1Γ(s)∫0∞xs−11ex−1dx.{\displaystyle \zeta (s)={\frac {1}{\Gamma (s)}}\int _{0}^{\infty }x^{s-1}{\frac {1}{e^{x}-1}}dx.}
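This identity can be spot-checked numerically at a single point; below s = 2 is chosen arbitrarily, where Γ(2)ζ(2) = π²/6:

```python
import numpy as np
from scipy.integrate import quad

s = 2.0   # illustrative evaluation point

# Mellin transform of 1/(e^x - 1) at s = 2: integral of x/(e^x - 1) over (0, inf)
value, _ = quad(lambda x: x**(s - 1) / np.expm1(x), 0, np.inf)
print(value, np.pi**2 / 6)   # both are Gamma(2)*zeta(2) = pi^2/6 ~ 1.6449
```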
Forp>0{\displaystyle p>0}, letf(x)=e−xp{\displaystyle f(x)=e^{-x^{p}}}(i.e.f{\displaystyle f}is ageneralized Gaussian distributionwithout the scaling factor.) ThenMf(s)=∫0∞xs−1e−xpdx=∫0∞xp−1xs−pe−xpdx=∫0∞xp−1(xp)s/p−1e−xpdx=1p∫0∞us/p−1e−udu=Γ(s/p)p.{\displaystyle {\begin{alignedat}{3}{\mathcal {M}}f(s)&=\int _{0}^{\infty }x^{s-1}e^{-x^{p}}dx&&=\int _{0}^{\infty }x^{p-1}x^{s-p}e^{-x^{p}}dx\\&=\int _{0}^{\infty }x^{p-1}(x^{p})^{s/p-1}e^{-x^{p}}dx&&={\frac {1}{p}}\int _{0}^{\infty }u^{s/p-1}e^{-u}du\\&={\frac {\Gamma (s/p)}{p}}.\end{alignedat}}}In particular, settings=1{\displaystyle s=1}recovers the following form of the gamma functionΓ(1+1p)=∫0∞e−xpdx.{\displaystyle \Gamma \left(1+{\frac {1}{p}}\right)=\int _{0}^{\infty }e^{-x^{p}}dx.}
Generally, assuming the necessary convergence, we can connect Dirichlet series andpower seriesF(s)=∑n=1∞anns,f(z)=∑n=1∞anzn{\displaystyle F(s)=\sum \limits _{n=1}^{\infty }{\frac {a_{n}}{n^{s}}},\quad f(z)=\sum \limits _{n=1}^{\infty }a_{n}z^{n}}by this formal identity involving the Mellin transform:[4]Γ(s)F(s)=∫0∞xs−1f(e−x)dx{\displaystyle \Gamma (s)F(s)=\int _{0}^{\infty }x^{s-1}f(e^{-x})dx}
Forα,β∈R{\displaystyle \alpha ,\beta \in \mathbb {R} }, let the open strip⟨α,β⟩{\displaystyle \langle \alpha ,\beta \rangle }be defined to be alls∈C{\displaystyle s\in \mathbb {C} }such thats=σ+it{\displaystyle s=\sigma +it}withα<σ<β.{\displaystyle \alpha <\sigma <\beta .}Thefundamental stripofMf(s){\displaystyle {\mathcal {M}}f(s)}is defined to be the largest open strip on which it is defined. For example, fora>b{\displaystyle a>b}the fundamental strip off(x)={xax<1,xbx>1,{\displaystyle f(x)={\begin{cases}x^{a}&x<1,\\x^{b}&x>1,\end{cases}}}is⟨−a,−b⟩.{\displaystyle \langle -a,-b\rangle .}As seen by this example, the asymptotics of the function asx→0+{\displaystyle x\to 0^{+}}define the left endpoint of its fundamental strip, and the asymptotics of the function asx→+∞{\displaystyle x\to +\infty }define its right endpoint. To summarize usingBig O notation, iff{\displaystyle f}isO(xa){\displaystyle O(x^{a})}asx→0+{\displaystyle x\to 0^{+}}andO(xb){\displaystyle O(x^{b})}asx→+∞,{\displaystyle x\to +\infty ,}thenMf(s){\displaystyle {\mathcal {M}}f(s)}is defined in the strip⟨−a,−b⟩.{\displaystyle \langle -a,-b\rangle .}[5]
An application of this can be seen in the gamma function,Γ(s).{\displaystyle \Gamma (s).}Sincef(x)=e−x{\displaystyle f(x)=e^{-x}}isO(x0){\displaystyle O(x^{0})}asx→0+{\displaystyle x\to 0^{+}}andO(xk){\displaystyle O(x^{k})}for allk,{\displaystyle k,}thenΓ(s)=Mf(s){\displaystyle \Gamma (s)={\mathcal {M}}f(s)}should be defined in the strip⟨0,+∞⟩,{\displaystyle \langle 0,+\infty \rangle ,}which confirms thatΓ(s){\displaystyle \Gamma (s)}is analytic forℜ(s)>0.{\displaystyle \Re (s)>0.}
The properties in this table may be found inBracewell (2000)andErdélyi (1954).
Letf1(x){\displaystyle f_{1}(x)}andf2(x){\displaystyle f_{2}(x)}be functions with well-defined
Mellin transformsf~1,2(s)=M{f1,2}(s){\displaystyle {\tilde {f}}_{1,2}(s)={\mathcal {M}}\{f_{1,2}\}(s)}in the fundamental stripsα1,2<ℜs<β1,2{\displaystyle \alpha _{1,2}<\Re s<\beta _{1,2}}.
Letc∈R{\displaystyle c\in \mathbb {R} }withmax(α1,1−β2)<c<min(β1,1−α2){\displaystyle \max(\alpha _{1},1-\beta _{2})<c<\min(\beta _{1},1-\alpha _{2})}.
If the functionsxc−1/2f1(x){\displaystyle x^{c-1/2}\,f_{1}(x)}andx1/2−cf2(x){\displaystyle x^{1/2-c}\,f_{2}(x)}are also square-integrable over the interval(0,∞){\displaystyle (0,\infty )}, thenParseval's formulaholds:[6]∫0∞f1(x)f2(x)dx=12πi∫c−i∞c+i∞f1~(s)f2~(1−s)ds{\displaystyle \int _{0}^{\infty }f_{1}(x)\,f_{2}(x)\,dx={\frac {1}{2\pi i}}\int _{c-i\infty }^{c+i\infty }{\tilde {f_{1}}}(s)\,{\tilde {f_{2}}}(1-s)\,ds}The integration on the right hand side is done along the vertical lineℜr=c{\displaystyle \Re r=c}that
lies entirely within the overlap of the (suitably transformed) fundamental strips.
We can replacef2(x){\displaystyle f_{2}(x)}byf2(x)xs0−1{\displaystyle f_{2}(x)\,x^{s_{0}-1}}. This gives the following alternative form of the theorem:
Letf1(x){\displaystyle f_{1}(x)}andf2(x){\displaystyle f_{2}(x)}be functions with well-defined
Mellin transformsf~1,2(s)=M{f1,2}(s){\displaystyle {\tilde {f}}_{1,2}(s)={\mathcal {M}}\{f_{1,2}\}(s)}in the fundamental stripsα1,2<ℜs<β1,2{\displaystyle \alpha _{1,2}<\Re s<\beta _{1,2}}.
Letc∈R{\displaystyle c\in \mathbb {R} }withα1<c<β1{\displaystyle \alpha _{1}<c<\beta _{1}}and
chooses0∈C{\displaystyle s_{0}\in \mathbb {C} }withα2<ℜs0−c<β2{\displaystyle \alpha _{2}<\Re s_{0}-c<\beta _{2}}.
If the functionsxc−1/2f1(x){\displaystyle x^{c-1/2}\,f_{1}(x)}andxs0−c−1/2f2(x){\displaystyle x^{s_{0}-c-1/2}\,f_{2}(x)}are also square-integrable over the interval(0,∞){\displaystyle (0,\infty )}, then we have[6]∫0∞f1(x)f2(x)xs0−1dx=12πi∫c−i∞c+i∞f1~(s)f2~(s0−s)ds{\displaystyle \int _{0}^{\infty }f_{1}(x)\,f_{2}(x)\,x^{s_{0}-1}\,dx={\frac {1}{2\pi i}}\int _{c-i\infty }^{c+i\infty }{\tilde {f_{1}}}(s)\,{\tilde {f_{2}}}(s_{0}-s)\,ds}We can replacef2(x){\displaystyle f_{2}(x)}byf1(x)¯{\displaystyle {\overline {f_{1}(x)}}}.
This gives the following theorem:
Letf(x){\displaystyle f(x)}be a function with well-defined Mellin transformf~(s)=M{f}(s){\displaystyle {\tilde {f}}(s)={\mathcal {M}}\{f\}(s)}in the fundamental stripα<ℜs<β{\displaystyle \alpha <\Re s<\beta }.
Letc∈R{\displaystyle c\in \mathbb {R} }withα<c<β{\displaystyle \alpha <c<\beta }.
If the functionxc−1/2f(x){\displaystyle x^{c-1/2}\,f(x)}is also square-integrable over the interval(0,∞){\displaystyle (0,\infty )}, thenPlancherel's theoremholds:[7]∫0∞|f(x)|2x2c−1dx=12π∫−∞∞|f~(c+it)|2dt{\displaystyle \int _{0}^{\infty }|f(x)|^{2}\,x^{2c-1}dx={\frac {1}{2\pi }}\int _{-\infty }^{\infty }|{\tilde {f}}(c+it)|^{2}\,dt}
In the study ofHilbert spaces, the Mellin transform is often posed in a slightly different way. For functions inL2(0,∞){\displaystyle L^{2}(0,\infty )}(seeLp space) the fundamental strip always includes12+iR{\displaystyle {\tfrac {1}{2}}+i\mathbb {R} }, so we may define alinear operatorM~{\displaystyle {\tilde {\mathcal {M}}}}asM~:L2(0,∞)→L2(−∞,∞),{\displaystyle {\tilde {\mathcal {M}}}\colon L^{2}(0,\infty )\to L^{2}(-\infty ,\infty ),}{M~f}(s):=12π∫0∞x−12+isf(x)dx.{\displaystyle \{{\tilde {\mathcal {M}}}f\}(s):={\frac {1}{\sqrt {2\pi }}}\int _{0}^{\infty }x^{-{\frac {1}{2}}+is}f(x)\,dx.}In other words, we have set{M~f}(s):=12π{Mf}(12+is).{\displaystyle \{{\tilde {\mathcal {M}}}f\}(s):={\tfrac {1}{\sqrt {2\pi }}}\{{\mathcal {M}}f\}({\tfrac {1}{2}}+is).}This operator is usually denoted by just plainM{\displaystyle {\mathcal {M}}}and called the "Mellin transform", butM~{\displaystyle {\tilde {\mathcal {M}}}}is used here to distinguish from the definition used elsewhere in this article. TheMellin inversion theoremthen shows thatM~{\displaystyle {\tilde {\mathcal {M}}}}is invertible with inverseM~−1:L2(−∞,∞)→L2(0,∞),{\displaystyle {\tilde {\mathcal {M}}}^{-1}\colon L^{2}(-\infty ,\infty )\to L^{2}(0,\infty ),}{M~−1φ}(x)=12π∫−∞∞x−12−isφ(s)ds.{\displaystyle \{{\tilde {\mathcal {M}}}^{-1}\varphi \}(x)={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }x^{-{\frac {1}{2}}-is}\varphi (s)\,ds.}Furthermore, this operator is anisometry, that is to say‖M~f‖L2(−∞,∞)=‖f‖L2(0,∞){\displaystyle \|{\tilde {\mathcal {M}}}f\|_{L^{2}(-\infty ,\infty )}=\|f\|_{L^{2}(0,\infty )}}for allf∈L2(0,∞){\displaystyle f\in L^{2}(0,\infty )}(this explains why the factor of1/2π{\displaystyle 1/{\sqrt {2\pi }}}was used).
In probability theory, the Mellin transform is an essential tool in studying the distributions of products of random variables.[8]IfXis a random variable, andX+= max{X,0} denotes its positive part, whileX−= max{−X,0} is its negative part, then theMellin transformofXis defined as[9]MX(s)=∫0∞xsdFX+(x)+γ∫0∞xsdFX−(x),{\displaystyle {\mathcal {M}}_{X}(s)=\int _{0}^{\infty }x^{s}dF_{X^{+}}(x)+\gamma \int _{0}^{\infty }x^{s}dF_{X^{-}}(x),}whereγis a formal indeterminate withγ2= 1. This transform exists for allsin some complex stripD= {s:a≤ Re(s) ≤b}, wherea≤ 0 ≤b.[9]
The Mellin transformMX(it){\displaystyle {\mathcal {M}}_{X}(it)}of a random variableXuniquely determines its distribution functionFX.[9]The importance of the Mellin transform in probability theory lies in the fact that ifXandYare two independent random variables, then the Mellin transform of their product is equal to the product of the Mellin transforms ofXandY:[10]MXY(s)=MX(s)MY(s){\displaystyle {\mathcal {M}}_{XY}(s)={\mathcal {M}}_{X}(s){\mathcal {M}}_{Y}(s)}
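For positive random variables this multiplicativity reduces to E[(XY)^s] = E[X^s]E[Y^s]; a quick Monte Carlo sketch (with an arbitrarily chosen lognormal X, an independent gamma-distributed Y, and s = 1.5) illustrates it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
s = 1.5                                          # illustrative transform argument

X = rng.lognormal(mean=0.0, sigma=0.5, size=n)   # arbitrary positive random variable
Y = rng.gamma(shape=2.0, scale=1.0, size=n)      # independent positive random variable

lhs = np.mean((X * Y)**s)                        # E[(XY)^s]
rhs = np.mean(X**s) * np.mean(Y**s)              # E[X^s] E[Y^s]
print(lhs, rhs)                                  # agree up to Monte Carlo error
```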
The Laplacian in cylindrical coordinates in a generic dimension (orthogonal coordinates with one angle and one radius, and the remaining lengths) always contains the term:1r∂∂r(r∂f∂r)=frr+frr{\displaystyle {\frac {1}{r}}{\frac {\partial }{\partial r}}\left(r{\frac {\partial f}{\partial r}}\right)=f_{rr}+{\frac {f_{r}}{r}}}
For example, in 2-D polar coordinates the Laplacian is:∇2f=1r∂∂r(r∂f∂r)+1r2∂2f∂θ2{\displaystyle \nabla ^{2}f={\frac {1}{r}}{\frac {\partial }{\partial r}}\left(r{\frac {\partial f}{\partial r}}\right)+{\frac {1}{r^{2}}}{\frac {\partial ^{2}f}{\partial \theta ^{2}}}}and in 3-D cylindrical coordinates the Laplacian is,∇2f=1r∂∂r(r∂f∂r)+1r2∂2f∂φ2+∂2f∂z2.{\displaystyle \nabla ^{2}f={\frac {1}{r}}{\frac {\partial }{\partial r}}\left(r{\frac {\partial f}{\partial r}}\right)+{\frac {1}{r^{2}}}{\frac {\partial ^{2}f}{\partial \varphi ^{2}}}+{\frac {\partial ^{2}f}{\partial z^{2}}}.}
This term can be treated with the Mellin transform,[11]since:M(r2frr+rfr,r→s)=s2M(f,r→s)=s2F{\displaystyle {\mathcal {M}}\left(r^{2}f_{rr}+rf_{r},r\to s\right)=s^{2}{\mathcal {M}}\left(f,r\to s\right)=s^{2}F}
For example, the 2-DLaplace equationin polar coordinates is the PDE in two variables:r2frr+rfr+fθθ=0{\displaystyle r^{2}f_{rr}+rf_{r}+f_{\theta \theta }=0}and by multiplication:1r∂∂r(r∂f∂r)+1r2∂2f∂θ2=0{\displaystyle {\frac {1}{r}}{\frac {\partial }{\partial r}}\left(r{\frac {\partial f}{\partial r}}\right)+{\frac {1}{r^{2}}}{\frac {\partial ^{2}f}{\partial \theta ^{2}}}=0}with a Mellin transform on radius becomes the simpleharmonic oscillator:Fθθ+s2F=0{\displaystyle F_{\theta \theta }+s^{2}F=0}with general solution:F(s,θ)=C1(s)cos(sθ)+C2(s)sin(sθ){\displaystyle F(s,\theta )=C_{1}(s)\cos(s\theta )+C_{2}(s)\sin(s\theta )}
Now let's impose for example some simple wedgeboundary conditionsto the original Laplace equation:f(r,−θ0)=a(r),f(r,θ0)=b(r){\displaystyle f(r,-\theta _{0})=a(r),\quad f(r,\theta _{0})=b(r)}these are particularly simple for Mellin transform, becoming:F(s,−θ0)=A(s),F(s,θ0)=B(s){\displaystyle F(s,-\theta _{0})=A(s),\quad F(s,\theta _{0})=B(s)}
These conditions imposed to the solution particularize it to:F(s,θ)=A(s)sin(s(θ0−θ))sin(2θ0s)+B(s)sin(s(θ0+θ))sin(2θ0s){\displaystyle F(s,\theta )=A(s){\frac {\sin(s(\theta _{0}-\theta ))}{\sin(2\theta _{0}s)}}+B(s){\frac {\sin(s(\theta _{0}+\theta ))}{\sin(2\theta _{0}s)}}}
Now by the convolution theorem for Mellin transform, the solution in the Mellin domain can be inverted:f(r,θ)=rmcos(mθ)2θ0∫0∞(a(x)x2m+2rmxmsin(mθ)+r2m+b(x)x2m−2rmxmsin(mθ)+r2m)xm−1dx{\displaystyle f(r,\theta )={\frac {r^{m}\cos(m\theta )}{2\theta _{0}}}\int _{0}^{\infty }\left({\frac {a(x)}{x^{2m}+2r^{m}x^{m}\sin(m\theta )+r^{2m}}}+{\frac {b(x)}{x^{2m}-2r^{m}x^{m}\sin(m\theta )+r^{2m}}}\right)x^{m-1}\,dx}where the following inverse transform relation was employed:M−1(sin(sφ)sin(2θ0s);s→r)=12θ0rmsin(mφ)1+2rmcos(mφ)+r2m{\displaystyle {\mathcal {M}}^{-1}\left({\frac {\sin(s\varphi )}{\sin(2\theta _{0}s)}};s\to r\right)={\frac {1}{2\theta _{0}}}{\frac {r^{m}\sin(m\varphi )}{1+2r^{m}\cos(m\varphi )+r^{2m}}}}wherem=π2θ0{\displaystyle m={\frac {\pi }{2\theta _{0}}}}.
The Mellin transform is widely used in computer science for the analysis of algorithms[12]because of itsscale invarianceproperty. The magnitude of the Mellin transform of a scaled function is identical to the magnitude of the Mellin transform of the original function for purely imaginary inputs. This scale invariance property is analogous to the Fourier transform's shift invariance property: the magnitude of the Fourier transform of a time-shifted function is identical to the magnitude of the Fourier transform of the original function.
This property is useful inimage recognition. An image of an object is easily scaled when the object is moved towards or away from the camera.
Inquantum mechanicsand especiallyquantum field theory,Fourier spaceis enormously useful and used extensively because momentum and position areFourier transformsof each other (for instance,Feynman diagramsare much more easily computed in momentum space). In 2011,A. Liam Fitzpatrick,Jared Kaplan,João Penedones,Suvrat Raju, andBalt C. van Reesshowed that Mellin space serves an analogous role in the context of theAdS/CFT correspondence.[13][14][15]
Below is a list of interesting examples for the Mellin transform:
|
https://en.wikipedia.org/wiki/Mellin_transform
|
In applied mathematics, thenon-uniform discrete Fourier transform(NUDFTorNDFT) of a signal is a type ofFourier transform, related to adiscrete Fourier transformordiscrete-time Fourier transform, but in which the input signal is not sampled at equally spaced points or frequencies (or both). It is a generalization of theshifted DFT. It has important applications in signal processing,[1]magnetic resonance imaging,[2]and the numerical solution of partial differential equations.[3]
As a generalized approach fornonuniform sampling, the NUDFT allows one to obtain frequency domain information of a finite length signal at any frequency. One of the reasons to adopt the NUDFT is that many signals have their energy distributed nonuniformly in the frequency domain. Therefore, a nonuniform sampling scheme could be more convenient and useful in manydigital signal processingapplications. For example, the NUDFT provides a variable spectral resolution controlled by the user.
Thenonuniform discrete Fourier transformtransforms a sequence ofN{\displaystyle N}complex numbersx0,…,xN−1{\displaystyle x_{0},\ldots ,x_{N-1}}into another sequence of complex numbersX0,…,XN−1{\displaystyle X_{0},\ldots ,X_{N-1}}defined by
wherep0,…,pN−1∈[0,1]{\displaystyle p_{0},\ldots ,p_{N-1}\in [0,1]}are sample points andf0,…,fN−1∈[0,N]{\displaystyle f_{0},\ldots ,f_{N-1}\in [0,N]}are frequencies. Note that ifpn=n/N{\displaystyle p_{n}=n/N}andfk=k{\displaystyle f_{k}=k}, then equation (1) reduces to thediscrete Fourier transform. There are three types of NUDFTs.[4]Note that these types are not universal and different authors will refer to different types by different numbers.
A similar set of NUDFTs can be defined by substituting−i{\displaystyle -i}for+i{\displaystyle +i}in equation (1).
Unlike in the uniform case, however, this substitution is unrelated to the inverse Fourier transform.
The inversion of the NUDFT is a separate problem, discussed below.
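A direct O(N²) evaluation of the forward transform is straightforward. The sketch below assumes the common sign convention X_k = Σ_n x_n e^{−2πi p_n f_k} (equation (1) itself is not reproduced above, so this convention is an assumption) and confirms that uniform sample points and integer frequencies reproduce NumPy's FFT:

```python
import numpy as np

def nudft(x, p, f):
    """Naive O(N^2) nonuniform DFT: X_k = sum_n x_n * exp(-2j*pi * p_n * f_k).

    The sign convention is an assumption here; it reduces to the ordinary DFT
    when p_n = n/N and f_k = k."""
    p = np.asarray(p)[:, None]        # sample points, shape (N, 1)
    f = np.asarray(f)[None, :]        # frequencies,   shape (1, K)
    return np.asarray(x) @ np.exp(-2j * np.pi * p * f)

rng = np.random.default_rng(1)
N = 64
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Uniform special case: p_n = n/N, f_k = k  ->  ordinary DFT
X_uniform = nudft(x, np.arange(N) / N, np.arange(N))
print(np.allclose(X_uniform, np.fft.fft(x)))   # True

# Genuinely nonuniform points and frequencies are handled the same way
X_nonuniform = nudft(x, np.sort(rng.random(N)), rng.random(N) * N)
print(X_nonuniform.shape)                      # (64,)
```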
The multidimensional NUDFT converts ad{\displaystyle d}-dimensional array of complex numbersxn{\displaystyle x_{\mathbf {n} }}into anotherd{\displaystyle d}-dimensional array of complex numbersXk{\displaystyle X_{\mathbf {k} }}defined by
wherepn∈[0,1]d{\displaystyle \mathbf {p} _{\mathbf {n} }\in [0,1]^{d}}are sample points,fk∈[0,N1]×[0,N2]×⋯×[0,Nd]{\displaystyle {\boldsymbol {f}}_{\mathbf {k} }\in [0,N_{1}]\times [0,N_{2}]\times \cdots \times [0,N_{d}]}are frequencies, andn=(n1,n2,…,nd){\displaystyle \mathbf {n} =(n_{1},n_{2},\ldots ,n_{d})}andk=(k1,k2,…,kd){\displaystyle \mathbf {k} =(k_{1},k_{2},\ldots ,k_{d})}ared{\displaystyle d}-dimensional vectors of indices from 0 toN−1=(N1−1,N2−1,…,Nd−1){\displaystyle \mathbf {N} -1=(N_{1}-1,N_{2}-1,\ldots ,N_{d}-1)}. The multidimensional NUDFTs of types I, II, and III are defined analogously to the 1D case.[4]
The NUDFT-I can be expressed as aZ-transform.[8]The NUDFT-I of a sequencex[n]{\displaystyle x[n]}of lengthN{\displaystyle N}isX^[k]=X(z)|z=zk=∑n=0N−1x[n]zk−n,k=0,1,…,N−1,{\displaystyle {\hat {X}}[k]=X(z){\Big |}_{z=z_{k}}=\sum _{n=0}^{N-1}x[n]z_{k}^{-n},\quad k=0,1,\ldots ,N-1,}
whereX(z){\displaystyle X(z)}is the Z-transform ofx[n]{\displaystyle x[n]}, and{zi}i=0,1,...,N−1{\displaystyle \{z_{i}\}_{i=0,1,...,N-1}}are arbitrarily distinct points in the z-plane. Note that the NUDFT reduces to the DFT when the sampling points are located on the unit circle at equally spaced angles.
Expressing the above as a matrix, we get
where
As we can see, the NUDFT-I is characterized byD{\displaystyle \mathbf {D} }and hence theN{\displaystyle N}zk{\displaystyle {z_{k}}}points. If we further factorizedet(D){\displaystyle \det(\mathbf {D} )}, we can see thatD{\displaystyle \mathbf {D} }is nonsingular provided theN{\displaystyle N}zk{\displaystyle {z_{k}}}points are distinct. IfD{\displaystyle \mathbf {D} }is nonsingular, we can get a unique inverse NUDFT-I as follows:
GivenXandD{\displaystyle \mathbf {X} {\text{ and }}\mathbf {D} }, we can useGaussian eliminationto solve forx{\displaystyle \mathbf {x} }. However, the complexity of this method isO(N3){\displaystyle O(N^{3})}. To solve this problem more efficiently, we first determineX(z){\displaystyle X(z)}directly by polynomial interpolation:
Thenx[n]{\displaystyle x[n]}are the coefficients of the above interpolating polynomial.
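For orientation, the O(N³) direct-solve route mentioned at the start of this discussion can be sketched in a few lines of NumPy; the helper name inverse_nudft1 and the convention X(z) = ∑_n x[n] z^{−n} for the Z-transform are assumptions of this illustration:

import numpy as np

def inverse_nudft1(X_hat, z):
    """Recover x[n] from samples X_hat[k] = X(z_k) = sum_n x[n] * z_k**(-n).

    Builds the N x N matrix D with D[k, n] = z_k**(-n) and solves D x = X_hat
    directly (O(N^3)); D is nonsingular as long as the z_k are distinct.
    """
    z = np.asarray(z, dtype=complex)
    n = np.arange(len(z))
    D = z[:, None] ** (-n[None, :])
    return np.linalg.solve(D, np.asarray(X_hat, dtype=complex))

# Sampling on the unit circle at equally spaced angles reduces to the DFT:
N = 6
x = np.random.randn(N)
z = np.exp(2j * np.pi * np.arange(N) / N)
X_hat = (z[:, None] ** (-np.arange(N)[None, :])) @ x   # forward NUDFT-I
assert np.allclose(inverse_nudft1(X_hat, z), x)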
ExpressingX(z){\displaystyle X(z)}as theLagrange polynomialof orderN−1{\displaystyle N-1}, we get
where{Li(z)}i=0,1,...,N−1{\displaystyle \{L_{i}(z)\}_{i=0,1,...,N-1}}are the fundamental polynomials:
ExpressingX(z){\displaystyle X(z)}by the Newton interpolation method, we get
wherecj{\displaystyle c_{j}}is the divided difference of thej{\displaystyle j}th order ofX^[0],X^[1],...,X^[j]{\displaystyle {\hat {X}}[0],{\hat {X}}[1],...,{\hat {X}}[j]}with respect toz0,z1,...,zj{\displaystyle z_{0},z_{1},...,z_{j}}:
The disadvantage of the Lagrange representation is that any additional point included will increase the order of the interpolating polynomial, leading to the need to recompute all the fundamental polynomials. However, any additional point included in the Newton representation only requires the addition of one more term.
We can use a lower triangular system to solve{cj}{\displaystyle \{c_{j}\}}:
where
By the above equation,{cj}{\displaystyle \{c_{j}\}}can be computed withinO(N2){\displaystyle O(N^{2})}operations. In this way Newton interpolation is more efficient than Lagrange Interpolation unless the latter is modified by
While a naive application of equation (1) results in anO(N2){\displaystyle O(N^{2})}algorithm for computing the NUDFT,O(NlogN){\displaystyle O(N\log N)}algorithms based on thefast Fourier transform(FFT) do exist. Such algorithms are referred to as NUFFTs or NFFTs and have been developed based on oversampling and interpolation,[9][10][11][12]min-max interpolation,[2]and low-rank approximation.[13]In general, NUFFTs leverage the FFT by converting the nonuniform problem into a uniform problem (or a sequence of uniform problems) to which the FFT can be applied.[4]Software libraries for performing NUFFTs are available in 1D, 2D, and 3D.[7][6][14][15][16][17]
The applications of the NUDFT include:
|
https://en.wikipedia.org/wiki/Non-uniform_discrete_Fourier_transform
|
Inquantum computing, thequantum Fourier transform (QFT)is alinear transformationonquantum bits, and is the quantum analogue of thediscrete Fourier transform. The quantum Fourier transform is a part of manyquantum algorithms, notablyShor's algorithmfor factoring and computing thediscrete logarithm, thequantum phase estimation algorithmfor estimating theeigenvaluesof aunitary operator, and algorithms for thehidden subgroup problem. The quantum Fourier transform was discovered byDon Coppersmith.[1]With small modifications to the QFT, it can also be used for performing fastintegerarithmetic operations such as addition and multiplication.[2][3][4]
The quantum Fourier transform can be performed efficiently on a quantum computer with a decomposition into the product of simplerunitary matrices. The discrete Fourier transform on2n{\displaystyle 2^{n}}amplitudes can be implemented as aquantum circuitconsisting of onlyO(n2){\displaystyle O(n^{2})}Hadamard gatesandcontrolledphase shift gates, wheren{\displaystyle n}is the number of qubits.[5]This can be compared with the classical discrete Fourier transform, which takesO(n2n){\displaystyle O(n2^{n})}gates (wheren{\displaystyle n}is the number of bits), which is exponentially more thanO(n2){\displaystyle O(n^{2})}.
The quantum Fourier transform acts on aquantum statevector (aquantum register), and the classicaldiscrete Fourier transformacts on a vector. Both types of vectors can be written as lists of complex numbers. In the classical case, the vector can be represented with e.g. an array offloating-point numbers, and in the quantum case it is a sequence ofprobability amplitudesfor all the possible outcomes uponmeasurement(the outcomes are thebasis states, oreigenstates). Because measurementcollapsesthe quantum state to a single basis state, not every task that uses the classical Fourier transform can take advantage of the quantum Fourier transform's exponential speedup.
The best quantum Fourier transform algorithms known (as of late 2000) require onlyO(nlogn){\displaystyle O(n\log n)}gates to achieve an efficient approximation, provided that acontrolledphase gateis implemented as a native operation.[6]
The quantum Fourier transform is the classical discrete Fourier transform applied to the vector of amplitudes of a quantum state, which has lengthN=2n{\displaystyle N=2^{n}}if it is applied to a register ofn{\displaystyle n}qubits.
Theclassical Fourier transformacts on avector(x0,x1,…,xN−1)∈CN{\displaystyle (x_{0},x_{1},\ldots ,x_{N-1})\in \mathbb {C} ^{N}}and maps it to the vector(y0,y1,…,yN−1)∈CN{\displaystyle (y_{0},y_{1},\ldots ,y_{N-1})\in \mathbb {C} ^{N}}according to the formula y_k = (1/√N) ∑_{j=0}^{N−1} x_j ω_N^{jk}, for k = 0, 1, …, N − 1,
whereωN=e2πiN{\displaystyle \omega _{N}=e^{\frac {2\pi i}{N}}}is anN-throot of unity.
Similarly, thequantum Fourier transformacts on a quantum state|x⟩=∑j=0N−1xj|j⟩{\textstyle |x\rangle =\sum _{j=0}^{N-1}x_{j}|j\rangle }and maps it to a quantum state∑j=0N−1yj|j⟩{\textstyle \sum _{j=0}^{N-1}y_{j}|j\rangle }according to the formula y_k = (1/√N) ∑_{j=0}^{N−1} x_j ω_N^{jk}, for k = 0, 1, …, N − 1.
(Conventions for the sign of the phase factor exponent vary; here the quantum Fourier transform has the same effect as the inverse discrete Fourier transform, and conversely.)
SinceωNl{\displaystyle \omega _{N}^{l}}is a rotation, theinverse quantum Fourier transformacts similarly but with ω_N^{−l} = e^{−2πil/N} in place of ω_N^{l}.
In case that|x⟩{\displaystyle |x\rangle }is a basis state, the quantum Fourier transform can also be expressed as the map QFT: |x⟩ ↦ (1/√N) ∑_{k=0}^{N−1} ω_N^{xk} |k⟩.
Equivalently, the quantum Fourier transform can be viewed as aunitary matrix(orquantum gate) acting on quantum state vectors, where the unitary matrixFN{\displaystyle F_{N}}is theDFT matrix
whereω=ωN{\displaystyle \omega =\omega _{N}}. For example, in the case ofN=4=22{\displaystyle N=4=2^{2}}and phaseω=i{\displaystyle \omega =i}the transformation matrix is
Most of the properties of the quantum Fourier transform follow from the fact that it is aunitary transformation. This can be checked by performingmatrix multiplicationand ensuring that the relationFF†=F†F=I{\displaystyle FF^{\dagger }=F^{\dagger }F=I}holds, whereF†{\displaystyle F^{\dagger }}is theHermitian adjointofF{\displaystyle F}. Alternately, one can check that orthogonal vectors ofnorm1 get mapped to orthogonal vectors of norm 1.
From the unitary property it follows that the inverse of the quantum Fourier transform is the Hermitian adjoint of the Fourier matrix, thusF−1=F†{\displaystyle F^{-1}=F^{\dagger }}. Since there is an efficient quantum circuit implementing the quantum Fourier transform, the circuit can be run in reverse to perform the inverse quantum Fourier transform. Thus both transforms can be efficiently performed on a quantum computer.
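A small NumPy sketch makes these facts concrete; it uses the ω = e^{2πi/N} convention of this article, while the function name and the particular checks are only illustrative:

import numpy as np

def qft_matrix(n_qubits):
    """DFT/QFT matrix F_N with entries omega**(j*k) / sqrt(N), omega = exp(2*pi*i/N)."""
    N = 2 ** n_qubits
    omega = np.exp(2j * np.pi / N)
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return omega ** (j * k) / np.sqrt(N)

F = qft_matrix(3)                                   # the 8x8 matrix F_8
assert np.allclose(F @ F.conj().T, np.eye(8))       # unitarity: F F^dagger = I

# Acting on the basis state |5> produces the amplitudes omega**(5k)/sqrt(8):
state = np.zeros(8, dtype=complex)
state[5] = 1.0
expected = np.exp(2j * np.pi * 5 * np.arange(8) / 8) / np.sqrt(8)
assert np.allclose(F @ state, expected)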
Thequantum gatesused in the circuit ofn{\displaystyle n}qubits are theHadamard gateand thedyadic rationalphase gateRk{\displaystyle R_{k}}:
H=12(111−1)andRk=(100ei2π/2k){\displaystyle H={\frac {1}{\sqrt {2}}}{\begin{pmatrix}1&1\\1&-1\end{pmatrix}}\qquad {\text{and}}\qquad R_{k}={\begin{pmatrix}1&0\\0&e^{i2\pi /2^{k}}\end{pmatrix}}}
The circuit is composed ofH{\displaystyle H}gates and thecontrolledversion ofRk{\displaystyle R_{k}}:
An orthonormal basis consists of the basis states |0⟩, |1⟩, …, |2^n − 1⟩.
These basis states span all possible states of the qubits: |x⟩ = |x_1 x_2 … x_n⟩ = |x_1⟩ ⊗ |x_2⟩ ⊗ ⋯ ⊗ |x_n⟩,
where, withtensor productnotation⊗{\displaystyle \otimes },|xj⟩{\displaystyle |x_{j}\rangle }indicates that qubitj{\displaystyle j}is in statexj{\displaystyle x_{j}}, withxj{\displaystyle x_{j}}either 0 or 1. By convention, the basis state indexx{\displaystyle x}is thebinary numberencoded by thexj{\displaystyle x_{j}}, withx1{\displaystyle x_{1}}the most significant bit.
The action of the Hadamard gate isH|xj⟩=(12)(|0⟩+e2πixj2−1|1⟩){\displaystyle H|x_{j}\rangle =\left({\frac {1}{\sqrt {2}}}\right)\left(|0\rangle +e^{2\pi ix_{j}2^{-1}}|1\rangle \right)}, where the sign depends onxj{\displaystyle x_{j}}.
The quantum Fourier transform can be written as the tensor product of a series of terms:
Using the fractional binary notation [0.x_1 x_2 … x_m] = ∑_{k=1}^{m} x_k 2^{−k},
the action of the quantum Fourier transform can be expressed in a compact manner: QFT|x_1 x_2 … x_n⟩ = (1/√N) (|0⟩ + e^{2πi [0.x_n]} |1⟩) ⊗ (|0⟩ + e^{2πi [0.x_{n−1} x_n]} |1⟩) ⊗ ⋯ ⊗ (|0⟩ + e^{2πi [0.x_1 x_2 … x_n]} |1⟩).
To obtain this state from the circuit depicted above, aswap operationof the qubits must be performed to reverse their order. At mostn/2{\displaystyle n/2}swaps are required.[5]
Because the discrete Fourier transform, an operation onnqubits, can be factored into the tensor product ofnsingle-qubit operations, it is easily represented as aquantum circuit(up to an order reversal of the output). Each of those single-qubit operations can be implemented efficiently using oneHadamard gateand a linear number ofcontrolledphase gates. The first term requires one Hadamard gate and(n−1){\displaystyle (n-1)}controlled phase gates, the next term requires one Hadamard gate and(n−2){\displaystyle (n-2)}controlled phase gate, and each following term requires one fewer controlled phase gate. Summing up the number of gates, excluding the ones needed for the output reversal, givesn+(n−1)+⋯+1=n(n+1)/2=O(n2){\displaystyle n+(n-1)+\cdots +1=n(n+1)/2=O(n^{2})}gates, which is quadratic polynomial in the number of qubits. This value is much smaller than for the classical Fourier transformation.[7]
The circuit-level implementation of the quantum Fourier transform on a linear nearest neighbor architecture has been studied before.[8][9]Thecircuit depthis linear in the number of qubits.
The quantum Fourier transform on three qubits,F8{\displaystyle F_{8}}withn=3,N=8=23{\displaystyle n=3,N=8=2^{3}}, is represented by the following transformation:
whereω=ω8{\displaystyle \omega =\omega _{8}}is an eighthroot of unitysatisfyingω8=(ei2π8)8=1{\displaystyle \omega ^{8}=\left(e^{\frac {i2\pi }{8}}\right)^{8}=1}.
The matrix representation of the Fourier transform on three qubits is:
The 3-qubit quantum Fourier transform can be rewritten as:
The following sketch shows the respective circuit forn=3{\displaystyle n=3}(with reversed order of output qubits with respect to the proper QFT):
As calculated above, the number of gates used isn(n+1)/2{\displaystyle n(n+1)/2}which is equal to6{\displaystyle 6}, forn=3{\displaystyle n=3}.
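The gate decomposition can also be simulated directly on a state vector. The following NumPy sketch (the helper names and the most-significant-bit-first qubit ordering are assumptions of the illustration) applies the Hadamard and controlled phase gates in the order described above, performs the final qubit reversal, and checks the result against the matrix F_8:

import numpy as np

def apply_1q(state, gate, q, n):
    """Apply a 2x2 gate to qubit q (0 = most significant bit) of an n-qubit state."""
    psi = state.reshape((2,) * n)
    psi = np.moveaxis(np.tensordot(gate, psi, axes=([1], [q])), 0, q)
    return psi.reshape(-1)

def apply_cphase(state, control, target, phase, n):
    """Multiply amplitudes whose control and target bits are both 1 by exp(i*phase)."""
    psi = state.reshape((2,) * n).copy()
    idx = [slice(None)] * n
    idx[control] = 1
    idx[target] = 1
    psi[tuple(idx)] *= np.exp(1j * phase)
    return psi.reshape(-1)

def qft_circuit(state, n):
    """Hadamards plus controlled phase gates, then the final qubit-order reversal."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for j in range(n):                                   # target qubit j
        state = apply_1q(state, H, j, n)
        for k in range(j + 1, n):                        # control qubits below it
            state = apply_cphase(state, k, j, 2 * np.pi / 2 ** (k - j + 1), n)
    psi = state.reshape((2,) * n)
    return psi.transpose(tuple(range(n - 1, -1, -1))).reshape(-1)   # swap network

n, N = 3, 8
F = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
x = np.random.randn(N) + 1j * np.random.randn(N)
x /= np.linalg.norm(x)
assert np.allclose(qft_circuit(x, n), F @ x)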
Using the generalizedFourier transform on finite (abelian) groups, there are actually two natural ways to define a quantum Fourier transform on ann-qubitquantum register. The QFT as defined above is equivalent to the DFT, which considers these n qubits as indexed by the cyclic groupZ/2nZ{\displaystyle \mathbb {Z} /2^{n}\mathbb {Z} }. However, it also makes sense to consider the qubits as indexed by theBoolean group(Z/2Z)n{\displaystyle (\mathbb {Z} /2\mathbb {Z} )^{n}}, and in this case the Fourier transform is theHadamard transform. This is achieved by applying aHadamard gateto each of the n qubits in parallel.[10][11]Shor's algorithmuses both types of Fourier transforms, an initial Hadamard transform as well as a QFT.
The Fourier transform can be formulated forgroups other than the cyclic group, and extended to the quantum setting.[12]For example, consider the symmetric groupSn{\displaystyle S_{n}}.[13][14]The Fourier transform can be expressed in matrix form
where[λ(g)]q,p{\displaystyle [\lambda (g)]_{q,p}}is the(q,p){\displaystyle (q,p)}element of the matrix representation ofλ(g){\displaystyle \lambda (g)},P(λ){\displaystyle {\mathcal {P}}(\lambda )}is the set of paths from the root node toλ{\displaystyle \lambda }in theBratteli diagramofSn{\displaystyle S_{n}},Λn{\displaystyle \Lambda _{n}}is the set of representations ofSn{\displaystyle S_{n}}indexed byYoung diagrams, andg{\displaystyle g}is a permutation.
The discrete Fourier transform can also beformulated over a finite fieldFq{\displaystyle F_{q}}, and a quantum version can be defined.[15]ConsiderN=q=pn{\displaystyle N=q=p^{n}}. Letϕ:GF(q)→GF(p){\displaystyle \phi :GF(q)\to GF(p)}be an arbitrary linear map (trace, for example). Then for eachx∈GF(q){\displaystyle x\in GF(q)}define
forω=e2πi/p{\displaystyle \omega =e^{2\pi i/p}}and extendFq,ϕ{\displaystyle F_{q,\phi }}linearly.
|
https://en.wikipedia.org/wiki/Quantum_Fourier_transform
|
Inmathematics, asetBof elements of avector spaceVis called abasis(pl.:bases) if every element ofVcan be written in a unique way as a finitelinear combinationof elements ofB. The coefficients of this linear combination are referred to ascomponentsorcoordinatesof the vector with respect toB. The elements of a basis are calledbasis vectors.
Equivalently, a setBis a basis if its elements arelinearly independentand every element ofVis alinear combinationof elements ofB.[1]In other words, a basis is a linearly independentspanning set.
A vector space can have several bases; however all the bases have the same number of elements, called thedimensionof the vector space.
This article deals mainly with finite-dimensional vector spaces. However, many of the principles are also valid for infinite-dimensional vector spaces.
Basis vectors find applications in the study ofcrystal structuresandframes of reference.
AbasisBof avector spaceVover afieldF(such as thereal numbersRor thecomplex numbersC) is a linearly independentsubsetofVthatspansV. This means that a subsetBofVis a basis if it satisfies the two following conditions: linear independence (for every finite subset {v_1, …, v_m} of B, if c_1 v_1 + ⋯ + c_m v_m = 0 for some c_1, …, c_m in F, then c_1 = ⋯ = c_m = 0) and the spanning property (for every vector v in V, one can choose scalars a_1, …, a_n in F and vectors b_1, …, b_n in B such that v = a_1 b_1 + ⋯ + a_n b_n).
Thescalarsai{\displaystyle a_{i}}are called the coordinates of the vectorvwith respect to the basisB, and by the first property they are uniquely determined.
A vector space that has afinitebasis is calledfinite-dimensional. In this case, the finite subset can be taken asBitself to check for linear independence in the above definition.
It is often convenient or even necessary to have anorderingon the basis vectors, for example, when discussingorientation, or when one considers the scalar coefficients of a vector with respect to a basis without referring explicitly to the basis elements. In this case, the ordering is necessary for associating each coefficient to the corresponding basis element. This ordering can be done by numbering the basis elements. In order to emphasize that an order has been chosen, one speaks of anordered basis, which is therefore not simply an unstructuredset, but asequence, anindexed family, or similar; see§ Ordered bases and coordinatesbelow.
The setR2of theordered pairsofreal numbersis a vector space under the operations of component-wise addition(a,b)+(c,d)=(a+c,b+d){\displaystyle (a,b)+(c,d)=(a+c,b+d)}and scalar multiplicationλ(a,b)=(λa,λb),{\displaystyle \lambda (a,b)=(\lambda a,\lambda b),}whereλ{\displaystyle \lambda }is any real number. A simple basis of this vector space consists of the two vectorse1= (1, 0)ande2= (0, 1). These vectors form a basis (called thestandard basis) because any vectorv= (a,b)ofR2may be uniquely written asv=ae1+be2.{\displaystyle \mathbf {v} =a\mathbf {e} _{1}+b\mathbf {e} _{2}.}Any other pair of linearly independent vectors ofR2, such as(1, 1)and(−1, 2), forms also a basis ofR2.
More generally, ifFis afield, the setFn{\displaystyle F^{n}}ofn-tuplesof elements ofFis a vector space for similarly defined addition and scalar multiplication. Letei=(0,…,0,1,0,…,0){\displaystyle \mathbf {e} _{i}=(0,\ldots ,0,1,0,\ldots ,0)}be then-tuple with all components equal to 0, except theith, which is 1. Thene1,…,en{\displaystyle \mathbf {e} _{1},\ldots ,\mathbf {e} _{n}}is a basis ofFn,{\displaystyle F^{n},}which is called thestandard basisofFn.{\displaystyle F^{n}.}
A different flavor of example is given bypolynomial rings. IfFis a field, the collectionF[X]of allpolynomialsin oneindeterminateXwith coefficients inFis anF-vector space. One basis for this space is themonomial basisB, consisting of allmonomials:B={1,X,X2,…}.{\displaystyle B=\{1,X,X^{2},\ldots \}.}Any set of polynomials such that there is exactly one polynomial of each degree (such as theBernstein basis polynomialsorChebyshev polynomials) is also a basis. (Such a set of polynomials is called apolynomial sequence.) But there are also many bases forF[X]that are not of this form.
Many properties of finite bases result from theSteinitz exchange lemma, which states that, for any vector spaceV, given a finitespanning setSand alinearly independentsetLofnelements ofV, one may replacenwell-chosen elements ofSby the elements ofLto get a spanning set containingL, having its other elements inS, and having the same number of elements asS.
Most properties resulting from the Steinitz exchange lemma remain true when there is no finite spanning set, but their proofs in the infinite case generally require theaxiom of choiceor a weaker form of it, such as theultrafilter lemma.
IfVis a vector space over a fieldF, then:
IfVis a vector space of dimensionn, then:
LetVbe a vector space of finite dimensionnover a fieldF, andB={b1,…,bn}{\displaystyle B=\{\mathbf {b} _{1},\ldots ,\mathbf {b} _{n}\}}be a basis ofV. By definition of a basis, everyvinVmay be written, in a unique way, asv=λ1b1+⋯+λnbn,{\displaystyle \mathbf {v} =\lambda _{1}\mathbf {b} _{1}+\cdots +\lambda _{n}\mathbf {b} _{n},}where the coefficientsλ1,…,λn{\displaystyle \lambda _{1},\ldots ,\lambda _{n}}are scalars (that is, elements ofF), which are called thecoordinatesofvoverB. However, if one talks of thesetof the coefficients, one loses the correspondence between coefficients and basis elements, and several vectors may have the samesetof coefficients. For example,3b1+2b2{\displaystyle 3\mathbf {b} _{1}+2\mathbf {b} _{2}}and2b1+3b2{\displaystyle 2\mathbf {b} _{1}+3\mathbf {b} _{2}}have the same set of coefficients{2, 3}, and are different. It is therefore often convenient to work with anordered basis; this is typically done byindexingthe basis elements by the first natural numbers. Then, the coordinates of a vector form asequencesimilarly indexed, and a vector is completely characterized by the sequence of coordinates. An ordered basis, especially when used in conjunction with anorigin, is also called acoordinate frameor simply aframe(for example, aCartesian frameor anaffine frame).
Let, as usual,Fn{\displaystyle F^{n}}be the set of then-tuplesof elements ofF. This set is anF-vector space, with addition and scalar multiplication defined component-wise. The mapφ:(λ1,…,λn)↦λ1b1+⋯+λnbn{\displaystyle \varphi :(\lambda _{1},\ldots ,\lambda _{n})\mapsto \lambda _{1}\mathbf {b} _{1}+\cdots +\lambda _{n}\mathbf {b} _{n}}is alinear isomorphismfrom the vector spaceFn{\displaystyle F^{n}}ontoV. In other words,Fn{\displaystyle F^{n}}is thecoordinate spaceofV, and then-tupleφ−1(v){\displaystyle \varphi ^{-1}(\mathbf {v} )}is thecoordinate vectorofv.
Theinverse imagebyφ{\displaystyle \varphi }ofbi{\displaystyle \mathbf {b} _{i}}is then-tupleei{\displaystyle \mathbf {e} _{i}}all of whose components are 0, except theith that is 1. Theei{\displaystyle \mathbf {e} _{i}}form an ordered basis ofFn{\displaystyle F^{n}}, which is called itsstandard basisorcanonical basis. The ordered basisBis the image byφ{\displaystyle \varphi }of the canonical basis ofFn{\displaystyle F^{n}}.
It follows from what precedes that every ordered basis is the image by a linear isomorphism of the canonical basis ofFn{\displaystyle F^{n}},and that every linear isomorphism fromFn{\displaystyle F^{n}}ontoVmay be defined as the isomorphism that maps the canonical basis ofFn{\displaystyle F^{n}}onto a given ordered basis ofV. In other words, it is equivalent to define an ordered basis ofV, or a linear isomorphism fromFn{\displaystyle F^{n}}ontoV.
LetVbe a vector space of dimensionnover a fieldF. Given two (ordered) basesBold=(v1,…,vn){\displaystyle B_{\text{old}}=(\mathbf {v} _{1},\ldots ,\mathbf {v} _{n})}andBnew=(w1,…,wn){\displaystyle B_{\text{new}}=(\mathbf {w} _{1},\ldots ,\mathbf {w} _{n})}ofV, it is often useful to express the coordinates of a vectorxwith respect toBold{\displaystyle B_{\mathrm {old} }}in terms of the coordinates with respect toBnew.{\displaystyle B_{\mathrm {new} }.}This can be done by thechange-of-basis formula, that is described below. The subscripts "old" and "new" have been chosen because it is customary to refer toBold{\displaystyle B_{\mathrm {old} }}andBnew{\displaystyle B_{\mathrm {new} }}as theold basisand thenew basis, respectively. It is useful to describe the old coordinates in terms of the new ones, because, in general, one hasexpressionsinvolving the old coordinates, and if one wants to obtain equivalent expressions in terms of the new coordinates; this is obtained by replacing the old coordinates by their expressions in terms of the new coordinates.
Typically, the new basis vectors are given by their coordinates over the old basis, that is,wj=∑i=1nai,jvi.{\displaystyle \mathbf {w} _{j}=\sum _{i=1}^{n}a_{i,j}\mathbf {v} _{i}.}If(x1,…,xn){\displaystyle (x_{1},\ldots ,x_{n})}and(y1,…,yn){\displaystyle (y_{1},\ldots ,y_{n})}are the coordinates of a vectorxover the old and the new basis respectively, the change-of-basis formula isxi=∑j=1nai,jyj,{\displaystyle x_{i}=\sum _{j=1}^{n}a_{i,j}y_{j},}fori= 1, ...,n.
This formula may be concisely written inmatrixnotation. LetAbe the matrix of theai,j{\displaystyle a_{i,j}},andX=[x1⋮xn]andY=[y1⋮yn]{\displaystyle X={\begin{bmatrix}x_{1}\\\vdots \\x_{n}\end{bmatrix}}\quad {\text{and}}\quad Y={\begin{bmatrix}y_{1}\\\vdots \\y_{n}\end{bmatrix}}}be thecolumn vectorsof the coordinates ofvin the old and the new basis respectively, then the formula for changing coordinates isX=AY.{\displaystyle X=AY.}
The formula can be proven by considering the decomposition of the vectorxon the two bases: one hasx=∑i=1nxivi,{\displaystyle \mathbf {x} =\sum _{i=1}^{n}x_{i}\mathbf {v} _{i},}andx=∑j=1nyjwj=∑j=1nyj∑i=1nai,jvi=∑i=1n(∑j=1nai,jyj)vi.{\displaystyle \mathbf {x} =\sum _{j=1}^{n}y_{j}\mathbf {w} _{j}=\sum _{j=1}^{n}y_{j}\sum _{i=1}^{n}a_{i,j}\mathbf {v} _{i}=\sum _{i=1}^{n}{\biggl (}\sum _{j=1}^{n}a_{i,j}y_{j}{\biggr )}\mathbf {v} _{i}.}
The change-of-basis formula results then from the uniqueness of the decomposition of a vector over a basis, hereBold{\displaystyle B_{\text{old}}};that isxi=∑j=1nai,jyj,{\displaystyle x_{i}=\sum _{j=1}^{n}a_{i,j}y_{j},}fori= 1, ...,n.
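A short numerical check of the change-of-basis formula; the specific basis vectors and the use of NumPy are only an illustration:

import numpy as np

# Old basis: the standard basis of R^2.  New basis given by its coordinates over
# the old one as the columns of A, e.g. w1 = (1, 1) and w2 = (-1, 2).
A = np.array([[1.0, -1.0],
              [1.0,  2.0]])          # A[i, j] = a_{i,j}

Y = np.array([2.0, 1.0])             # coordinates of x over the new basis
X = A @ Y                            # change-of-basis formula X = A Y

# Rebuilding x directly from the new basis vectors gives the same result,
w1, w2 = A[:, 0], A[:, 1]
assert np.allclose(X, Y[0] * w1 + Y[1] * w2)
# and recovering the new coordinates from the old ones amounts to inverting A.
assert np.allclose(np.linalg.solve(A, X), Y)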
If one replaces the field occurring in the definition of a vector space by aring, one gets the definition of amodule. For modules,linear independenceandspanning setsare defined exactly as for vector spaces, although "generating set" is more commonly used than "spanning set".
Like for vector spaces, abasisof a module is a linearly independent subset that is also a generating set. A major difference with the theory of vector spaces is that not every module has a basis. A module that has a basis is called afree module. Free modules play a fundamental role in module theory, as they may be used for describing the structure of non-free modules throughfree resolutions.
A module over the integers is exactly the same thing as anabelian group. Thus a free module over the integers is also a free abelian group. Free abelian groups have specific properties that are not shared by modules over other rings. Specifically, every subgroup of a free abelian group is a free abelian group, and, ifGis a subgroup of a finitely generated free abelian groupH(that is an abelian group that has a finite basis), then there is a basise1,…,en{\displaystyle \mathbf {e} _{1},\ldots ,\mathbf {e} _{n}}ofHand an integer0 ≤k≤nsuch thata1e1,…,akek{\displaystyle a_{1}\mathbf {e} _{1},\ldots ,a_{k}\mathbf {e} _{k}}is a basis ofG, for some nonzero integersa1,…,ak{\displaystyle a_{1},\ldots ,a_{k}}.For details, seeFree abelian group § Subgroups.
In the context of infinite-dimensional vector spaces over the real or complex numbers, the termHamel basis(named afterGeorg Hamel[2]) oralgebraic basiscan be used to refer to a basis as defined in this article. This is to make a distinction with other notions of "basis" that exist when infinite-dimensional vector spaces are endowed with extra structure. The most important alternatives areorthogonal basesonHilbert spaces,Schauder bases, andMarkushevich basesonnormed linear spaces. In the case of the real numbersRviewed as a vector space over the fieldQof rational numbers, Hamel bases are uncountable, and have specifically thecardinalityof the continuum, which is thecardinal number2ℵ0{\displaystyle 2^{\aleph _{0}}},whereℵ0{\displaystyle \aleph _{0}}(aleph-nought) is the smallest infinite cardinal, the cardinal of the integers.
The common feature of the other notions is that they permit the taking of infinite linear combinations of the basis vectors in order to generate the space. This, of course, requires that infinite sums are meaningfully defined on these spaces, as is the case fortopological vector spaces– a large class of vector spaces including e.g.Hilbert spaces,Banach spaces, orFréchet spaces.
The preference of other types of bases for infinite-dimensional spaces is justified by the fact that the Hamel basis becomes "too big" in Banach spaces: IfXis an infinite-dimensional normed vector space that iscomplete(i.e.Xis aBanach space), then any Hamel basis ofXis necessarilyuncountable. This is a consequence of theBaire category theorem. The completeness as well as infinite dimension are crucial assumptions in the previous claim. Indeed, finite-dimensional spaces have by definition finite bases and there are infinite-dimensional (non-complete) normed spaces that have countable Hamel bases. Considerc00{\displaystyle c_{00}},the space of thesequencesx=(xn){\displaystyle x=(x_{n})}of real numbers that have only finitely many non-zero elements, with the norm‖x‖=supn|xn|{\textstyle \|x\|=\sup _{n}|x_{n}|}.Itsstandard basis, consisting of the sequences having only one non-zero element, which is equal to 1, is a countable Hamel basis.
In the study ofFourier series, one learns that the functions{1} ∪ { sin(nx), cos(nx) :n= 1, 2, 3, ... }are an "orthogonal basis" of the (real or complex) vector space of all (real or complex valued) functions on the interval [0, 2π] that are square-integrable on this interval, i.e., functionsfsatisfying∫02π|f(x)|2dx<∞.{\displaystyle \int _{0}^{2\pi }\left|f(x)\right|^{2}\,dx<\infty .}
The functions{1} ∪ { sin(nx), cos(nx) :n= 1, 2, 3, ... }are linearly independent, and every functionfthat is square-integrable on [0, 2π] is an "infinite linear combination" of them, in the sense thatlimn→∞∫02π|a0+∑k=1n(akcos(kx)+bksin(kx))−f(x)|2dx=0{\displaystyle \lim _{n\to \infty }\int _{0}^{2\pi }{\biggl |}a_{0}+\sum _{k=1}^{n}\left(a_{k}\cos \left(kx\right)+b_{k}\sin \left(kx\right)\right)-f(x){\biggr |}^{2}dx=0}
for suitable (real or complex) coefficientsak,bk. But many[3]square-integrable functions cannot be represented asfinitelinear combinations of these basis functions, which thereforedo notcomprise a Hamel basis. Every Hamel basis of this space is much bigger than this merely countably infinite set of functions. Hamel bases of spaces of this kind are typically not useful, whereasorthonormal basesof these spaces are essential inFourier analysis.
The geometric notions of anaffine space,projective space,convex set, andconehave related notions ofbasis.[4]Anaffine basisfor ann-dimensional affine space isn+1{\displaystyle n+1}points ingeneral linear position. Aprojective basisisn+2{\displaystyle n+2}points in general position, in a projective space of dimensionn. Aconvex basisof apolytopeis the set of the vertices of itsconvex hull. Acone basis[5]consists of one point by edge of a polygonal cone. See also aHilbert basis (linear programming).
For aprobability distributioninRnwith aprobability density function, such as the equidistribution in ann-dimensional ball with respect to Lebesgue measure, it can be shown thatnrandomly and independently chosen vectors will form a basiswith probability one, which is due to the fact thatnlinearly dependent vectorsx1, ...,xninRnshould satisfy the equationdet[x1⋯xn] = 0(zero determinant of the matrix with columnsxi), and the set of zeros of a non-trivial polynomial has zero measure. This observation has led to techniques for approximating random bases.[6][7]
It is difficult to check numerically the linear dependence or exact orthogonality. Therefore, the notion of ε-orthogonality is used. Forspaces with inner product,xis ε-orthogonal toyif|⟨x,y⟩|/(‖x‖‖y‖)<ε{\displaystyle \left|\left\langle x,y\right\rangle \right|/\left(\left\|x\right\|\left\|y\right\|\right)<\varepsilon }(that is, cosine of the angle betweenxandyis less thanε).
In high dimensions, two independent random vectors are with high probability almost orthogonal, and the number of independent random vectors, which all are with given high probability pairwise almost orthogonal, grows exponentially with dimension. More precisely, consider equidistribution in the n-dimensional ball. Choose N independent random vectors from the ball (they are independent and identically distributed), and let θ be a small positive number. Then, for N up to a bound that grows exponentially with the dimension, the N random vectors are all pairwise ε-orthogonal with probability 1 − θ.[7] This N grows exponentially with dimension n, and N ≫ n for sufficiently large n. This property of random bases is a manifestation of the so-called measure concentration phenomenon.[8]
The figure (right) illustrates the distribution of lengths N of pairwise almost orthogonal chains of vectors that are independently randomly sampled from the n-dimensional cube [−1, 1]n, as a function of dimension n. A point is first randomly selected in the cube. The second point is randomly chosen in the same cube. If the angle between the two vectors was within π/2 ± 0.037π/2, then the vector was retained. At the next step a new vector is generated in the same hypercube, and its angles with the previously generated vectors are evaluated. If these angles are within π/2 ± 0.037π/2, then the vector is retained. The process is repeated until the chain of almost orthogonality breaks, and the number of such pairwise almost orthogonal vectors (the length of the chain) is recorded. For each dimension n, 20 pairwise almost orthogonal chains were constructed numerically. The distribution of the lengths of these chains is presented.
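An experiment of this kind is easy to reproduce. The sketch below (sampling from the cube and the 20-vector sample size are arbitrary choices for illustration) measures the largest pairwise |cosine| among independent random vectors and shows it shrinking as the dimension grows:

import numpy as np

rng = np.random.default_rng(0)

def max_pairwise_cosine(n_dim, n_vectors=20):
    """Largest |cos(angle)| over all pairs of i.i.d. random vectors from [-1, 1]^n."""
    V = rng.uniform(-1.0, 1.0, size=(n_vectors, n_dim))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    G = np.abs(V @ V.T)              # Gram matrix of |cosines|
    np.fill_diagonal(G, 0.0)
    return G.max()

for n in (10, 100, 1000, 10000):
    print(n, round(max_pairwise_cosine(n), 3))
# The maximal |cosine| decreases roughly like 1/sqrt(n): in high dimension the
# sampled vectors are pairwise almost orthogonal.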
LetVbe any vector space over some fieldF. LetXbe the set of all linearly independent subsets ofV.
The setXis nonempty since the empty set is an independent subset ofV, and it ispartially orderedby inclusion, which is denoted, as usual, by⊆.
LetYbe a subset ofXthat is totally ordered by⊆, and letLYbe the union of all the elements ofY(which are themselves certain subsets ofV).
Since(Y, ⊆)is totally ordered, every finite subset ofLYis a subset of an element ofY, which is a linearly independent subset ofV, and henceLYis linearly independent. ThusLYis an element ofX. Therefore,LYis an upper bound forYin(X, ⊆): it is an element ofX, that contains every element ofY.
AsXis nonempty, and every totally ordered subset of(X, ⊆)has an upper bound inX,Zorn's lemmaasserts thatXhas a maximal element. In other words, there exists some elementLmaxofXsatisfying the condition that wheneverLmax⊆ Lfor some elementLofX, thenL = Lmax.
It remains to prove thatLmaxis a basis ofV. SinceLmaxbelongs toX, we already know thatLmaxis a linearly independent subset ofV.
If there were some vectorwofVthat is not in the span ofLmax, thenwwould not be an element ofLmaxeither. LetLw= Lmax∪ {w}. This set is an element ofX, that is, it is a linearly independent subset ofV(becausewis not in the span ofLmax, andLmaxis independent). AsLmax⊆ Lw, andLmax≠ Lw(becauseLwcontains the vectorwthat is not contained inLmax), this contradicts the maximality ofLmax. Thus this shows thatLmaxspansV.
HenceLmaxis linearly independent and spansV. It is thus a basis ofV, and this proves that every vector space has a basis.
This proof relies on Zorn's lemma, which is equivalent to theaxiom of choice. Conversely, it has been proved that if every vector space has a basis, then the axiom of choice is true.[9]Thus the two assertions are equivalent.
|
https://en.wikipedia.org/wiki/Basis_vector
|
Inprobability theoryandstatistics, thecharacteristic functionof anyreal-valuedrandom variablecompletely defines itsprobability distribution. If a random variable admits aprobability density function, then the characteristic function is theFourier transform(with sign reversal) of the probability density function. Thus it provides an alternative route to analytical results compared with working directly withprobability density functionsorcumulative distribution functions. There are particularly simple results for the characteristic functions of distributions defined by the weighted sums of random variables.
In addition tounivariate distributions, characteristic functions can be defined for vector- or matrix-valued random variables, and can also be extended to more generic cases.
The characteristic function always exists when treated as a function of a real-valued argument, unlike themoment-generating function. There are relations between the behavior of the characteristic function of a distribution and properties of the distribution, such as the existence of moments and the existence of a density function.
The characteristic function is a way to describe arandom variableX.
The characteristic function, φ_X(t) = E[e^{itX}], a function of t, determines the behavior and properties of the probability distribution of X.
It is equivalent to aprobability density functionorcumulative distribution function, since knowing one of these functions allows computation of the others, but they provide different insights into the features of the random variable. In particular cases, one or another of these equivalent functions may be easier to represent in terms of simple standard functions.
If a random variable admits adensity function, then the characteristic function is itsFourier dual, in the sense that each of them is aFourier transformof the other. If a random variable has amoment-generating functionMX(t){\displaystyle M_{X}(t)}, then the domain of the characteristic function can be extended to the complex plane, and
Note however that the characteristic function of a distribution is well defined for allreal valuesoft, even when themoment-generating functionis not well defined for all real values oft.
The characteristic function approach is particularly useful in analysis of linear combinations of independent random variables: a classical proof of theCentral Limit Theoremuses characteristic functions andLévy's continuity theorem. Another important application is to the theory of thedecomposabilityof random variables.
For a scalar random variableXthecharacteristic functionis defined as theexpected valueofeitX, whereiis theimaginary unit, andt∈Ris the argument of the characteristic function: φ_X(t) = E[e^{itX}] = ∫_R e^{itx} dF_X(x) = ∫_R e^{itx} f_X(x) dx = ∫_0^1 e^{itQ_X(p)} dp.
HereFXis thecumulative distribution functionofX,fXis the correspondingprobability density function,QX(p)is the corresponding inverse cumulative distribution function also called thequantile function,[2]and the integrals are of theRiemann–Stieltjeskind. If a random variableXhas aprobability density functionthen the characteristic function is itsFourier transformwith sign reversal in the complex exponential.[3][4]This convention for the constants appearing in the definition of the characteristic function differs from the usual convention for the Fourier transform.[5]For example, some authors[6]defineφX(t) = E[e−2πitX], which is essentially a change of parameter. Other notation may be encountered in the literature:p^{\displaystyle \scriptstyle {\hat {p}}}as the characteristic function for a probability measurep, orf^{\displaystyle \scriptstyle {\hat {f}}}as the characteristic function corresponding to a densityf.
The notion of characteristic functions generalizes to multivariate random variables and more complicatedrandom elements. The argument of the characteristic function will always belong to thecontinuous dualof the space where the random variableXtakes its values. For common cases such definitions are listed below:
Oberhettinger (1973) provides extensive tables of characteristic functions.
The bijection stated above between probability distributions and characteristic functions issequentially continuous. That is, whenever a sequence of distribution functionsFj(x)converges (weakly) to some distributionF(x), the corresponding sequence of characteristic functionsφj(t)will also converge, and the limitφ(t)will correspond to the characteristic function of lawF. More formally, this is stated as
This theorem can be used to prove thelaw of large numbersand thecentral limit theorem.
There is aone-to-one correspondencebetween cumulative distribution functions and characteristic functions, so it is possible to find one of these functions if we know the other. The formula in the definition of characteristic function allows us to computeφwhen we know the distribution functionF(or densityf). If, on the other hand, we know the characteristic functionφand want to find the corresponding distribution function, then one of the followinginversion theoremscan be used.
Theorem. If the characteristic functionφXof a random variableXisintegrable, thenFXis absolutely continuous, and thereforeXhas aprobability density function. In the univariate case (i.e. whenXis scalar-valued) the density function is given byfX(x)=FX′(x)=12π∫Re−itxφX(t)dt.{\displaystyle f_{X}(x)=F_{X}'(x)={\frac {1}{2\pi }}\int _{\mathbf {R} }e^{-itx}\varphi _{X}(t)\,dt.}
In the multivariate case it isfX(x)=1(2π)n∫Rne−i(t⋅x)φX(t)λ(dt){\displaystyle f_{X}(x)={\frac {1}{(2\pi )^{n}}}\int _{\mathbf {R} ^{n}}e^{-i(t\cdot x)}\varphi _{X}(t)\lambda (dt)}
wheret⋅x{\textstyle t\cdot x}is thedot product.
The density function is theRadon–Nikodym derivativeof the distributionμXwith respect to theLebesgue measureλ:fX(x)=dμXdλ(x).{\displaystyle f_{X}(x)={\frac {d\mu _{X}}{d\lambda }}(x).}
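The inversion formula is straightforward to evaluate numerically. The sketch below (the standard normal example and the finite integration limits are illustrative choices) recovers the density from φ_X(t) = e^{−t²/2}:

import numpy as np
from scipy.integrate import quad

phi = lambda t: np.exp(-0.5 * t**2)          # CF of the standard normal distribution

def density_from_cf(x):
    """f(x) = (1/2*pi) * integral of exp(-i*t*x) * phi(t) dt, evaluated numerically."""
    integrand = lambda t: (np.exp(-1j * t * x) * phi(t)).real   # odd imaginary part integrates to 0
    value, _ = quad(integrand, -40.0, 40.0)                     # tails beyond |t| = 40 are negligible
    return value / (2.0 * np.pi)

x = 0.3
exact = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
assert abs(density_from_cf(x) - exact) < 1e-6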
Theorem (Lévy).[note 1]IfφXis characteristic function of distribution functionFX, two pointsa<bare such that{x|a<x<b}is acontinuity setofμX(in the univariate case this condition is equivalent to continuity ofFXat pointsaandb), then
Theorem. Ifais (possibly) an atom ofX(in the univariate case this means a point of discontinuity ofFX) then
Theorem (Gil-Pelaez).[16]For a univariate random variableX, ifxis acontinuity pointofFXthen F_X(x) = 1/2 − (1/π) ∫_0^∞ Im[e^{−itx} φ_X(t)] / t dt,
where the imaginary part of a complex numberz{\displaystyle z}is given byIm(z)=(z−z∗)/2i{\displaystyle \mathrm {Im} (z)=(z-z^{*})/2i}.
And its density function is: f_X(x) = (1/π) ∫_0^∞ Re[e^{−itx} φ_X(t)] dt.
The integral may not beLebesgue-integrable; for example, whenXis thediscrete random variablethat is always 0, it becomes theDirichlet integral.
Inversion formulas for multivariate distributions are available.[14][17]
The set of all characteristic functions is closed under certain operations:
It is well known that any non-decreasingcàdlàgfunctionFwith limitsF(−∞) = 0,F(+∞) = 1corresponds to acumulative distribution functionof some random variable. There is also interest in finding similar simple criteria for when a given functionφcould be the characteristic function of some random variable. The central result here isBochner’s theorem, although its usefulness is limited because the main condition of the theorem,non-negative definiteness, is very hard to verify. Other theorems also exist, such as Khinchine’s, Mathias’s, or Cramér’s, although their application is just as difficult.Pólya’s theorem, on the other hand, provides a very simple convexity condition which is sufficient but not necessary. Characteristic functions which satisfy this condition are called Pólya-type.[18]
Bochner’s theorem. An arbitrary functionφ:Rn→Cis the characteristic function of some random variable if and only ifφispositive definite, continuous at the origin, and ifφ(0) = 1.
Khinchine’s criterion. A complex-valued, absolutely continuous functionφ, withφ(0) = 1, is a characteristic function if and only if it admits the representation
Mathias’ theorem. A real-valued, even, continuous, absolutely integrable functionφ, withφ(0) = 1, is a characteristic function if and only if
forn= 0,1,2,..., and allp> 0. HereH2ndenotes theHermite polynomialof degree2n.
Pólya’s theorem. Ifφ{\displaystyle \varphi }is a real-valued, even, continuous function which satisfies the conditions
thenφ(t)is the characteristic function of an absolutely continuous distribution symmetric about 0.
Because of thecontinuity theorem, characteristic functions are used in the most frequently seen proof of thecentral limit theorem. The main technique involved in making calculations with a characteristic function is recognizing the function as the characteristic function of a particular distribution.
Characteristic functions are particularly useful for dealing with linear functions ofindependentrandom variables. For example, if X_1, X_2, ..., X_n is a sequence of independent (and not necessarily identically distributed) random variables, and S_n = a_1 X_1 + a_2 X_2 + ⋯ + a_n X_n, where the a_i are constants, then the characteristic function for S_n is given by φ_{S_n}(t) = φ_{X_1}(a_1 t) φ_{X_2}(a_2 t) ⋯ φ_{X_n}(a_n t).
In particular,φX+Y(t) =φX(t)φY(t). To see this, write out the definition of characteristic function: φ_{X+Y}(t) = E[e^{it(X+Y)}] = E[e^{itX} e^{itY}] = E[e^{itX}] E[e^{itY}] = φ_X(t) φ_Y(t).
The independence ofXandYis required to establish the equality of the third and fourth expressions.
Another special case of interest for identically distributed random variables is when a_i = 1/n and then S_n is the sample mean. In this case, writing X̄ for the mean, φ_X̄(t) = (φ_X(t/n))^n.
Characteristic functions can also be used to findmomentsof a random variable. Provided that then-thmoment exists, the characteristic function can be differentiatedntimes:
E[Xn]=i−n[dndtnφX(t)]t=0=i−nφX(n)(0),{\displaystyle \operatorname {E} \left[X^{n}\right]=i^{-n}\left[{\frac {d^{n}}{dt^{n}}}\varphi _{X}(t)\right]_{t=0}=i^{-n}\varphi _{X}^{(n)}(0),\!}
This can be formally written using the derivatives of theDirac delta function:fX(x)=∑n=0∞(−1)nn!δ(n)(x)E[Xn]{\displaystyle f_{X}(x)=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{n!}}\delta ^{(n)}(x)\operatorname {E} [X^{n}]}which allows a formal solution to themoment problem.
For example, suppose X has a standard Cauchy distribution. Then φ_X(t) = e^{−|t|}. This is not differentiable at t = 0, showing that the Cauchy distribution has no expectation. Also, the sample mean X̄ of n independent observations has characteristic function φ_X̄(t) = (e^{−|t|/n})^n = e^{−|t|}, using the result from the previous section. This is the characteristic function of the standard Cauchy distribution: thus, the sample mean has the same distribution as the population itself.
As a further example, supposeXfollows aGaussian distributioni.e.X∼N(μ,σ2){\displaystyle X\sim {\mathcal {N}}(\mu ,\sigma ^{2})}. ThenφX(t)=eμit−12σ2t2{\displaystyle \varphi _{X}(t)=e^{\mu it-{\frac {1}{2}}\sigma ^{2}t^{2}}}and E[X] = i^{−1} φ_X′(0) = μ.
A similar calculation showsE[X2]=μ2+σ2{\displaystyle \operatorname {E} \left[X^{2}\right]=\mu ^{2}+\sigma ^{2}}and is easier to carry out than applying the definition of expectation and using integration by parts to evaluateE[X2]{\displaystyle \operatorname {E} \left[X^{2}\right]}.
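These manipulations are easy to mimic numerically. The sketch below (the sample size, the value of t and the finite-difference step are arbitrary illustrative choices) compares the empirical characteristic function of Gaussian samples with the closed form and reads off E[X] from the derivative at 0:

import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 1.5, 0.7
samples = rng.normal(mu, sigma, size=200_000)

def ecf(t, x):
    """Empirical characteristic function: the sample mean of exp(i*t*X)."""
    return np.mean(np.exp(1j * t * x))

t = 0.8
closed_form = np.exp(1j * mu * t - 0.5 * sigma**2 * t**2)
assert abs(ecf(t, samples) - closed_form) < 1e-2     # Monte Carlo error ~ 1/sqrt(200000)

# First moment from the derivative at the origin: E[X] = phi'(0) / i
h = 1e-3
phi_prime_0 = (ecf(h, samples) - ecf(-h, samples)) / (2 * h)
print(phi_prime_0 / 1j)          # approximately mu, up to simulation noise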
The logarithm of a characteristic function is acumulant generating function, which is useful for findingcumulants; some instead define the cumulant generating function as the logarithm of themoment-generating function, and call the logarithm of the characteristic function thesecondcumulant generating function.
Characteristic functions can be used as part of procedures for fitting probability distributions to samples of data. Cases where this provides a practicable option compared to other possibilities include fitting thestable distributionsince closed form expressions for the density are not available which makes implementation ofmaximum likelihoodestimation difficult. Estimation procedures are available which match the theoretical characteristic function to theempirical characteristic function, calculated from the data. Paulson et al. (1975)[19]and Heathcote (1977)[20]provide some theoretical background for such an estimation procedure. In addition, Yu (2004)[21]describes applications of empirical characteristic functions to fittime seriesmodels where likelihood procedures are impractical. Empirical characteristic functions have also been used by Ansari et al. (2020)[22]and Li et al. (2020)[23]for traininggenerative adversarial networks.
Thegamma distributionwith scale parameter θ and a shape parameterkhas the characteristic function φ_X(t) = (1 − θit)^{−k}.
Now suppose that we have X ∼ Γ(k_1, θ) and Y ∼ Γ(k_2, θ), with X and Y independent from each other, and we wish to know what the distribution of X + Y is. The characteristic functions are φ_X(t) = (1 − θit)^{−k_1} and φ_Y(t) = (1 − θit)^{−k_2},
which by independence and the basic properties of characteristic functions leads to φ_{X+Y}(t) = φ_X(t) φ_Y(t) = (1 − θit)^{−k_1} (1 − θit)^{−k_2} = (1 − θit)^{−(k_1 + k_2)}.
This is the characteristic function of the gamma distribution with scale parameter θ and shape parameter k_1 + k_2, and we therefore conclude X + Y ∼ Γ(k_1 + k_2, θ).
The result can be expanded to n independent gamma distributed random variables with the same scale parameter and we get ∑_{i=1}^{n} X_i ∼ Γ(∑_{i=1}^{n} k_i, θ).
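A quick Monte Carlo check of this conclusion (the shape and scale values are arbitrary, and the tolerance reflects the sampling error of the simulation):

import numpy as np

rng = np.random.default_rng(2)
theta, k1, k2 = 2.0, 3.0, 5.0

X = rng.gamma(shape=k1, scale=theta, size=300_000)
Y = rng.gamma(shape=k2, scale=theta, size=300_000)

t = 0.25
empirical = np.mean(np.exp(1j * t * (X + Y)))          # empirical CF of X + Y
closed_form = (1 - theta * 1j * t) ** (-(k1 + k2))     # CF of Gamma(k1 + k2, theta)
assert abs(empirical - closed_form) < 1e-2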
As defined above, the argument of the characteristic function is treated as a real number: however, certain aspects of the theory of characteristic functions are advanced by extending the definition into the complex plane byanalytic continuation, in cases where this is possible.[24]
Related concepts include themoment-generating functionand theprobability-generating function. The characteristic function exists for all probability distributions. This is not the case for the moment-generating function.
The characteristic function is closely related to theFourier transform: the characteristic function of a probability density functionp(x)is thecomplex conjugateof thecontinuous Fourier transformofp(x)(according to the usual convention; seecontinuous Fourier transform – other conventions).
whereP(t)denotes thecontinuous Fourier transformof the probability density functionp(x). Likewise,p(x)may be recovered fromφX(t)through the inverse Fourier transform: p(x) = (1/2π) ∫_R e^{−itx} φ_X(t) dt.
Indeed, even when the random variable does not have a density, the characteristic function may be seen as the Fourier transform of the measure corresponding to the random variable.
Another related concept is the representation of probability distributions as elements of areproducing kernel Hilbert spacevia thekernel embedding of distributions. This framework may be viewed as a generalization of the characteristic function under specific choices of thekernel function.
|
https://en.wikipedia.org/wiki/Characteristic_function_(probability_theory)
|
Inmathematics,orthogonal functionsbelong to afunction spacethat is avector spaceequipped with abilinear form. When the function space has anintervalas thedomain, the bilinear form may be theintegralof the product of functions over the interval: ⟨f, g⟩ = ∫ f(x) g(x) dx, with the integral taken over the interval.
The functionsf{\displaystyle f}andg{\displaystyle g}areorthogonalwhen this integral is zero, i.e.⟨f,g⟩=0{\displaystyle \langle f,\,g\rangle =0}wheneverf≠g{\displaystyle f\neq g}. As with abasisof vectors in a finite-dimensional space, orthogonal functions can form an infinite basis for a function space. Conceptually, the above integral is the equivalent of a vectordot product; two vectors are mutually independent (orthogonal) if their dot-product is zero.
Suppose{f0,f1,…}{\displaystyle \{f_{0},f_{1},\ldots \}}is a sequence of orthogonal functions of nonzeroL2-norms‖fn‖2=⟨fn,fn⟩=(∫fn2dx)12{\textstyle \left\|f_{n}\right\|_{2}={\sqrt {\langle f_{n},f_{n}\rangle }}=\left(\int f_{n}^{2}\ dx\right)^{\frac {1}{2}}}. It follows that the sequence{fn/‖fn‖2}{\displaystyle \left\{f_{n}/\left\|f_{n}\right\|_{2}\right\}}is of functions ofL2-norm one, forming anorthonormal sequence. To have a definedL2-norm, the integral must be bounded, which restricts the functions to beingsquare-integrable.
Several sets of orthogonal functions have become standard bases for approximating functions. For example, the sine functionssinnxandsinmxare orthogonal on the intervalx∈(−π,π){\displaystyle x\in (-\pi ,\pi )}whenm≠n{\displaystyle m\neq n}andnandmare positive integers. For then 2 sin(nx) sin(mx) = cos((n − m)x) − cos((n + m)x),
and the integral of the product of the two sine functions vanishes.[1]Together with cosine functions, these orthogonal functions may be assembled into atrigonometric polynomialto approximate a given function on the interval with itsFourier series.
If one begins with themonomialsequence{1,x,x2,…}{\displaystyle \left\{1,x,x^{2},\dots \right\}}on the interval[−1,1]{\displaystyle [-1,1]}and applies theGram–Schmidt process, then one obtains theLegendre polynomials. Another collection of orthogonal polynomials are theassociated Legendre polynomials.
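The Gram–Schmidt construction is short to carry out numerically. The sketch below (using NumPy's Polynomial class, which is only one possible representation) orthogonalizes 1, x, x², x³ on [−1, 1] and recovers polynomials proportional to the Legendre polynomials:

import numpy as np

def inner(p, q):
    """Bilinear form <p, q> = integral over [-1, 1] of p(x) q(x)."""
    antiderivative = (p * q).integ()
    return antiderivative(1.0) - antiderivative(-1.0)

monomials = [np.polynomial.Polynomial.basis(k) for k in range(4)]   # 1, x, x^2, x^3
ortho = []
for p in monomials:
    for q in ortho:                     # subtract projections onto earlier functions
        p = p - (inner(p, q) / inner(q, q)) * q
    ortho.append(p)

print([np.round(p.coef, 4) for p in ortho])
# Coefficients (lowest degree first): [1], [0, 1], [-1/3, 0, 1], [0, -3/5, 0, 1],
# i.e. multiples of the Legendre polynomials 1, x, x^2 - 1/3, x^3 - 3x/5.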
The study of orthogonal polynomials involvesweight functionsw(x){\displaystyle w(x)}that are inserted in the bilinear form: ⟨f, g⟩ = ∫ w(x) f(x) g(x) dx.
ForLaguerre polynomialson(0,∞){\displaystyle (0,\infty )}the weight function isw(x)=e−x{\displaystyle w(x)=e^{-x}}.
Both physicists and probability theorists useHermite polynomialson(−∞,∞){\displaystyle (-\infty ,\infty )}, where the weight function isw(x)=e−x2{\displaystyle w(x)=e^{-x^{2}}}orw(x)=e−x2/2{\displaystyle w(x)=e^{-x^{2}/2}}.
Chebyshev polynomialsare defined on[−1,1]{\displaystyle [-1,1]}and use weightsw(x)=11−x2{\textstyle w(x)={\frac {1}{\sqrt {1-x^{2}}}}}orw(x)=1−x2{\textstyle w(x)={\sqrt {1-x^{2}}}}.
Zernike polynomialsare defined on theunit diskand have orthogonality of both radial and angular parts.
Walsh functionsandHaar waveletsare examples of orthogonal functions with discrete ranges.
Legendre and Chebyshev polynomials provide orthogonal families for the interval[−1, 1]while occasionally orthogonal families are required on[0, ∞). In this case it is convenient to apply theCayley transformfirst, to bring the argument into[−1, 1]. This procedure results in families ofrationalorthogonal functions calledLegendre rational functionsandChebyshev rational functions.
Solutions of lineardifferential equationswith boundary conditions can often be written as a weighted sum of orthogonal solution functions (a.k.a.eigenfunctions), leading togeneralized Fourier series.
|
https://en.wikipedia.org/wiki/Orthogonal_functions
|
Inmathematics,Schwartz spaceS{\displaystyle {\mathcal {S}}}is thefunction spaceof allfunctionswhosederivativesarerapidly decreasing. This space has the important property that theFourier transformis anautomorphismon this space. This property enables one, by duality, to define the Fourier transform for elements in the dual spaceS∗{\displaystyle {\mathcal {S}}^{*}}ofS{\displaystyle {\mathcal {S}}}, that is, fortempered distributions. A function in the Schwartz space is sometimes called aSchwartz function.
Schwartz space is named after French mathematicianLaurent Schwartz.
LetN{\displaystyle \mathbb {N} }be thesetof non-negativeintegers, and for anyn∈N{\displaystyle n\in \mathbb {N} }, letNn:=N×⋯×N⏟ntimes{\displaystyle \mathbb {N} ^{n}:=\underbrace {\mathbb {N} \times \dots \times \mathbb {N} } _{n{\text{ times}}}}be then-foldCartesian product.
TheSchwartz spaceorspace of rapidly decreasing functions onRn{\displaystyle \mathbb {R} ^{n}}is the function spaceS(Rn,C):={f∈C∞(Rn,C)∣∀α,β∈Nn,‖f‖α,β<∞},{\displaystyle {\mathcal {S}}\left(\mathbb {R} ^{n},\mathbb {C} \right):=\left\{f\in C^{\infty }(\mathbb {R} ^{n},\mathbb {C} )\mid \forall {\boldsymbol {\alpha }},{\boldsymbol {\beta }}\in \mathbb {N} ^{n},\|f\|_{{\boldsymbol {\alpha }},{\boldsymbol {\beta }}}<\infty \right\},}whereC∞(Rn,C){\displaystyle C^{\infty }(\mathbb {R} ^{n},\mathbb {C} )}is the function space ofsmooth functionsfromRn{\displaystyle \mathbb {R} ^{n}}intoC{\displaystyle \mathbb {C} }, and‖f‖α,β:=supx∈Rn|xα(Dβf)(x)|.{\displaystyle \|f\|_{{\boldsymbol {\alpha }},{\boldsymbol {\beta }}}:=\sup _{{\boldsymbol {x}}\in \mathbb {R} ^{n}}\left|{\boldsymbol {x}}^{\boldsymbol {\alpha }}({\boldsymbol {D}}^{\boldsymbol {\beta }}f)({\boldsymbol {x}})\right|.}Here,sup{\displaystyle \sup }denotes thesupremum, and we usedmulti-index notation, i.e.xα:=x1α1x2α2…xnαn{\displaystyle {\boldsymbol {x}}^{\boldsymbol {\alpha }}:=x_{1}^{\alpha _{1}}x_{2}^{\alpha _{2}}\ldots x_{n}^{\alpha _{n}}}andDβ:=∂1β1∂2β2…∂nβn{\displaystyle D^{\boldsymbol {\beta }}:=\partial _{1}^{\beta _{1}}\partial _{2}^{\beta _{2}}\ldots \partial _{n}^{\beta _{n}}}.
In plainer terms, a rapidly decreasing function is essentially a function f(x) such that f(x), f′(x), f′′(x), ... all exist everywhere on R and go to zero as x → ±∞ faster than any reciprocal power of x. In particular, 𝒮(Rn, C) is a subspace of the function space C∞(Rn, C) of smooth functions from Rn into C.
In particular, this implies that𝒮(Rn)is anR-algebra. More generally, iff∈ 𝒮(R)andHis a bounded smooth function with bounded derivatives of all orders, thenfH∈ 𝒮(R).
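As a rough numerical illustration (the Gaussian example, the grid range and the particular seminorms checked are arbitrary choices), one can evaluate a few of the seminorms ‖f‖_{α,β} for f(x) = e^{−x²} and see that they stay finite:

import numpy as np

x = np.linspace(-20.0, 20.0, 400_001)
f = np.exp(-x**2)                        # a standard example of a Schwartz function
f2 = (4 * x**2 - 2) * np.exp(-x**2)      # its second derivative, computed analytically

# sup_x |x^alpha * d^beta f / dx^beta| for a few (alpha, beta) pairs
for alpha, beta, deriv in [(0, 0, f), (3, 0, f), (5, 2, f2)]:
    print(alpha, beta, np.max(np.abs(x**alpha * deriv)))
# All the printed suprema are finite and moderate: polynomial growth is always
# beaten by the Gaussian decay.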
This article incorporates material from Space of rapidly decreasing functions onPlanetMath, which is licensed under theCreative Commons Attribution/Share-Alike License.
|
https://en.wikipedia.org/wiki/Schwartz_space
|
Insignal processing, the power spectrumSxx(f){\displaystyle S_{xx}(f)}of acontinuous timesignalx(t){\displaystyle x(t)}describes the distribution ofpowerinto frequency componentsf{\displaystyle f}composing that signal.[1]According toFourier analysis, any physical signal can be decomposed into a number of discrete frequencies, or a spectrum of frequencies over a continuous range. The statistical average of any sort of signal (includingnoise) as analyzed in terms of its frequency content, is called itsspectrum.
When the energy of the signal is concentrated around a finite time interval, especially if its total energy is finite, one may compute theenergy spectral density. More commonly used is thepower spectral density(PSD, or simplypower spectrum), which applies to signals existing overalltime, or over a time period large enough (especially in relation to the duration of a measurement) that it could as well have been over an infinite time interval. The PSD then refers to the spectral energy distribution that would be found per unit time, since the total energy of such a signal over all time would generally be infinite.Summationor integration of the spectral components yields the total power (for a physical process) or variance (in a statistical process), identical to what would be obtained by integratingx2(t){\displaystyle x^{2}(t)}over the time domain, as dictated byParseval's theorem.[1]
The spectrum of a physical processx(t){\displaystyle x(t)}often contains essential information about the nature ofx{\displaystyle x}. For instance, thepitchandtimbreof a musical instrument are immediately determined from a spectral analysis. Thecolorof a light source is determined by the spectrum of the electromagnetic wave's electric fieldE(t){\displaystyle E(t)}as it fluctuates at an extremely high frequency. Obtaining a spectrum from time series such as these involves theFourier transform, and generalizations based on Fourier analysis. In many cases the time domain is not specifically employed in practice, such as when adispersive prismis used to obtain a spectrum of light in aspectrograph, or when a sound is perceived through its effect on the auditory receptors of the inner ear, each of which is sensitive to a particular frequency.
However this article concentrates on situations in which the time series is known (at least in a statistical sense) or directly measured (such as by a microphone sampled by a computer). The power spectrum is important instatistical signal processingand in the statistical study ofstochastic processes, as well as in many other branches ofphysicsandengineering. Typically the process is a function of time, but one can similarly discuss data in the spatial domain being decomposed in terms ofspatial frequency.[1]
Inphysics, the signal might be a wave, such as anelectromagnetic wave, anacoustic wave, or the vibration of a mechanism. Thepower spectral density(PSD) of the signal describes thepowerpresent in the signal as a function of frequency, per unit frequency. Power spectral density is commonly expressed inSI unitsofwattsperhertz(abbreviated as W/Hz).[2]
When a signal is defined in terms only of avoltage, for instance, there is no unique power associated with the stated amplitude. In this case "power" is simply reckoned in terms of the square of the signal, as this would always beproportionalto the actual power delivered by that signal into a givenimpedance. So one might use units of V2Hz−1for the PSD.Energy spectral density(ESD) would have units of V2s Hz−1, sinceenergyhas units of power multiplied by time (e.g.,watt-hour).[3]
In the general case, the units of PSD will be the ratio of units of variance per unit of frequency; so, for example, a series of displacement values (in meters) over time (in seconds) will have PSD in units of meters squared per hertz, m2/Hz.
In the analysis of randomvibrations, units ofg2Hz−1are frequently used for the PSD ofacceleration, wheregdenotes theg-force.[4]
Mathematically, it is not necessary to assign physical dimensions to the signal or to the independent variable. In the following discussion the meaning ofx(t) will remain unspecified, but the independent variable will be assumed to be that of time.
A PSD can be either aone-sidedfunction of only positive frequencies or atwo-sidedfunction of both positive andnegative frequenciesbut with only half the amplitude. Noise PSDs are generally one-sided in engineering and two-sided in physics.[5]
Insignal processing, theenergyof a signalx(t){\displaystyle x(t)}is given byE≜∫−∞∞|x(t)|2dt.{\displaystyle E\triangleq \int _{-\infty }^{\infty }\left|x(t)\right|^{2}\ dt.}Assuming the total energy is finite (i.e.x(t){\displaystyle x(t)}is asquare-integrable function) allows applyingParseval's theorem(orPlancherel's theorem).[6]That is,∫−∞∞|x(t)|2dt=∫−∞∞|x^(f)|2df,{\displaystyle \int _{-\infty }^{\infty }|x(t)|^{2}\,dt=\int _{-\infty }^{\infty }\left|{\hat {x}}(f)\right|^{2}\,df,}wherex^(f)=∫−∞∞e−i2πftx(t)dt,{\displaystyle {\hat {x}}(f)=\int _{-\infty }^{\infty }e^{-i2\pi ft}x(t)\ dt,}is theFourier transformofx(t){\displaystyle x(t)}atfrequencyf{\displaystyle f}(inHz).[7]The theorem also holds true in the discrete-time cases. Since the integral on the left-hand side is the energy of the signal, the value of|x^(f)|2df{\displaystyle \left|{\hat {x}}(f)\right|^{2}df}can be interpreted as adensity functionmultiplied by an infinitesimally small frequency interval, describing the energy contained in the signal at frequencyf{\displaystyle f}in the frequency intervalf+df{\displaystyle f+df}.
Therefore, theenergy spectral densityofx(t){\displaystyle x(t)}is defined as:[8]{\displaystyle {\bar {S}}_{xx}(f)\triangleq \left|{\hat {x}}(f)\right|^{2}.}
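As an illustrative sketch (not drawn from the cited references), the Parseval relation above can be checked numerically; the Gaussian test pulse, the grid spacing, and the array names are arbitrary choices.

```python
# Minimal numerical check of Parseval's theorem for a finite-energy pulse.
import numpy as np

dt = 1e-3                                  # sampling interval (s)
t = np.arange(-5, 5, dt)                   # time grid covering the pulse
x = np.exp(-t**2)                          # a square-integrable "pulse" signal

energy_time = np.sum(np.abs(x)**2) * dt    # integral of |x(t)|^2 dt

X = np.fft.fft(x) * dt                     # approximates the continuous Fourier transform
f = np.fft.fftfreq(len(x), d=dt)           # frequency grid (Hz)
df = f[1] - f[0]
energy_freq = np.sum(np.abs(X)**2) * df    # integral of |x^(f)|^2 df

print(energy_time, energy_freq)            # the two energies agree closely
```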
The functionS¯xx(f){\displaystyle {\bar {S}}_{xx}(f)}and theautocorrelationofx(t){\displaystyle x(t)}form a Fourier transform pair, a result also known as theWiener–Khinchin theorem(see alsoPeriodogram).
As a physical example of how one might measure the energy spectral density of a signal, supposeV(t){\displaystyle V(t)}represents thepotential(involts) of an electrical pulse propagating along atransmission lineofimpedanceZ{\displaystyle Z}, and suppose the line is terminated with amatchedresistor (so that all of the pulse energy is delivered to the resistor and none is reflected back). ByOhm's law, the power delivered to the resistor at timet{\displaystyle t}is equal toV(t)2/Z{\displaystyle V(t)^{2}/Z}, so the total energy is found by integratingV(t)2/Z{\displaystyle V(t)^{2}/Z}with respect to time over the duration of the pulse. To find the value of the energy spectral densityS¯xx(f){\displaystyle {\bar {S}}_{xx}(f)}at frequencyf{\displaystyle f}, one could insert between the transmission line and the resistor abandpass filterwhich passes only a narrow range of frequencies (Δf{\displaystyle \Delta f}, say) near the frequency of interest and then measure the total energyE(f){\displaystyle E(f)}dissipated across the resistor. The value of the energy spectral density atf{\displaystyle f}is then estimated to beE(f)/Δf{\displaystyle E(f)/\Delta f}. In this example, since the powerV(t)2/Z{\displaystyle V(t)^{2}/Z}has units of V2Ω−1, the energyE(f){\displaystyle E(f)}has units of V2s Ω−1=J, and hence the estimateE(f)/Δf{\displaystyle E(f)/\Delta f}of the energy spectral density has units of J Hz−1, as required. In many situations, it is common to forget the step of dividing byZ{\displaystyle Z}so that the energy spectral density instead has units of V2Hz−1.
This definition generalizes in a straightforward manner to a discrete signal with acountably infinitenumber of valuesxn{\displaystyle x_{n}}such as a signal sampled at discrete timestn=t0+(nΔt){\displaystyle t_{n}=t_{0}+(n\,\Delta t)}:S¯xx(f)=limN→∞(Δt)2|∑n=−NNxne−i2πfnΔt|2⏟|x^d(f)|2,{\displaystyle {\bar {S}}_{xx}(f)=\lim _{N\to \infty }(\Delta t)^{2}\underbrace {\left|\sum _{n=-N}^{N}x_{n}e^{-i2\pi fn\,\Delta t}\right|^{2}} _{\left|{\hat {x}}_{d}(f)\right|^{2}},}wherex^d(f){\displaystyle {\hat {x}}_{d}(f)}is thediscrete-time Fourier transformofxn.{\displaystyle x_{n}.}The sampling intervalΔt{\displaystyle \Delta t}is needed to keep the correct physical units and to ensure that we recover the continuous case in the limitΔt→0.{\displaystyle \Delta t\to 0.}But in the mathematical sciences the interval is often set to 1, which simplifies the results at the expense of generality. (also seenormalized frequency)
The above definition of energy spectral density is suitable for transients (pulse-like signals) whose energy is concentrated around one time window; then the Fourier transforms of the signals generally exist. For continuous signals over all time, one must rather define thepower spectral density(PSD) which exists forstationary processes; this describes how thepowerof a signal or time series is distributed over frequency, as in the simple example given previously. Here, power can be the actual physical power, or more often, for convenience with abstract signals, is simply identified with the squared value of the signal. For example, statisticians study thevarianceof a function over timex(t){\displaystyle x(t)}(or over another independent variable), and using an analogy with electrical signals (among other physical processes), it is customary to refer to it as thepower spectrumeven when there is no physical power involved. If one were to create a physicalvoltagesource which followedx(t){\displaystyle x(t)}and applied it to the terminals of a oneohmresistor, then indeed the instantaneous power dissipated in that resistor would be given byx2(t){\displaystyle x^{2}(t)}watts.
The average powerP{\displaystyle P}of a signalx(t){\displaystyle x(t)}over all time is therefore given by the following time average, where the periodT{\displaystyle T}is centered about some arbitrary timet=t0{\displaystyle t=t_{0}}:P=limT→∞1T∫t0−T/2t0+T/2|x(t)|2dt{\displaystyle P=\lim _{T\to \infty }{\frac {1}{T}}\int _{t_{0}-T/2}^{t_{0}+T/2}\left|x(t)\right|^{2}\,dt}
Whenever it is more convenient to deal with time limits in the signal itself rather than time limits in the bounds of the integral, the average power can also be written asP=limT→∞1T∫−∞∞|xT(t)|2dt,{\displaystyle P=\lim _{T\to \infty }{\frac {1}{T}}\int _{-\infty }^{\infty }\left|x_{T}(t)\right|^{2}\,dt,}wherexT(t)=x(t)wT(t){\displaystyle x_{T}(t)=x(t)w_{T}(t)}andwT(t){\displaystyle w_{T}(t)}is unity within the arbitrary period and zero elsewhere.
WhenP{\displaystyle P}is non-zero, the integral must grow to infinity at least as fast asT{\displaystyle T}does. This is why the energy of the signal, which is precisely this diverging integral, cannot itself be used to characterize such signals.
In analyzing the frequency content of the signalx(t){\displaystyle x(t)}, one might like to compute the ordinary Fourier transformx^(f){\displaystyle {\hat {x}}(f)}; however, for many signals of interest the ordinary Fourier transform does not formally exist.[nb 1]Nevertheless, under suitable conditions, certain generalizations of the Fourier transform (e.g. theFourier-Stieltjes transform) still adhere toParseval's theorem. As such,P=limT→∞1T∫−∞∞|x^T(f)|2df,{\displaystyle P=\lim _{T\to \infty }{\frac {1}{T}}\int _{-\infty }^{\infty }|{\hat {x}}_{T}(f)|^{2}\,df,}where the integrand defines thepower spectral density:[9][10]{\displaystyle S_{xx}(f)=\lim _{T\to \infty }{\frac {1}{T}}\,\mathbf {E} \left[\left|{\hat {x}}_{T}(f)\right|^{2}\right].}(Eq.2)
Theconvolution theoremthen allows regarding|x^T(f)|2{\displaystyle |{\hat {x}}_{T}(f)|^{2}}as theFourier transformof the timeconvolutionofxT∗(−t){\displaystyle x_{T}^{*}(-t)}andxT(t){\displaystyle x_{T}(t)}, where * represents the complex conjugate.
In order to deduce Eq.2, we will find an expression for[x^T(f)]∗{\displaystyle [{\hat {x}}_{T}(f)]^{*}}that will be useful for the purpose. In fact, we will demonstrate that[x^T(f)]∗=F{xT∗(−t)}{\displaystyle [{\hat {x}}_{T}(f)]^{*}={\mathcal {F}}\left\{x_{T}^{*}(-t)\right\}}. Let's start by noting thatF{xT∗(−t)}=∫−∞∞xT∗(−t)e−i2πftdt{\displaystyle {\begin{aligned}{\mathcal {F}}\left\{x_{T}^{*}(-t)\right\}&=\int _{-\infty }^{\infty }x_{T}^{*}(-t)e^{-i2\pi ft}dt\end{aligned}}}and letz=−t{\displaystyle z=-t}, so thatz→−∞{\displaystyle z\rightarrow -\infty }whent→∞{\displaystyle t\rightarrow \infty }and vice versa. So∫−∞∞xT∗(−t)e−i2πftdt=∫∞−∞xT∗(z)ei2πfz(−dz)=∫−∞∞xT∗(z)ei2πfzdz=∫−∞∞xT∗(t)ei2πftdt{\displaystyle {\begin{aligned}\int _{-\infty }^{\infty }x_{T}^{*}(-t)e^{-i2\pi ft}dt&=\int _{\infty }^{-\infty }x_{T}^{*}(z)e^{i2\pi fz}\left(-dz\right)\\&=\int _{-\infty }^{\infty }x_{T}^{*}(z)e^{i2\pi fz}dz\\&=\int _{-\infty }^{\infty }x_{T}^{*}(t)e^{i2\pi ft}dt\end{aligned}}}Where, in the last line, we have made use of the fact thatz{\displaystyle z}andt{\displaystyle t}are dummy variables.
So, we haveF{xT∗(−t)}=∫−∞∞xT∗(−t)e−i2πftdt=∫−∞∞xT∗(t)ei2πftdt=∫−∞∞xT∗(t)[e−i2πft]∗dt=[∫−∞∞xT(t)e−i2πftdt]∗=[F{xT(t)}]∗=[x^T(f)]∗{\displaystyle {\begin{aligned}{\mathcal {F}}\left\{x_{T}^{*}(-t)\right\}&=\int _{-\infty }^{\infty }x_{T}^{*}(-t)e^{-i2\pi ft}dt\\&=\int _{-\infty }^{\infty }x_{T}^{*}(t)e^{i2\pi ft}dt\\&=\int _{-\infty }^{\infty }x_{T}^{*}(t)[e^{-i2\pi ft}]^{*}dt\\&=\left[\int _{-\infty }^{\infty }x_{T}(t)e^{-i2\pi ft}dt\right]^{*}\\&=\left[{\mathcal {F}}\left\{x_{T}(t)\right\}\right]^{*}\\&=\left[{\hat {x}}_{T}(f)\right]^{*}\end{aligned}}}q.e.d.
Now, let's demonstrate Eq.2 by using the identity demonstrated above. In addition, we will make the substitutionu(t)=xT∗(−t){\displaystyle u(t)=x_{T}^{*}(-t)}. In this way, we have:|x^T(f)|2=[x^T(f)]∗⋅x^T(f)=F{xT∗(−t)}⋅F{xT(t)}=F{u(t)}⋅F{xT(t)}=F{u(t)∗xT(t)}=∫−∞∞[∫−∞∞u(τ−t)xT(t)dt]e−i2πfτdτ=∫−∞∞[∫−∞∞xT∗(t−τ)xT(t)dt]e−i2πfτdτ,{\displaystyle {\begin{aligned}\left|{\hat {x}}_{T}(f)\right|^{2}&=[{\hat {x}}_{T}(f)]^{*}\cdot {\hat {x}}_{T}(f)\\&={\mathcal {F}}\left\{x_{T}^{*}(-t)\right\}\cdot {\mathcal {F}}\left\{x_{T}(t)\right\}\\&={\mathcal {F}}\left\{u(t)\right\}\cdot {\mathcal {F}}\left\{x_{T}(t)\right\}\\&={\mathcal {F}}\left\{u(t)\mathbin {\mathbf {*} } x_{T}(t)\right\}\\&=\int _{-\infty }^{\infty }\left[\int _{-\infty }^{\infty }u(\tau -t)x_{T}(t)dt\right]e^{-i2\pi f\tau }d\tau \\&=\int _{-\infty }^{\infty }\left[\int _{-\infty }^{\infty }x_{T}^{*}(t-\tau )x_{T}(t)dt\right]e^{-i2\pi f\tau }\ d\tau ,\end{aligned}}}where the convolution theorem has been used when passing from the 3rd to the 4th line.
Now, if we divide the time convolution above by the periodT{\displaystyle T}and take the limit asT→∞{\displaystyle T\rightarrow \infty }, it becomes theautocorrelationfunction of the non-windowed signalx(t){\displaystyle x(t)}, which is denoted asRxx(τ){\displaystyle R_{xx}(\tau )}, provided thatx(t){\displaystyle x(t)}isergodic, which is true in most, but not all, practical cases.[nb 2]limT→∞1T|x^T(f)|2=∫−∞∞[limT→∞1T∫−∞∞xT∗(t−τ)xT(t)dt]e−i2πfτdτ=∫−∞∞Rxx(τ)e−i2πfτdτ{\displaystyle \lim _{T\to \infty }{\frac {1}{T}}\left|{\hat {x}}_{T}(f)\right|^{2}=\int _{-\infty }^{\infty }\left[\lim _{T\to \infty }{\frac {1}{T}}\int _{-\infty }^{\infty }x_{T}^{*}(t-\tau )x_{T}(t)dt\right]e^{-i2\pi f\tau }\ d\tau =\int _{-\infty }^{\infty }R_{xx}(\tau )e^{-i2\pi f\tau }d\tau }
Assuming the ergodicity ofx(t){\displaystyle x(t)}, the power spectral density can be found once more as the Fourier transform of the autocorrelation function (Wiener–Khinchin theorem).[11]
Many authors use this equality to actually define the power spectral density.[12]
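A minimal numerical sketch of this relation in discrete time (the noisy test signal and its length are arbitrary assumptions): for a finite sequence, the squared magnitude of the DFT coincides, up to rounding, with the DFT of the circular autocorrelation.

```python
# Discrete-time illustration of the Wiener-Khinchin relation.
import numpy as np

rng = np.random.default_rng(0)
N = 1024
n = np.arange(N)
x = np.sin(2 * np.pi * 0.05 * n) + 0.5 * rng.standard_normal(N)

X = np.fft.fft(x)
periodogram = np.abs(X)**2 / N                 # |x^_T(f)|^2 / T (up to sampling constants)

# circular autocorrelation R[m] = sum_n x*[n] x[n+m]
R = np.array([np.sum(np.conj(x) * np.roll(x, -m)) for m in range(N)])
psd_from_R = np.real(np.fft.fft(R)) / N        # Fourier transform of the autocorrelation

print(np.allclose(periodogram, psd_from_R))    # True: the two coincide
```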
The power of the signal in a given frequency band[f1,f2]{\displaystyle [f_{1},f_{2}]}, where0<f1<f2{\displaystyle 0<f_{1}<f_{2}}, can be calculated by integrating over frequency. SinceSxx(−f)=Sxx(f){\displaystyle S_{xx}(-f)=S_{xx}(f)}, an equal amount of power can be attributed to positive and negative frequency bands, which accounts for the factor of 2 in the following form (such trivial factors depend on the conventions used):Pbandlimited=2∫f1f2Sxx(f)df{\displaystyle P_{\textsf {bandlimited}}=2\int _{f_{1}}^{f_{2}}S_{xx}(f)\,df}More generally, similar techniques may be used to estimate a time-varying spectral density. In this case the time intervalT{\displaystyle T}is finite rather than approaching infinity. This results in decreased spectral coverage and resolution since frequencies of less than1/T{\displaystyle 1/T}are not sampled, and results at frequencies which are not an integer multiple of1/T{\displaystyle 1/T}are not independent. Just using a single such time series, the estimated power spectrum will be very "noisy"; however this can be alleviated if it is possible to evaluate the expected value (in the above equation) using a large (or infinite) number of short-term spectra corresponding tostatistical ensemblesof realizations ofx(t){\displaystyle x(t)}evaluated over the specified time window.
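As a hedged illustration of the band-limited power formula above, the following sketch integrates a two-sided PSD estimate over a positive-frequency band and doubles it; the 50 Hz test tone and the band edges are arbitrary choices.

```python
# Power in a frequency band from a two-sided PSD estimate.
import numpy as np
from scipy.signal import periodogram

fs = 1000.0
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)                           # 50 Hz tone, average power 1/2

f, Sxx = periodogram(x, fs=fs, return_onesided=False)    # two-sided PSD estimate
band = (f >= 40) & (f <= 60)                             # positive-frequency band [f1, f2]
df = fs / len(x)
P_band = 2 * np.sum(Sxx[band]) * df                      # factor 2 adds the negative-frequency half

print(P_band)                                            # close to 0.5, the power of the sine
```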
Just as with the energy spectral density, the definition of the power spectral density can be generalized todiscrete timevariablesxn{\displaystyle x_{n}}. As before, we can consider a window of−N≤n≤N{\displaystyle -N\leq n\leq N}with the signal sampled at discrete timestn=t0+(nΔt){\displaystyle t_{n}=t_{0}+(n\,\Delta t)}for a total measurement periodT=(2N+1)Δt{\displaystyle T=(2N+1)\,\Delta t}.Sxx(f)=limN→∞(Δt)2T|∑n=−NNxne−i2πfnΔt|2{\displaystyle S_{xx}(f)=\lim _{N\to \infty }{\frac {(\Delta t)^{2}}{T}}\left|\sum _{n=-N}^{N}x_{n}e^{-i2\pi fn\,\Delta t}\right|^{2}}Note that a single estimate of the PSD can be obtained through a finite number of samplings. As before, the actual PSD is achieved whenN{\displaystyle N}(and thusT{\displaystyle T}) approaches infinity and the expected value is formally applied. In a real-world application, one would typically average a finite-measurement PSD over many trials to obtain a more accurate estimate of the theoretical PSD of the physical process underlying the individual measurements. This computed PSD is sometimes called aperiodogram. This periodogram converges to the true PSD as the number of estimates as well as the averaging time intervalT{\displaystyle T}approach infinity.[13]
If two signals both possess power spectral densities, then thecross-spectral densitycan similarly be calculated; as the PSD is related to the autocorrelation, so is the cross-spectral density related to thecross-correlation.
Some properties of the PSD include:[14]
Given two signalsx(t){\displaystyle x(t)}andy(t){\displaystyle y(t)}, each of which possess power spectral densitiesSxx(f){\displaystyle S_{xx}(f)}andSyy(f){\displaystyle S_{yy}(f)}, it is possible to define across power spectral density(CPSD) orcross spectral density(CSD). To begin, let us consider the average power of such a combined signal.P=limT→∞1T∫−∞∞[xT(t)+yT(t)]∗[xT(t)+yT(t)]dt=limT→∞1T∫−∞∞|xT(t)|2+xT∗(t)yT(t)+yT∗(t)xT(t)+|yT(t)|2dt{\displaystyle {\begin{aligned}P&=\lim _{T\to \infty }{\frac {1}{T}}\int _{-\infty }^{\infty }\left[x_{T}(t)+y_{T}(t)\right]^{*}\left[x_{T}(t)+y_{T}(t)\right]dt\\&=\lim _{T\to \infty }{\frac {1}{T}}\int _{-\infty }^{\infty }|x_{T}(t)|^{2}+x_{T}^{*}(t)y_{T}(t)+y_{T}^{*}(t)x_{T}(t)+|y_{T}(t)|^{2}dt\\\end{aligned}}}
Using the same notation and methods as used for the power spectral density derivation, we exploit Parseval's theorem and obtainSxy(f)=limT→∞1T[x^T∗(f)y^T(f)]Syx(f)=limT→∞1T[y^T∗(f)x^T(f)]{\displaystyle {\begin{aligned}S_{xy}(f)&=\lim _{T\to \infty }{\frac {1}{T}}\left[{\hat {x}}_{T}^{*}(f){\hat {y}}_{T}(f)\right]&S_{yx}(f)&=\lim _{T\to \infty }{\frac {1}{T}}\left[{\hat {y}}_{T}^{*}(f){\hat {x}}_{T}(f)\right]\end{aligned}}}where, again, the contributions ofSxx(f){\displaystyle S_{xx}(f)}andSyy(f){\displaystyle S_{yy}(f)}are already understood. Note thatSxy∗(f)=Syx(f){\displaystyle S_{xy}^{*}(f)=S_{yx}(f)}, so the full contribution to the cross power is, generally, from twice the real part of either individualCPSD. Just as before, from here we recast these products as the Fourier transform of a time convolution, which when divided by the period and taken to the limitT→∞{\displaystyle T\to \infty }becomes the Fourier transform of across-correlationfunction.[16]Sxy(f)=∫−∞∞[limT→∞1T∫−∞∞xT∗(t−τ)yT(t)dt]e−i2πfτdτ=∫−∞∞Rxy(τ)e−i2πfτdτSyx(f)=∫−∞∞[limT→∞1T∫−∞∞yT∗(t−τ)xT(t)dt]e−i2πfτdτ=∫−∞∞Ryx(τ)e−i2πfτdτ,{\displaystyle {\begin{aligned}S_{xy}(f)&=\int _{-\infty }^{\infty }\left[\lim _{T\to \infty }{\frac {1}{T}}\int _{-\infty }^{\infty }x_{T}^{*}(t-\tau )y_{T}(t)dt\right]e^{-i2\pi f\tau }d\tau =\int _{-\infty }^{\infty }R_{xy}(\tau )e^{-i2\pi f\tau }d\tau \\S_{yx}(f)&=\int _{-\infty }^{\infty }\left[\lim _{T\to \infty }{\frac {1}{T}}\int _{-\infty }^{\infty }y_{T}^{*}(t-\tau )x_{T}(t)dt\right]e^{-i2\pi f\tau }d\tau =\int _{-\infty }^{\infty }R_{yx}(\tau )e^{-i2\pi f\tau }d\tau ,\end{aligned}}}whereRxy(τ){\displaystyle R_{xy}(\tau )}is thecross-correlationofx(t){\displaystyle x(t)}withy(t){\displaystyle y(t)}andRyx(τ){\displaystyle R_{yx}(\tau )}is the cross-correlation ofy(t){\displaystyle y(t)}withx(t){\displaystyle x(t)}. In light of this, the PSD is seen to be a special case of the CSD forx(t)=y(t){\displaystyle x(t)=y(t)}. Ifx(t){\displaystyle x(t)}andy(t){\displaystyle y(t)}are real signals (e.g. voltage or current), their Fourier transformsx^(f){\displaystyle {\hat {x}}(f)}andy^(f){\displaystyle {\hat {y}}(f)}are usually restricted to positive frequencies by convention. Therefore, in typical signal processing, the fullCPSDis just one of theCPSDs scaled by a factor of two.CPSDFull=2Sxy(f)=2Syx(f){\displaystyle \operatorname {CPSD} _{\text{Full}}=2S_{xy}(f)=2S_{yx}(f)}
For discrete signalsxnandyn, the relationship between the cross-spectral density and the cross-covariance isSxy(f)=∑n=−∞∞Rxy(τn)e−i2πfτnΔτ{\displaystyle S_{xy}(f)=\sum _{n=-\infty }^{\infty }R_{xy}(\tau _{n})e^{-i2\pi f\tau _{n}}\,\Delta \tau }
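A brief sketch (with an assumed pair of noisy signals sharing a 50 Hz component) of estimating the cross-spectral density and checking the symmetry Sxy(f) = Syx*(f) discussed above; scipy's Welch-averaged CSD estimator is used for illustration.

```python
# Cross-spectral density estimate and its conjugate symmetry.
import numpy as np
from scipy.signal import csd

rng = np.random.default_rng(1)
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
common = np.sin(2 * np.pi * 50 * t)
x = common + 0.5 * rng.standard_normal(t.size)
y = np.roll(common, 10) + 0.5 * rng.standard_normal(t.size)   # delayed copy plus noise

f, Sxy = csd(x, y, fs=fs, nperseg=1024)
f, Syx = csd(y, x, fs=fs, nperseg=1024)

print(np.allclose(Sxy, np.conj(Syx)))          # True: S_xy = S_yx*
print(f[np.argmax(np.abs(Sxy))])               # peak near the shared 50 Hz component
```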
The goal of spectral density estimation is toestimatethe spectral density of arandom signalfrom a sequence of time samples. Depending on what is known about the signal, estimation techniques can involveparametricornon-parametricapproaches, and may be based on time-domain or frequency-domain analysis. For example, a common parametric technique involves fitting the observations to anautoregressive model. A common non-parametric technique is theperiodogram.
The spectral density is usually estimated usingFourier transformmethods (such as theWelch method), but other techniques such as themaximum entropymethod can also be used.
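As a rough illustration of why averaging matters in spectral density estimation, the following compares a raw periodogram with Welch's averaged estimate for white noise; the sample size and segment length are arbitrary assumptions.

```python
# Raw periodogram vs. Welch's averaged PSD estimate for white noise.
import numpy as np
from scipy.signal import periodogram, welch

rng = np.random.default_rng(2)
fs = 1000.0
x = rng.standard_normal(100_000)               # white noise with a flat true PSD

f1, P_raw = periodogram(x, fs=fs)
f2, P_welch = welch(x, fs=fs, nperseg=1024)

print(P_raw.std(), P_welch.std())              # Welch's estimate fluctuates far less
print(P_raw.mean(), P_welch.mean())            # both scatter around the same flat level
```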
Any signal that can be represented as a variable that varies in time has a corresponding frequency spectrum. This includes familiar entities such asvisible light(perceived ascolor), musical notes (perceived aspitch),radio/TV(specified by their frequency, or sometimeswavelength) and even the regular rotation of the earth. When these signals are viewed in the form of a frequency spectrum, certain aspects of the received signals or the underlying processes producing them are revealed. In some cases the frequency spectrum may include a distinct peak corresponding to asine wavecomponent. And additionally there may be peaks corresponding toharmonicsof a fundamental peak, indicating a periodic signal which isnotsimply sinusoidal. Or a continuous spectrum may show narrow frequency intervals which are strongly enhanced corresponding to resonances, or frequency intervals containing almost zero power as would be produced by anotch filter.
The concept and use of the power spectrum of a signal is fundamental inelectrical engineering, especially inelectronic communication systems, includingradio communications,radars, and related systems, plus passiveremote sensingtechnology. Electronic instruments calledspectrum analyzersare used to observe and measure thepower spectraof signals.
The spectrum analyzer measures the magnitude of theshort-time Fourier transform(STFT) of an input signal. If the signal being analyzed can be considered a stationary process, the STFT is a good smoothed estimate of its power spectral density.
Primordial fluctuations, density variations in the early universe, are quantified by a power spectrum which gives the power of the variations as a function of spatial scale.
https://en.wikipedia.org/wiki/Spectral_density
Spectral musicuses theacousticproperties of sound – orsound spectra– as a basis forcomposition.[1]
Defined in technical language, spectral music is an acoustic musical practice wherecompositionaldecisions are often informed bysonographicrepresentations andmathematicalanalysis of sound spectra, or by mathematically generated spectra. The spectral approach focuses on manipulating the spectral features, interconnecting them, and transforming them. In this formulation, computer-based sound analysis and representations of audio signals are treated as being analogous to atimbralrepresentation of sound.
The (acoustic-composition) spectral approach originated in France in the early 1970s, and techniques were developed, and later refined, primarily atIRCAM, Paris, with theEnsemble l'Itinéraire, by composers such asGérard GriseyandTristan Murail.Hugues Dufourtis commonly credited for introducing the termmusique spectrale(spectral music) in an article published in 1979.[1][2]Murail has described spectral music as anaestheticrather than a style, not so much a set of techniques as an attitude; asJoshua Finebergputs it, a recognition that "music is ultimately sound evolving in time".[3]Julian Andersonindicates that a number of major composers associated with spectralism consider the term inappropriate, misleading, and reductive.[4]The Istanbul Spectral Music Conference of 2003 suggested a redefinition of the term "spectral music" to encompass any music that foregrounds timbre as an important element of structure or language.[5]
While spectralism as a historical movement is generally considered to have begun in France and Germany in the 1970s, precursors to the philosophy and techniques of spectralism, as prizing the nature and properties of sound above all else as an organizing principle for music, go back at least to the early twentieth century. Proto-spectral composers includeClaude Debussy,Edgard Varèse,Giacinto Scelsi,Olivier Messiaen,György Ligeti,Iannis Xenakis,La Monte Young, andKarlheinz Stockhausen.[6][7][8]Other composers who anticipated spectralist ideas in their theoretical writings includeHarry Partch,Henry Cowell, andPaul Hindemith.[9]Also crucial to the origins of spectralism was the development of techniques of sound analysis and synthesis incomputer musicand acoustics during this period, especially focused around IRCAM in France and Darmstadt in Germany.[10]
Julian Anderson considers Danish composerPer Nørgård'sVoyage into the Golden Screenfor chamber orchestra (1968) to be the first "properly instrumental piece of spectral composition".[11]Spectralism as a recognizable and unified movement, however, arose during the early 1970s, in part as a reaction against and alternative to the primarily pitch focused aesthetics of theserialismand post-serialism which was ascendant at the time.[a]Early spectral composers were centered in the cities of Paris and Cologne and associated with the composers of theEnsemble l'Itinéraireand the Feedback group, respectively. In Paris,Gérard GriseyandTristan Murailwere the most prominent pioneers of spectral techniques; Grisey'sEspaces Acoustiquesand Murail'sGondwanawere two influential works of this period. Their early work emphasized the use of the overtone series, techniques ofspectral analysisand ring and frequency modulation, and slowly unfolding processes to create music which gave a new attention to timbre and texture.[12]
The German Feedback group, includingJohannes Fritsch,Mesías Maiguashca,Péter Eötvös,Claude Vivier, andClarence Barlow, was primarily associated with students and disciples of Karlheinz Stockhausen, and began to pioneer spectral techniques around the same time. Their work generally placed more emphasis on linear and melodic writing within a spectral context as compared to that of their French contemporaries, though with significant variations.[13]Another important group of early spectral composers was centered in Romania, where a unique form of spectralism arose, in part inspired by Romanian folk music.[14]This folk tradition, as collected byBéla Bartók(1904–1918), with its acoustic scales derived directly from resonance and natural wind instruments of thealphornfamily, like thebuciumeandtulnice, as well as thecimpoibagpipe, inspired several spectral composers, includingCorneliu Cezar,Anatol Vieru,Aurel Stroe,Ștefan Niculescu,Horațiu Rădulescu,Iancu Dumitrescu, andOctavian Nemescu.[15]
Towards the end of the twentieth century, techniques associated with spectralist composers began to be adopted more widely and the original pioneers of spectralism began to integrate their techniques more fully with those of other traditions. For example, in their works from the later 1980s and into the 1990s, both Grisey and Murail began to shift their emphasis away from the more gradual and regular processes which characterized their early work to include more sudden dramatic contrasts as well as more linear and contrapuntal writing.[16]Likewise, spectral techniques were adopted by composers from a wider variety of traditions and countries, including the UK (with composers like Julian Anderson and Jonathan Harvey), Finland (composers like Magnus Lindberg and Kaija Saariaho), and the United States.[17]A further development is the emergence of "hyper-spectralism"[clarification needed] in the works of Iancu Dumitrescu and Ana-Maria Avram.[18][19]
The spectral adventure has allowed the renovation, without imitation of the foundations of occidental music, because it is not a closed technique but an attitude.—Gérard Grisey[20]
The "panoply of methods and techniques" used are secondary, being only "the means of achieving a sonic end".[3]
Spectral music focuses on the phenomenon andacousticsof sound as well as its potential semantic qualities. Pitch material and intervallic content are often derived from theharmonic series, including the use ofmicrotones. Spectrographic analysis of acoustic sources is used as inspiration fororchestration. The reconstruction of electroacoustic source materials by using acoustic instruments is another common approach to spectral orchestration. In "additive instrumental synthesis", instruments are assigned to play discrete components of a sound, such as an individualpartial.Amplitude modulation,frequency modulation,difference tones, harmonic fusion, residue pitch,Shepard-tonephenomena, and other psychoacoustic concepts are applied to music materials.[21]
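As a purely illustrative sketch of how pitch material can be derived from the harmonic series (the 55 Hz fundamental and the comparison with 12-tone equal temperament are assumptions, not a description of any particular composer's method):

```python
# Partials of an assumed fundamental and their microtonal deviation from equal temperament.
import numpy as np

f0 = 55.0                                        # assumed fundamental (A1)
partials = f0 * np.arange(1, 13)                 # first 12 harmonics

midi = 69 + 12 * np.log2(partials / 440.0)       # fractional MIDI note numbers (A4 = 440 Hz)
nearest = np.round(midi)
cents_off = 100 * (midi - nearest)               # deviation in cents from the tempered scale

for k, (f, c) in enumerate(zip(partials, cents_off), start=1):
    print(f"partial {k:2d}: {f:7.2f} Hz, {c:+6.1f} cents from the nearest tempered pitch")
```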
Formal concepts important in spectral music includeprocessand the stretching of time.[further explanation needed]Though development is "significantly different from those ofminimalist music" in that all musical parameters may be affected, it similarly draws attention to very subtle aspects of the music. These processes most often achieve a smooth transition throughinterpolation.[22]Any or all of these techniques may be operating in a particular work, though this list is not exhaustive.
TheRomanianspectral tradition focuses more on the study of how sound itself behaves in a "live" environment. Sound work is not restricted to harmonic spectra but includes transitory aspects oftimbreand non-harmonicmusical components(e.g.,rhythm,tempo,dynamics). Furthermore, sound is treatedphenomenologicallyas a dynamic presence to be encountered in listening (rather than as an object of scientific study). This approach results in a transformational musical language in which continuous change of the material displaces the central role accorded to structure in spectralism of the "French school".[23]
Spectral music was initially associated with composers of the FrenchEnsemble l'Itinéraire, includingHugues Dufourt,Gérard Grisey,Tristan Murail, andMichaël Lévinas. For these composers, musical sound (or natural sound) is taken as a model for composition, leading to an interest in the exploration of the interior of sounds.[24]Giacinto Scelsiwas an important influence on Grisey, Murail, and Lévinas; his approach with exploring a single sound in his works and a "smooth" conception of time (such as in hisQuattro pezzi su una nota sola) greatly influenced these composers to include new instrumental techniques and variations of timbre in their works.[25]
Other spectral music composers include those from the German Feedback group, principallyJohannes Fritsch,Mesías Maiguashca,Péter Eötvös,Claude Vivier, andClarence Barlow. Features of spectralism are also seen independently in the contemporary work of Romanian composersCorneliu Cezar,Ștefan Niculescu,Horațiu Rădulescu, andIancu Dumitrescu.[1]
Independent of spectral music developments in Europe, American composerJames Tenney's output included more than fifty significant works that feature spectralist traits.[26]His influences came from encounters with a scientific culture which pervaded during the postwar era, and a "quasi-empiricist musical aesthetic" fromJohn Cage.[27]His works, although having similarities with European spectral music, are distinctive in some ways, for example in his interest in "post-Cageian indeterminacy".
The spectralist movement inspired more recent composers such asJulian Anderson,Ana-Maria Avram,Joshua Fineberg,Georg Friedrich Haas,Jonathan Harvey,Fabien Lévy,Magnus Lindberg, andKaija Saariaho.
Some of the "post-spectralist" French composers includeÉric Tanguy[fr],Philippe Hurel,François Paris,Philippe Leroux, andThierry Blondeau.[28]
In the United States, composers such asAlvin Lucier,La Monte Young,Terry Riley,Maryanne Amacher,Phill Niblock, andGlenn Brancarelate some of the influences of spectral music into their own work. Tenney's work has also influenced a number of composers such asLarry PolanskyandJohn Luther Adams.[29]
In the US, jazz saxophonist and composerSteve Lehman, and in Europe, French composerFrédéric Maurin[fr;de], have both introduced spectral techniques into the domain of jazz.[30][31]
Characteristic spectral pieces include:
Other pieces that utilise spectral ideas or techniques include:[11][27][32]
Post-spectral pieces include:[33][34]
StriaandMortuos Plango, Vivos Vocoare examples ofelectronic musicthat embrace spectral techniques.[35][36]
https://en.wikipedia.org/wiki/Spectral_music
Inmathematics, more specifically inharmonic analysis,Walsh functionsform acomplete orthogonal setoffunctionsthat can be used to represent any discrete function—just liketrigonometric functionscan be used to represent anycontinuous functioninFourier analysis.[1]They can thus be viewed as a discrete, digital counterpart of the continuous, analog system of trigonometric functions on theunit interval. But unlike thesine and cosinefunctions, which are continuous, Walsh functions are piecewiseconstant. They take the values −1 and +1 only, on sub-intervals defined bydyadic fractions.
The system of Walsh functions is known as theWalsh system. It is an extension of theRademacher systemof orthogonal functions.[2]
Walsh functions, the Walsh system, the Walsh series,[3]and thefast Walsh–Hadamard transformare all named after the American mathematicianJoseph L. Walsh. They find various applications inphysicsandengineeringwhenanalyzing digital signals.
Historically, variousnumerationsof Walsh functions have been used; none of them is particularly superior to another. This article uses theWalsh–Paley numeration.
We define the sequence of Walsh functionsWk:[0,1]→{−1,1}{\displaystyle W_{k}:[0,1]\rightarrow \{-1,1\}},k∈N{\displaystyle k\in \mathbb {N} }as follows.
For anynatural numberk, andreal numberx∈[0,1]{\displaystyle x\in [0,1]}, letkj{\displaystyle k_{j}}be thejth bit in the binary representation ofk, starting withk0{\displaystyle k_{0}}as the least significant bit, and letxj{\displaystyle x_{j}}be thejth bit in the fractional binary representation ofx, starting withx1{\displaystyle x_{1}}as the leading fractional bit.
Then, by definition
{\displaystyle W_{k}(x)=(-1)^{\sum _{j=0}^{\infty }k_{j}x_{j+1}}.}
In particular,W0(x)=1{\displaystyle W_{0}(x)=1}everywhere on the interval, since all bits ofkare zero.
Notice thatW2m{\displaystyle W_{2^{m}}}is precisely theRademacher functionrm.
Thus, the Rademacher system is a subsystem of the Walsh system. Moreover, every Walsh function is a product of Rademacher functions:{\displaystyle W_{k}(x)=\prod _{j=0}^{\infty }r_{j}(x)^{k_{j}}.}
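A hedged computational sketch of this construction (the dyadic sample grid and the small range of indices are arbitrary choices): each W_k is built as the product of the Rademacher functions selected by the binary digits of k, and orthonormality is checked numerically.

```python
# Walsh functions (Walsh-Paley order) as products of Rademacher functions.
import numpy as np

def rademacher(m, x):
    """r_m(x): +1 or -1 on consecutive dyadic intervals of length 2^-(m+1)."""
    return np.where(np.floor(x * 2 ** (m + 1)) % 2 == 0, 1, -1)

def walsh(k, x):
    """W_k(x) as the product of the Rademacher functions picked by the bits of k."""
    w = np.ones_like(x, dtype=int)
    m = 0
    while k >> m:
        if (k >> m) & 1:                 # m-th binary digit of k
            w = w * rademacher(m, x)
        m += 1
    return w

# sample on midpoints of 64 dyadic intervals and check orthonormality of W_0..W_7
x = (np.arange(64) + 0.5) / 64
W = np.array([walsh(k, x) for k in range(8)])
gram = W @ W.T / 64                      # approximates the inner products on [0, 1]
print(np.allclose(gram, np.eye(8)))      # True: the Walsh functions are orthonormal
```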
Walsh functions and trigonometric functions are both systems that form a complete,orthonormalset of functions, anorthonormal basisin theHilbert spaceL2[0,1]{\displaystyle L^{2}[0,1]}of thesquare-integrable functionson the unit interval. Both are systems ofbounded functions, unlike, say, theHaar systemor the Franklin system.
Both trigonometric and Walsh systems admit natural extension by periodicity from the unit interval to thereal line. Furthermore, bothFourier analysison the unit interval (Fourier series) and on the real line (Fourier transform) have their digital counterparts defined via Walsh system, the Walsh series analogous to the Fourier series, and theHadamard transformanalogous to the Fourier transform.
The Walsh system{Wk},k∈N0{\displaystyle \{W_{k}\},k\in \mathbb {N} _{0}}is anabelianmultiplicativediscrete groupisomorphicto∐n=0∞Z/2Z{\displaystyle \coprod _{n=0}^{\infty }\mathbb {Z} /2\mathbb {Z} }, thePontryagin dualof theCantor group∏n=0∞Z/2Z{\displaystyle \prod _{n=0}^{\infty }\mathbb {Z} /2\mathbb {Z} }. ItsidentityisW0{\displaystyle W_{0}}, and every element is ofordertwo (that is, self-inverse).
The Walsh system is an orthonormal basis of the Hilbert spaceL2[0,1]{\displaystyle L^{2}[0,1]}. Orthonormality means{\displaystyle \int _{0}^{1}W_{k}(x)W_{l}(x)\,dx=\delta _{kl},}
and being a basis means that if, for everyf∈L2[0,1]{\displaystyle f\in L^{2}[0,1]}, we setfk=∫01f(x)Wk(x)dx{\displaystyle f_{k}=\int _{0}^{1}f(x)W_{k}(x)dx}then{\displaystyle \sum _{k=0}^{\infty }f_{k}W_{k}=f}in the norm ofL2[0,1]{\displaystyle L^{2}[0,1]}.
It turns out that for everyf∈L2[0,1]{\displaystyle f\in L^{2}[0,1]}, theseries∑k=0∞fkWk(x){\displaystyle \sum _{k=0}^{\infty }f_{k}W_{k}(x)}convergestof(x){\displaystyle f(x)}for almost everyx∈[0,1]{\displaystyle x\in [0,1]}.
The Walsh system (in Walsh-Paley numeration) forms aSchauder basisinLp[0,1]{\displaystyle L^{p}[0,1]},1<p<∞{\displaystyle 1<p<\infty }. Note that, unlike theHaar system, and like the trigonometric system, this basis is notunconditional, nor is the system a Schauder basis inL1[0,1]{\displaystyle L^{1}[0,1]}.
LetD=∏n=1∞Z/2Z{\displaystyle \mathbb {D} =\prod _{n=1}^{\infty }\mathbb {Z} /2\mathbb {Z} }be thecompactCantor groupendowed withHaar measureand letD^=∐n=1∞Z/2Z{\displaystyle {\hat {\mathbb {D} }}=\coprod _{n=1}^{\infty }\mathbb {Z} /2\mathbb {Z} }be its discrete group ofcharacters. Elements ofD^{\displaystyle {\hat {\mathbb {D} }}}are readily identified with Walsh functions. Of course, the characters are defined onD{\displaystyle \mathbb {D} }while Walsh functions are defined on the unit interval, but since there exists amodulo zero isomorphismbetween thesemeasure spaces, measurable functions on them are identified viaisometry.
Then basicrepresentation theorysuggests the following broad generalization of the concept ofWalsh system.
For an arbitraryBanach space(X,||⋅||){\displaystyle (X,||\cdot ||)}let{Rt}t∈D⊂AutX{\displaystyle \{R_{t}\}_{t\in \mathbb {D} }\subset \operatorname {Aut} X}be astrongly continuous, uniformly boundedfaithfulactionofD{\displaystyle \mathbb {D} }onX. For everyγ∈D^{\displaystyle \gamma \in {\hat {\mathbb {D} }}}, consider itseigenspaceXγ={x∈X:Rtx=γ(t)x}{\displaystyle X_{\gamma }=\{x\in X:R_{t}x=\gamma (t)x\}}. ThenXis the closed linear span of the eigenspaces:X=Span¯(Xγ,γ∈D^){\displaystyle X={\overline {\operatorname {Span} }}(X_{\gamma },\gamma \in {\hat {\mathbb {D} }})}. Assume that every eigenspace is one-dimensionaland pick an elementwγ∈Xγ{\displaystyle w_{\gamma }\in X_{\gamma }}such that‖wγ‖=1{\displaystyle \|w_{\gamma }\|=1}. Then the system{wγ}γ∈D^{\displaystyle \{w_{\gamma }\}_{\gamma \in {\hat {\mathbb {D} }}}}, or the same system in the Walsh-Paley numeration of the characters{wk}k∈N0{\displaystyle \{w_{k}\}_{k\in {\mathbb {N} }_{0}}}is called generalized Walsh system associated with action{Rt}t∈D{\displaystyle \{R_{t}\}_{t\in \mathbb {D} }}. Classical Walsh system becomes a special case, namely, for
where⊕{\displaystyle \oplus }is additionmodulo2.
In the early 1990s, Serge Ferleger and Fyodor Sukochev showed that in a broad class of Banach spaces (so calledUMDspaces[4]) generalized Walsh systems have many properties similar to the classical one: they form a Schauder basis[5]and a uniform finite-dimensional decomposition[6]in the space, have property of random unconditional convergence.[7]One important example of generalized Walsh system is Fermion Walsh system in non-commutativeLpspaces associated withhyperfinite type II factor.
TheFermion Walsh systemis a non-commutative, or "quantum" analog of the classical Walsh system. Unlike the latter, it consists of operators, not functions. Nevertheless, both systems share many important properties, e.g., both form an orthonormal basis in corresponding Hilbert space, orSchauder basisin corresponding symmetric spaces. Elements of the Fermion Walsh system are calledWalsh operators.
The termFermionin the name of the system is explained by the fact that the enveloping operator space, the so-calledhyperfinite type II factorR{\displaystyle {\mathcal {R}}}, may be viewed as the space ofobservablesof the system of countably infinite number of distinctspin1/2{\displaystyle 1/2}fermions. EachRademacheroperator acts on one particular fermion coordinate only, and there it is aPauli matrix. It may be identified with the observable measuring spin component of that fermion along one of the axes{x,y,z}{\displaystyle \{x,y,z\}}in spin space. Thus, a Walsh operator measures the spin of a subset of fermions, each along its own axis.
Fix a sequenceα=(α1,α2,...){\displaystyle \alpha =(\alpha _{1},\alpha _{2},...)}ofintegerswithαk≥2,k=1,2,…{\displaystyle \alpha _{k}\geq 2,k=1,2,\dots }and letG=Gα=∏n=1∞Z/αkZ{\displaystyle \mathbb {G} =\mathbb {G} _{\alpha }=\prod _{n=1}^{\infty }\mathbb {Z} /\alpha _{k}\mathbb {Z} }endowed with theproduct topologyand the normalized Haar measure. DefineA0=1{\displaystyle A_{0}=1}andAk=α1α2…αk−1{\displaystyle A_{k}=\alpha _{1}\alpha _{2}\dots \alpha _{k-1}}. Eachx∈G{\displaystyle x\in \mathbb {G} }can be associated with the real number
This correspondence is a modulo zero isomorphism betweenG{\displaystyle \mathbb {G} }and the unit interval. It also defines a norm which generates thetopologyofG{\displaystyle \mathbb {G} }. Fork=1,2,…{\displaystyle k=1,2,\dots }, letρk:G→C{\displaystyle \rho _{k}:\mathbb {G} \to \mathbb {C} }where
The set{ρk}{\displaystyle \{\rho _{k}\}}is calledgeneralized Rademacher system. The Vilenkin system is thegroupG^=∐n=1∞Z/αkZ{\displaystyle {\hat {\mathbb {G} }}=\coprod _{n=1}^{\infty }\mathbb {Z} /\alpha _{k}\mathbb {Z} }of (complex-valued) characters ofG{\displaystyle \mathbb {G} }, which are all finite products of{ρk}{\displaystyle \{\rho _{k}\}}. For each non-negative integern{\displaystyle n}there is a unique sequencen0,n1,…{\displaystyle n_{0},n_{1},\dots }such that0≤nk<αk+1,k=0,1,2,…{\displaystyle 0\leq n_{k}<\alpha _{k+1},k=0,1,2,\dots }and
ThenG^=χn|n=0,1,…{\displaystyle {\hat {\mathbb {G} }}={\chi _{n}|n=0,1,\dots }}where
In particular, ifαk=2,k=1,2...{\displaystyle \alpha _{k}=2,k=1,2...}, thenG{\displaystyle \mathbb {G} }is the Cantor group andG^={χn|n=0,1,…}{\displaystyle {\hat {\mathbb {G} }}=\left\{\chi _{n}|n=0,1,\dots \right\}}is the (real-valued) Walsh-Paley system.
The Vilenkin system is a complete orthonormal system onG{\displaystyle \mathbb {G} }and forms aSchauder basisinLp(G,C){\displaystyle L^{p}(\mathbb {G} ,\mathbb {C} )},1<p<∞{\displaystyle 1<p<\infty }.[8]
Nonlinear phase extensions of discrete Walsh-Hadamard transformwere developed. It was shown that the nonlinear phase basis functions with improved cross-correlation properties significantly outperform the traditional Walsh codes in code division multiple access (CDMA) communications.[9]
Applications of the Walsh functions can be found wherever digit representations are used, includingspeech recognition, medical and biologicalimage processing, anddigital holography.
For example, thefast Walsh–Hadamard transform(FWHT) may be used in the analysis of digitalquasi-Monte Carlo methods. Inradio astronomy, Walsh functions can help reduce the effects of electricalcrosstalkbetween antenna signals. They are also used in passiveLCDpanels as X and Y binary driving waveforms where the autocorrelation between X and Y can be made minimal forpixelsthat are off.
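As an illustrative sketch, a textbook-style fast Walsh–Hadamard transform in natural (Hadamard) ordering can be written in a few lines; the normalization convention and the test vector are arbitrary choices.

```python
# Fast Walsh-Hadamard transform of a length-2^n array (natural ordering).
import numpy as np

def fwht(a):
    """Return the Walsh-Hadamard transform of a copy of the input array."""
    a = np.asarray(a, dtype=float).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

x = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0])
y = fwht(x)
print(y)
print(fwht(y) / len(x))        # applying the transform twice and dividing by N recovers x
```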
https://en.wikipedia.org/wiki/Walsh_function
Innumerical analysis, thecondition numberof afunctionmeasures how much the output value of the function can change for a small change in the input argument. This is used to measure howsensitivea function is to changes or errors in the input, and how much error in the output results from an error in the input. Very frequently, one is solving the inverse problem: givenf(x)=y,{\displaystyle f(x)=y,}one is solving forx,and thus the condition number of the (local) inverse must be used.[1][2]
The condition number is derived from the theory ofpropagation of uncertainty, and is formally defined as the value of theasymptoticworst-case relative change in output for a relative change in input. The "function" is the solution of a problem and the "arguments" are the data in the problem. The condition number is frequently applied to questions inlinear algebra, in which case the derivative is straightforward but the error could be in many different directions, and is thus computed from the geometry of the matrix. More generally, condition numbers can be defined for non-linear functions in several variables.
A problem with a low condition number is said to bewell-conditioned, while a problem with a high condition number is said to beill-conditioned. In non-mathematical terms, an ill-conditioned problem is one where, for a small change in the inputs (theindependent variables) there is a large change in the answer ordependent variable. This means that the correct solution/answer to the equation becomes hard to find. The condition number is a property of the problem. Paired with the problem are any number of algorithms that can be used to solve the problem, that is, to calculate the solution. Some algorithms have a property calledbackward stability; in general, a backward stable algorithm can be expected to accurately solve well-conditioned problems. Numerical analysis textbooks give formulas for the condition numbers of problems and identify known backward stable algorithms.
As a rule of thumb, if the condition numberκ(A)=10k{\displaystyle \kappa (A)=10^{k}}, then you may lose up tok{\displaystyle k}digits of accuracy on top of what would be lost to the numerical method due to loss of precision from arithmetic methods.[3]However, the condition number does not give the exact value of the maximum inaccuracy that may occur in the algorithm. It generally just bounds it with an estimate (whose computed value depends on the choice of the norm to measure the inaccuracy).
For example, the condition number associated with thelinear equationAx=bgives a bound on how inaccurate the solutionxwill be after approximation. Note that this is before the effects ofround-off errorare taken into account; conditioning is a property of thematrix, not thealgorithmorfloating-pointaccuracy of the computer used to solve the corresponding system. In particular, one should think of the condition number as being (very roughly) the rate at which the solutionxwill change with respect to a change inb. Thus, if the condition number is large, even a small error inbmay cause a large error inx. On the other hand, if the condition number is small, then the error inxwill not be much bigger than the error inb.
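A hedged numerical illustration of this behaviour, using the classically ill-conditioned Hilbert matrix (the size and perturbation level are arbitrary): a tiny relative perturbation of b produces a vastly larger relative change in the computed solution, reflecting the size of kappa(A).

```python
# Error amplification in solving Ax = b for an ill-conditioned matrix.
import numpy as np

n = 10
i = np.arange(1, n + 1)
A = 1.0 / (i[:, None] + i[None, :] - 1.0)        # Hilbert matrix, kappa(A) ~ 1e13
x_true = np.ones(n)
b = A @ x_true

x = np.linalg.solve(A, b)
b_pert = b * (1 + 1e-10 * np.random.default_rng(0).standard_normal(n))
x_pert = np.linalg.solve(A, b_pert)

print(np.linalg.cond(A))                                   # ~1e13
print(np.linalg.norm(b_pert - b) / np.linalg.norm(b))      # ~1e-10 relative change in b
print(np.linalg.norm(x_pert - x) / np.linalg.norm(x))      # far larger relative change in x
```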
The condition number is defined more precisely to be the maximum ratio of therelative errorinxto the relative error inb.
Letebe the error inb. Assuming thatAis anonsingularmatrix, the error in the solutionA−1bisA−1e. The ratio of the relative error in the solution to the relative error inbis
The maximum value (for nonzerobande) is then seen to be the product of the twooperator normsas follows:{\displaystyle \kappa (A)=\left\|A^{-1}\right\|\,\left\|A\right\|.}
The same definition is used for any consistentnorm, i.e. one that satisfies{\displaystyle \kappa (A)=\left\|A^{-1}\right\|\,\left\|A\right\|\geq \left\|A^{-1}A\right\|=1.}
When the condition number is exactly one (which can only happen ifAis a scalar multiple of alinear isometry), then a solution algorithm can find (in principle, meaning if the algorithm introduces no errors of its own) an approximation of the solution whose precision is no worse than that of the data.
However, it does not mean that the algorithm will converge rapidly to this solution, just that it will not diverge arbitrarily because of inaccuracy on the source data (backward error), provided that the forward error introduced by the algorithm does not diverge as well because of accumulating intermediate rounding errors.[clarification needed]
The condition number may also be infinite, but this implies that the problem isill-posed(does not possess a unique, well-defined solution for each choice of data; that is, the matrix is notinvertible), and no algorithm can be expected to reliably find a solution.
The definition of the condition number depends on the choice ofnorm, as can be illustrated by two examples.
If‖⋅‖{\displaystyle \|\cdot \|}is thematrix norm induced by the (vector) Euclidean norm(sometimes known as theL2norm and typically denoted as‖⋅‖2{\displaystyle \|\cdot \|_{2}}), then
{\displaystyle \kappa (A)={\frac {\sigma _{\text{max}}(A)}{\sigma _{\text{min}}(A)}},}
whereσmax(A){\displaystyle \sigma _{\text{max}}(A)}andσmin(A){\displaystyle \sigma _{\text{min}}(A)}are maximal and minimalsingular valuesofA{\displaystyle A}respectively, since‖A‖2=σmax(A){\displaystyle \|A\|_{2}=\sigma _{\text{max}}(A)}and‖A−1‖2=1/σmin(A){\displaystyle \|A^{-1}\|_{2}=1/\sigma _{\text{min}}(A)}. Hence, for anormal matrixthe condition number is the ratio of the absolute values of its largest and smallest eigenvalues, and for aunitary matrixit equals 1.
The condition number with respect toL2arises so often innumerical linear algebrathat it is given a name, thecondition number of a matrix.
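As a quick sketch (with an arbitrary random matrix), the identity between the 2-norm condition number and the ratio of extreme singular values can be verified numerically:

```python
# kappa_2(A) = sigma_max / sigma_min, checked against numpy's built-in condition number.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))

s = np.linalg.svd(A, compute_uv=False)          # singular values, in descending order
print(s[0] / s[-1])                             # sigma_max / sigma_min
print(np.linalg.cond(A, 2))                     # numpy's 2-norm condition number: same value
```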
If‖⋅‖{\displaystyle \|\cdot \|}is thematrix norm induced by theL∞{\displaystyle L^{\infty }}(vector) normandA{\displaystyle A}islower triangularnon-singular (i.e.aii≠0{\displaystyle a_{ii}\neq 0}for alli{\displaystyle i}), then{\displaystyle \kappa (A)\geq {\frac {\max _{i}|a_{ii}|}{\min _{i}|a_{ii}|}},}
recalling that the eigenvalues of any triangular matrix are simply the diagonal entries.
The condition number computed with this norm is generally larger than the condition number computed relative to theEuclidean norm, but it can be evaluated more easily (and this is often the only practicably computable condition number, when the problem to solve involves anon-linear algebra[clarification needed], for example when approximating irrational andtranscendentalfunctions or numbers with numerical methods).
If the condition number is not significantly larger than one, the matrix iswell-conditioned, which means that its inverse can be computed with good accuracy. If the condition number is very large, then the matrix is said to beill-conditioned. Practically, such a matrix is almost singular, and the computation of its inverse, or solution of a linear system of equations is prone to large numerical errors.
A matrix that is not invertible is often said to have a condition number equal to infinity. Alternatively, it can be defined asκ(A)=‖A‖‖A†‖{\displaystyle \kappa (A)=\|A\|\|A^{\dagger }\|}, whereA†{\displaystyle A^{\dagger }}is the Moore-Penrosepseudoinverse. For square matrices, this unfortunately makes the condition number discontinuous, but it is a useful definition for rectangular matrices, which are never invertible but are still used to define systems of equations.
Condition numbers can also be defined for nonlinear functions, and can be computed usingcalculus. The condition number varies with the point; in some cases one can use the maximum (orsupremum) condition number over thedomainof the function or domain of the question as an overall condition number, while in other cases the condition number at a particular point is of more interest.
Theabsolutecondition number of adifferentiable functionf{\displaystyle f}in one variable is theabsolute valueof thederivativeof the function:
Therelativecondition number off{\displaystyle f}as a function is|xf′/f|{\displaystyle \left|xf'/f\right|}. Evaluated at a pointx{\displaystyle x}, this is
Note that this is the absolute value of theelasticityof a function in economics.
Most elegantly, this can be understood as (the absolute value of) the ratio of thelogarithmic derivativeoff{\displaystyle f}, which is(logf)′=f′/f{\displaystyle (\log f)'=f'/f}, and the logarithmic derivative ofx{\displaystyle x}, which is(logx)′=x′/x=1/x{\displaystyle (\log x)'=x'/x=1/x}, yielding a ratio ofxf′/f{\displaystyle xf'/f}. This is because the logarithmic derivative is theinfinitesimalrate of relative change in a function: it is the derivativef′{\displaystyle f'}scaled by the value off{\displaystyle f}. Note that if a function has azeroat a point, its condition number at the point is infinite, as infinitesimal changes in the input can change the output from zero to positive or negative, yielding a ratio with zero in the denominator, hence infinite relative change.
More directly, given a small changeΔx{\displaystyle \Delta x}inx{\displaystyle x}, the relative change inx{\displaystyle x}is[(x+Δx)−x]/x=(Δx)/x{\displaystyle [(x+\Delta x)-x]/x=(\Delta x)/x}, while the relative change inf(x){\displaystyle f(x)}is[f(x+Δx)−f(x)]/f(x){\displaystyle [f(x+\Delta x)-f(x)]/f(x)}. Taking the ratio yields
The last term is thedifference quotient(the slope of thesecant line), and taking thelimityields the derivative.
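A small numerical sketch of this limit (the choice f = log and the evaluation point near its zero are arbitrary assumptions): the ratio of relative changes computed by finite differences approaches |x f′(x)/f(x)|.

```python
# Relative condition number of a scalar function, analytically and by finite differences.
import numpy as np

def rel_cond(f, fprime, x):
    return abs(x * fprime(x) / f(x))

x = 1.001                                        # log is ill-conditioned near its zero at x = 1
dx = 1e-8
f = np.log
ratio = ((f(x + dx) - f(x)) / f(x)) / (dx / x)   # (relative change in f) / (relative change in x)

print(rel_cond(f, lambda t: 1 / t, x))           # about 1000: a large condition number
print(ratio)                                     # finite-difference estimate, close to the above
```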
Condition numbers of commonelementary functionsare particularly important in computingsignificant figuresand can be computed immediately from the derivative. A few important ones are given below:
Condition numbers can be defined for any functionf{\displaystyle f}mapping its data from somedomain(e.g. anm{\displaystyle m}-tuple ofreal numbersx{\displaystyle x}) into somecodomain(e.g. ann{\displaystyle n}-tuple of real numbersf(x){\displaystyle f(x)}), where both the domain and codomain areBanach spaces. They express how sensitive that function is to small changes (or small errors) in its arguments. This is crucial in assessing the sensitivity and potential accuracy difficulties of numerous computational problems, for example,polynomial root findingor computingeigenvalues.
The condition number off{\displaystyle f}at a pointx{\displaystyle x}(specifically, itsrelative condition number[4]) is then defined to be the maximum ratio of the fractional change inf(x){\displaystyle f(x)}to any fractional change inx{\displaystyle x}, in the limit where the changeδx{\displaystyle \delta x}inx{\displaystyle x}becomes infinitesimally small:[4]
where‖⋅‖{\displaystyle \|\cdot \|}is anormon the domain/codomain off{\displaystyle f}.
Iff{\displaystyle f}is differentiable, this is equivalent to:[4]{\displaystyle \kappa (f,x)={\frac {\left\|J(x)\right\|\,\left\|x\right\|}{\left\|f(x)\right\|}},}
whereJ(x){\displaystyle J(x)}denotes theJacobian matrixofpartial derivativesoff{\displaystyle f}atx{\displaystyle x}, and‖J(x)‖{\displaystyle \|J(x)\|}is theinduced normon the matrix.
https://en.wikipedia.org/wiki/Condition_number
Inlinear algebraandfunctional analysis, themin-max theorem, orvariational theorem, orCourant–Fischer–Weyl min-max principle, is a result that gives a variational characterization ofeigenvaluesofcompactHermitian operators onHilbert spaces. It can be viewed as the starting point of many results of similar nature.
This article first discusses the finite-dimensional case and its applications before considering compact operators on infinite-dimensional Hilbert spaces.
We will see that for compact operators, the proof of the main theorem uses essentially the same idea from the finite-dimensional argument.
In the case that the operator is non-Hermitian, the theorem provides an equivalent characterization of the associatedsingular values.
The min-max theorem can be extended toself-adjoint operatorsthat are bounded below.
LetAbe an×nHermitian matrix. As with many other variational results on eigenvalues, one considers theRayleigh–Ritz quotientRA:Cn\ {0} →Rdefined by{\displaystyle R_{A}(x)={\frac {(Ax,x)}{(x,x)}},}
where(⋅, ⋅)denotes theEuclidean inner productonCn.
Equivalently, the Rayleigh–Ritz quotient can be replaced by{\displaystyle f(x)=(Ax,x),\quad \|x\|=1.}
The Rayleigh quotient of an eigenvectorv{\displaystyle v}is its associated eigenvalueλ{\displaystyle \lambda }becauseRA(v)=(λv,v)/(v,v)=λ{\displaystyle R_{A}(v)=(\lambda v,v)/(v,v)=\lambda }.
For a Hermitian matrixA, the range of the continuous functionsRA(x) andf(x) is a compact interval [a,b] of the real line. The maximumband the minimumaare the largest and smallest eigenvalue ofA, respectively. The min-max theorem is a refinement of this fact.
LetA{\textstyle A}be Hermitian on an inner product spaceV{\textstyle V}with dimensionn{\textstyle n}, with spectrum ordered in descending orderλ1≥...≥λn{\textstyle \lambda _{1}\geq ...\geq \lambda _{n}}.
Letv1,...,vn{\textstyle v_{1},...,v_{n}}be the corresponding unit-length orthogonal eigenvectors.
Reverse the spectrum ordering, so thatξ1=λn,...,ξn=λ1{\textstyle \xi _{1}=\lambda _{n},...,\xi _{n}=\lambda _{1}}.
(Poincaré’s inequality)—LetM{\textstyle M}be a subspace ofV{\textstyle V}with dimensionk{\textstyle k}. Then there exist unit vectorsx,y∈M{\textstyle x,y\in M}such that
⟨x,Ax⟩≤λk{\textstyle \langle x,Ax\rangle \leq \lambda _{k}}, and⟨y,Ay⟩≥ξk{\textstyle \langle y,Ay\rangle \geq \xi _{k}}.
Part 2 is a corollary, using−A{\textstyle -A}.
SinceM{\textstyle M}is ak{\textstyle k}-dimensional subspace andN:=span(vk,...vn){\textstyle N:=span(v_{k},...v_{n})}has dimensionn−k+1{\textstyle n-k+1}, the two subspaces must intersect in at least a line.
Take a unit vectorx∈M∩N{\textstyle x\in M\cap N}. Expandingx{\textstyle x}in the eigenvectorsvk,…,vn{\textstyle v_{k},\dots ,v_{n}}shows that⟨x,Ax⟩{\textstyle \langle x,Ax\rangle }is a weighted average ofλk,…,λn{\textstyle \lambda _{k},\dots ,\lambda _{n}}, and hence at mostλk{\textstyle \lambda _{k}}, as required.
min-max theorem—λk=maxM⊂Vdim(M)=kminx∈M‖x‖=1⟨x,Ax⟩=minM⊂Vdim(M)=n−k+1maxx∈M‖x‖=1⟨x,Ax⟩.{\displaystyle {\begin{aligned}\lambda _{k}&=\max _{\begin{array}{c}{\mathcal {M}}\subset V\\\operatorname {dim} ({\mathcal {M}})=k\end{array}}\min _{\begin{array}{c}x\in {\mathcal {M}}\\\|x\|=1\end{array}}\langle x,Ax\rangle \\&=\min _{\begin{array}{c}{\mathcal {M}}\subset V\\\operatorname {dim} ({\mathcal {M}})=n-k+1\end{array}}\max _{\begin{array}{c}x\in {\mathcal {M}}\\\|x\|=1\end{array}}\langle x,Ax\rangle {\text{. }}\end{aligned}}}
Part 2 is a corollary of part 1, by using−A{\textstyle -A}.
By Poincare’s inequality,λk{\textstyle \lambda _{k}}is an upper bound to the right side.
By settingM=span(v1,...vk){\textstyle {\mathcal {M}}=span(v_{1},...v_{k})}, the upper bound is achieved.
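As an illustrative numerical check (the random symmetric matrix and the subspace dimension are arbitrary): the minimum Rayleigh quotient over a k-dimensional subspace is the smallest eigenvalue of the compression of A to that subspace, and it attains λ_k exactly on the span of the top k eigenvectors.

```python
# Numerical check of the Courant-Fischer (min-max) characterization.
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((6, 6))
A = (B + B.T) / 2                                  # real symmetric (Hermitian) matrix

eigvals, eigvecs = np.linalg.eigh(A)               # ascending order
lam, V = eigvals[::-1], eigvecs[:, ::-1]           # lambda_1 >= ... >= lambda_6 and eigenvectors

k = 3
def min_rayleigh(M):
    """Minimum of <x, Ax> over unit vectors x in the column span of M."""
    Q, _ = np.linalg.qr(M)                         # orthonormal basis of the subspace
    return np.linalg.eigvalsh(Q.T @ A @ Q).min()   # smallest eigenvalue of the compression

print(lam[k - 1])                                  # lambda_k
print(min_rayleigh(V[:, :k]))                      # equal: the optimal subspace achieves it
print(min_rayleigh(rng.standard_normal((6, k))))   # <= lambda_k for any other k-dim subspace
```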
Define thepartial tracetrV(A){\textstyle tr_{V}(A)}to be the trace of projection ofA{\textstyle A}toV{\textstyle V}. It is equal to∑ivi∗Avi{\textstyle \sum _{i}v_{i}^{*}Av_{i}}given an orthonormal basis ofV{\textstyle V}.
Wielandt minimax formula([1]: 44)—Let1≤i1<⋯<ik≤n{\textstyle 1\leq i_{1}<\cdots <i_{k}\leq n}be integers. Define a partial flag to be a nested collectionV1⊂⋯⊂Vk{\textstyle V_{1}\subset \cdots \subset V_{k}}of subspaces ofCn{\textstyle \mathbb {C} ^{n}}such thatdim(Vj)=ij{\textstyle \operatorname {dim} \left(V_{j}\right)=i_{j}}for all1≤j≤k{\textstyle 1\leq j\leq k}.
Define the associated Schubert varietyX(V1,…,Vk){\textstyle X\left(V_{1},\ldots ,V_{k}\right)}to be the collection of allk{\textstyle k}dimensional subspacesW{\textstyle W}such thatdim(W∩Vj)≥j{\textstyle \operatorname {dim} \left(W\cap V_{j}\right)\geq j}.
λi1(A)+⋯+λik(A)=supV1,…,VkinfW∈X(V1,…,Vk)trW(A){\displaystyle \lambda _{i_{1}}(A)+\cdots +\lambda _{i_{k}}(A)=\sup _{V_{1},\ldots ,V_{k}}\inf _{W\in X\left(V_{1},\ldots ,V_{k}\right)}tr_{W}(A)}
The≤{\textstyle \leq }case.
LetVj=span(e1,…,eij){\textstyle V_{j}=span(e_{1},\dots ,e_{i_{j}})}, and anyW∈X(V1,…,Vk){\textstyle W\in X\left(V_{1},\ldots ,V_{k}\right)}, it remains to show thatλi1(A)+⋯+λik(A)≤trW(A){\displaystyle \lambda _{i_{1}}(A)+\cdots +\lambda _{i_{k}}(A)\leq tr_{W}(A)}
To show this, we construct an orthonormal set of vectorsv1,…,vk{\textstyle v_{1},\dots ,v_{k}}such thatvj∈Vj∩W{\textstyle v_{j}\in V_{j}\cap W}. ThentrW(A)≥∑j⟨vj,Avj⟩≥λij(A){\textstyle tr_{W}(A)\geq \sum _{j}\langle v_{j},Av_{j}\rangle \geq \lambda _{i_{j}}(A)}
Sincedim(V1∩W)≥1{\textstyle dim(V_{1}\cap W)\geq 1}, we pick any unitv1∈V1∩W{\textstyle v_{1}\in V_{1}\cap W}. Next, sincedim(V2∩W)≥2{\textstyle dim(V_{2}\cap W)\geq 2}, we pick any unitv2∈(V2∩W){\textstyle v_{2}\in (V_{2}\cap W)}that is perpendicular tov1{\textstyle v_{1}}, and so on.
The≥{\textstyle \geq }case.
For any such sequence of subspacesVi{\textstyle V_{i}}, we must find someW∈X(V1,…,Vk){\textstyle W\in X\left(V_{1},\ldots ,V_{k}\right)}such thatλi1(A)+⋯+λik(A)≥trW(A){\displaystyle \lambda _{i_{1}}(A)+\cdots +\lambda _{i_{k}}(A)\geq tr_{W}(A)}
Now we prove this by induction.
Then=1{\textstyle n=1}case is the Courant-Fischer theorem. Assume nown≥2{\textstyle n\geq 2}.
Ifi1≥2{\textstyle i_{1}\geq 2}, then we can apply induction. LetE=span(ei1,…,en){\textstyle E=span(e_{i_{1}},\dots ,e_{n})}. We construct a partial flag withinE{\textstyle E}from the intersection ofE{\textstyle E}withV1,…,Vk{\textstyle V_{1},\dots ,V_{k}}.
We begin by picking a(ik−(i1−1)){\textstyle (i_{k}-(i_{1}-1))}-dimensional subspaceWk′⊂E∩Vik{\textstyle W_{k}'\subset E\cap V_{i_{k}}}, which exists by counting dimensions. This has codimension(i1−1){\textstyle (i_{1}-1)}withinVik{\textstyle V_{i_{k}}}.
Then we go down by one space, to pick a(ik−1−(i1−1)){\textstyle (i_{k-1}-(i_{1}-1))}-dimensional subspaceWk−1′⊂Wk∩Vik−1{\textstyle W_{k-1}'\subset W_{k}\cap V_{i_{k-1}}}. This still exists. Etc. Now sincedim(E)≤n−1{\textstyle dim(E)\leq n-1}, apply the induction hypothesis, there exists someW∈X(W1,…,Wk){\textstyle W\in X(W_{1},\dots ,W_{k})}such thatλi1−(i1−1)(A|E)+⋯+λik−(i1−1)(A|E)≥trW(A){\displaystyle \lambda _{i_{1}-(i_{1}-1)}(A|E)+\cdots +\lambda _{i_{k}-(i_{1}-1)}(A|E)\geq tr_{W}(A)}Nowλij−(i1−1)(A|E){\textstyle \lambda _{i_{j}-(i_{1}-1)}(A|E)}is the(ij−(i1−1)){\textstyle (i_{j}-(i_{1}-1))}-th eigenvalue ofA{\textstyle A}orthogonally projected down toE{\textstyle E}. By Cauchy interlacing theorem,λij−(i1−1)(A|E)≤λij(A){\textstyle \lambda _{i_{j}-(i_{1}-1)}(A|E)\leq \lambda _{i_{j}}(A)}. SinceX(W1,…,Wk)⊂X(V1,…,Vk){\textstyle X(W_{1},\dots ,W_{k})\subset X(V_{1},\dots ,V_{k})}, we’re done.
Ifi1=1{\textstyle i_{1}=1}, then we perform a similar construction. LetE=span(e2,…,en){\textstyle E=span(e_{2},\dots ,e_{n})}. IfVk⊂E{\textstyle V_{k}\subset E}, then we can induct. Otherwise, we construct a partial flag sequenceW2,…,Wk{\textstyle W_{2},\dots ,W_{k}}By induction, there exists someW′∈X(W2,…,Wk)⊂X(V2,…,Vk){\textstyle W'\in X(W_{2},\dots ,W_{k})\subset X(V_{2},\dots ,V_{k})}, such thatλi2−1(A|E)+⋯+λik−1(A|E)≥trW′(A){\displaystyle \lambda _{i_{2}-1}(A|E)+\cdots +\lambda _{i_{k}-1}(A|E)\geq tr_{W'}(A)}thusλi2(A)+⋯+λik(A)≥trW′(A){\displaystyle \lambda _{i_{2}}(A)+\cdots +\lambda _{i_{k}}(A)\geq tr_{W'}(A)}And it remains to find somev{\textstyle v}such thatW′⊕v∈X(V1,…,Vk){\textstyle W'\oplus v\in X(V_{1},\dots ,V_{k})}.
IfV1⊄W′{\textstyle V_{1}\not \subset W'}, then anyv∈V1∖W′{\textstyle v\in V_{1}\setminus W'}would work. Otherwise, ifV2⊄W′{\textstyle V_{2}\not \subset W'}, then anyv∈V2∖W′{\textstyle v\in V_{2}\setminus W'}would work, and so on. If none of these work, then it meansVk⊂E{\textstyle V_{k}\subset E}, contradiction.
This has some corollaries:[1]: 44
Extremal partial trace—λ1(A)+⋯+λk(A)=supdim(V)=ktrV(A){\displaystyle \lambda _{1}(A)+\dots +\lambda _{k}(A)=\sup _{\operatorname {dim} (V)=k}tr_{V}(A)}
ξ1(A)+⋯+ξk(A)=infdim(V)=ktrV(A){\displaystyle \xi _{1}(A)+\dots +\xi _{k}(A)=\inf _{\operatorname {dim} (V)=k}tr_{V}(A)}
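These identities are easy to check numerically. A minimal NumPy sketch (not part of the original text): the supremum in the extremal partial trace formula is attained on the span of the top-k eigenvectors, while random k-dimensional subspaces never exceed it.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 3

# Random Hermitian matrix
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (X + X.conj().T) / 2

def partial_trace(A, V):
    """tr_V(A) = sum_i v_i^* A v_i over an orthonormal basis given by the columns of V."""
    return np.real(np.trace(V.conj().T @ A @ V))

eigvals, eigvecs = np.linalg.eigh(A)        # ascending order
top_k_sum = eigvals[-k:].sum()              # lambda_1 + ... + lambda_k

# The supremum is attained on the span of the top-k eigenvectors ...
assert np.isclose(partial_trace(A, eigvecs[:, -k:]), top_k_sum)

# ... and random k-dimensional subspaces never exceed it.
for _ in range(1000):
    Z = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
    Q, _ = np.linalg.qr(Z)                  # orthonormal basis of a random subspace
    assert partial_trace(A, Q) <= top_k_sum + 1e-10
```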
Corollary—The sumλ1(A)+⋯+λk(A){\textstyle \lambda _{1}(A)+\dots +\lambda _{k}(A)}is a convex function, andξ1(A)+⋯+ξk(A){\textstyle \xi _{1}(A)+\dots +\xi _{k}(A)}is concave.
(Schur-Horn inequality)ξ1(A)+⋯+ξk(A)≤ai1,i1+⋯+aik,ik≤λ1(A)+⋯+λk(A){\displaystyle \xi _{1}(A)+\dots +\xi _{k}(A)\leq a_{i_{1},i_{1}}+\dots +a_{i_{k},i_{k}}\leq \lambda _{1}(A)+\dots +\lambda _{k}(A)}for any subset of indices.
Equivalently, this states that the diagonal vector ofA{\textstyle A}is majorized by its eigenspectrum.
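A small NumPy sketch (illustrative, not from the source) checking the Schur-Horn inequality, i.e. that every sum of k diagonal entries is sandwiched between the k smallest and k largest eigenvalues:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n = 5
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (X + X.conj().T) / 2

diag = np.real(np.diag(A))
lam = np.sort(np.linalg.eigvalsh(A))[::-1]   # descending: lambda_1 >= ... >= lambda_n
xi = np.sort(np.linalg.eigvalsh(A))          # ascending:  xi_1 <= ... <= xi_n

for k in range(1, n + 1):
    for idx in combinations(range(n), k):    # every choice of k diagonal positions
        s = diag[list(idx)].sum()
        assert xi[:k].sum() - 1e-10 <= s <= lam[:k].sum() + 1e-10
```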
Schatten-norm Hölder inequality—Given HermitianA,B{\textstyle A,B}and Hölder pair1/p+1/q=1{\textstyle 1/p+1/q=1},|tr(AB)|≤‖A‖Sp‖B‖Sq{\displaystyle |\operatorname {tr} (AB)|\leq \|A\|_{S^{p}}\|B\|_{S^{q}}}
WLOG, assume {\textstyle B} is diagonal; then we need to show {\textstyle |\sum _{i}B_{ii}A_{ii}|\leq \|A\|_{S^{p}}\|(B_{ii})\|_{l^{q}}}.
By the standard Hölder inequality, it suffices to show‖(Aii)‖lp≤‖A‖Sp{\textstyle \|(A_{ii})\|_{l^{p}}\leq \|A\|_{S^{p}}}
By the Schur-Horn inequality, the diagonal of {\textstyle A} is majorized by the eigenspectrum of {\textstyle A}, and since the map {\textstyle f(x_{1},\dots ,x_{n})=\|x\|_{p}} is symmetric and convex, it is Schur-convex. Hence {\textstyle \|(A_{ii})\|_{l^{p}}\leq \|\lambda (A)\|_{l^{p}}=\|A\|_{S^{p}}}, which completes the proof.
Let N be the nilpotent matrix {\displaystyle N={\begin{bmatrix}0&1\\0&0\end{bmatrix}}.}
Define the Rayleigh quotientRN(x){\displaystyle R_{N}(x)}exactly as above in the Hermitian case. Then it is easy to see that the only eigenvalue ofNis zero, while the maximum value of the Rayleigh quotient is1/2. That is, the maximum value of the Rayleigh quotient is larger than the maximum eigenvalue.
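A quick numerical sketch (assuming the 2×2 matrix above) illustrating the gap between the spectrum and the Rayleigh quotient of a non-Hermitian matrix:

```python
import numpy as np

N = np.array([[0.0, 1.0],
              [0.0, 0.0]])            # nilpotent: both eigenvalues are zero

def rayleigh(M, x):
    return (x @ M @ x) / (x @ x)

print(np.linalg.eigvals(N))           # [0. 0.]

# Sample the Rayleigh quotient over the unit circle; the maximum approaches 1/2,
# attained at x = (1, 1)/sqrt(2).
thetas = np.linspace(0.0, np.pi, 10001)
values = [rayleigh(N, np.array([np.cos(t), np.sin(t)])) for t in thetas]
print(max(values))                    # approximately 0.5
```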
Thesingular values{σk} of a square matrixMare the square roots of the eigenvalues ofM*M(equivalentlyMM*). An immediate consequence[citation needed]of the first equality in the min-max theorem is:
Similarly,
Hereσk↓{\displaystyle \sigma _{k}^{\downarrow }}denotes thekthentry in the decreasing sequence of the singular values, so thatσ1↓≥σ2↓≥⋯{\displaystyle \sigma _{1}^{\downarrow }\geq \sigma _{2}^{\downarrow }\geq \cdots }.
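A short numerical check of the relationship between singular values and the eigenvalues of M*M (illustrative sketch only):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))

sing_vals = np.linalg.svd(M, compute_uv=False)          # descending
eig_MtM = np.sort(np.linalg.eigvalsh(M.T @ M))[::-1]    # eigenvalues of M*M, descending

assert np.allclose(sing_vals, np.sqrt(eig_MtM))
```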
LetAbe a symmetricn×nmatrix. Them×mmatrixB, wherem≤n, is called acompressionofAif there exists anorthogonal projectionPonto a subspace of dimensionmsuch thatPAP*=B. The Cauchy interlacing theorem states:
This can be proven using the min-max principle. Letβihave corresponding eigenvectorbiandSjbe thejdimensional subspaceSj= span{b1, ...,bj},then
According to first part of min-max,αj≤βj.On the other hand, if we defineSm−j+1= span{bj, ...,bm},then
where the last inequality is given by the second part of min-max.
Whenn−m= 1, we haveαj≤βj≤αj+1, hence the nameinterlacingtheorem.
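The interlacing inequalities can be verified numerically by compressing a random symmetric matrix onto a random subspace; a minimal NumPy sketch (not from the source):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 7, 4

X = rng.standard_normal((n, n))
A = (X + X.T) / 2                                  # symmetric n x n

# Compression of A onto a random m-dimensional subspace: B = P A P*,
# represented here by an orthonormal basis P of that subspace.
P, _ = np.linalg.qr(rng.standard_normal((n, m)))
B = P.T @ A @ P

alpha = np.linalg.eigvalsh(A)                      # ascending: alpha_1 <= ... <= alpha_n
beta = np.linalg.eigvalsh(B)                       # ascending: beta_1  <= ... <= beta_m

for j in range(m):                                 # alpha_j <= beta_j <= alpha_{j + n - m}
    assert alpha[j] - 1e-10 <= beta[j] <= alpha[j + n - m] + 1e-10
```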
Lidskii inequality—If1≤i1<⋯<ik≤n{\textstyle 1\leq i_{1}<\cdots <i_{k}\leq n}thenλi1(A+B)+⋯+λik(A+B)≤λi1(A)+⋯+λik(A)+λ1(B)+⋯+λk(B){\displaystyle {\begin{aligned}&\lambda _{i_{1}}(A+B)+\cdots +\lambda _{i_{k}}(A+B)\\&\quad \leq \lambda _{i_{1}}(A)+\cdots +\lambda _{i_{k}}(A)+\lambda _{1}(B)+\cdots +\lambda _{k}(B)\end{aligned}}}
λi1(A+B)+⋯+λik(A+B)≥λi1(A)+⋯+λik(A)+ξ1(B)+⋯+ξk(B){\displaystyle {\begin{aligned}&\lambda _{i_{1}}(A+B)+\cdots +\lambda _{i_{k}}(A+B)\\&\quad \geq \lambda _{i_{1}}(A)+\cdots +\lambda _{i_{k}}(A)+\xi _{1}(B)+\cdots +\xi _{k}(B)\end{aligned}}}
The second inequality follows from the first by applying it to {\textstyle A+B} and {\textstyle -B} (since {\textstyle \lambda _{j}(-B)=-\xi _{j}(B)}). The first is proved using the Wielandt minimax formula.
{\displaystyle {\begin{aligned}&\lambda _{i_{1}}(A+B)+\cdots +\lambda _{i_{k}}(A+B)\\=&\sup _{V_{1},\dots ,V_{k}}\inf _{W\in X(V_{1},\dots ,V_{k})}\left(tr_{W}(A)+tr_{W}(B)\right)\\\leq &\sup _{V_{1},\dots ,V_{k}}\inf _{W\in X(V_{1},\dots ,V_{k})}\left(tr_{W}(A)+\lambda _{1}(B)+\cdots +\lambda _{k}(B)\right)\\=&\lambda _{i_{1}}(A)+\cdots +\lambda _{i_{k}}(A)+\lambda _{1}(B)+\cdots +\lambda _{k}(B)\end{aligned}}} where the inequality uses {\textstyle tr_{W}(B)\leq \lambda _{1}(B)+\cdots +\lambda _{k}(B)} for every {\textstyle k}-dimensional {\textstyle W}, by the extremal partial trace formula.
Note that∑iλi(A+B)=tr(A+B)=∑iλi(A)+λi(B){\displaystyle \sum _{i}\lambda _{i}(A+B)=tr(A+B)=\sum _{i}\lambda _{i}(A)+\lambda _{i}(B)}. In other words,λ(A+B)−λ(A)⪯λ(B){\displaystyle \lambda (A+B)-\lambda (A)\preceq \lambda (B)}where⪯{\displaystyle \preceq }meansmajorization. By the Schur convexity theorem, we then have
p-Wielandt-Hoffman inequality—‖λ(A+B)−λ(A)‖ℓp≤‖B‖Sp{\textstyle \|\lambda (A+B)-\lambda (A)\|_{\ell ^{p}}\leq \|B\|_{S^{p}}}where‖⋅‖Sp{\textstyle \|\cdot \|_{S^{p}}}stands for the p-Schatten norm.
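A minimal NumPy sketch (illustrative only) checking the p-Wielandt-Hoffman inequality for random Hermitian matrices:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 6, 3

def rand_hermitian(n):
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (X + X.conj().T) / 2

A, B = rand_hermitian(n), rand_hermitian(n)

def spectrum(M):
    return np.sort(np.linalg.eigvalsh(M))[::-1]    # eigenvalues in descending order

lhs = np.linalg.norm(spectrum(A + B) - spectrum(A), ord=p)   # l^p norm of the shift
rhs = np.linalg.norm(spectrum(B), ord=p)                     # Schatten p-norm of B
assert lhs <= rhs + 1e-10
```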
LetAbe acompact,Hermitianoperator on a Hilbert spaceH. Recall that thespectrumof such an operator (the set of eigenvalues) is a set of real numbers whose only possiblecluster pointis zero.
It is thus convenient to list the positive eigenvalues ofAas
where entries are repeated withmultiplicity, as in the matrix case. (To emphasize that the sequence is decreasing, we may writeλk=λk↓{\displaystyle \lambda _{k}=\lambda _{k}^{\downarrow }}.)
WhenHis infinite-dimensional, the above sequence of eigenvalues is necessarily infinite.
We now apply the same reasoning as in the matrix case. LettingSk⊂Hbe akdimensional subspace, we can obtain the following theorem.
A similar pair of equalities hold for negative eigenvalues.
LetS'be the closure of the linear spanS′=span{uk,uk+1,…}{\displaystyle S'=\operatorname {span} \{u_{k},u_{k+1},\ldots \}}.
The subspace S' has codimension k − 1. By the same dimension count argument as in the matrix case, S' ∩ Sk has positive dimension. So there exists x ∈ S' ∩ Sk with {\displaystyle \|x\|=1}. Since it is an element of S', such an x necessarily satisfies
Therefore, for allSk
ButAis compact, therefore the functionf(x) = (Ax,x) is weakly continuous. Furthermore, any bounded set inHis weakly compact. This lets us replace the infimum by minimum:
So
Because equality is achieved whenSk=span{u1,…,uk}{\displaystyle S_{k}=\operatorname {span} \{u_{1},\ldots ,u_{k}\}},
This is the first part of min-max theorem for compact self-adjoint operators.
Analogously, consider now a (k − 1)-dimensional subspace Sk−1, whose orthogonal complement is denoted by Sk−1⊥. If S' = span{u1, ..., uk},
So
This implies
where the compactness of A was applied. Indexing the above over the collection of (k − 1)-dimensional subspaces gives
PickSk−1= span{u1, ...,uk−1} and we deduce
The min-max theorem also applies to (possibly unbounded) self-adjoint operators.[2][3]Recall theessential spectrumis the spectrum without isolated eigenvalues of finite multiplicity.
Sometimes we have some eigenvalues below the essential spectrum, and we would like to approximate the eigenvalues and eigenfunctions.
En=minψ1,…,ψnmax{⟨ψ,Aψ⟩:ψ∈span(ψ1,…,ψn),‖ψ‖=1}{\displaystyle E_{n}=\min _{\psi _{1},\ldots ,\psi _{n}}\max\{\langle \psi ,A\psi \rangle :\psi \in \operatorname {span} (\psi _{1},\ldots ,\psi _{n}),\,\|\psi \|=1\}}.
If we only haveNeigenvalues and hence run out of eigenvalues, then we letEn:=infσess(A){\displaystyle E_{n}:=\inf \sigma _{ess}(A)}(the bottom of the essential spectrum) forn>N, and the above statement holds after replacing min-max with inf-sup.
En=maxψ1,…,ψn−1min{⟨ψ,Aψ⟩:ψ⊥ψ1,…,ψn−1,‖ψ‖=1}{\displaystyle E_{n}=\max _{\psi _{1},\ldots ,\psi _{n-1}}\min\{\langle \psi ,A\psi \rangle :\psi \perp \psi _{1},\ldots ,\psi _{n-1},\,\|\psi \|=1\}}.
If we only haveNeigenvalues and hence run out of eigenvalues, then we letEn:=infσess(A){\displaystyle E_{n}:=\inf \sigma _{ess}(A)}(the bottom of the essential spectrum) forn > N, and the above statement holds after replacing max-min with sup-inf.
The proofs[2][3]use the following results about self-adjoint operators:
infσ(A)=infψ∈D(A),‖ψ‖=1⟨ψ,Aψ⟩{\displaystyle \inf \sigma (A)=\inf _{\psi \in {\mathfrak {D}}(A),\|\psi \|=1}\langle \psi ,A\psi \rangle }
and
supσ(A)=supψ∈D(A),‖ψ‖=1⟨ψ,Aψ⟩{\displaystyle \sup \sigma (A)=\sup _{\psi \in {\mathfrak {D}}(A),\|\psi \|=1}\langle \psi ,A\psi \rangle }.[2]: 77
|
https://en.wikipedia.org/wiki/Min-max_theorem#Cauchy_interlacing_theorem
|
Inmathematics, thePoincaré separation theorem, also known as theCauchy interlacing theorem, gives some upper and lower bounds ofeigenvaluesof a realsymmetric matrixBTABthat can be considered as theorthogonal projectionof a larger real symmetric matrixAonto a linear subspace spanned by the columns ofB. The theorem is named afterHenri Poincaré.
More specifically, letAbe ann×nreal symmetric matrix andBann×rsemi-orthogonal matrixsuch thatBTB=Ir. Denote byλi{\displaystyle \lambda _{i}},i= 1, 2, ...,nandμi{\displaystyle \mu _{i}},i= 1, 2, ...,rthe eigenvalues ofAandBTAB, respectively (in descending order). We have{\displaystyle \lambda _{i}\geq \mu _{i}\geq \lambda _{n-r+i},\qquad i=1,2,\ldots ,r.}
An algebraic proof, based on thevariational interpretation of eigenvalues, has been published in Magnus'Matrix Differential Calculus with Applications in Statistics and Econometrics.[1]From the geometric point of view,BTABcan be considered as theorthogonal projectionofAonto the linear subspace spanned byB, so the above results follow immediately.[2]
An alternative proof can be made for the case whereBis a principal submatrix ofA, demonstrated by Steve Fisk.[3]
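A minimal NumPy sketch (illustrative, not from the source) checking the separation inequalities for a random symmetric A and a random semi-orthogonal B:

```python
import numpy as np

rng = np.random.default_rng(5)
n, r = 8, 3

X = rng.standard_normal((n, n))
A = (X + X.T) / 2                                   # real symmetric n x n

B, _ = np.linalg.qr(rng.standard_normal((n, r)))    # semi-orthogonal: B^T B = I_r
M = B.T @ A @ B

lam = np.sort(np.linalg.eigvalsh(A))[::-1]          # descending
mu = np.sort(np.linalg.eigvalsh(M))[::-1]           # descending

for i in range(r):                                  # lambda_i >= mu_i >= lambda_{n-r+i}
    assert lam[n - r + i] - 1e-10 <= mu[i] <= lam[i] + 1e-10
```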
When two mechanical systems, each described by anequation of motion, differ by exactly one constraint (so thatn−r=1{\displaystyle n-r=1}), thenatural frequenciesof the two systems interlace.
This has an important consequence when considering thefrequency responseof a complicated system such as alarge room. Even though there may be many modes, each with an unpredictable mode shape that will vary as details change (such as furniture being moved), the interlacing theorem implies that the modal density (average number of modes per frequency interval) remains predictable and approximately constant. This allows for the technique ofmodal density analysis.
Min-max theorem#Cauchy interlacing theorem
|
https://en.wikipedia.org/wiki/Poincar%C3%A9_separation_theorem
|
Inmathematics, particularlylinear algebra, theSchur–Horn theorem, named afterIssai SchurandAlfred Horn, characterizes the diagonal of aHermitian matrixwith giveneigenvalues. It has inspired investigations and substantial generalizations in the setting ofsymplectic geometry. A few important generalizations areKostant's convexity theorem,Atiyah–Guillemin–Sternberg convexity theoremandKirwan convexity theorem.
Schur–Horn theorem—Letd1,…,dN{\displaystyle d_{1},\dots ,d_{N}}andλ1,…,λN{\displaystyle \lambda _{1},\dots ,\lambda _{N}}be two sequences of real numbers arranged in a non-increasing order.
There is aHermitian matrixwith diagonal valuesd1,…,dN{\displaystyle d_{1},\dots ,d_{N}}(in this order, starting withd1{\displaystyle d_{1}}at the top-left) and eigenvaluesλ1,…,λN{\displaystyle \lambda _{1},\dots ,\lambda _{N}}if and only if∑i=1ndi≤∑i=1nλin=1,…,N−1{\displaystyle \sum _{i=1}^{n}d_{i}\leq \sum _{i=1}^{n}\lambda _{i}\qquad n=1,\dots ,N-1}and∑i=1Ndi=∑i=1Nλi.{\displaystyle \sum _{i=1}^{N}d_{i}=\sum _{i=1}^{N}\lambda _{i}.}
The condition on the two sequences is equivalent to themajorizationcondition:d→⪯λ→{\displaystyle {\vec {d}}\preceq {\vec {\lambda }}}.
The inequalities above may alternatively be written:d1≤λ1d2+d1≤λ1+λ2⋮≤⋮dN−1+⋯+d2+d1≤λ1+λ2+⋯+λN−1dN+dN−1+⋯+d2+d1=λ1+λ2+⋯+λN−1+λN.{\displaystyle {\begin{alignedat}{7}d_{1}&\;\leq \;&&\lambda _{1}\\[0.3ex]d_{2}+d_{1}&\;\leq &&\lambda _{1}+\lambda _{2}\\[0.3ex]\vdots &\;\leq &&\vdots \\[0.3ex]d_{N-1}+\cdots +d_{2}+d_{1}&\;\leq &&\lambda _{1}+\lambda _{2}+\cdots +\lambda _{N-1}\\[0.3ex]d_{N}+d_{N-1}+\cdots +d_{2}+d_{1}&\;=&&\lambda _{1}+\lambda _{2}+\cdots +\lambda _{N-1}+\lambda _{N}.\\[0.3ex]\end{alignedat}}}
The Schur–Horn theorem may thus be restated more succinctly and in plain English:
Although this theorem requires thatd1≥⋯≥dN{\displaystyle d_{1}\geq \cdots \geq d_{N}}andλ1≥⋯≥λN{\displaystyle \lambda _{1}\geq \cdots \geq \lambda _{N}}be non-increasing, it is possible toreformulate this theoremwithout these assumptions.
We start with the assumptionλ1≥⋯≥λN.{\displaystyle \lambda _{1}\geq \cdots \geq \lambda _{N}.}The left hand side of the theorem's characterization (that is, "there exists a Hermitian matrix with these eigenvalues and diagonal elements") depends on the order of the desired diagonal elementsd1,…,dN{\displaystyle d_{1},\dots ,d_{N}}(because changing their order would change the Hermitian matrix whose existence is in question) but it doesnotdepend on the order of the desired eigenvaluesλ1,…,λN.{\displaystyle \lambda _{1},\dots ,\lambda _{N}.}
On the right hand side of the characterization, only the values ofλ1+⋯+λn{\displaystyle \lambda _{1}+\cdots +\lambda _{n}}depend on the assumptionλ1≥⋯≥λN.{\displaystyle \lambda _{1}\geq \cdots \geq \lambda _{N}.}Notice that this assumption means that the expressionλ1+⋯+λn{\displaystyle \lambda _{1}+\cdots +\lambda _{n}}is just notation for the sum of then{\displaystyle n}largest desired eigenvalues.
Replacing the expressionλ1+⋯+λn{\displaystyle \lambda _{1}+\cdots +\lambda _{n}}with this written equivalent makes the assumptionλ1≥⋯≥λN{\displaystyle \lambda _{1}\geq \cdots \geq \lambda _{N}}completely unnecessary:
Thepermutation polytopegenerated byx~=(x1,x2,…,xn)∈Rn{\displaystyle {\tilde {x}}=(x_{1},x_{2},\ldots ,x_{n})\in \mathbb {R} ^{n}}denoted byKx~{\displaystyle {\mathcal {K}}_{\tilde {x}}}is defined as the convex hull of the set{(xπ(1),xπ(2),…,xπ(n))∈Rn:π∈Sn}.{\displaystyle \{(x_{\pi (1)},x_{\pi (2)},\ldots ,x_{\pi (n)})\in \mathbb {R} ^{n}:\pi \in S_{n}\}.}HereSn{\displaystyle S_{n}}denotes thesymmetric groupon{1,2,…,n}.{\displaystyle \{1,2,\ldots ,n\}.}In other words, the permutation polytope generated by(x1,…,xn){\displaystyle (x_{1},\dots ,x_{n})}is theconvex hullof the set of all points inRn{\displaystyle \mathbb {R} ^{n}}that can be obtained by rearranging the coordinates of(x1,…,xn).{\displaystyle (x_{1},\dots ,x_{n}).}The permutation polytope of(1,1,2),{\displaystyle (1,1,2),}for instance, is the convex hull of the set{(1,1,2),(1,2,1),(2,1,1)},{\displaystyle \{(1,1,2),(1,2,1),(2,1,1)\},}which in this case is the solid (filled) triangle whose vertices are the three points in this set.
Notice, in particular, that rearranging the coordinates of(x1,…,xn){\displaystyle (x_{1},\dots ,x_{n})}does not change the resulting permutation polytope; in other words, if a pointy~{\displaystyle {\tilde {y}}}can be obtained fromx~=(x1,…,xn){\displaystyle {\tilde {x}}=(x_{1},\dots ,x_{n})}by rearranging its coordinates, thenKy~=Kx~.{\displaystyle {\mathcal {K}}_{\tilde {y}}={\mathcal {K}}_{\tilde {x}}.}
The following lemma characterizes the permutation polytope of a vector inRn.{\displaystyle \mathbb {R} ^{n}.}
Lemma[1][2]—Ifx1≥⋯≥xn,{\displaystyle x_{1}\geq \cdots \geq x_{n},}andy1≥⋯≥yn,{\displaystyle y_{1}\geq \cdots \geq y_{n},}have the same sumx1+⋯+xn=y1+⋯+yn,{\displaystyle x_{1}+\cdots +x_{n}=y_{1}+\cdots +y_{n},}then the following statements are equivalent:
In view of the equivalence of (i) and (ii) in the lemma mentioned above, one may reformulate the theorem in the following manner.
Theorem.Letd1,…,dN{\displaystyle d_{1},\dots ,d_{N}}andλ1,…,λN{\displaystyle \lambda _{1},\dots ,\lambda _{N}}be real numbers. There is aHermitian matrixwith diagonal entriesd1,…,dN{\displaystyle d_{1},\dots ,d_{N}}and eigenvaluesλ1,…,λN{\displaystyle \lambda _{1},\dots ,\lambda _{N}}if and only if the vector(d1,…,dn){\displaystyle (d_{1},\ldots ,d_{n})}is in the permutation polytope generated by(λ1,…,λn).{\displaystyle (\lambda _{1},\ldots ,\lambda _{n}).}
Note that in this formulation, one does not need to impose any ordering on the entries of the vectorsd1,…,dN{\displaystyle d_{1},\dots ,d_{N}}andλ1,…,λN.{\displaystyle \lambda _{1},\dots ,\lambda _{N}.}
LetA=(ajk){\displaystyle A=(a_{jk})}be an×n{\displaystyle n\times n}Hermitian matrix with eigenvalues{λi}i=1n,{\displaystyle \{\lambda _{i}\}_{i=1}^{n},}counted with multiplicity. Denote the diagonal ofA{\displaystyle A}bya~,{\displaystyle {\tilde {a}},}thought of as a vector inRn,{\displaystyle \mathbb {R} ^{n},}and the vector(λ1,λ2,…,λn){\displaystyle (\lambda _{1},\lambda _{2},\ldots ,\lambda _{n})}byλ~.{\displaystyle {\tilde {\lambda }}.}LetΛ{\displaystyle \Lambda }be the diagonal matrix havingλ1,λ2,…,λn{\displaystyle \lambda _{1},\lambda _{2},\ldots ,\lambda _{n}}on its diagonal.
(⇒{\displaystyle \Rightarrow })A{\displaystyle A}may be written in the formUΛU−1,{\displaystyle U\Lambda U^{-1},}whereU{\displaystyle U}is a unitary matrix. Thenaii=∑j=1nλj|uij|2,i=1,2,…,n.{\displaystyle a_{ii}=\sum _{j=1}^{n}\lambda _{j}|u_{ij}|^{2},\;i=1,2,\ldots ,n.}
LetS=(sij){\displaystyle S=(s_{ij})}be the matrix defined bysij=|uij|2.{\displaystyle s_{ij}=|u_{ij}|^{2}.}SinceU{\displaystyle U}is a unitary matrix,S{\displaystyle S}is adoubly stochastic matrixand we havea~=Sλ~.{\displaystyle {\tilde {a}}=S{\tilde {\lambda }}.}By theBirkhoff–von Neumann theorem,S{\displaystyle S}can be written as a convex combination of permutation matrices. Thusa~{\displaystyle {\tilde {a}}}is in the permutation polytope generated byλ~.{\displaystyle {\tilde {\lambda }}.}This proves Schur's theorem.
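The construction in this direction of the proof is easy to reproduce numerically; a minimal NumPy sketch (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (X + X.conj().T) / 2

lam, U = np.linalg.eigh(A)                 # A = U diag(lam) U^*, U unitary
S = np.abs(U) ** 2                         # S_ij = |u_ij|^2

assert np.allclose(S.sum(axis=0), 1.0)     # doubly stochastic ...
assert np.allclose(S.sum(axis=1), 1.0)
assert np.allclose(S @ lam, np.real(np.diag(A)))   # ... and diag(A) = S lambda
```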
(⇐{\displaystyle \Leftarrow }) Ifa~{\displaystyle {\tilde {a}}}occurs as the diagonal of a Hermitian matrix with eigenvalues{λi}i=1n,{\displaystyle \{\lambda _{i}\}_{i=1}^{n},}thenta~+(1−t)τ(a~){\displaystyle t{\tilde {a}}+(1-t)\tau ({\tilde {a}})}also occurs as the diagonal of some Hermitian matrix with the same set of eigenvalues, for any transpositionτ{\displaystyle \tau }inSn.{\displaystyle S_{n}.}One may prove that in the following manner.
Letξ{\displaystyle \xi }be a complex number of modulus1{\displaystyle 1}such thatξajk¯=−ξajk{\displaystyle {\overline {\xi a_{jk}}}=-\xi a_{jk}}andU{\displaystyle U}be a unitary matrix withξt,t{\displaystyle \xi {\sqrt {t}},{\sqrt {t}}}in thej,j{\displaystyle j,j}andk,k{\displaystyle k,k}entries, respectively,−1−t,ξ1−t{\displaystyle -{\sqrt {1-t}},\xi {\sqrt {1-t}}}at thej,k{\displaystyle j,k}andk,j{\displaystyle k,j}entries, respectively,1{\displaystyle 1}at all diagonal entries other thanj,j{\displaystyle j,j}andk,k,{\displaystyle k,k,}and0{\displaystyle 0}at all other entries. ThenUAU−1{\displaystyle UAU^{-1}}hastajj+(1−t)akk{\displaystyle ta_{jj}+(1-t)a_{kk}}at thej,j{\displaystyle j,j}entry,(1−t)ajj+takk{\displaystyle (1-t)a_{jj}+ta_{kk}}at thek,k{\displaystyle k,k}entry, andall{\displaystyle a_{ll}}at thel,l{\displaystyle l,l}entry wherel≠j,k.{\displaystyle l\neq j,k.}Letτ{\displaystyle \tau }be the transposition of{1,2,…,n}{\displaystyle \{1,2,\dots ,n\}}that interchangesj{\displaystyle j}andk.{\displaystyle k.}
Then the diagonal ofUAU−1{\displaystyle UAU^{-1}}ista~+(1−t)τ(a~).{\displaystyle t{\tilde {a}}+(1-t)\tau ({\tilde {a}}).}
Λ{\displaystyle \Lambda }is a Hermitian matrix with eigenvalues{λi}i=1n.{\displaystyle \{\lambda _{i}\}_{i=1}^{n}.}Using the equivalence of (i) and (iii) in the lemma mentioned above, we see that any vector in the permutation polytope generated by(λ1,λ2,…,λn),{\displaystyle (\lambda _{1},\lambda _{2},\ldots ,\lambda _{n}),}occurs as the diagonal of a Hermitian matrix with the prescribed eigenvalues. This proves Horn's theorem.
The Schur–Horn theorem may be viewed as a corollary of theAtiyah–Guillemin–Sternberg convexity theoremin the following manner. LetU(n){\displaystyle {\mathcal {U}}(n)}denote the group ofn×n{\displaystyle n\times n}unitary matrices. Its Lie algebra, denoted byu(n),{\displaystyle {\mathfrak {u}}(n),}is the set ofskew-Hermitianmatrices. One may identify the dual spaceu(n)∗{\displaystyle {\mathfrak {u}}(n)^{*}}with the set of Hermitian matricesH(n){\displaystyle {\mathcal {H}}(n)}via the linear isomorphismΨ:H(n)→u(n)∗{\displaystyle \Psi :{\mathcal {H}}(n)\rightarrow {\mathfrak {u}}(n)^{*}}defined byΨ(A)(B)=tr(iAB){\displaystyle \Psi (A)(B)=\mathrm {tr} (iAB)}forA∈H(n),B∈u(n).{\displaystyle A\in {\mathcal {H}}(n),B\in {\mathfrak {u}}(n).}The unitary groupU(n){\displaystyle {\mathcal {U}}(n)}acts onH(n){\displaystyle {\mathcal {H}}(n)}by conjugation and acts onu(n)∗{\displaystyle {\mathfrak {u}}(n)^{*}}by thecoadjoint action. Under these actions,Ψ{\displaystyle \Psi }is anU(n){\displaystyle {\mathcal {U}}(n)}-equivariant map i.e. for everyU∈U(n){\displaystyle U\in {\mathcal {U}}(n)}the following diagram commutes,
Letλ~=(λ1,λ2,…,λn)∈Rn{\displaystyle {\tilde {\lambda }}=(\lambda _{1},\lambda _{2},\ldots ,\lambda _{n})\in \mathbb {R} ^{n}}andΛ∈H(n){\displaystyle \Lambda \in {\mathcal {H}}(n)}denote the diagonal matrix with entries given byλ~.{\displaystyle {\tilde {\lambda }}.}LetOλ~{\displaystyle {\mathcal {O}}_{\tilde {\lambda }}}denote the orbit ofΛ{\displaystyle \Lambda }under theU(n){\displaystyle {\mathcal {U}}(n)}-action i.e. conjugation. Under theU(n){\displaystyle {\mathcal {U}}(n)}-equivariant isomorphismΨ,{\displaystyle \Psi ,}the symplectic structure on the corresponding coadjoint orbit may be brought ontoOλ~.{\displaystyle {\mathcal {O}}_{\tilde {\lambda }}.}ThusOλ~{\displaystyle {\mathcal {O}}_{\tilde {\lambda }}}is a HamiltonianU(n){\displaystyle {\mathcal {U}}(n)}-manifold.
LetT{\displaystyle \mathbb {T} }denote theCartan subgroupofU(n){\displaystyle {\mathcal {U}}(n)}which consists of diagonal complex matrices with diagonal entries of modulus1.{\displaystyle 1.}The Lie algebrat{\displaystyle {\mathfrak {t}}}ofT{\displaystyle \mathbb {T} }consists of diagonal skew-Hermitian matrices and the dual spacet∗{\displaystyle {\mathfrak {t}}^{*}}consists of diagonal Hermitian matrices, under the isomorphismΨ.{\displaystyle \Psi .}In other words,t{\displaystyle {\mathfrak {t}}}consists of diagonal matrices with purely imaginary entries andt∗{\displaystyle {\mathfrak {t}}^{*}}consists of diagonal matrices with real entries. The inclusion mapt↪u(n){\displaystyle {\mathfrak {t}}\hookrightarrow {\mathfrak {u}}(n)}induces a mapΦ:H(n)≅u(n)∗→t∗,{\displaystyle \Phi :{\mathcal {H}}(n)\cong {\mathfrak {u}}(n)^{*}\rightarrow {\mathfrak {t}}^{*},}which projects a matrixA{\displaystyle A}to the diagonal matrix with the same diagonal entries asA.{\displaystyle A.}The setOλ~{\displaystyle {\mathcal {O}}_{\tilde {\lambda }}}is a HamiltonianT{\displaystyle \mathbb {T} }-manifold, and the restriction ofΦ{\displaystyle \Phi }to this set is amoment mapfor this action.
By the Atiyah–Guillemin–Sternberg theorem,Φ(Oλ~){\displaystyle \Phi ({\mathcal {O}}_{\tilde {\lambda }})}is a convex polytope. A matrixA∈H(n){\displaystyle A\in {\mathcal {H}}(n)}is fixed under conjugation by every element ofT{\displaystyle \mathbb {T} }if and only ifA{\displaystyle A}is diagonal. The only diagonal matrices inOλ~{\displaystyle {\mathcal {O}}_{\tilde {\lambda }}}are the ones with diagonal entriesλ1,λ2,…,λn{\displaystyle \lambda _{1},\lambda _{2},\ldots ,\lambda _{n}}in some order. Thus, these matrices generate the convex polytopeΦ(Oλ~).{\displaystyle \Phi ({\mathcal {O}}_{\tilde {\lambda }}).}This is exactly the statement of the Schur–Horn theorem.
|
https://en.wikipedia.org/wiki/Schur%E2%80%93Horn_theorem
|
Inmachine learning,feature hashing, also known as thehashing trick(by analogy to thekernel trick), is a fast and space-efficient way of vectorizingfeatures, i.e. turning arbitrary features into indices in a vector or matrix.[1][2]It works by applying ahash functionto the features and using their hash values as indices directly (after a modulo operation), rather than looking the indices up in anassociative array. In addition to its use for encoding non-numeric values, feature hashing can also be used fordimensionality reduction.[2]
This trick is often attributed to Weinberger et al. (2009),[2]but there exists a much earlier description of this method published by John Moody in 1989.[1]
In a typicaldocument classificationtask, the input to the machine learning algorithm (both during learning and classification) is free text. From this, abag of words(BOW) representation is constructed: the individualtokensare extracted and counted, and each distinct token in the training set defines afeature(independent variable) of each of the documents in both the training and test sets.
Machine learning algorithms, however, are typically defined in terms of numerical vectors. Therefore, the bags of words for a set of documents are regarded as aterm-document matrixwhere each row is a single document, and each column is a single feature/word; the entryi,jin such a matrix captures the frequency (or weight) of thej'th term of thevocabularyin documenti. (An alternative convention swaps the rows and columns of the matrix, but this difference is immaterial.)
Typically, these vectors are extremelysparse—according toZipf's law.
The common approach is to construct, at learning time or prior to that, adictionaryrepresentation of the vocabulary of the training set, and use that to map words to indices.Hash tablesandtriesare common candidates for dictionary implementation. E.g., the three documents
can be converted, using the dictionary
to the term-document matrix
(Punctuation was removed, as is usual in document classification and clustering.)
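The documents, dictionary and term-document matrix referred to above are not reproduced here; a minimal Python sketch with made-up example documents (the documents themselves are hypothetical, not from the source) illustrates the dictionary approach:

```python
# Hypothetical example documents (stand-ins for the omitted originals).
docs = ["John likes to watch movies",
        "Mary likes movies too",
        "John also likes football"]

# Build the dictionary: each distinct token gets the next free index.
vocab = {}
for doc in docs:
    for token in doc.lower().split():
        vocab.setdefault(token, len(vocab))

# Term-document matrix: row = document, column = token index, entry = count.
matrix = [[0] * len(vocab) for _ in docs]
for i, doc in enumerate(docs):
    for token in doc.lower().split():
        matrix[i][vocab[token]] += 1

print(vocab)
print(matrix)
```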
The problem with this process is that such dictionaries take up a large amount of storage space and grow in size as the training set grows.[3]On the other hand, if the vocabulary is kept fixed and not increased with a growing training set, an adversary may try to invent new words or misspellings that are not in the stored vocabulary so as to circumvent a machine learned filter. To address this challenge,Yahoo! Researchattempted to use feature hashing for their spam filters.[4]
Note that the hashing trick isn't limited to text classification and similar tasks at the document level, but can be applied to any problem that involves large (perhaps unbounded) numbers of features.
Mathematically, a token is an elementt{\displaystyle t}in a finite (or countably infinite) setT{\displaystyle T}. Suppose we only need to process a finite corpus, then we can put all tokens appearing in the corpus intoT{\displaystyle T}, meaning thatT{\displaystyle T}is finite. However, suppose we want to process all possible words made of the English letters, thenT{\displaystyle T}is countably infinite.
Most neural networks can only operate on real vector inputs, so we must construct a "dictionary" functionϕ:T→Rn{\displaystyle \phi :T\to \mathbb {R} ^{n}}.
WhenT{\displaystyle T}is finite, of size|T|=m≤n{\displaystyle |T|=m\leq n}, then we can useone-hot encodingto map it intoRn{\displaystyle \mathbb {R} ^{n}}. First,arbitrarilyenumerateT={t1,t2,..,tm}{\displaystyle T=\{t_{1},t_{2},..,t_{m}\}}, then defineϕ(ti)=ei{\displaystyle \phi (t_{i})=e_{i}}. In other words, we assign a unique indexi{\displaystyle i}to each token, then map the token with indexi{\displaystyle i}to the unit basis vectorei{\displaystyle e_{i}}.
One-hot encoding is easy to interpret, but it requires one to maintain the arbitrary enumeration ofT{\displaystyle T}. Given a tokent∈T{\displaystyle t\in T}, to computeϕ(t){\displaystyle \phi (t)}, we must find out the indexi{\displaystyle i}of the tokent{\displaystyle t}. Thus, to implementϕ{\displaystyle \phi }efficiently, we need a fast-to-compute bijectionh:T→{1,...,m}{\displaystyle h:T\to \{1,...,m\}}, then we haveϕ(t)=eh(t){\displaystyle \phi (t)=e_{h(t)}}.
In fact, we can relax the requirement slightly: It suffices to have a fast-to-computeinjectionh:T→{1,...,n}{\displaystyle h:T\to \{1,...,n\}}, then useϕ(t)=eh(t){\displaystyle \phi (t)=e_{h(t)}}.
In practice, there is no simple way to construct an efficient injectionh:T→{1,...,n}{\displaystyle h:T\to \{1,...,n\}}. However, we do not need a strict injection, but only anapproximateinjection. That is, whent≠t′{\displaystyle t\neq t'}, we shouldprobablyhaveh(t)≠h(t′){\displaystyle h(t)\neq h(t')}, so thatprobablyϕ(t)≠ϕ(t′){\displaystyle \phi (t)\neq \phi (t')}.
At this point, we have just specified thath{\displaystyle h}should be a hashing function. Thus we reach the idea of feature hashing.
The basic feature hashing algorithm presented in (Weinberger et al. 2009)[2]is defined as follows.
First, one specifies two hash functions: thekernel hashh:T→{1,2,...,n}{\displaystyle h:T\to \{1,2,...,n\}}, and thesign hashζ:T→{−1,+1}{\displaystyle \zeta :T\to \{-1,+1\}}. Next, one defines the feature hashing function:ϕ:T→Rn,ϕ(t)=ζ(t)eh(t){\displaystyle \phi :T\to \mathbb {R} ^{n},\quad \phi (t)=\zeta (t)e_{h(t)}}Finally, extend this feature hashing function to strings of tokens byϕ:T∗→Rn,ϕ(t1,...,tk)=∑j=1kϕ(tj){\displaystyle \phi :T^{*}\to \mathbb {R} ^{n},\quad \phi (t_{1},...,t_{k})=\sum _{j=1}^{k}\phi (t_{j})}whereT∗{\displaystyle T^{*}}is the set ofall finite strings consisting of tokensinT{\displaystyle T}.
Equivalently,ϕ(t1,...,tk)=∑j=1kζ(tj)eh(tj)=∑i=1n(∑j:h(tj)=iζ(tj))ei{\displaystyle \phi (t_{1},...,t_{k})=\sum _{j=1}^{k}\zeta (t_{j})e_{h(t_{j})}=\sum _{i=1}^{n}\left(\sum _{j:h(t_{j})=i}\zeta (t_{j})\right)e_{i}}
We want to say something about the geometric property ofϕ{\displaystyle \phi }, butT{\displaystyle T}, by itself, is just a set of tokens, we cannot impose a geometric structure on it except the discrete topology, which is generated by thediscrete metric. To make it nicer, we lift it toT→RT{\displaystyle T\to \mathbb {R} ^{T}}, and liftϕ{\displaystyle \phi }fromϕ:T→Rn{\displaystyle \phi :T\to \mathbb {R} ^{n}}toϕ:RT→Rn{\displaystyle \phi :\mathbb {R} ^{T}\to \mathbb {R} ^{n}}by linear extension:ϕ((xt)t∈T)=∑t∈Txtζ(t)eh(t)=∑i=1n(∑t:h(t)=ixtζ(t))ei{\displaystyle \phi ((x_{t})_{t\in T})=\sum _{t\in T}x_{t}\zeta (t)e_{h(t)}=\sum _{i=1}^{n}\left(\sum _{t:h(t)=i}x_{t}\zeta (t)\right)e_{i}}There is an infinite sum there, which must be handled at once. There are essentially only two ways to handle infinities. One may impose a metric, then take itscompletion, to allow well-behaved infinite sums, or one may demand that nothing isactually infinite, only potentially so. Here, we go for the potential-infinity way, by restrictingRT{\displaystyle \mathbb {R} ^{T}}to contain only vectors withfinite support:∀(xt)t∈T∈RT{\displaystyle \forall (x_{t})_{t\in T}\in \mathbb {R} ^{T}}, only finitely many entries of(xt)t∈T{\displaystyle (x_{t})_{t\in T}}are nonzero.
Define aninner productonRT{\displaystyle \mathbb {R} ^{T}}in the obvious way:⟨et,et′⟩={1,ift=t′,0,else.⟨x,x′⟩=∑t,t′∈Txtxt′⟨et,et′⟩{\displaystyle \langle e_{t},e_{t'}\rangle ={\begin{cases}1,{\text{ if }}t=t',\\0,{\text{ else.}}\end{cases}}\quad \langle x,x'\rangle =\sum _{t,t'\in T}x_{t}x_{t'}\langle e_{t},e_{t'}\rangle }As a side note, ifT{\displaystyle T}is infinite, then the inner product spaceRT{\displaystyle \mathbb {R} ^{T}}is notcomplete. Taking its completion would get us to aHilbert space, which allows well-behaved infinite sums.
Now we have an inner product space, with enough structure to describe the geometry of the feature hashing functionϕ:RT→Rn{\displaystyle \phi :\mathbb {R} ^{T}\to \mathbb {R} ^{n}}.
First, we can see whyh{\displaystyle h}is called a "kernel hash": it allows us to define akernelK:T×T→R{\displaystyle K:T\times T\to \mathbb {R} }byK(t,t′)=⟨eh(t),eh(t′)⟩{\displaystyle K(t,t')=\langle e_{h(t)},e_{h(t')}\rangle }In the language of the "kernel trick",K{\displaystyle K}is the kernel generated by the "feature map"φ:T→Rn,φ(t)=eh(t){\displaystyle \varphi :T\to \mathbb {R} ^{n},\quad \varphi (t)=e_{h(t)}}Note that this is not the feature map we were using, which isϕ(t)=ζ(t)eh(t){\displaystyle \phi (t)=\zeta (t)e_{h(t)}}. In fact, we have been usinganother kernelKζ:T×T→R{\displaystyle K_{\zeta }:T\times T\to \mathbb {R} }, defined byKζ(t,t′)=⟨ζ(t)eh(t),ζ(t′)eh(t′)⟩{\displaystyle K_{\zeta }(t,t')=\langle \zeta (t)e_{h(t)},\zeta (t')e_{h(t')}\rangle }The benefit of augmenting the kernel hashh{\displaystyle h}with the binary hashζ{\displaystyle \zeta }is the following theorem, which states thatϕ{\displaystyle \phi }is an isometry "on average".
Theorem (intuitively stated)—If the binary hashζ{\displaystyle \zeta }is unbiased (meaning that it takes value−1,+1{\displaystyle -1,+1}with equal probability), thenϕ:RT→Rn{\displaystyle \phi :\mathbb {R} ^{T}\to \mathbb {R} ^{n}}is an isometry in expectation:E[⟨ϕ(x),ϕ(x′)⟩]=⟨x,x′⟩.{\displaystyle \mathbb {E} [\langle \phi (x),\phi (x')\rangle ]=\langle x,x'\rangle .}
By linearity of expectation,E[⟨ϕ(x),ϕ(x′)⟩]=∑t,t′∈T(xtxt′′)⋅E[ζ(t)ζ(t′)]⋅⟨eh(t),eh(t′)⟩{\displaystyle \mathbb {E} [\langle \phi (x),\phi (x')\rangle ]=\sum _{t,t'\in T}(x_{t}x'_{t'})\cdot \mathbb {E} [\zeta (t)\zeta (t')]\cdot \langle e_{h(t)},e_{h(t')}\rangle }Now,E[ζ(t)ζ(t′)]={1ift=t′0ift≠t′{\displaystyle \mathbb {E} [\zeta (t)\zeta (t')]={\begin{cases}1\quad {\text{ if }}t=t'\\0\quad {\text{ if }}t\neq t'\\\end{cases}}}, since we assumedζ{\displaystyle \zeta }is unbiased. So we continueE[⟨ϕ(x),ϕ(x′)⟩]=∑t∈T(xtxt′)⟨eh(t),eh(t)⟩=⟨x,x′⟩{\displaystyle \mathbb {E} [\langle \phi (x),\phi (x')\rangle ]=\sum _{t\in T}(x_{t}x'_{t})\langle e_{h(t)},e_{h(t)}\rangle =\langle x,x'\rangle }
The above statement and proof interprets the binary hash functionζ{\displaystyle \zeta }not as a deterministic function of typeT→{−1,+1}{\displaystyle T\to \{-1,+1\}}, but as a random binary vector{−1,+1}T{\displaystyle \{-1,+1\}^{T}}with unbiased entries, meaning thatPr(ζ(t)=+1)=Pr(ζ(t)=−1)=12{\displaystyle Pr(\zeta (t)=+1)=Pr(\zeta (t)=-1)={\frac {1}{2}}}for anyt∈T{\displaystyle t\in T}.
This is a good intuitive picture, though not rigorous. For a rigorous statement and proof, see[2]
Instead of maintaining a dictionary, a feature vectorizer that uses the hashing trick can build a vector of a pre-defined length by applying a hash functionhto the features (e.g., words), then using the hash values directly as feature indices and updating the resulting vector at those indices. Here, we assume that feature actually means feature vector.
Thus, suppose our feature vector is ["cat","dog","cat"] and the hash function ish(xf)=1{\displaystyle h(x_{f})=1}ifxf{\displaystyle x_{f}}is "cat" and2{\displaystyle 2}ifxf{\displaystyle x_{f}}is "dog". Let us take the output feature vector dimension (N) to be 4. Then the outputxwill be [0,2,1,0].
It has been suggested that a second, single-bit output hash functionξbe used to determine the sign of the update value, to counter the effect ofhash collisions.[2]If such a hash function is used, the algorithm becomes
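The referenced pseudocode is not reproduced above; a minimal Python sketch of the signed hashing trick (the function names and the use of Python's built-in hash are illustrative choices, not from the source) might look like:

```python
def hashing_vectorizer(features, N, h, xi):
    """Signed hashing trick: h maps a feature to a bucket index, xi maps it to
    +1 or -1 so that colliding features tend to cancel rather than accumulate."""
    x = [0] * N
    for f in features:
        x[h(f) % N] += xi(f)
    return x

# Python's built-in hash is used here only as a stand-in for real hash functions;
# its output varies between runs unless PYTHONHASHSEED is fixed.
h = lambda f: hash(f)
xi = lambda f: 1 if hash((f, "sign")) % 2 == 0 else -1

print(hashing_vectorizer(["cat", "dog", "cat"], 4, h, xi))
```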
The above pseudocode actually converts each sample into a vector. An optimized version would instead only generate a stream of(h,ζ){\displaystyle (h,\zeta )}pairs and let the learning and prediction algorithms consume such streams; alinear modelcan then be implemented as a single hash table representing the coefficient vector.
Feature hashing generally suffers from hash collision, which means that there exist pairs of different tokens with the same hash:t≠t′,ϕ(t)=ϕ(t′)=v{\displaystyle t\neq t',\phi (t)=\phi (t')=v}. A machine learning model trained on feature-hashed words would then have difficulty distinguishingt{\displaystyle t}andt′{\displaystyle t'}, essentially becausev{\displaystyle v}ispolysemic.
Ift′{\displaystyle t'}is rare, then performance degradation is small, as the model could always just ignore the rare case, and pretend allv{\displaystyle v}meanst{\displaystyle t}. However, if both are common, then the degradation can be serious.
To handle this, one can train supervised hashing functions that avoid mapping common tokens to the same feature vectors.[5]
Ganchev and Dredze showed that in text classification applications with random hash functions and several tens of thousands of columns in the output vectors, feature hashing need not have an adverse effect on classification performance, even without the signed hash function.[3]
Weinberger et al. (2009) applied their version of feature hashing tomulti-task learning, and in particular,spam filtering, where the input features are pairs (user, feature) so that a single parameter vector captured per-user spam filters as well as a global filter for several hundred thousand users, and found that the accuracy of the filter went up.[2]
Chen et al. (2015) combined the idea of feature hashing andsparse matrixto construct "virtual matrices": large matrices with small storage requirements. The idea is to treat a matrixM∈Rn×n{\displaystyle M\in \mathbb {R} ^{n\times n}}as a dictionary, with keys inn×n{\displaystyle n\times n}, and values inR{\displaystyle \mathbb {R} }. Then, as usual in hashed dictionaries, one can use a hash functionh:N×N→m{\displaystyle h:\mathbb {N} \times \mathbb {N} \to m}, and thus represent a matrix as a vector inRm{\displaystyle \mathbb {R} ^{m}}, no matter how bign{\displaystyle n}is. With virtual matrices, they constructedHashedNets, which are large neural networks taking only small amounts of storage.[6]
Implementations of the hashing trick are present in:
|
https://en.wikipedia.org/wiki/Feature_hashing
|
Geohashis apublic domaingeocode systeminvented in 2008 by Gustavo Niemeyer[2]which encodes a geographic location into a short string of letters and digits. Similar ideas were introduced by G.M. Morton in 1966.[3]It is a hierarchical spatial data structure which subdivides space into buckets ofgridshape, which is one of the many applications of what is known as aZ-order curve, and generallyspace-filling curves.
Geohashes offer properties like arbitrary precision and the possibility of gradually removing characters from the end of the code to reduce its size (and gradually lose precision). Geohashing guarantees that the longer a shared prefix between two geohashes is, the spatially closer they are together. The reverse of this is not guaranteed, as two points can be very close but have a short or no shared prefix.
The core part of the Geohash algorithm, and the first initiative toward a similar solution, was documented in a 1966 report by G.M. Morton, "A Computer Oriented Geodetic Data Base and a New Technique in File Sequencing".[3]The Morton work was used for efficient implementations of theZ-order curve, like inthis modern (2014) Geohash-integer version(based on directly interleaving64-bit integers), but hisgeocodeproposal was nothuman-readableand was not popular.
Apparently, in the late 2000s, G. Niemeyer still didn't know about Morton's work, and reinvented it, adding the use ofbase32representation. In February 2008, together with the announcement of the system,[2]he launched the websitehttp://geohash.org, which allows users to convert geographic coordinates to shortURLswhich uniquely identify positions on theEarth, so that referencing them inemails,forums, andwebsitesis more convenient.
Many variations have been developed, includingOpenStreetMap'sshort link[4](usingbase64instead of base32) in 2009, the64-bit Geohash[5]in 2014, the exoticHilbert-Geohash[6]in 2016, and others.
To obtain the Geohash, the user provides an address to begeocoded, orlatitude and longitudecoordinates, in a single input box (most commonly used formats for latitude and longitude pairs are accepted), and performs the request.
Besides showing the latitude and longitude corresponding to the given Geohash, users who navigate to a Geohash at geohash.org are also presented with an embedded map, and may download aGPXfile, or transfer the waypoint directly to certainGPSreceivers. Links are also provided to external sites that may provide further details around the specified location.
For example, the coordinate pair57.64911,10.40744(near the tip of thepeninsulaofJutland, Denmark) produces a slightly shorter hash ofu4pruydqqvj.
The main usages of Geohashes are:
Geohashes have also been proposed to be used forgeotagging.
When used in a database, the structure of geohashed data has two advantages. First, data indexed by geohash will have all points for a given rectangular area in contiguous slices (the number of slices depends on the precision required and the presence of geohash "fault lines"). This is especially useful in database systems where queries on a single index are much easier or faster than multiple-index queries. Second, this index structure can be used for a quick-and-dirty proximity search: the closest points are often among the closest geohashes.
A formal description for Computational and Mathematical views.
For exact latitude and longitude translations Geohash is aspatial indexofbase 4, because it transforms the continuous latitude and longitude space coordinates into a hierarchical discrete grid, using a recurrent four-partition of the space. To be a compact code it usesbase 32and represents its values by the following alphabet, that is the "standard textual representation".
The "Geohash alphabet" (32ghs) uses all digits 0-9 and all lower case letters except "a", "i", "l" and "o".
For example, using the table above and the constantB=32{\displaystyle B=32}, the Geohashezs42can be converted to a decimal representation by ordinarypositional notation:
The geometry of the Geohash has a mixed spatial representation:
It is possible to build the "И-order curve" from the Z-order curve by merging neighboring cells and indexing the resulting rectangular grid by the functionj=⌊i2⌋{\displaystyle j=\left\lfloor {\frac {i}{2}}\right\rfloor }. The illustration shows how to obtain the grid of 32 rectangular cells from the grid of 64 square cells.
The most important property of Geohash for humans is that itpreservesspatial hierarchyin thecode prefixes.For example, in the "1 Geohash digit grid" illustration of 32 rectangles, above, the spatial region of the codee(rectangle of greyish blue circle at position 4,3) is preserved with prefixein the "2 digit grid" of 1024 rectangles (scale showingemand greyish green to blue circles at grid).
Using the hashezs42as an example, here is how it is decoded into a decimal latitude and longitude. The first step is decoding it from textual "base 32ghs", as showed above, to obtain the binary representation:
This operation results in thebits0110111111110000010000010. Starting to count from the left side with the digit 0 in the first position, the digits in the even positions form the longitude code (0111110000000), while the digits in the odd positions form the latitude code (101111001001).
Each binary code is then used in a series of divisions, considering one bit at a time, again from the left to the right side. For the latitude value, the interval −90 to +90 is divided by 2, producing two intervals: −90 to 0, and 0 to +90. Since the first bit is 1, the higher interval is chosen, and becomes the current interval. The procedure is repeated for all bits in the code. Finally, the latitude value is the center of the resulting interval. Longitudes are processed in an equivalent way, keeping in mind that the initial interval is −180 to +180.
For example, in the latitude code101111001001, the first bit is 1, so we know our latitude is somewhere between 0 and 90. Without any more bits, we'd guess the latitude was 45, giving us an error of ±45. Since more bits are available, we can continue with the next bit, and each subsequent bit halves this error. This table shows the effect of each bit. At each stage, the relevant half of the range is highlighted in green; a low bit selects the lower range, a high bit selects the upper range.
The column "mean value" shows the latitude, simply the mean value of the range. Each subsequent bit makes this value more precise.
(The numbers in the above table have been rounded to 3 decimal places for clarity)
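A minimal Python sketch of the bisection decoding described above (illustrative only), applied to the latitude and longitude bit strings of ezs42:

```python
def decode_bits(bits, lo, hi):
    """Refine [lo, hi] one bit at a time: a 1 selects the upper half, a 0 the lower."""
    for b in bits:
        mid = (lo + hi) / 2
        if b == "1":
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2, (hi - lo) / 2      # centre value and remaining error

lat, lat_err = decode_bits("101111001001", -90.0, 90.0)
lon, lon_err = decode_bits("0111110000000", -180.0, 180.0)
print(lat, lon)                              # approximately 42.605 and -5.603
```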
Final rounding should be done carefully in a way that the rounded value still lies within the error range of the decoded interval.
So while rounding 42.605 to 42.61 or 42.6 is correct, rounding to 43 is not.
Geohashes can be used to find points in proximity to each other based on a common prefix. However,edge caselocations close to each other but on opposite sides of the 180 degree meridian will result in Geohash codes with no common prefix (different longitudes for near physical locations). Points close to the North and South poles will have very different geohashes (different longitudes for near physical locations).
Two close locations on either side of the Equator (or Greenwich meridian) will not have a long common prefix since they belong to different 'halves' of the world. Put simply, one location's binary latitude (or longitude) will be 011111... and the other 100000...., so they will not have a common prefix and most bits will be flipped. This can also be seen as a consequence of relying on theZ-order curve(which could more appropriately be called an N-order visit in this case) for ordering the points, as two points close by might be visited at very different times. However, two points with a long common prefix will be close by.
In order to do a proximity search, one could compute the southwest corner (low geohash with low latitude and longitude) and northeast corner (high geohash with high latitude and longitude) of a bounding box and search for geohashes between those two. This search will retrieve all points in the z-order curve between the two corners, which can be far too many points. This method also breaks down at the 180th meridian and the poles. Solr uses a filter list of prefixes, by computing the prefixes of the nearest squares close to the geohash[1].
Since a geohash (in this implementation) is based oncoordinates of longitude and latitudethe distance between two geohashes reflects the distance in latitude/longitude coordinates between two points, which does not translate to actual distance, seeHaversine formula.
Example of non-linearity for the latitude-longitude system: near the Equator, one degree of longitude spans about 111 km, while near the poles it spans almost no distance at all, so a fixed geohash precision corresponds to very different physical cell sizes at different latitudes.
Note that these limitations are not due to geohashing, and not due to latitude-longitude coordinates, but due to the difficulty of mapping coordinates on a sphere (non linear and with wrapping of values, similar to modulo arithmetic) to two dimensional coordinates and the difficulty of exploring a two dimensional space uniformly. The first is related toGeographical coordinate systemandMap projection, and the other toHilbert curveandz-order curve. Once a coordinate system is found that represents points linearly in distance and wraps up at the edges, and can be explored uniformly, applying geohashing to those coordinates will not suffer from the limitations above.
While it is possible to apply geohashing to an area with aCartesian coordinate system, it would then only apply to the area where the coordinate system applies.
Despite those issues, there are possible workarounds, and the algorithm has been successfully used in Elasticsearch,[7]MongoDB,[8]HBase, Redis,[9]andAccumulo[10]to implement proximity searches.
An alternative to storing Geohashes as strings in a database areLocational codes, which are also called spatial keys and similar to QuadTiles.[11][12]
In somegeographical information systemsandBig Dataspatial databases, aHilbert curvebased indexation can be used as an alternative toZ-order curve, like in theS2 Geometry library.[13]
In 2019 a front-end was designed byQA Locate[14]in what they called GeohashPhrase[15]to use phrases to code Geohashes for easier communication via spoken English language. There were plans to make GeohashPhrase open source.[16]
The Geohash algorithm was put in thepublic domainby its inventor in a public announcement on February 26, 2008.[17]
While comparable algorithms have been successfully patented[18]and had copyright claimed upon,[19][20]GeoHash is based on an entirely different algorithm and approach.
Geohash is standardized as CTA-5009.[21]This standard follows the Wikipedia article as of the 2023 version but provides additional detail in a formal (normative) reference. In the absence of an official specification since the creation of Geohash, the CTA WAVE organization published CTA-5009 to aid in broader adoption and compatibility across implementers in the industry.
|
https://en.wikipedia.org/wiki/Geohash
|
Incomputer science,locality of reference, also known as theprinciple of locality,[1]is the tendency of a processor to access the same set of memory locations repetitively over a short period of time.[2]There are two basic types of reference locality – temporal and spatial locality. Temporal locality refers to the reuse of specific data and/or resources within a relatively small time duration. Spatial locality (also termeddata locality[3]) refers to the use of data elements within relatively close storage locations. Sequential locality, a special case of spatial locality, occurs when data elements are arranged and accessed linearly, such as traversing the elements in a one-dimensionalarray.
Locality is a type ofpredictablebehavior that occurs in computer systems. Systems that exhibit stronglocality of referenceare great candidates for performance optimization through the use of techniques such as caching,prefetchingfor memory, and the advancedbranch predictorsof a processor core.
There are several different types of locality of reference:
In order to benefit from temporal and spatial locality, which occur frequently, most of the information storage systems arehierarchical. Equidistant locality is usually supported by a processor's diverse nontrivial increment instructions. For branch locality, the contemporary processors have sophisticated branch predictors, and on the basis of this prediction the memory manager of the processor tries to collect and preprocess the data of plausible alternatives.
There are several reasons for locality. These reasons are either goals to achieve or circumstances to accept, depending on the aspect. The reasons below are notdisjoint; in fact, the list below goes from the most general case to special cases:
If most of the time the substantial portion of the references aggregate into clusters, and if the shape of this system of clusters can be well predicted, then it can be used for performance optimization. There are several ways to benefit from locality usingoptimizationtechniques. Common techniques are:
Hierarchical memory is a hardware optimization that takes the benefits of spatial and temporal locality and can be used on several levels of the memory hierarchy.Pagingobviously benefits from temporal and spatial locality. A cache is a simple example of exploiting temporal locality, because it is a specially designed, faster but smaller memory area, generally used to keep recently referenced data and data near recently referenced data, which can lead to potential performance increases.
Data elements in a cache do not necessarily correspond to data elements that are spatially close in the main memory; however, data elements are brought into cache onecache lineat a time. This means that spatial locality is again important: if one element is referenced, a few neighboring elements will also be brought into cache. Finally, temporal locality plays a role on the lowest level, since results that are referenced very closely together can be kept in themachine registers. Some programming languages (such asC) allow the programmer to suggest that certain variables be kept in registers.
Data locality is a typical memory reference feature of regular programs (though many irregular memory access patterns exist). It makes the hierarchical memory layout profitable. In computers, memory is divided into a hierarchy in order to speed up data accesses. The lower levels of the memory hierarchy tend to be slower, but larger. Thus, a program will achieve greater performance if it uses memory while it is cached in the upper levels of the memory hierarchy and avoids bringing other data into the upper levels of the hierarchy that will displace data that will be used shortly in the future. This is an ideal, and sometimes cannot be achieved.
Typical memory hierarchy (access times and cache sizes are approximations of typical values used as of 2013[update]for the purpose of discussion; actual values and actual numbers of levels in the hierarchy vary):
Modern machines tend to read blocks of lower memory into the next level of the memory hierarchy. If this displaces used memory, theoperating systemtries to predict which data will be accessed least (or latest) and move it down the memory hierarchy. Prediction algorithms tend to be simple to reduce hardware complexity, though they are becoming somewhat more complicated.
A common example ismatrix multiplication:
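The example code is not reproduced here; a minimal Python sketch of the two loop orders discussed below (assuming row-major arrays, as in C; in interpreted Python the cache effect is masked by interpreter overhead, so this only illustrates the access pattern):

```python
import numpy as np

n = 100
A, B = np.random.rand(n, n), np.random.rand(n, n)

# First case: i-j-k order. The innermost index k walks down a column of B,
# so B[k][j] jumps across rows (a new cache line on almost every iteration).
C1 = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        for k in range(n):
            C1[i][j] += A[i][k] * B[k][j]

# Second case: i-k-j order. The innermost index j walks along rows of B and C,
# so consecutive iterations touch contiguous memory.
C2 = np.zeros((n, n))
for i in range(n):
    for k in range(n):
        for j in range(n):
            C2[i][j] += A[i][k] * B[k][j]

assert np.allclose(C1, C2)   # same result, different memory access pattern
```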
By switching the looping order forjandk, the speedup in large matrix multiplications becomes dramatic, at least for languages that put contiguous array elements in the last dimension. This will not change the mathematical result, but it improves efficiency. In this case, "large" means, approximately, more than 100,000 elements in each matrix, or enough addressable memory such that the matrices will not fit in L1 and L2 caches.
The reason for this speedup is that in the first case, the reads ofA[i][k]are in cache (since thekindex is the contiguous, last dimension), butB[k][j]is not, so there is a cache miss penalty onB[k][j].C[i][j]is irrelevant, because it can behoistedout of the inner loop -- the loop variable there isk.
In the second case, the reads and writes ofC[i][j]are both in cache, the reads ofB[k][j]are in cache, and the read ofA[i][k]can be hoisted out of the inner loop.
Thus, the second example has no cache miss penalty in the inner loop while the first example has a cache penalty.
On a year 2014 processor, the second case is approximately five times faster than the first case, when written inCand compiled withgcc -O3. (A careful examination of the disassembled code shows that in the first case,GCCusesSIMDinstructions and in the second case it does not, but the cache penalty is much worse than the SIMD gain.)[citation needed]
Temporal locality can also be improved in the above example by using a technique calledblocking. The larger matrix can be divided into evenly sized sub-matrices, so that the smaller blocks can be referenced (multiplied) several times while in memory. Note that this example works for square matrices of dimensions SIZE x SIZE, but it can easily be extended for arbitrary matrices by substituting SIZE_I, SIZE_J and SIZE_K where appropriate.
The temporal locality of the above solution is provided because a block can be used several times before moving on, so that it is moved in and out of memory less often. Spatial locality is improved because elements with consecutive memory addresses tend to be pulled up the memory hierarchy together.
|
https://en.wikipedia.org/wiki/Locality_of_reference
|
In mathematics, theCartan decompositionis a decomposition of asemisimpleLie grouporLie algebra, which plays an important role in their structure theory andrepresentation theory. It generalizes thepolar decompositionorsingular value decompositionof matrices. Its history can be traced to the 1880s work ofÉlie CartanandWilhelm Killing.[1]
Letg{\displaystyle {\mathfrak {g}}}be a realsemisimple Lie algebraand letB(⋅,⋅){\displaystyle B(\cdot ,\cdot )}be itsKilling form. Aninvolutionong{\displaystyle {\mathfrak {g}}}is a Lie algebraautomorphismθ{\displaystyle \theta }ofg{\displaystyle {\mathfrak {g}}}whose square is equal to the identity. Such an involution is called aCartan involutionong{\displaystyle {\mathfrak {g}}}ifBθ(X,Y):=−B(X,θY){\displaystyle B_{\theta }(X,Y):=-B(X,\theta Y)}is apositive definite bilinear form.
Two involutionsθ1{\displaystyle \theta _{1}}andθ2{\displaystyle \theta _{2}}are considered equivalent if they differ only by aninner automorphism.
Any real semisimple Lie algebra has a Cartan involution, and any two Cartan involutions are equivalent.
Letθ{\displaystyle \theta }be an involution on a Lie algebrag{\displaystyle {\mathfrak {g}}}. Sinceθ2=1{\displaystyle \theta ^{2}=1}, the linear mapθ{\displaystyle \theta }has the two eigenvalues±1{\displaystyle \pm 1}. Ifk{\displaystyle {\mathfrak {k}}}andp{\displaystyle {\mathfrak {p}}}denote the eigenspaces corresponding to +1 and -1, respectively, theng=k⊕p{\displaystyle {\mathfrak {g}}={\mathfrak {k}}\oplus {\mathfrak {p}}}. Sinceθ{\displaystyle \theta }is a Lie algebra automorphism, the Lie bracket of two of its eigenspaces is contained in the eigenspace corresponding to the product of their eigenvalues. It follows that
Thusk{\displaystyle {\mathfrak {k}}}is a Lie subalgebra, while any subalgebra ofp{\displaystyle {\mathfrak {p}}}is commutative.
Conversely, a decompositiong=k⊕p{\displaystyle {\mathfrak {g}}={\mathfrak {k}}\oplus {\mathfrak {p}}}with these extra properties determines an involutionθ{\displaystyle \theta }ong{\displaystyle {\mathfrak {g}}}that is+1{\displaystyle +1}onk{\displaystyle {\mathfrak {k}}}and−1{\displaystyle -1}onp{\displaystyle {\mathfrak {p}}}.
Such a pair(k,p){\displaystyle ({\mathfrak {k}},{\mathfrak {p}})}is also called aCartan pairofg{\displaystyle {\mathfrak {g}}},
and(g,k){\displaystyle ({\mathfrak {g}},{\mathfrak {k}})}is called asymmetric pair. This notion of a Cartan pair here is not to be confused with thedistinct notioninvolving the relative Lie algebra cohomologyH∗(g,k){\displaystyle H^{*}({\mathfrak {g}},{\mathfrak {k}})}.
The decompositiong=k⊕p{\displaystyle {\mathfrak {g}}={\mathfrak {k}}\oplus {\mathfrak {p}}}associated to a Cartan involution is called aCartan decompositionofg{\displaystyle {\mathfrak {g}}}. The special feature of a Cartan decomposition is that the Killing form is negative definite onk{\displaystyle {\mathfrak {k}}}and positive definite onp{\displaystyle {\mathfrak {p}}}. Furthermore,k{\displaystyle {\mathfrak {k}}}andp{\displaystyle {\mathfrak {p}}}are orthogonal complements of each other with respect to the Killing form ong{\displaystyle {\mathfrak {g}}}.
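As a concrete illustration (a standard example, stated here with the Cartan involution θ(X) = −X⊤), take g = sl(2,R). Then
{\displaystyle {\mathfrak {k}}=\left\{{\begin{pmatrix}0&t\\-t&0\end{pmatrix}}:t\in \mathbb {R} \right\}\cong {\mathfrak {so}}(2),\qquad {\mathfrak {p}}=\left\{{\begin{pmatrix}a&b\\b&-a\end{pmatrix}}:a,b\in \mathbb {R} \right\},}
and the Killing form B(X,Y) = 4 tr(XY) satisfies B(X,X) = −8t² < 0 on k and B(X,X) = 8(a² + b²) > 0 on p, as the general statement requires.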
LetG{\displaystyle G}be a non-compact semisimple Lie group andg{\displaystyle {\mathfrak {g}}}its Lie algebra. Letθ{\displaystyle \theta }be a Cartan involution ong{\displaystyle {\mathfrak {g}}}and let(k,p){\displaystyle ({\mathfrak {k}},{\mathfrak {p}})}be the resulting Cartan pair. LetK{\displaystyle K}be theanalytic subgroupofG{\displaystyle G}with Lie algebrak{\displaystyle {\mathfrak {k}}}. Then:
The automorphismΘ{\displaystyle \Theta }is also called theglobal Cartan involution, and the diffeomorphismK×p→G{\displaystyle K\times {\mathfrak {p}}\rightarrow G}is called theglobal Cartan decomposition. If we writeP=exp(p)⊂G{\displaystyle P=\mathrm {exp} ({\mathfrak {p}})\subset G}this says that the product mapK×P→G{\displaystyle K\times P\rightarrow G}is a diffeomorphism soG=KP{\displaystyle G=KP}.
For the general linear group,X↦(X−1)T{\displaystyle X\mapsto (X^{-1})^{T}}is a Cartan involution.[clarification needed]
A refinement of the Cartan decomposition for symmetric spaces of compact or noncompact type states that the maximal Abelian subalgebrasa{\displaystyle {\mathfrak {a}}}inp{\displaystyle {\mathfrak {p}}}are unique up to conjugation byK{\displaystyle K}. Moreover,
whereA=ea{\displaystyle A=e^{\mathfrak {a}}}.
In the compact and noncompact case the global Cartan decomposition thus implies
Geometrically the image of the subgroupA{\displaystyle A}inG/K{\displaystyle G/K}is atotally geodesicsubmanifold.
Considergln(R){\displaystyle {\mathfrak {gl}}_{n}(\mathbb {R} )}with the Cartan involutionθ(X)=−XT{\displaystyle \theta (X)=-X^{T}}.[clarification needed]Thenk=son(R){\displaystyle {\mathfrak {k}}={\mathfrak {so}}_{n}(\mathbb {R} )}is the real Lie algebra of skew-symmetric matrices, so thatK=SO(n){\displaystyle K=\mathrm {SO} (n)}, whilep{\displaystyle {\mathfrak {p}}}is the subspace of symmetric matrices. Thus the exponential map is a diffeomorphism fromp{\displaystyle {\mathfrak {p}}}onto the space of positive definite matrices. Up to this exponential map, the global Cartan decomposition is thepolar decompositionof a matrix. The polar decomposition of an invertible matrix is unique.
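Concretely, writing the factors out for an invertible real matrix g (an illustrative restatement of the correspondence, not an additional result), one has
{\displaystyle g=k\exp(S),\qquad \exp(S)=(g^{\mathsf {T}}g)^{1/2},\quad S=S^{\mathsf {T}},\qquad k=g\,(g^{\mathsf {T}}g)^{-1/2}\in \mathrm {O} (n),}
so the orthogonal factor plays the role of the element of K (it lies in SO(n) when det g > 0) and the positive definite factor is the exponential of the symmetric matrix S in p.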
|
https://en.wikipedia.org/wiki/Cartan_decomposition
|
In themathematicaldiscipline oflinear algebra, amatrix decompositionormatrix factorizationis afactorizationof amatrixinto a product of matrices. There are many different matrix decompositions; each finds use among a particular class of problems.
Innumerical analysis, different decompositions are used to implement efficient matrixalgorithms.
For example, when solving asystem of linear equationsAx=b{\displaystyle A\mathbf {x} =\mathbf {b} }, the matrixAcan be decomposed via theLU decomposition. The LU decomposition factorizes a matrix into alower triangular matrixLand anupper triangular matrixU. The systemsL(Ux)=b{\displaystyle L(U\mathbf {x} )=\mathbf {b} }andUx=L−1b{\displaystyle U\mathbf {x} =L^{-1}\mathbf {b} }require fewer additions and multiplications to solve, compared with the original systemAx=b{\displaystyle A\mathbf {x} =\mathbf {b} }, though one might require significantly more digits in inexact arithmetic such asfloating point.
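Spelled out, once the factorization A = LU is available, the system is solved by two triangular sweeps, forward substitution followed by back substitution:
{\displaystyle A\mathbf {x} =\mathbf {b} \quad \Longleftrightarrow \quad L\mathbf {y} =\mathbf {b} \ {\text{(solve for }}\mathbf {y} {\text{)}},\qquad U\mathbf {x} =\mathbf {y} \ {\text{(solve for }}\mathbf {x} {\text{)}},}
each of which costs only on the order of n² operations for an n × n system, compared with the order of n³ operations needed for the factorization itself.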
Similarly, theQR decompositionexpressesAasQRwithQanorthogonal matrixandRan upper triangular matrix. The systemQ(Rx) =bis solved byRx=QTb=c, and the systemRx=cis solved by 'back substitution'. The number of additions and multiplications required is about twice that of using the LU solver, but no more digits are required in inexact arithmetic because the QR decomposition isnumerically stable.
TheJordan normal formand theJordan–Chevalley decomposition
Scale-invariant decompositions are variants of existing matrix decompositions, such as the SVD, that are invariant with respect to diagonal scaling.
Analogous scale-invariant decompositions can be derived from other matrix decompositions; for example, to obtain scale-invariant eigenvalues.[3][4]
There exist analogues of the SVD, QR, LU and Cholesky factorizations for quasimatrices and cmatrices or continuous matrices.[13] A ‘quasimatrix’ is, like a matrix, a rectangular scheme whose elements are indexed, but one discrete index is replaced by a continuous index. Likewise, a ‘cmatrix’ is continuous in both indices. As an example of a cmatrix, one can think of the kernel of an integral operator.
These factorizations are based on early work byFredholm (1903),Hilbert (1904)andSchmidt (1907). For an account, and a translation to English of the seminal papers, seeStewart (2011).
|
https://en.wikipedia.org/wiki/Matrix_decomposition#Algebraic_polar_decomposition
|
Inmathematics, specificallymeasure theory, acomplex measuregeneralizes the concept ofmeasureby letting it havecomplexvalues.[1]In other words, one allows forsetswhose size (length, area, volume) is acomplex number.
Formally, acomplex measureμ{\displaystyle \mu }on ameasurable space(X,Σ){\displaystyle (X,\Sigma )}is a complex-valuedfunction
that issigma-additive. In other words, for anysequence(An)n∈N{\displaystyle (A_{n})_{n\in \mathbb {N} }}ofdisjoint setsbelonging toΣ{\displaystyle \Sigma }, one has
{\displaystyle \mu {\Bigl (}\bigcup _{n=1}^{\infty }A_{n}{\Bigr )}=\sum _{n=1}^{\infty }\mu (A_{n}).}
As⋃n=1∞An=⋃n=1∞Aσ(n){\displaystyle \displaystyle \bigcup _{n=1}^{\infty }A_{n}=\bigcup _{n=1}^{\infty }A_{\sigma (n)}}for any permutation (bijection)σ:N→N{\displaystyle \sigma :\mathbb {N} \to \mathbb {N} }, it follows that∑n=1∞μ(An){\displaystyle \displaystyle \sum _{n=1}^{\infty }\mu (A_{n})}converges unconditionally (hence, sinceC{\displaystyle \mathbb {C} }is finite dimensional, the series converges absolutely).
One can define theintegralof a complex-valuedmeasurable functionwith respect to a complex measure in the same way as theLebesgue integralof areal-valued measurable function with respect to anon-negative measure, by approximating a measurable function withsimple functions.[2]Just as in the case of ordinary integration, this more general integral might fail to exist, or its value might be infinite (thecomplex infinity).
Another approach is to not develop a theory of integration from scratch, but rather use the already available concept of integral of a real-valued function with respect to a non-negative measure.[3] To that end, it is a quick check that the real and imaginary parts μ1 and μ2 of a complex measure μ are finite-valued signed measures. One can apply the Hahn–Jordan decomposition to these measures to split them as
{\displaystyle \mu _{1}=\mu _{1}^{+}-\mu _{1}^{-}}
and
{\displaystyle \mu _{2}=\mu _{2}^{+}-\mu _{2}^{-},}
where μ1+, μ1−, μ2+, μ2−are finite-valued non-negative measures (which are unique in some sense). Then, for a measurable functionfwhich isreal-valuedfor the moment, one can define
{\displaystyle \int _{X}f\,d\mu :=\left(\int _{X}f\,d\mu _{1}^{+}-\int _{X}f\,d\mu _{1}^{-}\right)+i\left(\int _{X}f\,d\mu _{2}^{+}-\int _{X}f\,d\mu _{2}^{-}\right)}
as long as the expression on the right-hand side is defined, that is, all four integrals exist and when adding them up one does not encounter theindeterminate∞−∞.[3]
Given now acomplex-valuedmeasurable function, one can integrate its real and imaginary components separately as illustrated above and define, as expected,
{\displaystyle \int _{X}f\,d\mu :=\int _{X}\operatorname {Re} (f)\,d\mu +i\int _{X}\operatorname {Im} (f)\,d\mu .}
For a complex measure μ, one defines itsvariation, orabsolute value, |μ| by the formula
{\displaystyle |\mu |(A)=\sup \sum _{n=1}^{\infty }|\mu (A_{n})|,}
whereAis in Σ and thesupremumruns over all sequences ofdisjoint sets(An)nwhoseunionisA. Taking only finite partitions of the setAintomeasurable subsets, one obtains an equivalent definition.
It turns out that |μ| is a non-negative finite measure. In the same way as a complex number can be represented in apolar form, one has apolar decompositionfor a complex measure: There exists a measurable function θ with real values such that
{\displaystyle d\mu =e^{i\theta }\,d|\mu |,}
meaning
{\displaystyle \int _{X}f\,d\mu =\int _{X}fe^{i\theta }\,d|\mu |}
for anyabsolutely integrablemeasurable functionf, i.e.,fsatisfying
{\displaystyle \int _{X}|f|\,d|\mu |<\infty .}
One can use theRadon–Nikodym theoremto prove that the variation is a measure and the existence of thepolar decomposition.
The sum of two complex measures is a complex measure, as is the product of a complex measure by a complex number. That is to say, the set of all complex measures on a measure space (X, Σ) forms avector spaceover the complex numbers. Moreover, thetotal variation‖⋅‖{\displaystyle \|\cdot \|}defined as
{\displaystyle \|\mu \|:=|\mu |(X)}
is anorm, with respect to which the space of complex measures is aBanach space.
|
https://en.wikipedia.org/wiki/Complex_measure#Variation_of_a_complex_measure_and_polar_decomposition
|
Inmathematics,Lie group decompositionsare used to analyse the structure ofLie groupsand associated objects, by showing how they are built up out ofsubgroups. They are essential technical tools in therepresentation theoryof Lie groups andLie algebras; they can also be used to study thealgebraic topologyof such groups and associatedhomogeneous spaces. Since the use of Lie group methods became one of the standard techniques in twentieth century mathematics, many phenomena can now be referred back to decompositions.
The same ideas are often applied to Lie groups, Lie algebras,algebraic groupsandp-adic numberanalogues, making it harder to summarise the facts into a unified theory.
|
https://en.wikipedia.org/wiki/Lie_group_decomposition
|
Inquantum information theory,quantum state purificationrefers to the process of representing amixed stateas apure quantum stateof higher-dimensionalHilbert space. The purification allows the original mixed state to be recovered by taking thepartial traceover the additional degrees of freedom. The purification is not unique, the different purifications that can lead to the same mixed states are limited by theSchrödinger–HJW theorem.
Purification is used in algorithms such asentanglement distillation,magic state distillationandalgorithmic cooling.
LetHS{\displaystyle {\mathcal {H}}_{S}}be afinite-dimensionalcomplexHilbert space, and consider a generic (possiblymixed)quantum stateρ{\displaystyle \rho }defined onHS{\displaystyle {\mathcal {H}}_{S}}and admitting a decomposition of the formρ=∑ipi|ϕi⟩⟨ϕi|{\displaystyle \rho =\sum _{i}p_{i}|\phi _{i}\rangle \langle \phi _{i}|}for a collection of (not necessarily mutually orthogonal) states|ϕi⟩∈HS{\displaystyle |\phi _{i}\rangle \in {\mathcal {H}}_{S}}and coefficientspi≥0{\displaystyle p_{i}\geq 0}such that∑ipi=1{\textstyle \sum _{i}p_{i}=1}. Note that any quantum state can be written in such a way for some{|ϕi⟩}i{\displaystyle \{|\phi _{i}\rangle \}_{i}}and{pi}i{\displaystyle \{p_{i}\}_{i}}.[1]
Any suchρ{\displaystyle \rho }can bepurified, that is, represented as thepartial traceof apure statedefined in a larger Hilbert space. More precisely, it is always possible to find a (finite-dimensional) Hilbert spaceHA{\displaystyle {\mathcal {H}}_{A}}and a pure state|ΨSA⟩∈HS⊗HA{\displaystyle |\Psi _{SA}\rangle \in {\mathcal {H}}_{S}\otimes {\mathcal {H}}_{A}}such thatρ=TrA(|ΨSA⟩⟨ΨSA|){\displaystyle \rho =\operatorname {Tr} _{A}{\big (}|\Psi _{SA}\rangle \langle \Psi _{SA}|{\big )}}. Furthermore, the states|ΨSA⟩{\displaystyle |\Psi _{SA}\rangle }satisfying this are all and only those of the form|ΨSA⟩=∑ipi|ϕi⟩⊗|ai⟩{\displaystyle |\Psi _{SA}\rangle =\sum _{i}{\sqrt {p_{i}}}|\phi _{i}\rangle \otimes |a_{i}\rangle }for some orthonormal basis{|ai⟩}i⊂HA{\displaystyle \{|a_{i}\rangle \}_{i}\subset {\mathcal {H}}_{A}}. The state|ΨSA⟩{\displaystyle |\Psi _{SA}\rangle }is then referred to as the "purification ofρ{\displaystyle \rho }". Since the auxiliary space and the basis can be chosen arbitrarily, the purification of a mixed state is not unique; in fact, there are infinitely many purifications of a given mixed state.[2]Because all of them admit a decomposition in the form given above, given any pair of purifications|Ψ⟩,|Ψ′⟩∈HS⊗HA{\displaystyle |\Psi \rangle ,|\Psi '\rangle \in {\mathcal {H}}_{S}\otimes {\mathcal {H}}_{A}}, there is always some unitary operationU:HA→HA{\displaystyle U:{\mathcal {H}}_{A}\to {\mathcal {H}}_{A}}such that|Ψ′⟩=(I⊗U)|Ψ⟩.{\displaystyle |\Psi '\rangle =(I\otimes U)|\Psi \rangle .}
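For example (a standard illustration), the maximally mixed state of a single qubit is purified by a maximally entangled two-qubit state:
{\displaystyle \rho ={\tfrac {1}{2}}{\bigl (}|0\rangle \langle 0|+|1\rangle \langle 1|{\bigr )},\qquad |\Psi _{SA}\rangle ={\tfrac {1}{\sqrt {2}}}{\bigl (}|0\rangle \otimes |0\rangle +|1\rangle \otimes |1\rangle {\bigr )},\qquad \operatorname {Tr} _{A}{\bigl (}|\Psi _{SA}\rangle \langle \Psi _{SA}|{\bigr )}=\rho .}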
The Schrödinger–HJW theorem is a result about the realization of a mixed state of a quantum system as an ensemble of pure quantum states and the relation between the corresponding purifications of the density operators. The theorem is named after Erwin Schrödinger, who proved it in 1936,[3] and after Lane P. Hughston, Richard Jozsa and William Wootters, who rediscovered it in 1993.[4] The result was also found independently (albeit partially) by Nicolas Gisin in 1989,[5] and by Nicolas Hadjisavvas building upon work by E. T. Jaynes of 1957,[6][7] while a significant part of it was likewise independently discovered by N. David Mermin in 1999, who discovered the link with Schrödinger's work.[8] Thanks to its complicated history, it is also known by various other names such as the GHJW theorem,[9] the HJW theorem, and the purification theorem.
Consider a mixed quantum stateρ{\displaystyle \rho }with two different realizations as ensemble of pure states asρ=∑ipi|ϕi⟩⟨ϕi|{\textstyle \rho =\sum _{i}p_{i}|\phi _{i}\rangle \langle \phi _{i}|}andρ=∑jqj|φj⟩⟨φj|{\textstyle \rho =\sum _{j}q_{j}|\varphi _{j}\rangle \langle \varphi _{j}|}. Here both|ϕi⟩{\displaystyle |\phi _{i}\rangle }and|φj⟩{\displaystyle |\varphi _{j}\rangle }are not assumed to be mutually orthogonal. There will be two corresponding purifications of the mixed stateρ{\displaystyle \rho }reading as follows:
The sets{|ai⟩}{\displaystyle \{|a_{i}\rangle \}}and{|bj⟩}{\displaystyle \{|b_{j}\rangle \}}are two collections of orthonormal bases of the respective auxiliary spaces. These two purifications only differ by a unitary transformation acting on the auxiliary space, namely, there exists a unitary matrixUA{\displaystyle U_{A}}such that|ΨSA1⟩=(I⊗UA)|ΨSA2⟩{\displaystyle |\Psi _{SA}^{1}\rangle =(I\otimes U_{A})|\Psi _{SA}^{2}\rangle }.[10]Therefore,|ΨSA1⟩=∑jqj|φj⟩⊗UA|bj⟩{\textstyle |\Psi _{SA}^{1}\rangle =\sum _{j}{\sqrt {q_{j}}}|\varphi _{j}\rangle \otimes U_{A}|b_{j}\rangle }, which means that we can realize the different ensembles of a mixed state just by making different measurements on the purifying system.
|
https://en.wikipedia.org/wiki/Purification_of_quantum_state
|
Inalgebra, theelementary divisorsof amoduleover aprincipal ideal domain(PID) occur in one form of thestructure theorem for finitely generated modules over a principal ideal domain.
IfR{\displaystyle R}is a PID andM{\displaystyle M}afinitely generatedR{\displaystyle R}-module, thenMisisomorphicto a finitedirect sumof the form
{\displaystyle M\cong R^{r}\oplus \bigoplus _{i}R/(q_{i}),}
where the(qi){\displaystyle (q_{i})}are nonzeroprimary ideals.
The list of primary ideals is uniqueup toorder (but a given ideal may be present more than once, so the list represents amultisetof primary ideals); the elementsqi{\displaystyle q_{i}}are unique only up toassociatedness, and are called theelementary divisors. Note that in a PID, the nonzero primary ideals are powers of prime ideals, so the elementary divisors can be written as powersqi=piri{\displaystyle q_{i}=p_{i}^{r_{i}}}ofirreducible elements. The nonnegativeintegerr{\displaystyle r}is called thefree rankorBetti numberof the moduleM{\displaystyle M}.
The module is determined up to isomorphism by specifying its free rank r, and, for each class of associated irreducible elements p and each positive integer k, the number of times that pk occurs among the elementary divisors. The elementary divisors can be obtained from the list of invariant factors of the module by decomposing each of them as far as possible into pairwise relatively prime (non-unit) factors, which will be powers of irreducible elements. This decomposition corresponds to maximally decomposing each submodule corresponding to an invariant factor by using the Chinese remainder theorem for R. Conversely, knowing the multiset M of elementary divisors, the invariant factors can be found, starting from the final one (which is a multiple of all others), as follows. For each irreducible element p such that some power pk occurs in M, take the highest such power, remove it from M, and multiply these powers together for all (classes of associated) p to give the final invariant factor; as long as M is non-empty, repeat to find the invariant factors before it.
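For example (an illustrative case over R = Z with free rank 0), if the multiset of elementary divisors is {2, 4, 3}, the procedure first multiplies the highest prime powers 4 and 3 to obtain the final invariant factor 12, leaving {2} for the preceding one, so that
{\displaystyle M\cong \mathbb {Z} /(2)\oplus \mathbb {Z} /(4)\oplus \mathbb {Z} /(3)\cong \mathbb {Z} /(2)\oplus \mathbb {Z} /(12),\qquad 2\mid 12.}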
|
https://en.wikipedia.org/wiki/Elementary_divisors
|
Theinvariant factorsof amoduleover aprincipal ideal domain(PID) occur in one form of thestructure theorem for finitely generated modules over a principal ideal domain.
IfR{\displaystyle R}is aPIDandM{\displaystyle M}afinitely generatedR{\displaystyle R}-module, then
{\displaystyle M\cong R^{r}\oplus R/(a_{1})\oplus R/(a_{2})\oplus \cdots \oplus R/(a_{m})}
for some integerr≥0{\displaystyle r\geq 0}and a (possibly empty) list of nonzero elementsa1,…,am∈R{\displaystyle a_{1},\ldots ,a_{m}\in R}for whicha1∣a2∣⋯∣am{\displaystyle a_{1}\mid a_{2}\mid \cdots \mid a_{m}}. The nonnegative integerr{\displaystyle r}is called thefree rankorBetti numberof the moduleM{\displaystyle M}, whilea1,…,am{\displaystyle a_{1},\ldots ,a_{m}}are theinvariant factorsofM{\displaystyle M}and are unique up toassociatedness.
The invariant factors of amatrixover a PID occur in theSmith normal formand provide a means of computing the structure of a module from a set of generators and relations.
|
https://en.wikipedia.org/wiki/Invariant_factors
|
Inmathematics, in the field ofabstract algebra, thestructure theorem for finitely generated modules over a principal ideal domainis a generalization of thefundamental theorem of finitely generated abelian groupsand roughly states thatfinitely generatedmodulesover aprincipal ideal domain(PID) can be uniquely decomposed in much the same way thatintegershave aprime factorization. The result provides a simple framework to understand various canonical form results forsquare matricesoverfields.
When avector spaceover a fieldFhas afinitegenerating set, then one may extract from it abasisconsisting of a finite numbernof vectors, and the space is thereforeisomorphictoFn. The corresponding statement withFgeneralized to aprincipal ideal domainRis no longer true, since a basis for afinitely generated moduleoverRmight not exist. However such a module is still isomorphic to aquotientof some moduleRnwithnfinite (to see this it suffices to construct the morphism that sends the elements of the canonical basis ofRnto the generators of the module, and take the quotient by itskernel.) By changing the choice of generating set, one can in fact describe the module as the quotient of someRnby a particularly simplesubmodule, and this is the structure theorem.
The structure theorem for finitely generated modules over a principal ideal domain usually appears in the following two forms.
For every finitely generated moduleMover a principal ideal domainR, there is a unique decreasing sequence ofproperideals(d1)⊇(d2)⊇⋯⊇(dn){\displaystyle (d_{1})\supseteq (d_{2})\supseteq \cdots \supseteq (d_{n})}such thatMis isomorphic to thesumofcyclic modules:
{\displaystyle M\cong \bigoplus _{i=1}^{n}R/(d_{i})=R/(d_{1})\oplus R/(d_{2})\oplus \cdots \oplus R/(d_{n}).}
The generatorsdi{\displaystyle d_{i}}of the ideals are unique up to multiplication by aunit, and are calledinvariant factorsofM. Since the ideals should be proper, these factors must not themselves be invertible (this avoids trivial factors in the sum), and the inclusion of the ideals means one has divisibilityd1|d2|⋯|dn{\displaystyle d_{1}\,|\,d_{2}\,|\,\cdots \,|\,d_{n}}. The free part is visible in the part of the decomposition corresponding to factorsdi=0{\displaystyle d_{i}=0}. Such factors, if any, occur at the end of the sequence.
While the direct sum is uniquely determined byM, the isomorphism giving the decomposition itself isnot uniquein general. For instance ifRis actually a field, then all occurring ideals must be zero, and one obtains the decomposition of a finite dimensional vector space into a direct sum of one-dimensionalsubspaces; the number of such factors is fixed, namely the dimension of the space, but there is a lot of freedom for choosing the subspaces themselves (ifdimM> 1).
The nonzerodi{\displaystyle d_{i}}elements, together with the number ofdi{\displaystyle d_{i}}which are zero, form acomplete set of invariantsfor the module. Explicitly, this means that any two modules sharing the same set of invariants are necessarily isomorphic.
Some prefer to write the free part ofMseparately:
where the visibledi{\displaystyle d_{i}}are nonzero, andfis the number ofdi{\displaystyle d_{i}}'s in the original sequence which are 0.
Every finitely generated moduleMover a principal ideal domainRis isomorphic to one of the form
{\displaystyle \bigoplus _{i}R/(q_{i}),}
where(qi)≠R{\displaystyle (q_{i})\neq R}and the(qi){\displaystyle (q_{i})}areprimary ideals. Theqi{\displaystyle q_{i}}are unique (up to multiplication by units).
The elementsqi{\displaystyle q_{i}}are called theelementary divisorsofM. In a PID, nonzero primary ideals are powers of primes, and so(qi)=(piri)=(pi)ri{\displaystyle (q_{i})=(p_{i}^{r_{i}})=(p_{i})^{r_{i}}}. Whenqi=0{\displaystyle q_{i}=0}, the resulting indecomposable module isR{\displaystyle R}itself, and this is inside the part ofMthat is a free module.
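For instance, over R = Z the two forms of the theorem describe the same group in two ways:
{\displaystyle \mathbb {Z} /(6)\oplus \mathbb {Z} /(60)\;\cong \;\mathbb {Z} /(2)\oplus \mathbb {Z} /(3)\oplus \mathbb {Z} /(4)\oplus \mathbb {Z} /(3)\oplus \mathbb {Z} /(5),}
with invariant factors 6 ∣ 60 on the left and elementary divisors 2, 4, 3, 3, 5 on the right.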
The summandsR/(qi){\displaystyle R/(q_{i})}areindecomposable, so the primary decomposition is a decomposition into indecomposable modules, and thus every finitely generated module over a PID is acompletely decomposable module. Since PID's areNoetherian rings, this can be seen as a manifestation of theLasker-Noether theorem.
As before, it is possible to write the free part (whereqi=0{\displaystyle q_{i}=0}) separately and expressMas
where the visibleqi{\displaystyle q_{i}}are nonzero.
One proof proceeds as follows:
This yields the invariant factor decomposition, and the diagonal entries of Smith normal form are the invariant factors.
Another outline of a proof:
Consider the quotient map M → M/tM, where tM denotes the torsion submodule of M. The quotient M/tM is a finitely generated torsion-free module, and such a module over a commutative PID is a free module of finite rank, so it is isomorphic to Rn{\displaystyle R^{n}} for a nonnegative integer n.
Since every free module is a projective module, there exists a right inverse of the projection map (it suffices to lift each of the generators of M/tM into M). By the splitting lemma (left split), M splits as M=tM⊕F{\displaystyle M=tM\oplus F}.
This includes the classification of finite-dimensional vector spaces as a special case, whereR=K{\displaystyle R=K}. Since fields have no non-trivial ideals, every finitely generated vector space is free.
TakingR=Z{\displaystyle R=\mathbb {Z} }yields thefundamental theorem of finitely generated abelian groups.
LetTbe a linear operator on a finite-dimensional vector spaceVoverK. TakingR=K[T]{\displaystyle R=K[T]}, thealgebraofpolynomialswith coefficients inKevaluated atT, yields structure information aboutT.Vcan be viewed as a finitely generated module overK[T]{\displaystyle K[T]}. The last invariant factor is theminimal polynomial, and the product of invariant factors is thecharacteristic polynomial. Combined with a standard matrix form forK[T]/p(T){\displaystyle K[T]/p(T)}, this yields variouscanonical forms:
While the invariants (rank, invariant factors, and elementary divisors) are unique, the isomorphism betweenMand itscanonical formis not unique, and does not even preserve thedirect sumdecomposition. This follows because there are non-trivialautomorphismsof these modules which do not preserve the summands.
However, one has a canonical torsion submoduleT, and similar canonical submodules corresponding to each (distinct) invariant factor, which yield a canonical sequence:
Comparecomposition seriesinJordan–Hölder theorem.
For instance, ifM≈Z⊕Z/2{\displaystyle M\approx \mathbf {Z} \oplus \mathbf {Z} /2}, and(1,0¯),(0,1¯){\displaystyle (1,{\bar {0}}),(0,{\bar {1}})}is one basis, then(1,1¯),(0,1¯){\displaystyle (1,{\bar {1}}),(0,{\bar {1}})}is another basis, and the change of basis matrix[1011]{\displaystyle {\begin{bmatrix}1&0\\1&1\end{bmatrix}}}does not preserve the summandZ{\displaystyle \mathbf {Z} }. However, it does preserve theZ/2{\displaystyle \mathbf {Z} /2}summand, as this is the torsion submodule (equivalently here, the 2-torsion elements).
TheJordan–Hölder theoremis a more general result for finite groups (or modules over an arbitrary ring). In this generality, one obtains acomposition series, rather than adirect sum.
TheKrull–Schmidt theoremand related results give conditions under which a module has something like a primary decomposition, a decomposition as a direct sum ofindecomposable modulesin which the summands are unique up to order.
The primary decomposition generalizes to finitely generated modules over commutativeNoetherian rings, and this result is called theLasker–Noether theorem.
By contrast, unique decomposition intoindecomposablesubmodules does not generalize as far, and the failure is measured by theideal class group, which vanishes for PIDs.
For rings that are not principal ideal domains, unique decomposition need not even hold for modules over a ring generated by two elements. For the ringR=Z[√−5], both the moduleRand its submoduleMgenerated by 2 and 1 + √−5 are indecomposable. WhileRis not isomorphic toM,R⊕Ris isomorphic toM⊕M; thus the images of theMsummands give indecomposable submodulesL1,L2<R⊕Rwhich give a different decomposition ofR⊕R. The failure of uniquely factorizingR⊕Rinto a direct sum of indecomposable modules is directly related (via the ideal class group) to the failure of the unique factorization of elements ofRinto irreducible elements ofR.
However, over aDedekind domainthe ideal class group is the only obstruction, and the structure theorem generalizes tofinitely generated modules over a Dedekind domainwith minor modifications. There is still a unique torsion part, with a torsionfree complement (unique up to isomorphism), but a torsionfree module over a Dedekind domain is no longer necessarily free. Torsionfree modules over a Dedekind domain are determined (up to isomorphism) by rank andSteinitz class(which takes value in the ideal class group), and the decomposition into a direct sum of copies ofR(rank one free modules) is replaced by a direct sum into rank one projective modules: the individual summands are not uniquely determined, but the Steinitz class (of the sum) is.
Similarly for modules that are not finitely generated, one cannot expect such a nice decomposition: even the number of factors may vary. There areZ-submodules ofQ4which are simultaneously direct sums of two indecomposable modules and direct sums of three indecomposable modules, showing the analogue of the primary decomposition cannot hold for infinitely generated modules, even over the integers,Z.
Another issue that arises with non-finitely generated modules is that there are torsion-free modules which are not free. For instance, consider the ringZof integers. ThenQis a torsion-freeZ-module which is not free. Another classical example of such a module is theBaer–Specker group, the group of all sequences of integers under termwise addition. In general, the question of which infinitely generated torsion-free abelian groups are free depends on whichlarge cardinalsexist. A consequence is that any structure theorem for infinitely generated modules depends on a choice ofset theoryaxioms and may be invalid under a different choice.
|
https://en.wikipedia.org/wiki/Structure_theorem_for_finitely_generated_modules_over_a_principal_ideal_domain
|
Inlinear algebra, theFrobenius normal formorrational canonical formof asquarematrixAwith entries in afieldFis acanonical formfor matrices obtained by conjugation byinvertible matricesoverF. The form reflects a minimal decomposition of thevector spaceintosubspacesthat are cyclic forA(i.e.,spannedby some vector and its repeatedimagesunderA). Since only one normal form can be reached from a given matrix (whence the "canonical"), a matrixBissimilartoAif and only if it has the same rational canonical form asA. Since this form can be found without any operations that might change whenextendingthe fieldF(whence the "rational"), notably withoutfactoring polynomials, this shows that whether two matrices are similar does not change upon field extensions. The form is named after German mathematicianFerdinand Georg Frobenius.
Some authors use the term rational canonical form for a somewhat different form that is more properly called theprimary rational canonical form. Instead of decomposing into a minimum number of cyclic subspaces, the primary form decomposes into a maximum number of cyclic subspaces. It is also defined overF, but has somewhat different properties: finding the form requires factorization of polynomials, and as a consequence the primary rational canonical form may change when the same matrix is considered over an extension field ofF. This article mainly deals with the form that does not require factorization, and explicitly mentions "primary" when the form using factorization is meant.
When trying to find out whether two square matricesAandBare similar, one approach is to try, for each of them, to decompose the vector space as far as possible into adirect sumof stable subspaces, and compare the respective actions on these subspaces. For instance if both arediagonalizable, then one can take the decomposition intoeigenspaces(for which the action is as simple as it can get, namely by a scalar), and then similarity can be decided by comparingeigenvaluesand their multiplicities. While in practice this is often a quite insightful approach, there are various drawbacks this has as a general method. First, it requires finding all eigenvalues, say asrootsof thecharacteristic polynomial, but it may not be possible to give an explicit expression for them. Second, a complete set of eigenvalues might exist only in an extension of the field one is working over, and then one does not get a proof of similarity over the original field. FinallyAandBmight not be diagonalizable even over this larger field, in which case one must instead use a decomposition intogeneralized eigenspaces, and possibly intoJordan blocks.
But obtaining such a fine decomposition is not necessary to just decide whether two matrices are similar. The rational canonical form is based on instead using a direct sum decomposition into stable subspaces that are as large as possible, while still allowing a very simple description of the action on each of them. These subspaces must be generated by a single nonzero vectorvand all its images by repeated application of thelinear operatorassociated to the matrix; such subspaces are called cyclic subspaces (by analogy withcyclicsubgroups) and they are clearly stable under the linear operator. Abasisof such a subspace is obtained by takingvand its successive images as long as they arelinearly independent. The matrix of the linear operator with respect to such a basis is thecompanion matrixof amonic polynomial; this polynomial (theminimal polynomialof the operatorrestrictedto the subspace, which notion is analogous to that of theorderof a cyclic subgroup) determines the action of the operator on the cyclic subspace up toisomorphism, and is independent of the choice of the vectorvgenerating the subspace.
A direct sum decomposition into cyclic subspaces always exists, and finding one does not require factoring polynomials. However it is possible that cyclic subspaces do allow a decomposition as direct sum of smaller cyclic subspaces (essentially by theChinese remainder theorem). Therefore, just having for both matrices some decomposition of the space into cyclic subspaces, and knowing the corresponding minimal polynomials, is not in itself sufficient to decide their similarity. An additional condition is imposed to ensure that for similar matrices one gets decompositions into cyclic subspaces that exactly match: in the list of associated minimal polynomials each one must divide the next (and the constant polynomial 1 is forbidden to excludetrivialcyclic subspaces). The resulting list of polynomials are called theinvariant factorsof (theK[X]-moduledefined by) the matrix, and two matrices are similar if and only if they have identical lists of invariant factors. The rational canonical form of a matrixAis obtained by expressing it on a basis adapted to a decomposition into cyclic subspaces whose associated minimal polynomials are the invariant factors ofA; two matrices are similar if and only if they have the same rational canonical form.
Consider the following matrix A, overQ:
Ahasminimal polynomialμ=X6−4X4−2X3+4X2+4X+1{\displaystyle \mu =X^{6}-4X^{4}-2X^{3}+4X^{2}+4X+1}, so that thedimensionof a subspace generated by the repeated images of a single vector is at most 6. Thecharacteristic polynomialisχ=X8−X7−5X6+2X5+10X4+2X3−7X2−5X−1{\displaystyle \chi =X^{8}-X^{7}-5X^{6}+2X^{5}+10X^{4}+2X^{3}-7X^{2}-5X-1}, which is a multiple of the minimal polynomial by a factorX2−X−1{\displaystyle X^{2}-X-1}. There always exist vectors such that the cyclic subspace that they generate has the same minimal polynomial as the operator has on the whole space; indeed most vectors will have this property, and in this case the first standard basis vectore1{\displaystyle e_{1}}does so: the vectorsAk(e1){\displaystyle A^{k}(e_{1})}fork=0,1,…,5{\displaystyle k=0,1,\ldots ,5}are linearly independent and span a cyclic subspace with minimal polynomialμ{\displaystyle \mu }. There exist complementary stable subspaces (of dimension 2) to this cyclic subspace, and the space generated by vectorsv=(3,4,8,0,−1,0,2,−1)⊤{\displaystyle v=(3,4,8,0,-1,0,2,-1)^{\top }}andw=(5,4,5,9,−1,1,1,−2)⊤{\displaystyle w=(5,4,5,9,-1,1,1,-2)^{\top }}is an example. In fact one hasA⋅v=w{\displaystyle A\cdot v=w}, so the complementary subspace is a cyclic subspace generated byv{\displaystyle v}; it has minimal polynomialX2−X−1{\displaystyle X^{2}-X-1}. Sinceμ{\displaystyle \mu }is the minimal polynomial of the whole space, it is clear thatX2−X−1{\displaystyle X^{2}-X-1}must divideμ{\displaystyle \mu }(and it is easily checked that it does), and we have found the invariant factorsX2−X−1{\displaystyle X^{2}-X-1}andμ=X6−4X4−2X3+4X2+4X+1{\displaystyle \mu =X^{6}-4X^{4}-2X^{3}+4X^{2}+4X+1}ofA. Then the rational canonical form ofAis theblock diagonal matrixwith the corresponding companion matrices as diagonal blocks, namely
A basis on which this form is attained is formed by the vectorsv,w{\displaystyle v,w}above, followed byAk(e1){\displaystyle A^{k}(e_{1})}fork=0,1,…,5{\displaystyle k=0,1,\ldots ,5}; explicitly this means that for
one hasA=PCP−1.{\displaystyle A=PCP^{-1}.}
Fix a base fieldFand a finite-dimensionalvector spaceVoverF. Given a polynomialP∈F[X], there is associated to it acompanion matrixCPwhose characteristic polynomial and minimal polynomial are both equal toP.
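For concreteness, in one common convention the companion matrix of the monic polynomial P(X) = X^d + c_{d−1}X^{d−1} + ⋯ + c_1X + c_0 is
{\displaystyle C_{P}={\begin{pmatrix}0&0&\cdots &0&-c_{0}\\1&0&\cdots &0&-c_{1}\\0&1&\cdots &0&-c_{2}\\\vdots &&\ddots &&\vdots \\0&0&\cdots &1&-c_{d-1}\end{pmatrix}},}
whose characteristic polynomial and minimal polynomial both equal P; the matrices written elsewhere in this article may use the transposed convention.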
Theorem: LetVbe a finite-dimensional vector space over a fieldF, andAa square matrix overF. ThenV(viewed as anF[X]-modulewith the action ofXgiven byA) admits aF[X]-module isomorphism
where thefi∈F[X] may be taken to be monic polynomials of positivedegree(so they are non-unitsinF[X]) that satisfy the relations
(where "a | b" is notation for "adividesb"); with these conditions the list of polynomialsfiis unique.
Sketch of Proof: Apply thestructure theorem for finitely generated modules over a principal ideal domaintoV, viewing it as anF[X]-module. The structure theorem provides a decomposition into cyclic factors, each of which is aquotientofF[X] by a properideal; the zero ideal cannot be present since the resultingfree modulewould be infinite-dimensional asFvector space, whileVis finite-dimensional. For the polynomialsfione then takes the unique monic generators of the respective ideals, and since the structure theorem ensures containment of every ideal in the preceding ideal, one obtains the divisibility conditions for thefi. See [DF] for details.
Given an arbitrary square matrix, theelementary divisorsused in the construction of theJordan normal formdo not exist overF[X], so theinvariant factorsfias given above must be used instead. The last of these factorsfkis then the minimal polynomial, which all the
invariant factors therefore divide, and the product of the invariant factors gives the characteristic polynomial. Note that this implies that the minimal polynomial divides the characteristic polynomial (which is essentially theCayley-Hamilton theorem), and that everyirreduciblefactor of the characteristic polynomial also divides the minimal polynomial (possibly with lower multiplicity).
For each invariant factorfione takes itscompanion matrixCfi, and the block diagonal matrix formed from these blocks yields therational canonical formofA. When the minimal polynomial is identical to the characteristic polynomial (the casek= 1), the Frobenius normal form is the companion matrix of the characteristic polynomial. As the rational canonical form is uniquely determined by the unique invariant factors associated toA, and these invariant factors are independent of basis, it follows that two square matricesAandBare similar if and only if they have the same rational canonical form.
The Frobenius normal form does not reflect any form of factorization of the characteristic polynomial, even if it does exist over the ground fieldF. This implies that it is invariant whenFis replaced by a different field (as long as it contains the entries of the original matrixA). On the other hand, this makes the Frobenius normal form rather different from other normal forms that do depend on factoring the characteristic polynomial, notably thediagonal form(ifAis diagonalizable) or more generally theJordan normal form(if the characteristic polynomial splits into linear factors). For instance, the Frobenius normal form of a diagonal matrix with distinct diagonal entries is just the companion matrix of its characteristic polynomial.
There is another way to define a normal form, that, like the Frobenius normal form, is always defined over the same fieldFasA, but that does reflect a possible factorization of the characteristic polynomial (or equivalently the minimal polynomial) into irreducible factors overF, and which reduces to the Jordan normal form when this factorization only contains linear factors (corresponding to eigenvalues). This form[1]is sometimes called thegeneralized Jordan normal form, orprimary rational canonical form. It is based on the fact that the vector space can be canonically decomposed into a direct sum of stable subspaces corresponding to thedistinctirreducible factorsPof the characteristic polynomial (as stated by thelemme des noyaux[fr][2]), where the characteristic polynomial of each summand is a power of the correspondingP. These summands can be further decomposed, non-canonically, as a direct sum ofcyclicF[x]-modules (like is done for the Frobenius normal form above), where the characteristic polynomial of each summand is still a (generally smaller) power ofP. The primary rational canonical form is a block diagonal matrix corresponding to such a decomposition into cyclic modules, with a particular form calledgeneralized Jordan blockin the diagonal blocks, corresponding to a particular choice of a basis for the cyclic modules. This generalized Jordan block is itself ablock matrixof the form
where C is the companion matrix of the irreducible polynomial P, and U is a matrix whose sole nonzero entry is a 1 in the upper right-hand corner. For the case of a linear irreducible factor P = x − λ, these blocks are reduced to single entries C = λ and U = 1, and one finds a (transposed) Jordan block. In any generalized Jordan block, all entries immediately below the main diagonal are 1. A basis of the cyclic module giving rise to this form is obtained by choosing a generating vector v (one that is not annihilated by P^{k−1}(A), where the minimal polynomial of the cyclic module is P^k), and taking as basis
{\displaystyle v,\ Av,\ \ldots ,\ A^{d-1}v,\ P(A)v,\ AP(A)v,\ \ldots ,\ A^{d-1}P(A)v,\ \ldots ,\ P^{k-1}(A)v,\ \ldots ,\ A^{d-1}P^{k-1}(A)v,}
whered= degP.
|
https://en.wikipedia.org/wiki/Frobenius_normal_form
|
Inlinear algebra, theHermite normal formis an analogue ofreduced echelon formformatricesover theintegersZ{\displaystyle \mathbb {Z} }. Just asreduced echelon formcan be used to solve problems about the solution to the linear systemAx=b{\displaystyle Ax=b}wherex∈Rn{\displaystyle x\in \mathbb {R} ^{n}}, the Hermite normal form can solve problems about the solution to the linear systemAx=b{\displaystyle Ax=b}where this timex{\displaystyle x}is restricted to have integer coordinates only. Other applications of the Hermite normal form includeinteger programming,[1]cryptography,[2]andabstract algebra.[3]
Various authors may prefer to talk about Hermite normal form in either row-style or column-style. They are essentially the same up to transposition.
A matrixA∈Zm×n{\displaystyle A\in \mathbb {Z} ^{m\times n}}has a (row) Hermite normal formH{\displaystyle H}if there is a squareunimodular matrixU{\displaystyle U}whereH=UA{\displaystyle H=UA}.
H{\displaystyle H}has the following restrictions:[4][5][6]
The third condition is not standard among authors, for example some sources force non-pivots to be nonpositive[7][8]or place no sign restriction on them.[9]However, these definitions are equivalent by using a different unimodular matrixU{\displaystyle U}. A unimodular matrix is a square integer matrix whosedeterminantis 1 or −1 (and henceinvertible). In fact, a unimodular matrix is invertible over the integers, as can be seen, for example, fromCramer's Rule.
A matrixA∈Zm×n{\displaystyle A\in \mathbb {Z} ^{m\times n}}has a (column) Hermite normal formH{\displaystyle H}if there is a squareunimodular matrixU{\displaystyle U}whereH=AU{\displaystyle H=AU}andH{\displaystyle H}has the following restrictions:[8][10]
Note that the row-style definition has a unimodular matrixU{\displaystyle U}multiplyingA{\displaystyle A}on the left (meaningU{\displaystyle U}is acting on the rows ofA{\displaystyle A}), while the column-style definition has the unimodular matrix action on the columns ofA{\displaystyle A}. The two definitions of Hermite normal forms are simply transposes of each other.
Every full row rankm-by-nmatrixAwith integer entries has a uniquem-by-nmatrixHin Hermite normal form, such thatH=UAfor some square unimodular matrixU.[5][11][12]
In the examples below,His the Hermite normal form of the matrixA, andUis a unimodular matrix such thatUA=H.A=(331401000019160003)H=(30110100001910003)U=(1−30−10100001−50001){\displaystyle A={\begin{pmatrix}3&3&1&4\\0&1&0&0\\0&0&19&16\\0&0&0&3\end{pmatrix}}\qquad H={\begin{pmatrix}3&0&1&1\\0&1&0&0\\0&0&19&1\\0&0&0&3\end{pmatrix}}\qquad U=\left({\begin{array}{rrrr}1&-3&0&-1\\0&1&0&0\\0&0&1&-5\\0&0&0&1\end{array}}\right)}
A=(236256168311)H=(1050−110328−20061−13)U=(9−515−2011−61){\displaystyle A={\begin{pmatrix}2&3&6&2\\5&6&1&6\\8&3&1&1\end{pmatrix}}\qquad H=\left({\begin{array}{rrrr}1&0&50&-11\\0&3&28&-2\\0&0&61&-13\end{array}}\right)\qquad U=\left({\begin{array}{rrr}9&-5&1\\5&-2&0\\11&-6&1\end{array}}\right)}
IfAhas only one row then eitherH=AorH= −A, depending on whether the single row ofAhas a positive or negative leading coefficient.
There are many algorithms for computing the Hermite normal form, dating back to 1851; one such algorithm is described in [13]: 43–45. However, it was not until 1979 that an algorithm for computing the Hermite normal form in strongly polynomial time was first developed;[14] that is, the number of steps to compute the Hermite normal form is bounded above by a polynomial in the dimensions of the input matrix, and the space used by the algorithm (intermediate numbers) is bounded by a polynomial in the binary encoding size of the numbers in the input matrix.
One class of algorithms is based onGaussian eliminationin that special elementary matrices are repeatedly used.[11][15][16]TheLLLalgorithm can also be used to efficiently compute the Hermite normal form.[17][18]
A typicallatticeinRnhas the formL={∑i=1nαiai|αi∈Z}{\textstyle L=\left\{\left.\sum _{i=1}^{n}\alpha _{i}\mathbf {a} _{i}\;\right\vert \;\alpha _{i}\in {\textbf {Z}}\right\}}where theaiare inRn. If thecolumnsof a matrixAare theai, the lattice can be associated with the columns of a matrix, andAis said to be a basis ofL. Because the Hermite normal form is unique, it can be used to answer many questions about two lattice descriptions. For what follows,LA{\displaystyle L_{A}}denotes the lattice generated by the columns of A. Because the basis is in the columns of the matrixA, the column-style Hermite normal form must be used. Given two bases for a lattice,AandA', the equivalence problem is to decide ifLA=LA′.{\displaystyle L_{A}=L_{A'}.}This can be done by checking if the column-style Hermite normal form ofAandA'are the same up to the addition of zero columns. This strategy is also useful for deciding if a lattice is a subset (LA⊆LA′{\displaystyle L_{A}\subseteq L_{A'}}if and only ifL[A∣A′]=LA′{\displaystyle L_{[A\mid A']}=L_{A'}}), deciding if a vector v is in a lattice (v∈LA{\displaystyle v\in L_{A}}if and only ifL[v∣A]=LA{\displaystyle L_{[v\mid A]}=L_{A}}), and for other calculations.[19]
The linear systemAx=bhas an integer solutionxif and only if the systemHy=bhas an integer solutionywherey=U−1xandHis the column-style Hermite normal form ofA. Checking thatHy=bhas an integer solution is easier thanAx=bbecause the matrixHis triangular.[11]: 55
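As a toy illustration (with made-up numbers), suppose the column-style Hermite normal form of some matrix A is
{\displaystyle H={\begin{pmatrix}2&0\\1&3\end{pmatrix}},\qquad b={\begin{pmatrix}5\\6\end{pmatrix}}.}
Solving Hy = b from the top row gives 2y₁ = 5, which has no integer solution, so Ax = b has no integer solution either; when an integer y does exist, the solution of the original system is recovered as x = Uy.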
Many mathematical software packages can compute the Hermite normal form.
The Hermite normal form can be defined when Z is replaced by an arbitrary Dedekind domain[21] (for instance, any principal ideal domain). For instance, in control theory it can be useful to consider the Hermite normal form for the ring of polynomials F[x] over a given field F.
|
https://en.wikipedia.org/wiki/Hermite_normal_form
|
Inmathematicsandphysics,Lieb–Thirring inequalitiesprovide an upper bound on the sums of powers of the negativeeigenvaluesof aSchrödinger operatorin terms of integrals of the potential. They are named afterE. H. LiebandW. E. Thirring.
The inequalities are useful in studies ofquantum mechanicsanddifferential equationsand imply, as a corollary, a lower bound on thekinetic energyofN{\displaystyle N}quantum mechanical particles that plays an important role in the proof ofstability of matter.[1]
For the Schrödinger operator−Δ+V(x)=−∇2+V(x){\displaystyle -\Delta +V(x)=-\nabla ^{2}+V(x)}onRn{\displaystyle \mathbb {R} ^{n}}with real-valued potentialV(x):Rn→R,{\displaystyle V(x):\mathbb {R} ^{n}\to \mathbb {R} ,}the numbersλ1≤λ2≤⋯≤0{\displaystyle \lambda _{1}\leq \lambda _{2}\leq \dots \leq 0}denote the (not necessarily finite) sequence of negative eigenvalues. Then, forγ{\displaystyle \gamma }andn{\displaystyle n}satisfying one of the conditions
there exists a constantLγ,n{\displaystyle L_{\gamma ,n}}, which only depends onγ{\displaystyle \gamma }andn{\displaystyle n}, such that
{\displaystyle \sum _{j}|\lambda _{j}|^{\gamma }\leq L_{\gamma ,n}\int _{\mathbb {R} ^{n}}V(x)_{-}^{\gamma +n/2}\,\mathrm {d} ^{n}x}(1)
whereV(x)−:=max(−V(x),0){\displaystyle V(x)_{-}:=\max(-V(x),0)}is the negative part of the potentialV{\displaystyle V}. The casesγ>1/2,n=1{\displaystyle \gamma >1/2,n=1}as well asγ>0,n≥2{\displaystyle \gamma >0,n\geq 2}were proven by E. H. Lieb and W. E. Thirring in 1976[1]and used in their proof of stability of matter. In the caseγ=0,n≥3{\displaystyle \gamma =0,n\geq 3}the left-hand side is simply the number of negative eigenvalues, and proofs were given independently by M. Cwikel,[2]E. H. Lieb[3]and G. V. Rozenbljum.[4]The resultingγ=0{\displaystyle \gamma =0}inequality is thus also called the Cwikel–Lieb–Rosenbljum bound. The remaining critical caseγ=1/2,n=1{\displaystyle \gamma =1/2,n=1}was proven to hold by T. Weidl[5]The conditions onγ{\displaystyle \gamma }andn{\displaystyle n}are necessary and cannot be relaxed.
The Lieb–Thirring inequalities can be compared to the semi-classical limit.
The classicalphase spaceconsists of pairs(p,x)∈R2n.{\displaystyle (p,x)\in \mathbb {R} ^{2n}.}Identifying themomentum operator−i∇{\displaystyle -\mathrm {i} \nabla }withp{\displaystyle p}and assuming that every quantum state is contained in a volume(2π)n{\displaystyle (2\pi )^{n}}in the2n{\displaystyle 2n}-dimensional phase space, the semi-classical approximation
{\displaystyle \sum _{j}|\lambda _{j}|^{\gamma }\approx {\frac {1}{(2\pi )^{n}}}\int _{\mathbb {R} ^{n}}\int _{\mathbb {R} ^{n}}{\bigl (}|p|^{2}+V(x){\bigr )}_{-}^{\gamma }\,\mathrm {d} ^{n}p\,\mathrm {d} ^{n}x=L_{\gamma ,n}^{\mathrm {cl} }\int _{\mathbb {R} ^{n}}V(x)_{-}^{\gamma +n/2}\,\mathrm {d} ^{n}x}
is derived with the constant
{\displaystyle L_{\gamma ,n}^{\mathrm {cl} }=(4\pi )^{-n/2}{\frac {\Gamma (\gamma +1)}{\Gamma (\gamma +1+n/2)}}.}
While the semi-classical approximation does not need any assumptions onγ>0{\displaystyle \gamma >0}, the Lieb–Thirring inequalities only hold for suitableγ{\displaystyle \gamma }.
Numerous results have been published about the best possible constantLγ,n{\displaystyle L_{\gamma ,n}}in (1) but this problem is still partly open. The semiclassical approximation becomes exact in the limit of large coupling, that is for potentialsβV{\displaystyle \beta V}theWeylasymptotics
hold. This implies thatLγ,ncl≤Lγ,n{\displaystyle L_{\gamma ,n}^{\mathrm {cl} }\leq L_{\gamma ,n}}. Lieb and Thirring[1]were able to show thatLγ,n=Lγ,ncl{\displaystyle L_{\gamma ,n}=L_{\gamma ,n}^{\mathrm {cl} }}forγ≥3/2,n=1{\displaystyle \gamma \geq 3/2,n=1}.M. Aizenmanand E. H. Lieb[6]proved that for fixed dimensionn{\displaystyle n}the ratioLγ,n/Lγ,ncl{\displaystyle L_{\gamma ,n}/L_{\gamma ,n}^{\mathrm {cl} }}is amonotonic, non-increasing function ofγ{\displaystyle \gamma }. SubsequentlyLγ,n=Lγ,ncl{\displaystyle L_{\gamma ,n}=L_{\gamma ,n}^{\mathrm {cl} }}was also shown to hold for alln{\displaystyle n}whenγ≥3/2{\displaystyle \gamma \geq 3/2}byA. Laptevand T. Weidl.[7]Forγ=1/2,n=1{\displaystyle \gamma =1/2,\,n=1}D. Hundertmark, E. H. Lieb and L. E. Thomas[8]proved that the best constant is given byL1/2,1=2L1/2,1cl=1/2{\displaystyle L_{1/2,1}=2L_{1/2,1}^{\mathrm {cl} }=1/2}.
On the other hand, it is known thatLγ,ncl<Lγ,n{\displaystyle L_{\gamma ,n}^{\mathrm {cl} }<L_{\gamma ,n}}for1/2≤γ<3/2,n=1{\displaystyle 1/2\leq \gamma <3/2,n=1}[1]and forγ<1,d≥1{\displaystyle \gamma <1,d\geq 1}.[9]In the former case Lieb and Thirring conjectured that the sharp constant is given by
The best known value for the physical relevant constantL1,3{\displaystyle L_{1,3}}is1.456L1,3cl{\displaystyle 1.456L_{1,3}^{\mathrm {cl} }}[10]and the smallest known constant in the Cwikel–Lieb–Rosenbljum inequality is6.869L0,3cl{\displaystyle 6.869L_{0,3}^{\mathrm {cl} }}.[3]A complete survey of the presently best known values forLγ,n{\displaystyle L_{\gamma ,n}}can be found in the literature.[11]
The Lieb–Thirring inequality forγ=1{\displaystyle \gamma =1}is equivalent to a lower bound on the kinetic energy of a given normalisedN{\displaystyle N}-particlewave functionψ∈L2(RNn){\displaystyle \psi \in L^{2}(\mathbb {R} ^{Nn})}in terms of the one-body density. For an anti-symmetric wave function such that
for all1≤i,j≤N{\displaystyle 1\leq i,j\leq N}, the one-body density is defined as
The Lieb–Thirring inequality (1) forγ=1{\displaystyle \gamma =1}is equivalent to the statement that
where the sharp constantKn{\displaystyle K_{n}}is defined via
The inequality can be extended to particles withspinstates by replacing the one-body density by the spin-summed one-body density. The constantKn{\displaystyle K_{n}}then has to be replaced byKn/q2/n{\displaystyle K_{n}/q^{2/n}}whereq{\displaystyle q}is the number of quantum spin states available to each particle (q=2{\displaystyle q=2}for electrons). If the wave function is symmetric, instead of anti-symmetric, such that
for all1≤i,j≤N{\displaystyle 1\leq i,j\leq N}, the constantKn{\displaystyle K_{n}}has to be replaced byKn/N2/n{\displaystyle K_{n}/N^{2/n}}. Inequality (2) describes the minimum kinetic energy necessary to achieve a given densityρψ{\displaystyle \rho _{\psi }}withN{\displaystyle N}particles inn{\displaystyle n}dimensions. IfL1,3=L1,3cl{\displaystyle L_{1,3}=L_{1,3}^{\mathrm {cl} }}was proven to hold, the right-hand side of (2) forn=3{\displaystyle n=3}would be precisely the kinetic energy term inThomas–Fermitheory.
The inequality can be compared to theSobolev inequality. M. Rumin[12]derived the kinetic energy inequality (2) (with a smaller constant) directly without the use of the Lieb–Thirring inequality.
(for more information, read theStability of matterpage)
The kinetic energy inequality plays an important role in the proof ofstability of matteras presented by Lieb and Thirring.[1]TheHamiltonianunder consideration describes a system ofN{\displaystyle N}particles withq{\displaystyle q}spin states andM{\displaystyle M}fixednucleiat locationsRj∈R3{\displaystyle R_{j}\in \mathbb {R} ^{3}}withchargesZj>0{\displaystyle Z_{j}>0}. The particles and nuclei interact with each other through the electrostaticCoulomb forceand an arbitrarymagnetic fieldcan be introduced. If the particles under consideration arefermions(i.e. the wave functionψ{\displaystyle \psi }is antisymmetric), then the kinetic energy inequality (2) holds with the constantKn/q2/n{\displaystyle K_{n}/q^{2/n}}(notKn/N2/n{\displaystyle K_{n}/N^{2/n}}). This is a crucial ingredient in the proof of stability of matter for a system of fermions. It ensures that theground stateenergyEN,M(Z1,…,ZM){\displaystyle E_{N,M}(Z_{1},\dots ,Z_{M})}of the system can be bounded from below by a constant depending only on the maximum of the nuclei charges,Zmax{\displaystyle Z_{\max }}, times the number of particles,
The system is then stable of the first kind since the ground-state energy is bounded from below, and also stable of the second kind, i.e. the energy decreases at most linearly with the number of particles and nuclei. In comparison, if the particles are assumed to be bosons (i.e. the wave functionψ{\displaystyle \psi }is symmetric), then the kinetic energy inequality (2) holds only with the constantKn/N2/n{\displaystyle K_{n}/N^{2/n}}and for the ground state energy only a bound of the form−CN5/3{\displaystyle -CN^{5/3}}holds. Since the power5/3{\displaystyle 5/3}can be shown to be optimal, a system of bosons is stable of the first kind but unstable of the second kind.
If the Laplacian−Δ=−∇2{\displaystyle -\Delta =-\nabla ^{2}}is replaced by(i∇+A(x))2{\displaystyle (\mathrm {i} \nabla +A(x))^{2}}, whereA(x){\displaystyle A(x)}is a magnetic field vector potential inRn,{\displaystyle \mathbb {R} ^{n},}the Lieb–Thirring inequality (1) remains true. The proof of this statement uses thediamagnetic inequality. Although all presently known constantsLγ,n{\displaystyle L_{\gamma ,n}}remain unchanged, it is not known whether this is true in general for the best possible constant.
The Laplacian can also be replaced by other powers of−Δ{\displaystyle -\Delta }. In particular for the operator−Δ{\displaystyle {\sqrt {-\Delta }}}, a Lieb–Thirring inequality similar to (1) holds with a different constantLγ,n{\displaystyle L_{\gamma ,n}}and with the power on the right-hand side replaced byγ+n{\displaystyle \gamma +n}. Analogously a kinetic inequality similar to (2) holds, with1+2/n{\displaystyle 1+2/n}replaced by1+1/n{\displaystyle 1+1/n}, which can be used to prove stability of matter for the relativistic Schrödinger operator under additional assumptions on the chargesZk{\displaystyle Z_{k}}.[13]
In essence, the Lieb–Thirring inequality (1) gives an upper bound on the distances of the eigenvaluesλj{\displaystyle \lambda _{j}}to theessential spectrum[0,∞){\displaystyle [0,\infty )}in terms of the perturbationV{\displaystyle V}. Similar inequalities can be proved forJacobi operators.[14]
|
https://en.wikipedia.org/wiki/Lieb%E2%80%93Thirring_inequality
|
Inmathematics, atrace identityis anyequationinvolving thetraceof amatrix.
Trace identities are invariant under simultaneousconjugation.
They are frequently used in theinvariant theoryofn×n{\displaystyle n\times n}matrices to find thegeneratorsandrelationsof thering of invariants, and therefore are useful in answering questions similar to that posed byHilbert's fourteenth problem.
Rowen, Louis Halle (2008),Graduate Algebra: Noncommutative View,Graduate Studies in Mathematics, vol. 2, American Mathematical Society, p. 412,ISBN9780821841532.
|
https://en.wikipedia.org/wiki/Trace_identity
|
Inphysics, thevon Neumann entropy, named afterJohn von Neumann, is a measure of the statistical uncertainty within a description of a quantum system. It extends the concept ofGibbs entropyfrom classicalstatistical mechanicstoquantum statistical mechanics, and it is the quantum counterpart of theShannon entropyfrom classicalinformation theory. For a quantum-mechanical system described by adensity matrixρ, the von Neumann entropy is[1]S=−tr(ρlnρ),{\displaystyle S=-\operatorname {tr} (\rho \ln \rho ),}wheretr{\displaystyle \operatorname {tr} }denotes thetraceandln{\displaystyle \operatorname {ln} }denotes thematrix versionof thenatural logarithm. If the density matrixρis written in a basis of itseigenvectors|1⟩,|2⟩,|3⟩,…{\displaystyle |1\rangle ,|2\rangle ,|3\rangle ,\dots }asρ=∑jηj|j⟩⟨j|,{\displaystyle \rho =\sum _{j}\eta _{j}\left|j\right\rangle \left\langle j\right|,}then the von Neumann entropy is merelyS=−∑jηjlnηj.{\displaystyle S=-\sum _{j}\eta _{j}\ln \eta _{j}.}In this form,Scan be seen as the Shannon entropy of the eigenvalues, reinterpreted as probabilities.[2]
The von Neumann entropy and quantities based upon it are widely used in the study ofquantum entanglement.[3]
In quantum mechanics, probabilities for the outcomes of experiments made upon a system are calculated from thequantum statedescribing that system. Each physical system is associated with avector space, or more specifically aHilbert space. Thedimensionof the Hilbert space may be infinite, as it is for the space ofsquare-integrable functionson a line, which is used to define the quantum physics of a continuous degree of freedom. Alternatively, the Hilbert space may be finite-dimensional, as occurs forspindegrees of freedom. A density operator, the mathematical representation of a quantum state, is apositive semi-definite,self-adjoint operatoroftraceone acting on the Hilbert space of the system.[4][5][6]A density operator that is a rank-1 projection is known as apurequantum state, and all quantum states that are not pure are designatedmixed. Pure states are also known aswavefunctions. Assigning a pure state to a quantum system implies certainty about the outcome of some measurement on that system (i.e.,P(x)=1{\displaystyle P(x)=1}for some outcomex{\displaystyle x}). Thestate spaceof a quantum system is the set of all states, pure and mixed, that can be assigned to it. For any system, the state space is aconvex set: Any mixed state can be written as aconvex combinationof pure states, thoughnot in a unique way.[7]The von Neumann entropy quantifies the extent to which a state is mixed.[8]
The prototypical example of a finite-dimensional Hilbert space is aqubit, a quantum system whose Hilbert space is 2-dimensional. An arbitrary state for a qubit can be written as a linear combination of thePauli matrices, which provide a basis for2×2{\displaystyle 2\times 2}self-adjoint matrices:[9]ρ=12(I+rxσx+ryσy+rzσz),{\displaystyle \rho ={\tfrac {1}{2}}\left(I+r_{x}\sigma _{x}+r_{y}\sigma _{y}+r_{z}\sigma _{z}\right),}where the real numbers(rx,ry,rz){\displaystyle (r_{x},r_{y},r_{z})}are the coordinates of a point within theunit ballandσx=(0110),σy=(0−ii0),σz=(100−1).{\displaystyle \sigma _{x}={\begin{pmatrix}0&1\\1&0\end{pmatrix}},\quad \sigma _{y}={\begin{pmatrix}0&-i\\i&0\end{pmatrix}},\quad \sigma _{z}={\begin{pmatrix}1&0\\0&-1\end{pmatrix}}.}The von Neumann entropy vanishes whenρ{\displaystyle \rho }is a pure state, i.e., when the point(rx,ry,rz){\displaystyle (r_{x},r_{y},r_{z})}lies on the surface of the unit ball, and it attains its maximum value whenρ{\displaystyle \rho }is themaximally mixedstate, which is given byrx=ry=rz=0{\displaystyle r_{x}=r_{y}=r_{z}=0}.[10]
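A minimal Java sketch of the two formulas above, assuming only the standard fact that the eigenvalues of ρ = ½(I + r·σ) are (1 ± |r|)/2, evaluates S = −Σ ηj ln ηj for a pure state, a partially mixed state, and the maximally mixed state:

public class QubitEntropy {
    // Von Neumann entropy from density-matrix eigenvalues,
    // S = -sum_j eta_j ln eta_j, with the convention 0 ln 0 = 0.
    static double entropy(double... eigenvalues) {
        double s = 0.0;
        for (double eta : eigenvalues) {
            if (eta > 0.0) s -= eta * Math.log(eta);
        }
        return s;
    }

    // For a qubit rho = (I + r.sigma)/2 the eigenvalues are (1 +/- |r|)/2.
    static double qubitEntropy(double rx, double ry, double rz) {
        double r = Math.sqrt(rx * rx + ry * ry + rz * rz); // |r| <= 1
        return entropy((1 + r) / 2, (1 - r) / 2);
    }

    public static void main(String[] args) {
        System.out.println(qubitEntropy(0, 0, 1));   // pure state: 0
        System.out.println(qubitEntropy(0, 0, 0.5)); // partially mixed
        System.out.println(qubitEntropy(0, 0, 0));   // maximally mixed: ln 2 ~ 0.693
    }
}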
Some properties of the von Neumann entropy:
Concavity: S(∑i=1kλiρi)≥∑i=1kλiS(ρi).{\displaystyle S{\bigg (}\sum _{i=1}^{k}\lambda _{i}\rho _{i}{\bigg )}\geq \sum _{i=1}^{k}\lambda _{i}S(\rho _{i}).}
Additivity for product states: S(ρA⊗ρB)=S(ρA)+S(ρB).{\displaystyle S(\rho _{A}\otimes \rho _{B})=S(\rho _{A})+S(\rho _{B}).}
Strong subadditivity: S(ρABC)+S(ρB)≤S(ρAB)+S(ρBC).{\displaystyle S(\rho _{ABC})+S(\rho _{B})\leq S(\rho _{AB})+S(\rho _{BC}).}
Subadditivity: S(ρAC)≤S(ρA)+S(ρC).{\displaystyle S(\rho _{AC})\leq S(\rho _{A})+S(\rho _{C}).}
Below, the concept of subadditivity is discussed, followed by its generalization to strong subadditivity.
IfρA,ρBare thereduced density matricesof the general stateρAB, then|S(ρA)−S(ρB)|≤S(ρAB)≤S(ρA)+S(ρB).{\displaystyle \left|S(\rho _{A})-S(\rho _{B})\right|\leq S(\rho _{AB})\leq S(\rho _{A})+S(\rho _{B}).}
The right hand inequality is known assubadditivity,and the left is sometimes known as thetriangle inequality.[17]While in Shannon's theory the entropy of a composite system can never be lower than the entropy of any of its parts, in quantum theory this is not the case; i.e., it is possible thatS(ρAB) = 0, whileS(ρA) =S(ρB) > 0. This is expressed by saying that the Shannon entropy ismonotonicbut the von Neumann entropy is not.[18]For example, take theBell stateof twospin-1/2particles:|ψ⟩=|↑↓⟩+|↓↑⟩.{\displaystyle \left|\psi \right\rangle =\left|\uparrow \downarrow \right\rangle +\left|\downarrow \uparrow \right\rangle .}This is a pure state with zero entropy, but each spin has maximum entropy when considered individually, because itsreduced density matrixis the maximally mixed state. This indicates that it is anentangledstate;[19]the use of entropy as an entanglement measure is discussed further below.
The von Neumann entropy is alsostrongly subadditive.[20]Given threeHilbert spaces,A,B,C,S(ρABC)+S(ρB)≤S(ρAB)+S(ρBC).{\displaystyle S(\rho _{ABC})+S(\rho _{B})\leq S(\rho _{AB})+S(\rho _{BC}).}By using the proof technique that establishes the left side of the triangle inequality above, one can show that the strong subadditivity inequality is equivalent to the following inequality:S(ρA)+S(ρC)≤S(ρAB)+S(ρBC){\displaystyle S(\rho _{A})+S(\rho _{C})\leq S(\rho _{AB})+S(\rho _{BC})}whereρAB, etc. are the reduced density matrices of a density matrixρABC.[21]If we apply ordinary subadditivity to the left side of this inequality, we then findS(ρAC)≤S(ρAB)+S(ρBC).{\displaystyle S(\rho _{AC})\leq S(\rho _{AB})+S(\rho _{BC}).}By symmetry, for any tripartite stateρABC, each of the three numbersS(ρAB),S(ρBC),S(ρAC)is less than or equal to the sum of the other two.[22]
Given a quantum state and a specification of a quantum measurement, we can calculate the probabilities for the different possible results of that measurement, and thus we can find the Shannon entropy of that probability distribution. A quantum measurement can be specified mathematically as apositive operator valued measure, or POVM.[23]In the simplest case, a system with a finite-dimensional Hilbert space and measurement with a finite number of outcomes, a POVM is a set ofpositive semi-definitematrices{Fi}{\displaystyle \{F_{i}\}}on the Hilbert space that sum to theidentity matrix,[24]∑i=1nFi=I.{\displaystyle \sum _{i=1}^{n}F_{i}=\operatorname {I} .}The POVM elementFi{\displaystyle F_{i}}is associated with the measurement outcomei{\displaystyle i}, such that the probability of obtaining it when making a measurement on thequantum stateρ{\displaystyle \rho }is given byProb(i)=tr(ρFi).{\displaystyle {\text{Prob}}(i)=\operatorname {tr} (\rho F_{i}).}A POVM isrank-1if all of the elements are proportional to rank-1 projection operators. The von Neumann entropy is the minimum achievable Shannon entropy, where the minimization is taken over all rank-1 POVMs.[25]
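A minimal sketch of the probability rule Prob(i) = tr(ρ Fi), using a hypothetical two-outcome rank-1 POVM on a qubit with real entries (the state and the angle below are arbitrary choices, not taken from the text):

public class PovmExample {
    // Trace of the product of two real 2x2 matrices: tr(A B).
    static double traceProduct(double[][] a, double[][] b) {
        double t = 0.0;
        for (int i = 0; i < 2; i++)
            for (int k = 0; k < 2; k++)
                t += a[i][k] * b[k][i];
        return t;
    }

    public static void main(String[] args) {
        // A diagonal qubit state (eigenvalues 0.8 and 0.2, chosen arbitrarily).
        double[][] rho = {{0.8, 0.0}, {0.0, 0.2}};

        // Rank-1 projectors onto (cos t, sin t) and (-sin t, cos t); F1 + F2 = I.
        double t = Math.PI / 6, c = Math.cos(t), s = Math.sin(t);
        double[][] f1 = {{c * c, c * s}, {c * s, s * s}};
        double[][] f2 = {{s * s, -c * s}, {-c * s, c * c}};

        double p1 = traceProduct(rho, f1);
        double p2 = traceProduct(rho, f2);
        System.out.println(p1 + " + " + p2 + " = " + (p1 + p2)); // probabilities sum to 1
    }
}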
Ifρiare density operators andλiis a collection of positive numbers which sum to unity (Σiλi=1{\displaystyle \Sigma _{i}\lambda _{i}=1}), thenρ=∑i=1kλiρi{\displaystyle \rho =\sum _{i=1}^{k}\lambda _{i}\rho _{i}}is a valid density operator, and the difference between its von Neumann entropy and the weighted average of the entropies of theρiis bounded by theShannonentropy of theλi:S(∑i=1kλiρi)−∑i=1kλiS(ρi)≤−∑i=1kλilogλi.{\displaystyle S{\bigg (}\sum _{i=1}^{k}\lambda _{i}\rho _{i}{\bigg )}-\sum _{i=1}^{k}\lambda _{i}S(\rho _{i})\leq -\sum _{i=1}^{k}\lambda _{i}\log \lambda _{i}.}Equality is attained when thesupportsof theρi– the spaces spanned by their eigenvectors corresponding to nonzero eigenvalues – are orthogonal. The difference on the left-hand side of this inequality is known as the Holevo χ quantity and also appears inHolevo's theorem, an important result inquantum information theory.[26]
The time evolution of an isolated system is described by a unitary operator:ρ→UρU†.{\displaystyle \rho \to U\rho U^{\dagger }.}Unitary evolution takes pure states into pure states,[27]and it leaves the von Neumann entropy unchanged. This follows from the fact that the entropy ofρ{\displaystyle \rho }is a function of the eigenvalues ofρ{\displaystyle \rho }.[28]
A measurement upon a quantum system will generally bring about a change of the quantum state of that system. Writing a POVM does not provide the complete information necessary to describe this state-change process.[29]To remedy this, further information is specified by decomposing each POVM element into a product:Ei=Ai†Ai.{\displaystyle E_{i}=A_{i}^{\dagger }A_{i}.}TheKraus operatorsAi{\displaystyle A_{i}}, named forKarl Kraus, provide a specification of the state-change process. They are not necessarily self-adjoint, but the productsAi†Ai{\displaystyle A_{i}^{\dagger }A_{i}}are. If upon performing the measurement the outcomeEi{\displaystyle E_{i}}is obtained, then the initial stateρ{\displaystyle \rho }is updated toρ→ρ′=AiρAi†Prob(i)=AiρAi†tr(ρEi).{\displaystyle \rho \to \rho '={\frac {A_{i}\rho A_{i}^{\dagger }}{\mathrm {Prob} (i)}}={\frac {A_{i}\rho A_{i}^{\dagger }}{\operatorname {tr} (\rho E_{i})}}.}An important special case is the Lüders rule, named forGerhart Lüders.[30][31]If the POVM elements areprojection operators, then the Kraus operators can be taken to be the projectors themselves:ρ→ρ′=ΠiρΠitr(ρΠi).{\displaystyle \rho \to \rho '={\frac {\Pi _{i}\rho \Pi _{i}}{\operatorname {tr} (\rho \Pi _{i})}}.}If the initial stateρ{\displaystyle \rho }is pure, and the projectorsΠi{\displaystyle \Pi _{i}}have rank 1, they can be written as projectors onto the vectors|ψ⟩{\displaystyle |\psi \rangle }and|i⟩{\displaystyle |i\rangle }, respectively. The formula simplifies thus toρ=|ψ⟩⟨ψ|→ρ′=|i⟩⟨i|ψ⟩⟨ψ|i⟩⟨i||⟨i|ψ⟩|2=|i⟩⟨i|.{\displaystyle \rho =|\psi \rangle \langle \psi |\to \rho '={\frac {|i\rangle \langle i|\psi \rangle \langle \psi |i\rangle \langle i|}{|\langle i|\psi \rangle |^{2}}}=|i\rangle \langle i|.}We can define a linear, trace-preserving,completely positive map, by summing over all the possible post-measurement states of a POVM without the normalisation:ρ→∑iAiρAi†.{\displaystyle \rho \to \sum _{i}A_{i}\rho A_{i}^{\dagger }.}It is an example of aquantum channel,[32]and can be interpreted as expressing how a quantum state changes if a measurement is performed but the result of that measurement is lost.[33]Channels defined by projective measurements can never decrease the von Neumann entropy; they leave the entropy unchanged only if they do not change the density matrix.[34]A quantum channel will increase or leave constant the von Neumann entropy of every input state if and only if the channel isunital, i.e., if it leaves fixed the maximally mixed state. An example of a channel that decreases the von Neumann entropy is theamplitude damping channelfor a qubit, which sends all mixed states towards a pure state.[35]
The quantum version of thecanonical distribution, theGibbs states, are found by maximizing the von Neumann entropy under the constraint that the expected value of the Hamiltonian is fixed. A Gibbs state is a density operator with the same eigenvectors as the Hamiltonian, and its eigenvalues areλi=1Zexp(−EikBT),{\displaystyle \lambda _{i}={\frac {1}{Z}}\exp \left(-{\frac {E_{i}}{k_{B}T}}\right),}whereTis the temperature,kB{\displaystyle k_{B}}is theBoltzmann constant, andZis thepartition function.[36][37]The von Neumann entropy of a Gibbs state is, up to a factorkB{\displaystyle k_{B}}, the thermodynamic entropy.[38]
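A minimal sketch of this construction for a hypothetical three-level spectrum (the energies and temperature below are arbitrary): the Gibbs eigenvalues are computed from the Boltzmann weights, the von Neumann entropy is evaluated on them, and multiplying by kB gives the thermodynamic entropy:

public class GibbsState {
    static final double KB = 1.380649e-23; // Boltzmann constant in J/K

    public static void main(String[] args) {
        double[] energies = {0.0, 1.0e-21, 2.0e-21}; // hypothetical energy levels in J
        double temperature = 300.0;                  // K

        // Partition function Z and Gibbs eigenvalues lambda_i = exp(-E_i/(kB T)) / Z.
        double z = 0.0;
        for (double e : energies) z += Math.exp(-e / (KB * temperature));
        double[] lambda = new double[energies.length];
        for (int i = 0; i < energies.length; i++)
            lambda[i] = Math.exp(-energies[i] / (KB * temperature)) / z;

        // Von Neumann entropy of the Gibbs state, S = -sum lambda_i ln lambda_i.
        double s = 0.0;
        for (double l : lambda) if (l > 0) s -= l * Math.log(l);

        System.out.println("S (dimensionless) = " + s);
        System.out.println("thermodynamic entropy = " + KB * s + " J/K");
    }
}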
LetρAB{\displaystyle \rho _{AB}}be a joint state for the bipartite quantum systemAB.Then the conditional von Neumann entropyS(A|B){\displaystyle S(A|B)}is the difference between the entropy ofρAB{\displaystyle \rho _{AB}}and the entropy of the marginal state for subsystemBalone:S(A|B)=S(ρAB)−S(ρB).{\displaystyle S(A|B)=S(\rho _{AB})-S(\rho _{B}).}This is bounded above byS(ρA){\displaystyle S(\rho _{A})}. In other words, conditioning the description of subsystemAupon subsystemBcannot increase the entropy associated withA.[39]
Quantum mutual informationcan be defined as the difference between the entropy of the joint state and the total entropy of the marginals:S(A:B)=S(ρA)+S(ρB)−S(ρAB),{\displaystyle S(A:B)=S(\rho _{A})+S(\rho _{B})-S(\rho _{AB}),}which can also be expressed in terms of conditional entropy:[40]S(A:B)=S(A)−S(A|B)=S(B)−S(B|A).{\displaystyle S(A:B)=S(A)-S(A|B)=S(B)-S(B|A).}
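For the Bell state discussed earlier, the joint state is pure while each marginal is maximally mixed, so S(A|B) = −ln 2 is negative and S(A:B) = 2 ln 2; a minimal sketch of these two definitions, assuming those subsystem entropies:

public class BellStateEntropies {
    public static void main(String[] args) {
        double ln2 = Math.log(2.0);

        // Entropies for a two-qubit Bell state: the joint state is pure,
        // each reduced state is maximally mixed.
        double sAB = 0.0;
        double sA = ln2, sB = ln2;

        double conditional = sAB - sB; // S(A|B), negative for entangled pure states
        double mutual = sA + sB - sAB; // S(A:B)

        System.out.println("S(A|B) = " + conditional); // -ln 2
        System.out.println("S(A:B) = " + mutual);      // 2 ln 2
    }
}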
Letρ{\displaystyle \rho }andσ{\displaystyle \sigma }be two density operators in the same state space. The relative entropy is defined to beS(σ|ρ)=tr[ρ(logρ−logσ)].{\displaystyle S(\sigma |\rho )=\operatorname {tr} [\rho (\log \rho -\log \sigma )].}The relative entropy is always greater than or equal to zero; it equals zero if and only ifρ=σ{\displaystyle \rho =\sigma }.[41]Unlike the von Neumann entropy itself, the relative entropy is monotonic, in that it decreases (or remains constant) when part of a system is traced over:[42]S(σA|ρA)≤S(σAB|ρAB).{\displaystyle S(\sigma _{A}|\rho _{A})\leq S(\sigma _{AB}|\rho _{AB}).}
Just asenergyis a resource that facilitates mechanical operations, entanglement is a resource that facilitates performing tasks that involve communication and computation.[43]The mathematical definition of entanglement can be paraphrased as saying that maximal knowledge about the whole of a system does not imply maximal knowledge about the individual parts of that system.[44]If the quantum state that describes a pair of particles is entangled, then the results of measurements upon one half of the pair can be strongly correlated with the results of measurements upon the other. However, entanglement is not the same as "correlation" as understood in classical probability theory and in daily life. Instead, entanglement can be thought of aspotentialcorrelation that can be used to generate actual correlation in an appropriate experiment.[45]The state of a composite system is always expressible as a sum, orsuperposition, of products of states of local constituents; it is entangled if this sum cannot be written as a single product term.[46]Entropy provides one tool that can be used to quantify entanglement.[47][48]If the overall system is described by a pure state, the entropy of one subsystem can be used to measure its degree of entanglement with the other subsystems. For bipartite pure states, the von Neumann entropy of reduced states is theuniquemeasure of entanglement in the sense that it is the only function on the family of states that satisfies certain axioms required of an entanglement measure.[49][50]It is thus known as theentanglement entropy.[51]
It is a classical result that the Shannon entropy achieves its maximum at, and only at, the uniform probability distribution {1/n, ..., 1/n}.[52]Therefore, a bipartite pure stateρ∈HA⊗HBis said to be amaximally entangled stateif the reduced state of each subsystem ofρis the diagonal matrix[53](1n⋱1n).{\displaystyle {\begin{pmatrix}{\frac {1}{n}}&&\\&\ddots &\\&&{\frac {1}{n}}\end{pmatrix}}.}
For mixed states, the reduced von Neumann entropy is not the only reasonable entanglement measure.[54]Some of the other measures are also entropic in character. For example, therelative entropy of entanglementis given by minimizing the relative entropy between a given stateρ{\displaystyle \rho }and the set of nonentangled, orseparable,states.[55]Theentanglement of formationis defined by minimizing, over all possible ways of writing ofρ{\displaystyle \rho }as a convex combination of pure states, the average entanglement entropy of those pure states.[56]Thesquashed entanglementis based on the idea of extending a bipartite stateρAB{\displaystyle \rho _{AB}}to a state describing a larger system,ρABE{\displaystyle \rho _{ABE}}, such that the partial trace ofρABE{\displaystyle \rho _{ABE}}overEyieldsρAB{\displaystyle \rho _{AB}}. One then finds theinfimumof the quantity12[S(ρAE)+S(ρBE)−S(ρE)−S(ρABE)],{\displaystyle {\frac {1}{2}}[S(\rho _{AE})+S(\rho _{BE})-S(\rho _{E})-S(\rho _{ABE})],}over all possible choices ofρABE{\displaystyle \rho _{ABE}}.[57]
Just as the Shannon entropy function is one member of the broader family of classicalRényi entropies, so too can the von Neumann entropy be generalized to the quantum Rényi entropies:Sα(ρ)=11−αln[trρα]=11−αln∑i=1Nλiα.{\displaystyle S_{\alpha }(\rho )={\frac {1}{1-\alpha }}\ln[\operatorname {tr} \rho ^{\alpha }]={\frac {1}{1-\alpha }}\ln \sum _{i=1}^{N}\lambda _{i}^{\alpha }.}In the limit thatα→1{\displaystyle \alpha \to 1}, this recovers the von Neumann entropy. The quantum Rényi entropies are all additive for product states, and for anyα{\displaystyle \alpha }, the Rényi entropySα{\displaystyle S_{\alpha }}vanishes for pure states and is maximized by the maximally mixed state. For any given stateρ{\displaystyle \rho },Sα(ρ){\displaystyle S_{\alpha }(\rho )}is a continuous, nonincreasing function of the parameterα{\displaystyle \alpha }. A weak version of subadditivity can be proven:Sα(ρA)−S0(ρB)≤Sα(ρAB)≤Sα(ρA)+S0(ρB).{\displaystyle S_{\alpha }(\rho _{A})-S_{0}(\rho _{B})\leq S_{\alpha }(\rho _{AB})\leq S_{\alpha }(\rho _{A})+S_{0}(\rho _{B}).}Here,S0{\displaystyle S_{0}}is the quantum version of theHartley entropy, i.e., the logarithm of therankof the density matrix.[58]
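A minimal sketch evaluating the quantum Rényi entropy from an (arbitrary) eigenvalue spectrum, showing numerically that Sα approaches the von Neumann entropy as α → 1:

public class RenyiEntropy {
    // S_alpha = (1/(1-alpha)) ln sum_i lambda_i^alpha, for alpha != 1.
    static double renyi(double alpha, double[] lambda) {
        double sum = 0.0;
        for (double l : lambda) sum += Math.pow(l, alpha);
        return Math.log(sum) / (1.0 - alpha);
    }

    // Von Neumann entropy, the alpha -> 1 limit.
    static double vonNeumann(double[] lambda) {
        double s = 0.0;
        for (double l : lambda) if (l > 0) s -= l * Math.log(l);
        return s;
    }

    public static void main(String[] args) {
        double[] lambda = {0.5, 0.3, 0.2}; // arbitrary spectrum summing to 1
        for (double alpha : new double[]{0.5, 0.999, 1.001, 2.0})
            System.out.println("S_" + alpha + " = " + renyi(alpha, lambda));
        System.out.println("S_vN  = " + vonNeumann(lambda));
    }
}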
Thedensity matrixwas introduced, with different motivations, by von Neumann and byLev Landau. The motivation that inspired Landau was the impossibility of describing a subsystem of a composite quantum system by a state vector.[59]On the other hand, von Neumann introduced the density matrix in order to develop both quantum statistical mechanics and a theory of quantum measurements.[60]He introduced the expression now known as von Neumann entropy by arguing that a probabilistic combination of pure states is analogous to a mixture of ideal gases.[61][62]Von Neumann first published on the topic in 1927.[63]His argument was built upon earlier work byAlbert EinsteinandLeo Szilard.[64][65][66]
Max DelbrückandGert Molièreproved the concavity and subadditivity properties of the von Neumann entropy in 1936. Quantum relative entropy was introduced by Hisaharu Umegaki in 1962.[67][68]The subadditivity and triangle inequalities were proved in 1970 byHuzihiro ArakiandElliott H. Lieb.[69]Strong subadditivity is a more difficult theorem. It was conjectured byOscar LanfordandDerek Robinsonin 1968.[70]Lieb andMary Beth Ruskaiproved the theorem in 1973,[71][72]using a matrix inequality proved earlier by Lieb.[73][74]
|
https://en.wikipedia.org/wiki/Von_Neumann_entropy
|
Abinomial QMF– properly anorthonormal binomial quadrature mirror filter– is anorthogonal waveletdeveloped in 1990.
The binomial QMF bank with perfect reconstruction(PR)was designed byAli Akansu, and published in 1990, using the family of binomial polynomials for subband decomposition of discrete-time signals.[1][2][3]Akansu and his fellow authors also showed that these binomial-QMF filters are identical to thewaveletfilters designed independently byIngrid Daubechiesfrom compactly supported orthonormalwavelet transformperspective in 1988 (Daubechies wavelet). It was an extension of Akansu's prior work onBinomial coefficientandHermite polynomialswherein he developed the Modified Hermite Transformation (MHT) in 1987.[4][5]
Later, it was shown that the magnitude square functions of low-pass and high-pass binomial-QMF filters are the unique maximally flat functions in a two-band PR-QMF design framework.[6][7]
|
https://en.wikipedia.org/wiki/Binomial_QMF
|
TheDaubechies wavelets, based on the work ofIngrid Daubechies, are a family oforthogonal waveletsdefining adiscrete wavelet transformand characterized by a maximal number of vanishingmomentsfor some givensupport. With each wavelet type of this class, there is a scaling function (called thefather wavelet) which generates an orthogonalmultiresolution analysis.
In general the Daubechies wavelets are chosen to have the highest numberAof vanishing moments (this does not imply the best smoothness) for a given support width (number of coefficients) 2A.[1]There are two naming schemes in use: DN, using the length or number of taps, and dbA, referring to the number of vanishing moments. So D4 and db2 are the same wavelet transform.
Among the 2^(A−1) possible solutions of the algebraic equations for the moment and orthogonality conditions, the one is chosen whose scaling filter has extremal phase. The wavelet transform is also easy to put into practice using thefast wavelet transform. Daubechies wavelets are widely used in solving a broad range of problems, e.g. self-similarity properties of a signal orfractalproblems, signal discontinuities, etc.
The Daubechies wavelets are not defined in terms of the resulting scaling and wavelet functions; in fact, it is not possible to write them down inclosed form. The graphs below are generated using thecascade algorithm, a numeric technique consisting of inverse-transforming [1 0 0 0 0 ... ] an appropriate number of times.
Note that the spectra shown here are not the frequency response of the high and low pass filters, but rather the amplitudes of the continuous Fourier transforms of the scaling (blue) and wavelet (red) functions.
Daubechies orthogonal wavelets D2–D20 resp. db1–db10 are commonly used. Each wavelet has a number ofzero momentsorvanishing momentsequal to half the number of coefficients. For example, D2 has one vanishing moment, D4 has two, etc. A vanishing moment limits the wavelet's ability to representpolynomialbehaviour or information in a signal. For example, D2, with one vanishing moment, easily encodes polynomials of one coefficient, or constant signal components. D4 encodes polynomials with two coefficients, i.e. constant and linear signal components; and D6 encodes 3-polynomials, i.e. constant, linear andquadraticsignal components. This ability to encode signals is nonetheless subject to the phenomenon ofscale leakage, and the lack of shift-invariance, which arise from the discrete shifting operation (below) during application of the transform. Sub-sequences which represent linear,quadratic(for example) signal components are treated differently by the transform depending on whether the points align with even- or odd-numbered locations in the sequence. The lack of the important property ofshift-invariancehas led to the development of several different versions of ashift-invariant (discrete) wavelet transform.
Both the scaling sequence (low-pass filter) and the wavelet sequence (band-pass filter) (seeorthogonal waveletfor details of this construction) will here be normalized to have sum equal 2 and sum of squares equal 2. In some applications, they are normalised to have sum2{\displaystyle {\sqrt {2}}}, so that both sequences and all shifts of them by an even number of coefficients are orthonormal to each other.
Using the general representation for a scaling sequence of an orthogonal discrete wavelet transform with approximation orderA,
withN= 2A,phaving real coefficients,p(1) = 1 and deg(p) =A− 1, one can write the orthogonality condition as
or equally as
with the Laurent-polynomial
generating all symmetric sequences andX(−Z)=2−X(Z).{\displaystyle X(-Z)=2-X(Z).}Further,P(X) stands for the symmetric Laurent-polynomial
Since
Ptakes nonnegative values on the segment [0,2].
Equation (*) has one minimal solution for eachA, which can be obtained by division in the ring of truncatedpower seriesinX,
Obviously, this has positive values on (0,2).
The homogeneous equation for (*) is antisymmetric aboutX= 1 and has thus the general solution
withRsome polynomial with real coefficients. That the sum
shall be nonnegative on the interval [0,2] translates into a set of linear restrictions on the coefficients ofR. The values ofPon the interval [0,2] are bounded by some quantity4A−r,{\displaystyle 4^{A-r},}maximizingrresults in a linear program with infinitely many inequality conditions.
To solve
forpone uses a technique called spectral factorization resp. Fejér-Riesz-algorithm. The polynomialP(X) splits into linear factors
Each linear factor represents a Laurent-polynomial
that can be factored into two linear factors. One can assign either one of the two linear factors top(Z), thus one obtains 2Npossible solutions. For extremal phase one chooses the one that has all complex roots ofp(Z) inside or on the unit circle and is thus real.
For the Daubechies wavelet transform, a pair of linear filters is used. Each filter of the pair should be aquadrature mirror filter. Solving for the coefficientsci{\displaystyle c_{i}}of the linear filter using the quadrature mirror filter property results in the following solution for the coefficient values for the filter of order 4.
Below are the coefficients for the scaling functions for D2-20. The wavelet coefficients are derived by reversing the order of thescaling functioncoefficients and then reversing the sign of every second one, (i.e., D4 wavelet≈{\displaystyle \approx }{−0.1830127, −0.3169873, 1.1830127, −0.6830127}). Mathematically, this looks likebk=(−1)kaN−1−k{\displaystyle b_{k}=(-1)^{k}a_{N-1-k}}wherekis the coefficient index,bis a coefficient of the wavelet sequence andaa coefficient of the scaling sequence.Nis the wavelet index, i.e., 2 for D2.
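The relation bk = (−1)^k aN−1−k can be checked directly for D4 using the scaling coefficients (1 ± √3, 3 ± √3)/4 quoted below; a minimal sketch:

public class WaveletFromScaling {
    // b_k = (-1)^k a_{N-1-k}: wavelet coefficients from scaling coefficients.
    static double[] waveletCoefficients(double[] a) {
        int n = a.length;
        double[] b = new double[n];
        for (int k = 0; k < n; k++)
            b[k] = ((k % 2 == 0) ? 1 : -1) * a[n - 1 - k];
        return b;
    }

    public static void main(String[] args) {
        double r3 = Math.sqrt(3.0);
        // D4 scaling coefficients, normalized so that they sum to 2.
        double[] a = {(1 + r3) / 4, (3 + r3) / 4, (3 - r3) / 4, (1 - r3) / 4};
        double[] b = waveletCoefficients(a);
        // Prints approximately -0.1830127, -0.3169873, 1.1830127, -0.6830127.
        for (double v : b) System.out.println(v);
    }
}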
Parts of the construction are also used to derive the biorthogonalCohen–Daubechies–Feauveau wavelets(CDFs).
While software such asMathematicasupports Daubechies wavelets directly[2]a basic implementation is possible inMATLAB(in this case, Daubechies 4). This implementation uses periodization to handle the problem of finite length signals. Other, more sophisticated methods are available, but often it is not necessary to use these as it only affects the very ends of the transformed signal. The periodization is accomplished in the forward transform directly in MATLAB vector notation, and the inverse transform by using thecircshift()function:
It is assumed thatS, a column vector with an even number of elements, has been pre-defined as the signal to be analyzed. Note that the D4 coefficients are [1 +√3, 3 +√3, 3 −√3, 1 −√3]/4.
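Since the MATLAB listing referred to above is not reproduced here, the following sketch is offered instead: a single analysis level of the D4 transform with periodic wrap-around, using the coefficients above divided by √2 (the orthonormal convention). It illustrates the filtering and periodization idea and is not the original implementation:

public class D4Step {
    // One level of the D4 analysis transform with periodic boundary handling.
    // Returns {approximation, detail}, each of length s.length / 2.
    static double[][] d4Forward(double[] s) {
        double r3 = Math.sqrt(3.0), r2 = Math.sqrt(2.0);
        // Scaling filter [1+sqrt3, 3+sqrt3, 3-sqrt3, 1-sqrt3]/4, divided by sqrt(2)
        // so that the one-level transform is orthonormal (energy preserving).
        double[] h = {(1 + r3) / (4 * r2), (3 + r3) / (4 * r2),
                      (3 - r3) / (4 * r2), (1 - r3) / (4 * r2)};
        // Wavelet filter g_k = (-1)^k h_{3-k}.
        double[] g = {h[3], -h[2], h[1], -h[0]};

        int n = s.length;                 // assumed even
        double[] approx = new double[n / 2];
        double[] detail = new double[n / 2];
        for (int i = 0; i < n / 2; i++) {
            for (int k = 0; k < 4; k++) {
                int idx = (2 * i + k) % n; // periodization of the finite signal
                approx[i] += h[k] * s[idx];
                detail[i] += g[k] * s[idx];
            }
        }
        return new double[][]{approx, detail};
    }

    public static void main(String[] args) {
        double[] s = {1, 2, 3, 4, 5, 6, 7, 8};
        double[][] out = d4Forward(s);
        System.out.println("approx[0] = " + out[0][0] + ", detail[0] = " + out[1][0]);
    }
}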
It was shown byAli Akansuin 1990 that thebinomial quadrature mirror filter bank(binomial QMF) is identical to the Daubechies wavelet filter, and its performance was ranked among known subspace solutions from a discrete-time signal processing perspective.[3][4]It was an extension of the prior work onbinomial coefficientandHermite polynomialsthat led to the development of the Modified Hermite Transformation (MHT) in 1987.[5][6]The magnitude square functions ofBinomial-QMFfilters are the unique maximally flat functions in a two-band perfect reconstruction QMF (PR-QMF) design formulation that is related to the wavelet regularity in the continuous domain.[7][8]
|
https://en.wikipedia.org/wiki/Daubechies_wavelet
|
Inapplied mathematics,biorthogonal nearly coiflet basesarewaveletbases proposed by Lowell L. Winger. The wavelet is based onbiorthogonalcoifletwavelet bases, but sacrifices its regularity to increase the filter'sbandwidth, which might lead to betterimage compressionperformance.
Nowadays, a large amount of information is stored, processed, and delivered, so the method of data compressing—especially for images—becomes more significant. Since wavelet transforms can deal with signals in both space and frequency domains, they compensate for the deficiency ofFourier transformsand emerged as a potential technique for image processing.[1]
Traditionalwaveletfilter design prefers filters with high regularity and smoothness to performimage compression.[2]Coifletsare such a kind of filter which emphasizes the vanishing moments of both the wavelet andscaling function, and can be achieved by maximizing the total number of vanishing moments and distributing them between the analysis and synthesislow pass filters. The property of vanishing moments enables the wavelet series of the signal to be asparsepresentation, which is the reason why wavelets can be applied forimage compression.[3]Besidesorthogonalfilter banks,biorthogonal waveletswith maximized vanishing moments have also been proposed.[4]However, regularity andsmoothnessare not sufficient for excellent image compression.[5]Common filter banks prefer filters with high regularity, flat passbands and stopbands, and a narrow transition zone, while Pixstream Incorporated proposed filters with wider passband by sacrificing their regularity and passband flatness.[5]
The biorthogonal wavelet base contains two wavelet functions,ψ(t){\displaystyle \psi (t)}and its couple waveletψ~(t){\displaystyle {\tilde {\psi }}(t)}, whileψ(t){\displaystyle \psi (t)}relates to the lowpass analysis filterH0{\displaystyle H0}and the high pass analysis filterG0{\displaystyle G0}. Similarly,ψ~(t){\displaystyle {\tilde {\psi }}(t)}relates to the lowpass synthesis filterH~0{\displaystyle {\tilde {H}}0}and the high pass synthesis filterG0~{\displaystyle {\tilde {G0}}}. For biorthogonal wavelet base,H0{\displaystyle H0}andG0~{\displaystyle {\tilde {G0}}}are orthogonal; Likewise,G0{\displaystyle G0}andH0~{\displaystyle {\tilde {H0}}}are orthogonal, too.
In order to construct a biorthogonal nearly coiflet base, Pixstream Incorporated begins with the (maximally flat) biorthogonal coiflet base.[5]Expressing the decomposition and reconstruction low-pass filters in Bernstein polynomials ensures that the filter coefficients are symmetric, which benefits image processing: if the phase of a real-valued function is symmetric, then the function has generalized linear phase, and since the human eye is sensitive to symmetrical error, a wavelet base with linear phase is better for image processing applications.[1]
Recall that theBernstein polynomialsare defined as below:
which can be considered as apolynomialf(x) over the intervalx∈[0,1]{\displaystyle x\in [0,1]}.[6]Besides, the Bernstein form of a general polynomial is expressed by
whered(i) are the Bernstein coefficients. Note that the number of zeros inBernsteincoefficients determines the vanishing moments of wavelet functions.[7]By sacrificing a zero of the Bernstein-basis filter atω=π{\displaystyle \omega =\pi }(which sacrifices its regularity and flatness), the filter is no longercoifletbut nearlycoiflet.[5]Then, the magnitude of the highest-order non-zero Bernstein basiscoefficientis increased, which leads to a widerpassband. On the other hand, to performimage compressionand reconstruction, analysis filters are determined bysynthesis filters. Since the designed filter has a lower regularity, worse flatness and wider passband, the resulting dual low pass filter has a higher regularity, better flatness and narrower passband. Besides, if the passband of the starting biorthogonal coiflet is narrower than the target synthesis filter G0, then its passband is widened only enough to match G0 in order to minimize the impact on smoothness (i.e. the analysis filter H0 is not invariably the design filter). Similarly, if the original coiflet is wider than the target G0, then the original filter's passband is adjusted to match the analysis filter H0. Therefore, the analysis and synthesis filters have similar bandwidth.
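For reference, a polynomial given in Bernstein form with coefficients d(i) can be evaluated from the standard basis functions C(n,i) x^i (1 − x)^(n−i); a minimal sketch (the coefficient values below are arbitrary and do not represent a filter design):

public class BernsteinForm {
    // Binomial coefficient C(n, i) computed iteratively.
    static double binomial(int n, int i) {
        double b = 1.0;
        for (int k = 1; k <= i; k++) b = b * (n - i + k) / k;
        return b;
    }

    // Evaluate sum_i d[i] * C(n,i) * x^i * (1-x)^(n-i) on [0, 1].
    static double bernstein(double[] d, double x) {
        int n = d.length - 1;
        double value = 0.0;
        for (int i = 0; i <= n; i++)
            value += d[i] * binomial(n, i) * Math.pow(x, i) * Math.pow(1 - x, n - i);
        return value;
    }

    public static void main(String[] args) {
        // Leading zeros correspond to zero Bernstein coefficients.
        double[] d = {0.0, 0.0, 1.0, 0.5};
        System.out.println(bernstein(d, 0.25));
    }
}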
Theringingeffect (overshootand undershoot) and shift-variance of image compression might be alleviated by balancing the passband of the analysis and synthesis filters. In other words, the smoothest or highest regularity filters are not always the best choices for synthesis low pass filters.
The idea of this method is to obtain more free parameters by sacrificing some vanishing moments. However, this technique cannot unify biorthogonal wavelet filter banks with different taps into aclosed-formexpression based on onedegree of freedom.[8]
|
https://en.wikipedia.org/wiki/Biorthogonal_nearly_coiflet_basis
|
Insignal processing, thechirplet transformis aninner productof an input signal with a family of analysis primitives calledchirplets.[2][3]
Similar to thewavelet transform, chirplets are usually generated from (or can be expressed as being from) a singlemother chirplet(analogous to the so-calledmother waveletof wavelet theory).
The termchirplet transformwas coined bySteve Mann, as the title of the first published paper on chirplets. The termchirpletitself (apart from chirplet transform) was also used by Steve Mann, Domingo Mihovilovic, and Ronald Bracewell to describe a windowed portion of achirpfunction. In Mann's words:
A wavelet is a piece of a wave, and a chirplet, similarly, is a piece of a chirp. More precisely, a chirplet is a windowed portion of a chirp function, where the window provides some time localization property. In terms of time–frequency space, chirplets exist as rotated, sheared, or other structures that move from the traditional parallelism with the time and frequency axes that are typical for waves (Fourier andshort-time Fourier transforms) orwavelets.
The chirplet transform thus represents a rotated, sheared, or otherwise transformed tiling of the time–frequency plane. Although chirp signals have been known for many years inradar, pulse compression, and the like, the first published reference to thechirplet transformdescribed specific signal representations based on families of functions related to one another by time–varying frequency modulation or frequency varying time modulation, in addition to time and frequency shifting, and scale changes.[2]In that paper,[2]theGaussianchirplet transform was presented as one such example, together with a successful application to ice fragment detection in radar (improving target detection results over previous approaches). The termchirplet(but not the termchirplet transform) was also proposed for a similar transform, apparently independently, by Mihovilovic andBracewelllater that same year.[3]
The first practical application of the chirplet transform was in water-human-computer interaction (WaterHCI) for marine safety, to assist vessels in navigating through ice-infested waters, using marine radar to detect growlers (small iceberg fragments too small to be visible on conventional radar, yet large enough to damage a vessel).[4][5]
Other applications of the chirplet transform in WaterHCI include the SWIM (Sequential Wave Imprinting Machine).[6][7]
More recently other practical applications have been developed, including image processing (e.g. where there is periodic structure imaged through projective geometry),[6][8]as well as to excise chirp-like interference in spread spectrum communications,[9]in EEG processing,[10]and Chirplet Time Domain Reflectometry.[11]
The warblet transform[12][13][14][15][16][17]is a particular example of the chirplet transform introduced by Mann and Haykin in 1992 and now widely used. It provides a signal representation based on cyclically varying frequency modulated signals (warbling signals).
|
https://en.wikipedia.org/wiki/Chirplet_transform
|
Thecomplex wavelet transform(CWT) is acomplex-valuedextension to the standarddiscrete wavelet transform(DWT). It is a two-dimensionalwavelettransform which providesmultiresolution, sparse representation, and useful characterization of the structure of an image. Further, it provides a high degree of shift-invariance in its magnitude, as investigated in.[1]However, a drawback to this transform is that it exhibits2d{\displaystyle 2^{d}}redundancy (whered{\displaystyle d}is the dimension of the signal being transformed) compared to a separable DWT.
The use of complex wavelets in image processing was originally set up in 1995 by J.M. Lina and L. Gagnon[2]in the framework of the Daubechies orthogonal filter banks.[3]It was then generalized in 1997 byNick Kingsbury[4][5][6]ofCambridge University.
In the area of computer vision, by exploiting the concept of visual contexts, one can quickly focus on candidate regions, where objects of interest may be found, and then compute additional features through the CWT for those regions only. These additional features, while not necessary for global regions, are useful in accurate detection and recognition of smaller objects. Similarly, the CWT may be applied to detect the activated voxels of cortex and additionally thetemporal independent component analysis(tICA) may be utilized to extract the underlying independent sources whose number is determined by Bayesian information criterion[1].
Thedual-tree complex wavelet transform(DTCWT) calculates the complex transform of a signal using two separate DWT decompositions (treeaand treeb). If the filters used in one tree are specifically designed differently from those in the other, it is possible for one DWT to produce the real coefficients and the other the imaginary.
This redundancy of two provides extra information for analysis but at the expense of extra computational power. It also provides approximateshift-invariance(unlike the DWT) yet still allows perfect reconstruction of the signal.
The design of the filters is particularly important for the transform to occur correctly and the necessary characteristics are:
|
https://en.wikipedia.org/wiki/Complex_wavelet_transform
|
In mathematics andsignal processing, theconstant-Q transformandvariable-Q transform, known simply asCQTandVQT, transform a data series to thefrequency domain. They are related to theFourier transform[1]and very closely related to the complexMorlet wavelettransform.[2]Their design is suited for musical representation.
The transform can be thought of as a series of filtersfk, logarithmically spaced in frequency, with thek-th filter having aspectral widthδfkequal to a multiple of the previous filter's width:
whereδfkis the bandwidth of thek-th filter,fminis the central frequency of the lowest filter, andnis the number of filters peroctave.
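One common convention consistent with this description spaces the centre frequencies geometrically, fk = fmin·2^(k/n), so that the quality factor Q = fk/δfk = 1/(2^(1/n) − 1) is the same for every bin. A minimal sketch under that assumption (fmin and the number of bins per octave are arbitrary):

public class ConstantQBins {
    public static void main(String[] args) {
        double fMin = 55.0;      // Hz, arbitrary lowest centre frequency
        int binsPerOctave = 12;  // n, filters per octave

        // Constant Q shared by all bins under this convention.
        double q = 1.0 / (Math.pow(2.0, 1.0 / binsPerOctave) - 1.0);
        System.out.println("Q = " + q);

        for (int k = 0; k < 2 * binsPerOctave; k++) {
            double fk = fMin * Math.pow(2.0, (double) k / binsPerOctave); // geometric spacing
            double bw = fk / q; // bandwidth grows with frequency, ratio stays constant
            System.out.printf("bin %2d: f = %8.2f Hz, bandwidth = %6.2f Hz%n", k, fk, bw);
        }
    }
}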
Theshort-time Fourier transformofx[n] for a frame shifted to samplemis calculated as follows:
Given a data series at sampling frequencyfs= 1/T,Tbeing the sampling period of our data, for each frequency bin we can define the following:
The equivalent transform kernel can be found by using the following substitutions:
After these modifications, we are left with
The variable-Q transform is the same as the constant-Q transform except that the filter Q is variable, hence the name. The variable-Q transform is useful where time resolution at low frequencies is important. There are several ways to calculate the bandwidth of the VQT, one of them using theequivalent rectangular bandwidthas the value for a VQT bin's bandwidth.[3]
The simplest way to implement a variable-Q transform is to add a bandwidth offset calledγ, like this:
This formula can be modified with extra parameters to adjust the sharpness of the transition between constant-Q and constant-bandwidth behaviour, like this:
withαas a parameter for transition sharpness, whereα= 2 is equivalent to ahyperbolic sinefrequency scale in terms of frequency resolution.
The direct calculation of the constant-Q transform (either using naivediscrete Fourier transformor slightly fasterGoertzel algorithm) is slow when compared against thefast Fourier transform. However, the fast Fourier transform can itself be employed, in conjunction with the use of akernel, to perform the equivalent calculation but much faster.[4]An approximate inverse to such an implementation was proposed in 2006; it works by going back to the discrete Fourier transform, and is only suitable for pitch instruments.[5]
A development on this method with improved invertibility involves performing CQT (via fast Fourier transform) octave-by-octave, using lowpass filtered and downsampled results for consecutively lower pitches.[6]Implementations of this method include the MATLAB implementation and LibROSA's Python implementation.[7]LibROSA combines the subsampled method with the direct fast Fourier transform method (which it dubs "pseudo-CQT") by having the latter process higher frequencies as a whole.[7]
Thesliding discrete Fourier transformcan be used for faster calculation of the constant-Q transform, since the sliding discrete Fourier transform is not restricted to linear frequency spacing or to the same window size per bin.[8]
Alternatively, the constant-Q transform can be approximated by using multiple fast Fourier transforms with different window sizes and/or sampling rates over different frequency ranges and then stitching them together. This is called the multiresolutionshort-time Fourier transform; however, the window sizes for multiresolution fast Fourier transforms differ per octave, rather than per bin.[9]
In general, the transform is well suited to musical data, and this can be seen in some of its advantages compared to the fast Fourier transform. As the output of the transform is effectively amplitude/phase against log frequency, fewer frequency bins are required to cover a given range effectively, and this proves useful where frequencies span several octaves. As the range of human hearing covers approximately ten octaves from 20 Hz to around 20 kHz, this reduction in output data is significant.
The transform exhibits a reduction in frequency resolution with higher frequency bins, which is desirable for auditory applications. The transform mirrors the human auditory system, whereby at lower frequencies spectral resolution is better, whereas temporal resolution improves at higher frequencies. At the bottom of the piano scale (about 30 Hz), a difference of 1 semitone is a difference of approximately 1.5 Hz, whereas at the top of the musical scale (about 5 kHz), a difference of 1 semitone is a difference of approximately 200 Hz. So for musical data the exponential frequency resolution of the constant-Q transform is ideal.
In addition, the harmonics of musical notes form a pattern characteristic of the timbre of the instrument in this transform. Assuming the same relative strengths of each harmonic, as the fundamental frequency changes, the relative position of these harmonics remains constant. This can make identification of instruments much easier. The constant Q transform can also be used for automatic recognition of musical keys based on accumulated chroma content.[10]
Relative to the Fourier transform, implementation of this transform is trickier. This is due to the varying number of samples used in the calculation of each frequency bin, which also affects the length of any windowing function implemented.[11]
Also, because the frequency scale is logarithmic, there is no true zero-frequency (DC) term present, which may be a drawback in applications that require the DC term. For applications that do not, such as audio, this is not a drawback.
|
https://en.wikipedia.org/wiki/Constant-Q_transform
|
Inmathematics, thecontinuous wavelet transform(CWT) is a formal (i.e., non-numerical) tool that provides an overcomplete representation of a signal by letting the translation and scale parameter of thewaveletsvary continuously.
The continuous wavelet transform of a functionx(t){\displaystyle x(t)}at a scalea∈R+∗{\displaystyle a\in \mathbb {R^{+*}} }and translational valueb∈R{\displaystyle b\in \mathbb {R} }is expressed by the following integral
whereψ(t){\displaystyle \psi (t)}is a continuous function in both the time domain and the frequency domain called the mother wavelet and the overline represents operation ofcomplex conjugate. The main purpose of the mother wavelet is to provide a source function to generate the daughter wavelets which are simply the translated and scaled versions of the mother wavelet. To recover the original signalx(t){\displaystyle x(t)}, the first inverse continuous wavelet transform can be exploited.
ψ~(t){\displaystyle {\tilde {\psi }}(t)}is thedual functionofψ(t){\displaystyle \psi (t)}and
is the admissible constant, where the hat denotes the Fourier transform operator. Sometimes,ψ~(t)=ψ(t){\displaystyle {\tilde {\psi }}(t)=\psi (t)}, and then the admissible constant becomes
Traditionally, this constant is called the wavelet admissible constant. A wavelet whose admissible constant satisfies
is called an admissible wavelet. To recover the original signalx(t){\displaystyle x(t)}, the second inverse continuous wavelet transform can be exploited.
This inverse transform suggests that a wavelet should be defined as
wherew(t){\displaystyle w(t)}is a window. A wavelet defined in this way can be called an analyzing wavelet, because it admits time-frequency analysis. An analyzing wavelet need not be admissible.
The scale factora{\displaystyle a}either dilates or compresses a signal. When the scale factor is relatively low, the signal is more contracted which in turn results in a more detailed resulting graph. However, the drawback is that low scale factor does not last for the entire duration of the signal. On the other hand, when the scale factor is high, the signal is stretched out which means that the resulting graph will be presented in less detail. Nevertheless, it usually lasts the entire duration of the signal.
By definition, the continuous wavelet transform is aconvolutionof the input data sequence with a set of functions generated by the mother wavelet. The convolution can be computed by using afast Fourier transform(FFT) algorithm. Normally, the outputXw(a,b){\displaystyle X_{w}(a,b)}is a real-valued function except when the mother wavelet is complex. A complex mother wavelet will convert the continuous wavelet transform to a complex-valued function. The power spectrum of the continuous wavelet transform can be represented by1a⋅|Xw(a,b)|2{\displaystyle {\frac {1}{a}}\cdot |X_{w}(a,b)|^{2}}.[1][2]
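The FFT-based convolution is the efficient route; purely for illustration, the sketch below discretizes the defining integral directly with a real Mexican-hat mother wavelet ψ(t) = (1 − t²)e^(−t²/2) and the common 1/√a normalization. The test signal, sampling step, and scales are arbitrary assumptions:

public class SimpleCwt {
    // Real Mexican-hat mother wavelet (up to a normalization constant).
    static double psi(double t) {
        return (1.0 - t * t) * Math.exp(-t * t / 2.0);
    }

    // Direct Riemann-sum approximation of
    // X_w(a, b) = (1/sqrt(a)) * integral of x(t) * psi((t - b)/a) dt.
    static double cwt(double[] x, double dt, double a, double b) {
        double sum = 0.0;
        for (int i = 0; i < x.length; i++) {
            double t = i * dt;
            sum += x[i] * psi((t - b) / a) * dt;
        }
        return sum / Math.sqrt(a);
    }

    public static void main(String[] args) {
        // A 5 Hz test tone sampled at 100 Hz (arbitrary choices).
        double dt = 0.01;
        double[] x = new double[200];
        for (int i = 0; i < x.length; i++) x[i] = Math.sin(2 * Math.PI * 5 * i * dt);

        // Scan a few scales at a fixed translation and report |X_w|^2 / a.
        for (double a : new double[]{0.01, 0.03, 0.05, 0.1}) {
            double c = cwt(x, dt, a, 1.0);
            System.out.println("a = " + a + ", power = " + (c * c) / a);
        }
    }
}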
One of the most popular applications of wavelet transform is image compression. The advantage of using wavelet-based coding in image compression is that it provides significant improvements in picture quality at higher compression ratios over conventional techniques. Since wavelet transform has the ability to decompose complex information and patterns into elementary forms, it is commonly used in acoustics processing and pattern recognition, but it has been also proposed as an instantaneous frequency estimator.[3]Moreover, wavelet transforms can be applied to the following scientific research areas: edge and corner detection, partial differential equation solving, transient detection, filter design,electrocardiogram(ECG) analysis, texture analysis, business information analysis and gait analysis.[4]Wavelet transforms can also be used inElectroencephalography(EEG) data analysis to identify epileptic spikes resulting fromepilepsy.[5]Wavelet transform has been also successfully used for the interpretation of time series of landslides[6]and land subsidence,[7]and for calculating the changing periodicities of epidemics.[8]
The continuous wavelet transform is very efficient in determining the damping ratio of oscillating signals (e.g. identification of damping in dynamic systems). It is also very resistant to noise in the signal.[9]
|
https://en.wikipedia.org/wiki/Continuous_wavelet_transform
|
Innumerical analysisandfunctional analysis, adiscrete wavelet transform(DWT) is anywavelet transformfor which thewaveletsare discretely sampled. As with other wavelet transforms, a key advantage it has overFourier transformsis temporal resolution: it captures both frequencyandlocation information (location in time).
The DWT of a signalx{\displaystyle x}is calculated by passing it through a series of filters. First the samples are passed through alow-pass filterwithimpulse responseg{\displaystyle g}resulting in aconvolutionof the two:
The signal is also decomposed simultaneously using ahigh-pass filterh{\displaystyle h}. The outputs give the detail coefficients (from the high-pass filter) and approximation coefficients (from the low-pass). It is important that the two filters are related to each other and they are known as aquadrature mirror filter.
However, since half the frequencies of the signal have now been removed, half the samples can be discarded according to Nyquist's rule. The filter output of the low-pass filterg{\displaystyle g}in the diagram above is thensubsampledby 2 and further processed by passing it again through a new low-pass filterg{\displaystyle g}and a high-pass filterh{\displaystyle h}with half the cut-off frequency of the previous one, i.e.:
This decomposition has halved the time resolution since only half of each filter output characterises the signal. However, each output has half the frequency band of the input, so the frequency resolution has been doubled.
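A generic sketch of one analysis stage, combining the filtering and the subsampling by 2 described above; the Haar pair is used here only as a placeholder quadrature mirror filter pair, and any other pair could be substituted:

public class AnalysisStage {
    // Convolve x with filter f and keep every second sample:
    // y[n] = sum_k x[k] * f[2n - k].
    static double[] convolveDownsample(double[] x, double[] f) {
        int outLen = (x.length + f.length) / 2;
        double[] y = new double[outLen];
        for (int n = 0; n < outLen; n++) {
            for (int k = 0; k < x.length; k++) {
                int j = 2 * n - k;
                if (j >= 0 && j < f.length) y[n] += x[k] * f[j];
            }
        }
        return y;
    }

    public static void main(String[] args) {
        // Haar pair used as a placeholder quadrature mirror filter pair.
        double s = 1.0 / Math.sqrt(2.0);
        double[] g = {s, s};   // low-pass
        double[] h = {s, -s};  // high-pass

        double[] x = {4, 6, 10, 12, 8, 6, 5, 5};
        double[] approx = convolveDownsample(x, g); // approximation coefficients
        double[] detail = convolveDownsample(x, h); // detail coefficients
        System.out.println(java.util.Arrays.toString(approx));
        System.out.println(java.util.Arrays.toString(detail));
    }
}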
With thesubsampling operator↓{\displaystyle \downarrow }
the above summation can be written more concisely.
However, computing a complete convolutionx∗g{\displaystyle x*g}with subsequent downsampling would waste computation time.
TheLifting schemeis an optimization where these two computations are interleaved.
This decomposition is repeated to further increase the frequency resolution and the approximation coefficients decomposed with high- and low-pass filters and then down-sampled. This is represented as a binary tree with nodes representing a sub-space with a different time-frequency localisation. The tree is known as afilter bank.
At each level in the above diagram the signal is decomposed into low and high frequencies. Due to the decomposition process, the length of the input signal must be a multiple of2n{\displaystyle 2^{n}}, wheren{\displaystyle n}is the number of levels.
For example, for a signal with 32 samples, a frequency range of 0 tofn{\displaystyle f_{n}}, and 3 levels of decomposition, 4 output scales are produced:
The filterbank implementation of wavelets can be interpreted as computing the wavelet coefficients of adiscrete set of child waveletsfor a given mother waveletψ(t){\displaystyle \psi (t)}. In the case of the discrete wavelet transform, the mother wavelet is shifted and scaled by powers of two
ψj,k(t)=12jψ(t−k2j2j){\displaystyle \psi _{j,k}(t)={\frac {1}{\sqrt {2^{j}}}}\psi \left({\frac {t-k2^{j}}{2^{j}}}\right)}
wherej{\displaystyle j}is the scale parameter andk{\displaystyle k}is the shift parameter, both of which are integers.
Recall that the wavelet coefficientγ{\displaystyle \gamma }of a signalx(t){\displaystyle x(t)}is the projection ofx(t){\displaystyle x(t)}onto a wavelet, and letx(t){\displaystyle x(t)}be a signal of length2N{\displaystyle 2^{N}}. In the case of a child wavelet in the discrete family above,
γjk=∫−∞∞x(t)12jψ(t−k2j2j)dt{\displaystyle \gamma _{jk}=\int _{-\infty }^{\infty }x(t){\frac {1}{\sqrt {2^{j}}}}\psi \left({\frac {t-k2^{j}}{2^{j}}}\right)dt}
Now fixj{\displaystyle j}at a particular scale, so thatγjk{\displaystyle \gamma _{jk}}is a function ofk{\displaystyle k}only. In light of the above equation,γjk{\displaystyle \gamma _{jk}}can be viewed as aconvolutionofx(t){\displaystyle x(t)}with a dilated, reflected, and normalized version of the mother wavelet,h(t)=12jψ(−t2j){\displaystyle h(t)={\frac {1}{\sqrt {2^{j}}}}\psi \left({\frac {-t}{2^{j}}}\right)}, sampled at the points1,2j,2⋅2j,...,2N{\displaystyle 1,2^{j},2\cdot {2^{j}},...,2^{N}}. But this is precisely what the detail coefficients give at levelj{\displaystyle j}of the discrete wavelet transform. Therefore, for an appropriate choice ofh[n]{\displaystyle h[n]}andg[n]{\displaystyle g[n]}, the detail coefficients of the filter bank correspond exactly to a wavelet coefficient of a discrete set of child wavelets for a given mother waveletψ(t){\displaystyle \psi (t)}.
As an example, consider the discreteHaar wavelet, whose mother wavelet isψ=[1,−1]{\displaystyle \psi =[1,-1]}. Then the dilated, reflected, and normalized version of this wavelet ish[n]=12[−1,1]{\displaystyle h[n]={\frac {1}{\sqrt {2}}}[-1,1]}, which is, indeed, the highpass decomposition filter for the discrete Haar wavelet transform.
The filterbank implementation of the discrete wavelet transform takes onlyO(N)time in certain cases, as compared to O(N log N) for thefast Fourier transform.
Note that ifg[n]{\displaystyle g[n]}andh[n]{\displaystyle h[n]}are both of constant length (i.e. their length is independent of N), thenx∗h{\displaystyle x*h}andx∗g{\displaystyle x*g}each takeO(N)time. The wavelet filterbank does each of these twoO(N)convolutions, then splits the signal into two branches of size N/2. But it only recursively splits the upper branch convolved withg[n]{\displaystyle g[n]}(as contrasted with the FFT, which recursively splits both the upper branch and the lower branch). This leads to the recurrence relation T(N) = 2cN + T(N/2) for some constant c > 0,
which leads to anO(N)time for the entire operation, as can be shown by ageometric seriesexpansion of the above relation.
As an example, the discreteHaar wavelettransform can be computed in linear time, since in that caseh[n]{\displaystyle h[n]}andg[n]{\displaystyle g[n]}have constant length 2.
The locality of wavelets, coupled with the O(N) complexity, guarantees that the transform can be computed online (on a streaming basis). This property is in sharp contrast to FFT, which requires access to the entire signal at once. It also applies to the multi-scale transform and also to the multi-dimensional transforms (e.g., 2-D DWT).[1]
The first DWT was invented by Hungarian mathematicianAlfréd Haar. For an input represented by a list of2n{\displaystyle 2^{n}}numbers, theHaar wavelettransform may be considered to pair up input values, storing the difference and passing the sum. This process is repeated recursively, pairing up the sums to provide the next scale, which leads to2n−1{\displaystyle 2^{n}-1}differences and a final sum.
The most commonly used set of discrete wavelet transforms was formulated by the Belgian mathematicianIngrid Daubechiesin 1988. This formulation is based on the use ofrecurrence relationsto generate progressively finer discrete samplings of an implicit mother wavelet function; each resolution is twice that of the previous scale. In her seminal paper, Daubechies derives a family ofwavelets, the first of which is the Haar wavelet. Interest in this field has exploded since then, and many variations of Daubechies' original wavelets were developed.[2][3][4]
The dual-tree complex wavelet transform (C{\displaystyle \mathbb {C} }WT) is a relatively recent enhancement to the discrete wavelet transform (DWT), with important additional properties: It is nearly shift invariant and directionally selective in two and higher dimensions. It achieves this with a redundancy factor of only2d{\displaystyle 2^{d}}, substantially lower than the undecimated DWT. The multidimensional (M-D) dual-treeC{\displaystyle \mathbb {C} }WT is nonseparable but is based on a computationally efficient, separable filter bank (FB).[5]
Other forms of discrete wavelet transform include the Le Gall–Tabatabai (LGT) 5/3 wavelet developed by Didier Le Gall and Ali J. Tabatabai in 1988 (used inJPEG 2000orJPEG XS),[6][7][8]theBinomial QMFdeveloped byAli Naci Akansuin 1990,[9]theset partitioning in hierarchical trees(SPIHT) algorithm developed by Amir Said with William A. Pearlman in 1996,[10]thenon- or undecimated wavelet transform(where downsampling is omitted), and theNewland transform(where anorthonormalbasis of wavelets is formed from appropriately constructedtop-hat filtersinfrequency space).Wavelet packet transformsare also related to the discrete wavelet transform.Complex wavelet transformis another form.
Complete Java code for a 1-D and 2-D DWT usingHaar,Daubechies,Coiflet, andLegendrewavelets is available from the open source project:JWave.
Furthermore, a fast lifting implementation of the discrete biorthogonalCDF9/7 wavelet transform inC, used in theJPEG 2000image compression standard, is also available (archived 5 March 2012).
An example of theHaar waveletinJavais given below:
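(The original listing is not reproduced here; the following is a minimal sketch in the same spirit, performing a full multi-level Haar transform in the orthonormal convention by repeatedly replacing pairs with their normalized sums and differences, as described above.)

public class HaarDwt {
    // In-place multi-level Haar transform of an array whose length is a power of two.
    // At each level, adjacent pairs are replaced by their normalized sum (approximation)
    // and difference (detail); the recursion continues on the approximation half.
    public static void transform(double[] data) {
        double[] temp = new double[data.length];
        int length = data.length;
        while (length > 1) {
            int half = length / 2;
            for (int i = 0; i < half; i++) {
                temp[i] = (data[2 * i] + data[2 * i + 1]) / Math.sqrt(2.0);        // sum
                temp[half + i] = (data[2 * i] - data[2 * i + 1]) / Math.sqrt(2.0); // difference
            }
            System.arraycopy(temp, 0, data, 0, length);
            length = half;
        }
    }

    public static void main(String[] args) {
        double[] data = {4, 6, 10, 12, 8, 6, 5, 5};
        transform(data);
        System.out.println(java.util.Arrays.toString(data));
    }
}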
The figure on the right shows an example of applying the above code to compute the Haar wavelet coefficients on a sound waveform. This example highlights two key properties of the wavelet transform:
The Haar DWT illustrates the desirable properties of wavelets in general. First, it can be performed inO(n){\displaystyle O(n)}operations; second, it captures not only a notion of the frequency content of the input, by examining it at different scales, but also temporal content, i.e. the times at which these frequencies occur. Combined, these two properties make theFast wavelet transform(FWT) an alternative to the conventionalfast Fourier transform(FFT).
Due to the rate-change operators in the filter bank, the discrete WT is not time-invariant but actually very sensitive to the alignment of the signal in time. To address the time-varying problem of wavelet transforms, Mallat and Zhong proposed a new algorithm for wavelet representation of a signal, which is invariant to time shifts.[11]According to this algorithm, which is called a TI-DWT, only the scale parameter is sampled along the dyadic sequence 2^j (j∈Z) and the wavelet transform is calculated for each point in time.[12][13]
The discrete wavelet transform has a huge number of applications in science, engineering, mathematics and computer science. Most notably, it is used forsignal coding, to represent a discrete signal in a more redundant form, often as a preconditioning fordata compression. Practical applications can also be found in signal processing of accelerations for gait analysis,[14][15]image processing,[16][17]in digital communications and many others.[18][19][20]
It is shown that discrete wavelet transform (discrete in scale and shift, and continuous in time) is successfully implemented as analog filter bank in biomedical signal processing for design of low-power pacemakers and also in ultra-wideband (UWB) wireless communications.[21]
Wavelets are often used to denoise two dimensional signals, such as images. The following example provides three steps to remove unwanted white Gaussian noise from the noisy image shown.Matlabwas used to import and filter the image.
The first step is to choose a wavelet type and a level N of decomposition. In this case biorthogonal 3.5 wavelets were chosen with a level N of 10. Biorthogonal wavelets are commonly used in image processing to detect and filter white Gaussian noise,[22] due to their high contrast of neighboring pixel intensity values. Using these wavelets, a wavelet transformation is performed on the two-dimensional image.
Following the decomposition of the image file, the next step is to determine threshold values for each level from 1 to N. The Birgé–Massart strategy[23] is a fairly common method for selecting these thresholds. Using this process, individual thresholds are made for the N = 10 levels. Applying these thresholds accomplishes the majority of the actual filtering of the signal.
The final step is to reconstruct the image from the modified levels. This is accomplished using an inverse wavelet transform. The resulting image, with the white Gaussian noise removed, is shown below the original image. When filtering any form of data it is important to quantify the signal-to-noise ratio of the result.[citation needed] In this case, the SNR of the noisy image in comparison to the original was 30.4958%, and the SNR of the denoised image is 32.5525%. The resulting improvement of the wavelet filtering is an SNR gain of 2.0567%.[24]
Choosing other wavelets, levels, and thresholding strategies can result in different types of filtering. In this example, white Gaussian noise was chosen to be removed; with different thresholding, it could just as easily have been amplified.
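The decompose–threshold–reconstruct pipeline can be sketched in one dimension as follows. This is only an illustrative sketch: it uses unnormalized Haar averaging and differencing rather than the biorthogonal 3.5 wavelets above, takes caller-supplied per-level thresholds instead of computing Birgé–Massart thresholds, and assumes the signal length is a power of two.

public static double[] denoise(double[] signal, double[] thresholdPerLevel) {
    int n = signal.length;                 // assumed to be a power of two
    int levels = thresholdPerLevel.length; // assumed 1 <= levels <= log2(n)
    double[] x = signal.clone();
    // Forward transform: repeatedly split the current approximation into
    // averages and details, hard-thresholding the details of each level.
    for (int level = 0, len = n; level < levels; level++, len /= 2) {
        double[] tmp = new double[len];
        for (int i = 0; i < len / 2; i++) {
            tmp[i] = (x[2 * i] + x[2 * i + 1]) / 2.0;           // approximation
            tmp[len / 2 + i] = (x[2 * i] - x[2 * i + 1]) / 2.0; // detail
        }
        System.arraycopy(tmp, 0, x, 0, len);
        for (int i = len / 2; i < len; i++) {                   // hard threshold
            if (Math.abs(x[i]) < thresholdPerLevel[level]) x[i] = 0.0;
        }
    }
    // Inverse transform: rebuild from the coarsest level outwards.
    for (int len = n >> (levels - 1); len <= n; len *= 2) {
        double[] tmp = new double[len];
        for (int i = 0; i < len / 2; i++) {
            tmp[2 * i] = x[i] + x[len / 2 + i];
            tmp[2 * i + 1] = x[i] - x[len / 2 + i];
        }
        System.arraycopy(tmp, 0, x, 0, len);
    }
    return x;
}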
To illustrate the differences and similarities between the discrete wavelet transform and the discrete Fourier transform, consider the DWT and DFT of the following sequence: (1,0,0,0), a unit impulse.
The DFT has orthogonal basis (DFT matrix):

\begin{bmatrix}1&1&1&1\\1&-i&-1&i\\1&-1&1&-1\\1&i&-1&-i\end{bmatrix}
while the DWT with Haar wavelets for length 4 data has orthogonal basis in the rows of:

\begin{bmatrix}1&1&1&1\\1&1&-1&-1\\1&-1&0&0\\0&0&1&-1\end{bmatrix}
(To simplify notation, whole numbers are used, so the bases are orthogonal but not orthonormal.)
Preliminary observations include the following. The DFT basis vectors are sinusoids that differ only in frequency. The Haar basis vectors, by contrast, differ in both scale and location: the last two rows have the same scale, but one is supported on the left half of the domain and the other on the right half.
The DWT demonstrates the localization: the (1,1,1,1) term gives the average signal value, the (1,1,–1,–1) term places the signal in the left side of the domain, and the (1,–1,0,0) term places it at the left side of the left side, and truncating at any stage yields a downsampled version of the signal:

(1/4, 1/4, 1/4, 1/4), (1/2, 1/2, 0, 0), (1, 0, 0, 0)
The DFT, by contrast, expresses the sequence by the interference of waves of various frequencies – thus truncating the series yields a low-pass filtered version of the series:

(1/4, 1/4, 1/4, 1/4), (3/4, 1/4, −1/4, 1/4), (1, 0, 0, 0)
Notably, the middle approximation (2-term) differs. From the frequency domain perspective, this is a better approximation, but from the time domain perspective it has drawbacks – it exhibits undershoot – one of the values is negative, though the original series is non-negative everywhere – and ringing, where the right side is non-zero, unlike in the wavelet transform. On the other hand, the Fourier approximation correctly shows a peak, and all points are within 1/4 of their correct value, though all points have error. The wavelet approximation, by contrast, places a peak on the left half, but has no peak at the first point, and while it is exactly correct for half the values (reflecting location), it has an error of 1/2 for the other values.
This illustrates the kinds of trade-offs between these transforms, and how in some respects the DWT provides preferable behavior, particularly for the modeling of transients.
Watermarking using DCT-DWT alters the wavelet coefficients of middle-frequency coefficient sets of a 5-level DWT-transformed host image, followed by applying the DCT transform on the selected coefficient sets. Prasanalakshmi B proposed a method[25] that uses the HL frequency sub-band in the middle-frequency coefficient sets LHx and HLx of a 5-level discrete wavelet transform (DWT) transformed image.
This algorithm chooses a coarser level of DWT, in terms of imperceptibility and robustness, on which to apply 4×4 block-based DCT. Consequently, higher imperceptibility and robustness can be achieved. Also, a pre-filtering operation (sharpening and Laplacian of Gaussian (LoG) filtering) is used before extraction of the watermark, which increases the difference between the information of the watermark and the host image.
The basic idea of the DWT for a two-dimensional image is described as follows: An image is first decomposed into four parts of high, middle, and low-frequency subcomponents (i.e., LL1, HL1, LH1, HH1) by critically subsampling horizontal and vertical channels using subcomponent filters.
The subcomponents HL1, LH1, and HH1 represent the finest scale wavelet coefficients. The subcomponent LL1 is decomposed and critically subsampled to obtain the following coarser-scaled wavelet components. This process is repeated several times, which is determined by the application at hand.
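One level of this separable decomposition can be sketched as follows, using unnormalized Haar averaging and differencing along rows and then columns. The quadrant naming follows one common convention, and the image dimensions are assumed to be even.

public static double[][] analyzeOneLevel(double[][] image) {
    int h = image.length, w = image[0].length; // both assumed even
    double[][] rows = new double[h][w];
    // Filter and critically subsample each row into a low-pass and a high-pass half.
    for (int i = 0; i < h; i++) {
        for (int j = 0; j < w / 2; j++) {
            rows[i][j]         = (image[i][2 * j] + image[i][2 * j + 1]) / 2.0; // row low-pass
            rows[i][w / 2 + j] = (image[i][2 * j] - image[i][2 * j + 1]) / 2.0; // row high-pass
        }
    }
    double[][] out = new double[h][w];
    // Then filter and subsample each column of the intermediate result.
    for (int j = 0; j < w; j++) {
        for (int i = 0; i < h / 2; i++) {
            out[i][j]         = (rows[2 * i][j] + rows[2 * i + 1][j]) / 2.0; // column low-pass
            out[h / 2 + i][j] = (rows[2 * i][j] - rows[2 * i + 1][j]) / 2.0; // column high-pass
        }
    }
    // Quadrants of 'out': top-left = LL1, top-right = HL1,
    // bottom-left = LH1, bottom-right = HH1 (one common naming convention).
    return out;
}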
High-frequency components are considered for embedding the watermark since they contain edge information, and the human eye is less sensitive to edge changes. In watermarking algorithms, besides the watermark's invisibility, the primary concern is choosing the frequency components in which to embed the watermark so that it survives the possible attacks that the transmitted image may undergo. Transform-domain techniques have the advantage that properties of alternate domains can address spatial-domain limitations, and they offer additional features.
The host image undergoes a 5-level DWT. Embedding the watermark in the middle-level frequency sub-bands gives a high degree of imperceptibility and robustness. Consequently, the LLx coefficient sets at level five are chosen to increase the robustness of the watermark against common watermarking attacks, especially noise addition and blurring, with little to no additional impact on image quality. Then, block-based DCT is performed on these selected DWT coefficient sets and pseudorandom sequences are embedded in the middle frequencies. The watermark embedding procedure is explained below:
1. Read the cover image I, of size N×N.
2. The four non-overlapping multi-resolution coefficient sets LL1, HL1, LH1, and HH1 are obtained initially.
3. Decomposition is performed to 5 levels, and the frequency subcomponents {HH1, HL1, LH1, {HH2, HL2, LH2, {HH3, HL3, LH3, {HH4, HL4, LH4, {HH5, HL5, LH5, LL5}}}}} are obtained by computing the fifth-level DWT of the image I.
4. Divide the final four coefficient sets (HH5, HL5, LH5, and LL5) into 4×4 blocks.
5. DCT is performed on each block in the chosen coefficient sets. These coefficient sets are chosen so as to weight the imperceptibility and robustness of the algorithm equally.
6. Scramble the fingerprint image to obtain the scrambled watermark WS(i, j).
7. Re-formulate the scrambled watermark image into a vector of zeros and ones.
8. Two uncorrelated pseudorandom sequences are generated from the key obtained from the palm vein. The number of elements in the two pseudorandom sequences must equal the number of mid-band elements of the DCT-transformed DWT coefficient sets.
9. Embed the two pseudorandom sequences with a gain factor α in the DCT-transformed 4×4 blocks of the selected DWT coefficient sets of the host image (see the sketch after this procedure). Instead of embedding in all coefficients of the DCT block, embedding is applied only to the mid-band DCT coefficients. If X denotes the matrix of the mid-band coefficients of the DCT-transformed block, then for watermark bit 0 the block is updated as X′ = X + α·PN0, and for watermark bit 1 as X′ = X + α·PN1. Inverse DCT (IDCT) is performed on each block after its mid-band coefficients have been modified to embed the watermark bits.
10. To produce the watermarked host image, perform the inverse DWT (IDWT) on the DWT-transformed image, including the modified coefficient sets.
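The core of step 9 can be sketched as follows. The mid-band mask, the pseudorandom sequence lengths, and the method signature are illustrative assumptions rather than the exact choices of the cited method; each 4×4 block of DCT coefficients is simply shifted by α times the selected pseudorandom sequence at the mid-band positions.

public class MidBandEmbedder {
    // Assumed mid-band positions (anti-diagonal region) of a 4x4 DCT block; 7 positions in total.
    private static final boolean[][] MID_BAND = {
        {false, false, true,  true },
        {false, true,  true,  false},
        {true,  true,  false, false},
        {true,  false, false, false}
    };

    // Updates the block in place: X' = X + alpha * PN0 for watermark bit 0,
    // and X' = X + alpha * PN1 for watermark bit 1 (pn0 and pn1 need at least 7 values).
    public static void embedBit(double[][] block, int bit, double[] pn0, double[] pn1, double alpha) {
        double[] pn = (bit == 0) ? pn0 : pn1;
        int k = 0;
        for (int i = 0; i < 4; i++) {
            for (int j = 0; j < 4; j++) {
                if (MID_BAND[i][j]) {
                    block[i][j] += alpha * pn[k++];
                }
            }
        }
    }
}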
|
https://en.wikipedia.org/wiki/Discrete_wavelet_transform
|
DjVu[a] is a computer file format designed primarily to store scanned documents, especially those containing a combination of text, line drawings, indexed color images, and photographs. It uses technologies such as image layer separation of text and background/images, progressive loading, arithmetic coding, and lossy compression for bitonal (monochrome) images. This allows high-quality, readable images to be stored in a minimum of space, so that they can be made available on the web.
DjVu has been promoted as providing smaller files than PDF for most scanned documents.[3] The DjVu developers report that color magazine pages compress to 40–70 kB, black-and-white technical papers compress to 15–40 kB, and ancient manuscripts compress to around 100 kB; a satisfactory JPEG image typically requires 500 kB.[4] Like PDF, DjVu can contain an OCR text layer, making it easy to perform copy and paste and text search operations.
The DjVu technology was originally developed by Yann LeCun, Léon Bottou, Patrick Haffner, Paul G. Howard, Patrice Simard, and Yoshua Bengio at AT&T Labs from 1996 to 2001.[4]
Prior to the standardization of PDF in 2008,[5][6] DjVu was considered superior because it is an open file format,[citation needed] in contrast to the proprietary nature of PDF at the time. The declared higher compression ratio (and thus smaller file size) and the claimed ease of converting large volumes of text into DjVu format were other arguments for DjVu's superiority over PDF in 2004. Independent technologist Brewster Kahle, in a 2004 talk on IT Conversations, discussed the benefits of allowing easier access to DjVu files.[7][8]
The DjVu library distributed as part of the open-source package DjVuLibre has become the reference implementation for the DjVu format. DjVuLibre has been maintained and updated by the original developers of DjVu since 2002.[9]
The DjVu file format specification has gone through a number of revisions, the most recent being from 2005.
The primary usage of the DjVu format has been the electronic distribution of documents with a quality comparable to that of printed documents. As that niche is also the primary usage for PDF, it was inevitable that the two formats would become competitors. It should however be observed that the two formats approach the problem of delivering high-resolution documents in very different ways: PDF primarily encodes graphics and text as vectorised data, whereas DjVu primarily encodes them as pixmap images. This means PDF places the burden of rendering the document on the reader, whereas DjVu places that burden on the creator.
For a number of years, significantly overlapping with the period when DjVu was being developed, there were no PDF viewers for free operating systems—a particular stumbling block was the rendering of vectorised fonts, which are essential for combining small file size with high resolution in PDF. Since displaying DjVu was a simpler problem for which free software was available, there were suggestions that the free software movement should employ DjVu instead of PDF for distributing documentation; rendering for creating DjVu is in principle not much different from rendering for a device-specific printer driver, and DjVu can as a last resort be generated from scans of paper media. However, when FreeType 2.0 in 2000 began to provide rendering of all major vectorised font formats, that specific advantage of DjVu began to erode.
In the 2000s, with the growth of the World Wide Web and before widespread adoption of broadband, DjVu was often adopted by digital libraries as their format of choice, thanks to its integration with software like Greenstone[10] and the Internet Archive,[11] browser plugins which allowed advanced online browsing, smaller file size for comparable quality of book scans and other image-heavy documents,[12] and support for embedding and searching full text from OCR.[13][14] Some features, such as the thumbnail previews, were later integrated into the Internet Archive's BookReader,[15] and DjVu browsing was deprecated in its favour as, around 2015, some major browsers stopped supporting NPAPI and DjVu plugins with them.[16]
DjVu.js Viewer attempts to replace the missing browser plugins.[citation needed]
The DjVu file format is based on the Interchange File Format and is composed of hierarchically organized chunks. The IFF structure is preceded by a 4-byte AT&T magic number. Following is a single FORM chunk with a secondary identifier of either DJVU or DJVM for a single-page or a multi-page document, respectively.
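Reading this top-level structure can be sketched as below. This is only an illustrative sketch of the container layout just described; chunk contents beyond the FORM header are not handled, and byte-level details should be checked against the format specification.

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class DjVuHeader {
    public static void main(String[] args) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
            byte[] magic = new byte[4];
            in.readFully(magic);       // the 4-byte AT&T magic number
            byte[] form = new byte[4];
            in.readFully(form);        // "FORM" chunk identifier
            int length = in.readInt(); // big-endian chunk length, as in IFF
            byte[] type = new byte[4];
            in.readFully(type);        // "DJVU" (single page) or "DJVM" (multi-page)
            System.out.printf("magic=%s chunk=%s length=%d type=%s%n",
                new String(magic, StandardCharsets.US_ASCII),
                new String(form, StandardCharsets.US_ASCII),
                length,
                new String(type, StandardCharsets.US_ASCII));
        }
    }
}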
All the chunks can be contained in a single file in the case of so-called bundled documents, or can be contained in several files: one file for every page plus some files with shared chunks.
DjVu divides a single image into many different images, then compresses them separately. To create a DjVu file, the initial image is first separated into three images: a background image, a foreground image, and a mask image. The background and foreground images are typically lower-resolution color images (e.g., 100 dpi); the mask image is a high-resolution bilevel image (e.g., 300 dpi) and is typically where the text is stored. The background and foreground images are then compressed using a wavelet-based compression algorithm named IW44.[4] The mask image is compressed using a method called JB2 (similar to JBIG2). The JB2 encoding method identifies nearly identical shapes on the page, such as multiple occurrences of a particular character in a given font, style, and size. It compresses the bitmap of each unique shape separately, and then encodes the locations where each shape appears on the page. Thus, instead of compressing a letter "e" in a given font multiple times, it compresses the letter "e" once (as a compressed bit image) and then records every place on the page it occurs.
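The shape-dictionary idea can be illustrated with a toy sketch. Unlike real JB2, which also matches nearly identical shapes and entropy-codes both the dictionary and the positions, this sketch only recognizes exactly identical bitmaps and stores the result uncompressed.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ToySymbolDictionary {
    // A placement is a reference to a dictionary entry plus a page position.
    public record Placement(int symbolId, int x, int y) {}

    private final Map<String, Integer> index = new HashMap<>();
    private final List<boolean[][]> symbols = new ArrayList<>();  // each unique bitmap stored once
    private final List<Placement> placements = new ArrayList<>(); // every occurrence is just a reference

    public void addGlyph(boolean[][] bitmap, int x, int y) {
        String key = Arrays.deepToString(bitmap); // exact-match key for the bitmap
        Integer id = index.get(key);
        if (id == null) {                         // first time this shape is seen: add it to the dictionary
            id = symbols.size();
            index.put(key, id);
            symbols.add(bitmap);
        }
        placements.add(new Placement(id, x, y));
    }

    public int dictionarySize() { return symbols.size(); }
    public List<Placement> placements() { return placements; }
}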
Optionally, these shapes may be mapped to UTF-8 codes (either by hand or potentially by a text recognition system) and stored in the DjVu file. If this mapping exists, it is possible to select and copy text.
Since JB2 (also called DjVuBitonal) is a variation on JBIG2, working on the same principles,[17] both compression methods have the same problems when performing lossy compression. In 2013 it emerged that Xerox photocopiers and scanners had been substituting digits for similar-looking ones, for example replacing a 6 with an 8.[18] A DjVu document has been spotted in the wild with character substitutions, such as an n with bleeding serifs turning into a u and an o with a spot inside turning into an e.[19] Whether lossy compression has occurred is not stored in the file.[1] Thus the DjView viewing application cannot warn the user that glyph substitutions might have occurred, either when opening a lossy-compressed file or in the Information or Metadata dialogue boxes.[20]
DjVu is an open file format with patents.[3] The file format specification is published, as well as source code for the reference library.[3] The original authors distribute an open-source implementation named "DjVuLibre" under the GNU General Public License and a patent grant.[21] The rights to the commercial development of the encoding software have been transferred to different companies over the years, including AT&T Corporation, LizardTech,[22] Celartem[23] and ePapyrus Solutions K.K. (formerly Cuminas[24] before joining ePapyrus Solutions, Inc.[25]).[26] Patents typically have an expiry term of about 20 years.
Celartem acquired LizardTech and Extensis.[27][28][23][29][30]
The selection of downloadable DjVu viewers is wider on Linux distributions than it is on Windows or macOS. Additionally, the format is rarely supported by proprietary scanning software.
Free creators, manipulators, converters, web browser plug-ins, and desktop viewers are available.[2] DjVu is supported by a number of multi-format document viewers and e-book reader software on Linux (Okular, Evince, Zathura), Windows (Okular, SumatraPDF), and Android (Document Viewer,[31] FBReader, EBookDroid, PocketBook).
In 2002, the DjVu file format was chosen by the Internet Archive as a format in which its Million Book Project provides scanned public-domain books online (along with TIFF and PDF).[32] In February 2016, the Internet Archive announced that DjVu would no longer be used for new uploads, among other reasons citing the format's declining use and the difficulty of maintaining their Java-applet-based viewer for the format.[16]
Wikimedia Commons, a media repository used by Wikipedia among others, conditionally permits PDF and DjVu media files.[33]
|
https://en.wikipedia.org/wiki/DjVu
|
In mathematics, a dual wavelet is the dual to a wavelet. In general, the wavelet series generated by a square-integrable function will have a dual series, in the sense of the Riesz representation theorem. However, the dual series is not itself in general representable by a square-integrable function.
Given a square-integrable function ψ ∈ L²(ℝ), define the series {ψ_jk} by

\psi _{jk}(x)=2^{j/2}\psi (2^{j}x-k)
for integers j, k ∈ ℤ.
Such a function is called an R-function if the linear span of {ψ_jk} is dense in L²(ℝ), and if there exist positive constants A, B with 0 < A ≤ B < ∞ such that

A\Vert \{c_{jk}\}\Vert _{l^{2}}^{2}\leq {\Bigl \Vert }\sum _{jk}c_{jk}\psi _{jk}{\Bigr \Vert }_{L^{2}}^{2}\leq B\Vert \{c_{jk}\}\Vert _{l^{2}}^{2}
for all bi-infinite square summable series {c_jk}. Here, ‖·‖_{l²} denotes the square-sum norm:

\Vert \{c_{jk}\}\Vert _{l^{2}}^{2}=\sum _{j,k=-\infty }^{\infty }|c_{jk}|^{2}
and ‖·‖_{L²} denotes the usual norm on L²(ℝ):

\Vert f\Vert _{L^{2}}^{2}=\int _{-\infty }^{\infty }|f(x)|^{2}\,dx
By the Riesz representation theorem, there exists a unique dual basis ψ^{jk} such that

\langle \psi _{jk}\mid \psi ^{lm}\rangle =\delta _{jl}\,\delta _{km}
where δ_jk is the Kronecker delta and ⟨f|g⟩ is the usual inner product on L²(ℝ). Indeed, there exists a unique series representation for a square-integrable function f expressed in this basis:

f(x)=\sum _{jk}\langle \psi ^{jk}\mid f\rangle \,\psi _{jk}(x)
If there exists a function ψ̃ ∈ L²(ℝ) such that

{\tilde {\psi }}_{jk}(x)=2^{j/2}{\tilde {\psi }}(2^{j}x-k)=\psi ^{jk}(x)
then ψ̃ is called the dual wavelet or the wavelet dual to ψ. In general, for some given R-function ψ, the dual will not exist. In the special case of ψ = ψ̃, the wavelet is said to be an orthogonal wavelet.
An example of an R-function without a dual is easy to construct. Let φ be an orthogonal wavelet. Then define ψ(x) = φ(x) + zφ(2x) for some complex number z. It is straightforward to show that this ψ does not have a wavelet dual.
|
https://en.wikipedia.org/wiki/Dual_wavelet
|
ECW (Enhanced Compression Wavelet) is a proprietary wavelet compression image format used for aerial photography and satellite imagery. It was developed by Earth Resource Mapping, which is now owned by Intergraph, part of Hexagon AB.[1] It is a lossy compression format for images.
In 1998, at Earth Resource Mapping Ltd in Perth, Western Australia, company founder Stuart Nixon (founder of Nearmap) and two software developers, Simon Cope and Mark Sheridan, were researching rapid delivery of terabyte-sized images over the internet using inexpensive server technology. The outcome of that research was two products, Image Web Server (IWS) and ECW. ECW enables discrete wavelet transform (DWT) and inverse-DWT operations to be performed quickly on large images while using a relatively small amount of memory.[2] Related (now expired) patents included US 6201897 and US 6442298 for ECW and US 6633688 for IWS. These patents were obtained by ERDAS Inc. through the acquisition of Earth Resource Mapping on May 21, 2007.[3][4] Indirectly, Hexagon AB became owner of these patents because they acquired Leica Geosystems in 2005, who had acquired ERDAS Inc in 2001.[5]
After JPEG 2000 became an image standard, ER Mapper added tools to read and write JPEG 2000 data into the ECW SDK to form the ECW JPEG 2000 SDK. After subsequent purchase by ERDAS (themselves subsequently merged into Intergraph), the software development kit was renamed the ERDAS ECW/JP2 SDK. Version 5 of the SDK was released on 2 July 2013.
Map projection information can be embedded into the ECW file format to support geospatial applications.
Image data of up to 65,535 bands (layers or colors) can be compressed into the ECW v2 or v3 file format at a rate of over 25 MB per second on an i7 740QM (4-core) 1.73 GHz processor using v4.2 of the ECW/JP2 SDK. Data flow compression allows for compression of large images with small RAM requirements. The file format can achieve typical compression ratios from 1:2 to 1:100.
The ECW Protocol (ECWP) is an efficient streaming protocol used to transmit ECW and JPEG 2000 images over networks such as the Internet. ECWP supports ECWPS for private and secure encrypted streaming of image data over public networks such as the Internet.
There is a very fast read-only SDK supporting ECW and JPEG 2000, which is available at no charge for desktop implementation on Windows, Linux and macOS. A read/write SDK can be purchased for desktop and server implementations on Windows, Linux and macOS. A fully functioning server implementation (using ECW, JPEG 2000, ECWP and JPIP) was offered within the PROVIDER SUITE of the Power Portfolio (formerly IWS) license.[6] A previous version of the SDK (3.3) is available as open source and can be used for non-Microsoft operating systems, such as Linux, macOS or Android.
|
https://en.wikipedia.org/wiki/ECW_(file_format)
|
Geographic data and information is defined in the ISO/TC 211 series of standards as data and information having an implicit or explicit association with a location relative to Earth (a geographic location or geographic position).[1][2] It is also called geospatial data and information,[citation needed] georeferenced data and information,[citation needed] as well as geodata and geoinformation.[citation needed]
Location information (known by the many names mentioned here) is stored in a geographic information system (GIS).
There are also many different types of geodata, including vector files, raster files, geographic databases, web files, and multi-temporal data.
Spatial data or spatial information is a broader class of data whose geometry is relevant but which is not necessarily georeferenced, such as in computer-aided design (CAD); see geometric modeling.
Geographic data and information are the subject of a number of overlapping fields of study, mainly:
"Geospatial technology" may refer to any of "geomatics", "geomatics", or "geographic information technology."
The above is in addition to other related fields, such as:
|
https://en.wikipedia.org/wiki/Geospatial
|