In physical systems, damping is the loss of energy of an oscillating system by dissipation.[1][2] Damping is an influence within or upon an oscillatory system that has the effect of reducing or preventing its oscillation.[3] Examples of damping include viscous damping in a fluid (see viscous drag), surface friction, radiation,[1] resistance in electronic oscillators, and absorption and scattering of light in optical oscillators. Damping not based on energy loss can be important in other oscillating systems, such as those that occur in biological systems and bikes[4] (e.g. suspension (mechanics)). Damping is not to be confused with friction, which is a type of dissipative force acting on a system; friction can cause or be a factor of damping.

Many systems exhibit oscillatory behaviour when they are disturbed from their position of static equilibrium. A mass suspended from a spring, for example, might, if pulled and released, bounce up and down. On each bounce, the system tends to return to its equilibrium position, but overshoots it. Losses (e.g. frictional) damp the system and can cause the oscillations to gradually decay in amplitude towards zero, or attenuate. The damping ratio is a dimensionless measure, amongst other measures, that characterises how damped a system is. It is denoted by ζ ("zeta") and varies from undamped (ζ = 0) and underdamped (ζ < 1) through critically damped (ζ = 1) to overdamped (ζ > 1).

The behaviour of oscillating systems is of interest in a diverse range of disciplines, including control engineering, chemical engineering, mechanical engineering, structural engineering, and electrical engineering. The physical quantity that is oscillating varies greatly: it could be the swaying of a tall building in the wind, or the speed of an electric motor. A normalised, or non-dimensionalised, approach can be convenient in describing common aspects of behaviour. Depending on the amount of damping present, a system exhibits different oscillatory behaviours and speeds.

A damped sine wave or damped sinusoid is a sinusoidal function whose amplitude approaches zero as time increases. It corresponds to the underdamped case of damped second-order systems, or underdamped second-order differential equations.[6] Damped sine waves are commonly seen in science and engineering, wherever a harmonic oscillator is losing energy faster than it is being supplied. A true sine wave starting at time = 0 begins at the origin (amplitude = 0); a cosine wave begins at its maximum value due to its phase difference from the sine wave. A given sinusoidal waveform may be of intermediate phase, having both sine and cosine components. The term "damped sine wave" describes all such damped waveforms, whatever their initial phase.

The most common form of damping, which is usually assumed, is the form found in linear systems: exponential damping, in which the outer envelope of the successive peaks is an exponential decay curve. That is, when the maximum point of each successive cycle is connected, the result resembles an exponential decay function. The general equation for an exponentially damped sinusoid may be represented as

$$y(t) = A e^{-\lambda t} \cos(\omega t - \varphi),$$

where $A$ is the initial amplitude, $\lambda$ is the decay rate, $\omega$ is the angular frequency, and $\varphi$ is the phase angle.
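As a concrete illustration, here is a minimal Python sketch (assuming NumPy is available; the parameter values are purely illustrative, not from the source) that evaluates this damped sinusoid together with its exponential envelope:

```python
import numpy as np

# Illustrative values (assumptions) for amplitude, decay rate,
# angular frequency, and phase of y(t) = A e^{-lambda t} cos(omega t - phi).
A, lam, omega, phi = 1.0, 0.5, 2.0 * np.pi, 0.0

t = np.linspace(0.0, 10.0, 1001)
y = A * np.exp(-lam * t) * np.cos(omega * t - phi)   # damped sinusoid
envelope = A * np.exp(-lam * t)                      # exponential decay envelope

# The successive peaks of y lie on (or just under) the envelope:
print(y.max(), envelope[0])
```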
The damping ratio is a dimensionless parameter, usually denoted by ζ (the Greek letter zeta),[7] that characterizes the extent of damping in a second-order ordinary differential equation. It is particularly important in the study of control theory; it is also important in the harmonic oscillator. The greater the damping ratio, the more damped a system is. The damping ratio expresses the level of damping in a system relative to critical damping, and can be defined using the damping coefficient:

$$\zeta = \frac{c}{c_c}.$$

The damping ratio is dimensionless, being the ratio of two coefficients of identical units.

Taking the simple example of a mass-spring-damper model with mass $m$, damping coefficient $c$, and spring constant $k$, where $x$ represents the degree of freedom, the system's equation of motion is given by

$$m\ddot{x} + c\dot{x} + kx = 0.$$

The corresponding critical damping coefficient is

$$c_c = 2\sqrt{km},$$

and the natural frequency of the system is

$$\omega_n = \sqrt{\frac{k}{m}}.$$

Using these definitions, the equation of motion can then be expressed as

$$\ddot{x} + 2\zeta\omega_n\dot{x} + \omega_n^2 x = 0.$$

This equation is more general than just the mass-spring-damper system and applies to electrical circuits and to other domains. It can be solved with the approach $x(t) = Ce^{st}$, where $C$ and $s$ are both complex constants, with $s$ satisfying

$$s^2 + 2\zeta\omega_n s + \omega_n^2 = 0.$$

Two such solutions, for the two values of $s$ satisfying the equation, can be combined to make the general real solutions, with oscillatory and decaying properties in several regimes: undamped ($\zeta = 0$), underdamped ($\zeta < 1$), critically damped ($\zeta = 1$), and overdamped ($\zeta > 1$).

The Q factor, damping ratio ζ, and exponential decay rate α are related such that[9]

$$\zeta = \frac{1}{2Q} = \frac{\alpha}{\omega_n}.$$

When a second-order system has $\zeta < 1$ (that is, when the system is underdamped), it has two complex conjugate poles that each have a real part of $-\alpha$; that is, the decay rate parameter represents the rate of exponential decay of the oscillations. A lower damping ratio implies a lower decay rate, and so very underdamped systems oscillate for long times.[10] For example, a high quality tuning fork, which has a very low damping ratio, has an oscillation that lasts a long time, decaying very slowly after being struck by a hammer.

For underdamped vibrations, the damping ratio is also related to the logarithmic decrement $\delta$. The damping ratio can be found for any two peaks, even if they are not adjacent.[11] For adjacent peaks:[12]

$$\zeta = \frac{\delta}{\sqrt{4\pi^2 + \delta^2}}, \quad \text{where} \quad \delta = \ln\frac{x_0}{x_1},$$

and $x_0$ and $x_1$ are the amplitudes of any two successive peaks. The decrement can equally be taken between peaks of the same sign, $\delta = \ln\frac{x_1}{x_3} = \ln\frac{x_2}{x_4}$, where $x_1$, $x_3$ are amplitudes of two successive positive peaks and $x_2$, $x_4$ are amplitudes of two successive negative peaks.

In control theory, overshoot refers to an output exceeding its final, steady-state value.[13] For a step input, the percentage overshoot (PO) is the maximum value minus the step value, divided by the step value. In the case of the unit step, the overshoot is just the maximum value of the step response minus one. The percentage overshoot (PO) is related to the damping ratio (ζ) by

$$\mathrm{PO} = 100\,\exp\left(-\frac{\zeta\pi}{\sqrt{1-\zeta^2}}\right).$$

Conversely, the damping ratio (ζ) that yields a given percentage overshoot is given by

$$\zeta = \frac{-\ln(\mathrm{PO}/100)}{\sqrt{\pi^2 + \ln^2(\mathrm{PO}/100)}}.$$
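The logarithmic-decrement and overshoot relations above are easy to evaluate numerically. Below is a minimal Python sketch (the function names are mine, not from the source) implementing them:

```python
import numpy as np

def zeta_from_peaks(x0, x1):
    """Damping ratio from two adjacent peak amplitudes (underdamped case)."""
    delta = np.log(x0 / x1)                        # logarithmic decrement
    return delta / np.sqrt(4.0 * np.pi**2 + delta**2)

def percent_overshoot(zeta):
    """Percentage overshoot of an underdamped second-order step response."""
    return 100.0 * np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))

def zeta_from_overshoot(po):
    """Damping ratio that yields a given percentage overshoot."""
    ln_r = np.log(po / 100.0)
    return -ln_r / np.sqrt(np.pi**2 + ln_r**2)

print(percent_overshoot(0.5))        # about 16.3 %
print(zeta_from_overshoot(16.3))     # recovers about 0.5
print(zeta_from_peaks(1.0, 0.6))     # zeta from a measured decay
```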
When an object is falling through the air, the only force opposing its freefall is air resistance. An object falling through water or oil would slow down at a greater rate, until eventually reaching a steady-state velocity as the drag force comes into equilibrium with the force from gravity. This is the concept of viscous drag, which is applied, for example, in automatic doors or anti-slam doors.[14]

Electrical systems that operate with alternating current (AC) use resistors to damp LC resonant circuits.[14]

Kinetic energy that causes oscillations is dissipated as heat by electric eddy currents, which are induced by passing through a magnet's poles, either by a coil or an aluminium plate. Eddy currents are a key component of electromagnetic induction, where they set up a magnetic flux directly opposing the oscillating movement, creating a resistive force.[15] In other words, the resistance caused by magnetic forces slows a system down. An example of this concept being applied is the brakes on roller coasters.[16]

Magnetorheological dampers (MR dampers) use magnetorheological fluid, which changes viscosity when subjected to a magnetic field. In this case, magnetorheological damping may be considered an interdisciplinary form of damping with both viscous and magnetic damping mechanisms.[17][18]

Materials have varying degrees of internal damping properties due to microstructural mechanisms within them. This property is sometimes known as damping capacity. In metals, it arises from the movement of dislocations.[19] Metals, as well as ceramics and glass, are known for having very light material damping. By contrast, polymers have a much higher material damping, which arises from the energy loss required to continually break and reform the Van der Waals forces between polymer chains. The cross-linking in thermoset plastics causes less movement of the polymer chains, and so the damping is less. Material damping is best characterised by the loss factor $\eta$ in the case of very light damping, such as in metals or ceramics. This is because many microstructural processes that contribute to material damping are not well modelled by viscous damping, and so the damping ratio varies with frequency; adding the frequency ratio as a factor typically makes the loss factor constant over a wide frequency range.
Source: https://en.wikipedia.org/wiki/Damped_sine_wave
In mathematics, the Fourier transform (FT) is an integral transform that takes a function as input and outputs another function that describes the extent to which various frequencies are present in the original function. The output of the transform is a complex-valued function of frequency. The term Fourier transform refers to both this complex-valued function and the mathematical operation. When a distinction needs to be made, the output of the operation is sometimes called the frequency domain representation of the original function. The Fourier transform is analogous to decomposing the sound of a musical chord into the intensities of its constituent pitches.

Functions that are localized in the time domain have Fourier transforms that are spread out across the frequency domain, and vice versa, a phenomenon known as the uncertainty principle. The critical case for this principle is the Gaussian function, of substantial importance in probability theory and statistics as well as in the study of physical phenomena exhibiting normal distribution (e.g., diffusion). The Fourier transform of a Gaussian function is another Gaussian function. Joseph Fourier introduced sine and cosine transforms (which correspond to the imaginary and real components of the modern Fourier transform) in his study of heat transfer, where Gaussian functions appear as solutions of the heat equation.

The Fourier transform can be formally defined as an improper Riemann integral, making it an integral transform, although this definition is not suitable for many applications requiring a more sophisticated integration theory.[note 1] For example, many relatively simple applications use the Dirac delta function, which can be treated formally as if it were a function, but the justification requires a mathematically more sophisticated viewpoint.[note 2]

The Fourier transform can also be generalized to functions of several variables on Euclidean space, sending a function of 3-dimensional "position space" to a function of 3-dimensional momentum (or a function of space and time to a function of 4-momentum). This idea makes the spatial Fourier transform very natural in the study of waves, as well as in quantum mechanics, where it is important to be able to represent wave solutions as functions of either position or momentum and sometimes both. In general, functions to which Fourier methods are applicable are complex-valued, and possibly vector-valued.[note 3] Still further generalization is possible to functions on groups, which, besides the original Fourier transform on $\mathbb{R}$ or $\mathbb{R}^n$, notably includes the discrete-time Fourier transform (DTFT, group = $\mathbb{Z}$), the discrete Fourier transform (DFT, group = $\mathbb{Z} \bmod N$) and the Fourier series or circular Fourier transform (group = $S^1$, the unit circle ≈ closed finite interval with endpoints identified). The latter is routinely employed to handle periodic functions. The fast Fourier transform (FFT) is an algorithm for computing the DFT.

The Fourier transform of a complex-valued (Lebesgue) integrable function $f(x)$ on the real line is the complex-valued function $\hat{f}(\xi)$, defined by the integral[1]

$$\hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\, e^{-i2\pi\xi x}\, dx, \quad \forall \xi \in \mathbb{R}. \tag{Eq.1}$$

Evaluating the Fourier transform for all values of $\xi$ produces the frequency-domain function, and it converges at all frequencies to a continuous function tending to zero at infinity.
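A minimal numerical sketch of Eq.1 (assuming NumPy; the Riemann-sum discretization and the grid limits are my own choices) approximates the transform of the Gaussian $e^{-\pi x^2}$, which is its own Fourier transform:

```python
import numpy as np

def fourier_transform(fx, x, xi):
    """Riemann-sum approximation of f_hat(xi) = ∫ f(x) e^{-i 2π ξ x} dx."""
    dx = x[1] - x[0]
    return (fx * np.exp(-2j * np.pi * xi * x)).sum() * dx

x = np.linspace(-10.0, 10.0, 4001)
f = np.exp(-np.pi * x**2)             # Gaussian: equal to its own transform
for xi in (0.0, 0.5, 1.0):
    approx = fourier_transform(f, x, xi)
    print(xi, approx.real, np.exp(-np.pi * xi**2))   # the two values agree closely
```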
If $f(x)$ decays with all derivatives, i.e.,

$$\lim_{|x|\to\infty} f^{(n)}(x) = 0, \quad \forall n \in \mathbb{N},$$

then $\hat{f}$ converges for all frequencies and, by the Riemann–Lebesgue lemma, $\hat{f}$ also decays with all derivatives.

First introduced in Fourier's Analytical Theory of Heat,[2][3][4][5] the corresponding inversion formula for "sufficiently nice" functions is given by the Fourier inversion theorem, i.e.,

$$f(x) = \int_{-\infty}^{\infty} \hat{f}(\xi)\, e^{i2\pi\xi x}\, d\xi, \quad \forall x \in \mathbb{R}. \tag{Eq.2}$$

The functions $f$ and $\hat{f}$ are referred to as a Fourier transform pair.[6] A common notation for designating transform pairs is:[7]

$$f(x)\ \stackrel{\mathcal{F}}{\longleftrightarrow}\ \hat{f}(\xi), \quad \text{for example} \quad \operatorname{rect}(x)\ \stackrel{\mathcal{F}}{\longleftrightarrow}\ \operatorname{sinc}(\xi).$$

By analogy, the Fourier series can be regarded as an abstract Fourier transform on the group $\mathbb{Z}$ of integers. That is, the synthesis of a sequence of complex numbers $c_n$ is defined by the Fourier transform

$$f(x) = \sum_{n=-\infty}^{\infty} c_n\, e^{i2\pi\frac{n}{P}x},$$

such that the $c_n$ are given by the inversion formula, i.e., the analysis

$$c_n = \frac{1}{P} \int_{-P/2}^{P/2} f(x)\, e^{-i2\pi\frac{n}{P}x}\, dx,$$

for some complex-valued, $P$-periodic function $f(x)$ defined on a bounded interval $[-P/2, P/2] \subset \mathbb{R}$. When $P \to \infty$, the constituent frequencies are a continuum: $\frac{n}{P} \to \xi \in \mathbb{R}$,[8][9][10] and $c_n \to \hat{f}(\xi) \in \mathbb{C}$.[11]

In other words, on the finite interval $[-P/2, P/2]$ the function $f(x)$ has a discrete decomposition in the periodic functions $e^{i2\pi xn/P}$; on the infinite interval $(-\infty, \infty)$ it has a continuous decomposition in periodic functions $e^{i2\pi x\xi}$.
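The analysis/synthesis pair above can be checked numerically. A small Python sketch (the rectangular-pulse example, period, and grid sizes are my own choices) computes coefficients $c_n$ for a $P$-periodic function and re-synthesizes a partial sum:

```python
import numpy as np

P = 2.0                                         # period (illustrative)
x = np.linspace(-P / 2, P / 2, 4000, endpoint=False)
dx = x[1] - x[0]
f = np.where(np.abs(x) < 0.5, 1.0, 0.0)         # one period of a rectangular pulse

def c(n):
    """Analysis: c_n = (1/P) ∫_{-P/2}^{P/2} f(x) e^{-i 2π n x / P} dx."""
    return (f * np.exp(-2j * np.pi * n * x / P)).sum() * dx / P

# Synthesis: partial sum of the Fourier series with |n| <= 50
f_rec = sum(c(n) * np.exp(2j * np.pi * n * x / P) for n in range(-50, 51))
print(np.abs(f - f_rec.real).max())             # small except near the jumps (Gibbs)
```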
A measurable function $f: \mathbb{R} \to \mathbb{C}$ is called (Lebesgue) integrable if the Lebesgue integral of its absolute value is finite:

$$\|f\|_1 = \int_{\mathbb{R}} |f(x)|\, dx < \infty.$$

If $f$ is Lebesgue integrable then the Fourier transform, given by Eq.1, is well-defined for all $\xi \in \mathbb{R}$.[12] Furthermore, $\hat{f} \in L^\infty \cap C(\mathbb{R})$ is bounded, uniformly continuous and (by the Riemann–Lebesgue lemma) zero at infinity. The space $L^1(\mathbb{R})$ is the space of measurable functions for which the norm $\|f\|_1$ is finite, modulo the equivalence relation of equality almost everywhere. The Fourier transform on $L^1(\mathbb{R})$ is one-to-one. However, there is no easy characterization of the image, and thus no easy characterization of the inverse transform. In particular, Eq.2 is no longer valid, as it was stated only under the hypothesis that $f(x)$ decayed with all derivatives.

While Eq.1 defines the Fourier transform for (complex-valued) functions in $L^1(\mathbb{R})$, it is not well-defined for other integrability classes, most importantly the space of square-integrable functions $L^2(\mathbb{R})$. For example, the function $f(x) = (1+x^2)^{-1/2}$ is in $L^2$ but not $L^1$, and therefore the Lebesgue integral in Eq.1 does not exist. However, the Fourier transform on the dense subspace $L^1 \cap L^2(\mathbb{R}) \subset L^2(\mathbb{R})$ admits a unique continuous extension to a unitary operator on $L^2(\mathbb{R})$. This extension is important in part because, unlike the case of $L^1$, the Fourier transform is an automorphism of the space $L^2(\mathbb{R})$.

In such cases, the Fourier transform can be obtained explicitly by regularizing the integral, and then passing to a limit. In practice, the integral is often regarded as an improper integral instead of a proper Lebesgue integral, but sometimes for convergence one needs to use a weak limit or principal value instead of the (pointwise) limits implicit in an improper integral. Titchmarsh (1986) and Dym & McKean (1985) each gives three rigorous ways of extending the Fourier transform to square-integrable functions using this procedure.

A general principle in working with the $L^2$ Fourier transform is that Gaussians are dense in $L^1 \cap L^2$, and the various features of the Fourier transform, such as its unitarity, are easily inferred for Gaussians. Many of the properties of the Fourier transform can then be proven from two facts about Gaussians.[13]

A feature of the $L^1$ Fourier transform is that it is a homomorphism of Banach algebras from $L^1$ equipped with the convolution operation to the Banach algebra of continuous functions under the $L^\infty$ (supremum) norm. The conventions chosen in this article are those of harmonic analysis, and are characterized as the unique conventions such that the Fourier transform is both unitary on $L^2$ and an algebra homomorphism from $L^1$ to $L^\infty$, without renormalizing the Lebesgue measure.[14]

When the independent variable ($x$) represents time (often denoted by $t$), the transform variable ($\xi$) represents frequency (often denoted by $f$). For example, if time is measured in seconds, then frequency is in hertz. The Fourier transform can also be written in terms of angular frequency, $\omega = 2\pi\xi$, whose units are radians per second. The substitution $\xi = \frac{\omega}{2\pi}$ into Eq.1 produces this convention, where the function $\hat{f}$ is relabeled $\hat{f}_1$:

$$\hat{f}_3(\omega) \triangleq \int_{-\infty}^{\infty} f(x) \cdot e^{-i\omega x}\, dx = \hat{f}_1\!\left(\tfrac{\omega}{2\pi}\right), \qquad f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat{f}_3(\omega) \cdot e^{i\omega x}\, d\omega.$$

Unlike the Eq.1 definition, the Fourier transform in this convention is no longer a unitary transformation, and there is less symmetry between the formulas for the transform and its inverse.
Those properties are restored by splitting the $2\pi$ factor evenly between the transform and its inverse, which leads to another convention:

$$\hat{f}_2(\omega) \triangleq \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(x) \cdot e^{-i\omega x}\, dx = \frac{1}{\sqrt{2\pi}}\, \hat{f}_1\!\left(\tfrac{\omega}{2\pi}\right), \qquad f(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \hat{f}_2(\omega) \cdot e^{i\omega x}\, d\omega.$$

Variations of all three conventions can be created by conjugating the complex-exponential kernel of both the forward and the reverse transform; the signs must be opposites.

In 1822, Fourier claimed (see Joseph Fourier § The Analytic Theory of Heat) that any function, whether continuous or discontinuous, can be expanded into a series of sines.[15] That important work was corrected and expanded upon by others to provide the foundation for the various forms of the Fourier transform used since.

In general, the coefficients $\hat{f}(\xi)$ are complex numbers, which have two equivalent forms (see Euler's formula):

$$\hat{f}(\xi) = \underbrace{A e^{i\theta}}_{\text{polar coordinate form}} = \underbrace{A\cos(\theta) + iA\sin(\theta)}_{\text{rectangular coordinate form}}.$$

The product with $e^{i2\pi\xi x}$ (Eq.2) has these forms:

$$\hat{f}(\xi) \cdot e^{i2\pi\xi x} = A e^{i\theta} \cdot e^{i2\pi\xi x} = \underbrace{A e^{i(2\pi\xi x + \theta)}}_{\text{polar coordinate form}} = \underbrace{A\cos(2\pi\xi x + \theta) + iA\sin(2\pi\xi x + \theta)}_{\text{rectangular coordinate form}},$$

which conveys both amplitude and phase of frequency $\xi$. Likewise, the intuitive interpretation of Eq.1 is that multiplying $f(x)$ by $e^{-i2\pi\xi x}$ has the effect of subtracting $\xi$ from every frequency component of the function $f(x)$.[note 4] Only the component that was at frequency $\xi$ can produce a non-zero value of the infinite integral, because (at least formally) all the other shifted components are oscillatory and integrate to zero (see § Example). It is noteworthy how easily the product was simplified using the polar form, and how easily the rectangular form was deduced by an application of Euler's formula.

Euler's formula introduces the possibility of negative $\xi$, and Eq.1 is defined for all $\xi \in \mathbb{R}$. Only certain complex-valued $f(x)$ have transforms with $\hat{f}(\xi) = 0$ for all $\xi < 0$ (see analytic signal; a simple example is $e^{i2\pi\xi_0 x}$ with $\xi_0 > 0$). But negative frequency is necessary to characterize all other complex-valued $f(x)$, found in signal processing, partial differential equations, radar, nonlinear optics, quantum mechanics, and others. For a real-valued $f(x)$, Eq.1 has the symmetry property $\hat{f}(-\xi) = \hat{f}^*(\xi)$ (see § Conjugation below).
This redundancy enables Eq.2 to distinguish $f(x) = \cos(2\pi\xi_0 x)$ from $e^{i2\pi\xi_0 x}$. But of course it cannot tell us the actual sign of $\xi_0$, because $\cos(2\pi\xi_0 x)$ and $\cos(2\pi(-\xi_0)x)$ are indistinguishable on the real number line alone.

The Fourier transform of a periodic function cannot be defined using the integral formula directly. In order for the integral in Eq.1 to be defined, the function must be absolutely integrable; instead it is common to use Fourier series. It is possible to extend the definition to include periodic functions by viewing them as tempered distributions. This makes it possible to see a connection between the Fourier series and the Fourier transform for periodic functions that have a convergent Fourier series. If $f(x)$ is a periodic function, with period $P$, that has a convergent Fourier series, then:

$$\hat{f}(\xi) = \sum_{n=-\infty}^{\infty} c_n \cdot \delta\!\left(\xi - \tfrac{n}{P}\right),$$

where $c_n$ are the Fourier series coefficients of $f$, and $\delta$ is the Dirac delta function. In other words, the Fourier transform is a Dirac comb function whose teeth are multiplied by the Fourier series coefficients.

The Fourier transform of an integrable function $f$ can be sampled at regular intervals of arbitrary length $\tfrac{1}{P}$. These samples can be deduced from one cycle of a periodic function $f_P$, which has Fourier series coefficients proportional to those samples, by the Poisson summation formula:

$$f_P(x) \triangleq \sum_{n=-\infty}^{\infty} f(x + nP) = \frac{1}{P} \sum_{k=-\infty}^{\infty} \hat{f}\!\left(\tfrac{k}{P}\right) e^{i2\pi\frac{k}{P}x}.$$

The integrability of $f$ ensures the periodic summation converges. Therefore, the samples $\hat{f}\!\left(\tfrac{k}{P}\right)$ can be determined by Fourier series analysis:

$$\hat{f}\!\left(\tfrac{k}{P}\right) = \int_P f_P(x) \cdot e^{-i2\pi\frac{k}{P}x}\, dx.$$

When $f(x)$ has compact support, $f_P(x)$ has a finite number of terms within the interval of integration. When $f(x)$ does not have compact support, numerical evaluation of $f_P(x)$ requires an approximation, such as tapering $f(x)$ or truncating the number of terms.
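The sampling relation above is easy to verify for a rapidly decaying function. In the Python sketch below (assuming NumPy; the Gaussian test function, the period $P$, and the truncation of the periodic summation are my own choices), Fourier-series analysis of the periodic summation $f_P$ recovers samples of $\hat{f}$ at $k/P$:

```python
import numpy as np

P = 4.0
x = np.linspace(-P / 2, P / 2, 8000, endpoint=False)
dx = x[1] - x[0]

# Periodic summation f_P of the Gaussian f(x) = e^{-pi x^2}; a few copies suffice
fP = sum(np.exp(-np.pi * (x + n * P) ** 2) for n in range(-5, 6))

# Fourier-series analysis of f_P recovers samples of f_hat at k/P;
# for this Gaussian, f_hat(xi) = e^{-pi xi^2} exactly.
for k in range(4):
    sample = (fP * np.exp(-2j * np.pi * k * x / P)).sum() * dx
    print(k, sample.real, np.exp(-np.pi * (k / P) ** 2))
```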
The frequency variable must have inverse units to the units of the original function's domain (typically named $t$ or $x$). For example, if $t$ is measured in seconds, $\xi$ should be in cycles per second or hertz. If the scale of time is in units of $2\pi$ seconds, then another Greek letter $\omega$ is typically used instead to represent angular frequency (where $\omega = 2\pi\xi$) in units of radians per second. If using $x$ for units of length, then $\xi$ must be in inverse length, e.g., wavenumbers. That is to say, there are two versions of the real line: one which is the range of $t$ and measured in units of $t$, and the other which is the range of $\xi$ and measured in inverse units to the units of $t$. These two distinct versions of the real line cannot be equated with each other. Therefore, the Fourier transform goes from one space of functions to a different space of functions: functions which have a different domain of definition.

In general, $\xi$ must always be taken to be a linear form on the space of its domain, which is to say that the second real line is the dual space of the first real line. See the article on linear algebra for a more formal explanation and for more details. This point of view becomes essential in generalizations of the Fourier transform to general symmetry groups, including the case of Fourier series. That there is no one preferred way (often, one says "no canonical way") to compare the two versions of the real line which are involved in the Fourier transform (fixing the units on one line does not force the scale of the units on the other line) is the reason for the plethora of rival conventions on the definition of the Fourier transform. The various definitions resulting from different choices of units differ by various constants.

In other conventions, the Fourier transform has $i$ in the exponent instead of $-i$, and vice versa for the inversion formula. This convention is common in modern physics[16] and is the default for Wolfram Alpha, and does not mean that the frequency has become negative, since there is no canonical definition of positivity for the frequency of a complex wave. It simply means that $\hat{f}(\xi)$ is the amplitude of the wave $e^{-i2\pi\xi x}$ instead of the wave $e^{i2\pi\xi x}$ (the former, with its minus sign, is often seen in the time dependence for sinusoidal plane-wave solutions of the electromagnetic wave equation, or in the time dependence for quantum wave functions). Many of the identities involving the Fourier transform remain valid in those conventions, provided all terms that explicitly involve $i$ have it replaced by $-i$. In electrical engineering the letter $j$ is typically used for the imaginary unit instead of $i$ because $i$ is used for current.

When using dimensionless units, the constant factors might not even be written in the transform definition. For instance, in probability theory, the characteristic function $\varphi$ of the probability density function $f$ of a random variable $X$ of continuous type is defined without a negative sign in the exponential, and since the units of $x$ are ignored, there is no $2\pi$ either:

$$\varphi(\lambda) = \int_{-\infty}^{\infty} f(x) e^{i\lambda x}\, dx.$$

(In probability theory, and in mathematical statistics, the use of the Fourier–Stieltjes transform is preferred, because so many random variables are not of continuous type, and do not possess a density function, and one must treat not functions but distributions, i.e., measures which possess "atoms".)

From the higher point of view of group characters, which is much more abstract, all these arbitrary choices disappear, as will be explained in the later section of this article, which treats the notion of the Fourier transform of a function on a locally compact Abelian group.
Let $f(x)$ and $h(x)$ represent integrable functions Lebesgue-measurable on the real line satisfying

$$\int_{-\infty}^{\infty} |f(x)|\, dx < \infty.$$

We denote the Fourier transforms of these functions as $\hat{f}(\xi)$ and $\hat{h}(\xi)$ respectively. The Fourier transform has the following basic properties:[17]

Linearity:
$$a f(x) + b h(x)\ \stackrel{\mathcal{F}}{\Longleftrightarrow}\ a\hat{f}(\xi) + b\hat{h}(\xi); \quad a, b \in \mathbb{C}.$$

Time shifting:
$$f(x - x_0)\ \stackrel{\mathcal{F}}{\Longleftrightarrow}\ e^{-i2\pi x_0\xi}\, \hat{f}(\xi); \quad x_0 \in \mathbb{R}.$$

Frequency shifting:
$$e^{i2\pi\xi_0 x} f(x)\ \stackrel{\mathcal{F}}{\Longleftrightarrow}\ \hat{f}(\xi - \xi_0); \quad \xi_0 \in \mathbb{R}.$$

Time scaling:
$$f(ax)\ \stackrel{\mathcal{F}}{\Longleftrightarrow}\ \frac{1}{|a|}\hat{f}\!\left(\frac{\xi}{a}\right); \quad a \neq 0.$$

The case $a = -1$ leads to the time-reversal property:

$$f(-x)\ \stackrel{\mathcal{F}}{\Longleftrightarrow}\ \hat{f}(-\xi).$$
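These properties can be sanity-checked numerically. A short Python sketch (the quadrature grid and the Gaussian test function are my own choices) verifies the time-shifting property:

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 16001)
dx = x[1] - x[0]

def ft(g, xi):
    """Riemann-sum approximation of the Fourier transform at frequency xi."""
    return (g * np.exp(-2j * np.pi * xi * x)).sum() * dx

f = np.exp(-np.pi * x**2)
x0, xi = 1.5, 0.7

lhs = ft(np.exp(-np.pi * (x - x0) ** 2), xi)      # transform of f(x - x0)
rhs = np.exp(-2j * np.pi * x0 * xi) * ft(f, xi)   # e^{-i 2π x0 ξ} f_hat(ξ)
print(abs(lhs - rhs))                             # ~0, up to quadrature error
```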
When the real and imaginary parts of a complex function are decomposed into their even and odd parts, there are four components, denoted by the subscripts RE, RO, IE, and IO. There is a one-to-one mapping between the four components of a complex time function and the four components of its complex frequency transform:[18]

$$f = f_{RE} + f_{RO} + i f_{IE} + i f_{IO}\ \stackrel{\mathcal{F}}{\Longleftrightarrow}\ \hat{f} = \hat{f}_{RE} + i\hat{f}_{IO} + i\hat{f}_{IE} + \hat{f}_{RO},$$

with the components corresponding term by term: $f_{RE} \leftrightarrow \hat{f}_{RE}$, $f_{RO} \leftrightarrow i\hat{f}_{IO}$, $i f_{IE} \leftrightarrow i\hat{f}_{IE}$, and $i f_{IO} \leftrightarrow \hat{f}_{RO}$. From this, various relationships are apparent, for example the conjugation property:

$$\bigl(f(x)\bigr)^*\ \stackrel{\mathcal{F}}{\Longleftrightarrow}\ \left(\hat{f}(-\xi)\right)^*$$

(the ∗ denotes complex conjugation).

In particular, if $f$ is real, then $\hat{f}$ is even symmetric (i.e. a Hermitian function): $\hat{f}(-\xi) = \bigl(\hat{f}(\xi)\bigr)^*$. And if $f$ is purely imaginary, then $\hat{f}$ is odd symmetric: $\hat{f}(-\xi) = -\bigl(\hat{f}(\xi)\bigr)^*$. Likewise,

$$\operatorname{Re}\{f(x)\}\ \stackrel{\mathcal{F}}{\Longleftrightarrow}\ \tfrac{1}{2}\left(\hat{f}(\xi) + \bigl(\hat{f}(-\xi)\bigr)^*\right), \qquad \operatorname{Im}\{f(x)\}\ \stackrel{\mathcal{F}}{\Longleftrightarrow}\ \tfrac{1}{2i}\left(\hat{f}(\xi) - \bigl(\hat{f}(-\xi)\bigr)^*\right).$$

Substituting $\xi = 0$ in the definition, we obtain

$$\hat{f}(0) = \int_{-\infty}^{\infty} f(x)\, dx.$$

The integral of $f$ over its domain is known as the average value or DC bias of the function.

The Fourier transform may be defined in some cases for non-integrable functions, but the Fourier transforms of integrable functions have several strong properties. The Fourier transform $\hat{f}$ of any integrable function $f$ is uniformly continuous and[19][20]

$$\left\|\hat{f}\right\|_\infty \leq \|f\|_1.$$

By the Riemann–Lebesgue lemma,[21]

$$\hat{f}(\xi) \to 0 \ \text{ as }\ |\xi| \to \infty.$$

However, $\hat{f}$ need not be integrable. For example, the Fourier transform of the rectangular function, which is integrable, is the sinc function, which is not Lebesgue integrable, because its improper integrals behave analogously to the alternating harmonic series, converging to a sum without being absolutely convergent. It is not generally possible to write the inverse transform as a Lebesgue integral. However, when both $f$ and $\hat{f}$ are integrable, the inverse equality

$$f(x) = \int_{-\infty}^{\infty} \hat{f}(\xi) e^{i2\pi x\xi}\, d\xi$$

holds for almost every $x$. As a result, the Fourier transform is injective on $L^1(\mathbb{R})$.

Let $f(x)$ and $g(x)$ be integrable, and let $\hat{f}(\xi)$ and $\hat{g}(\xi)$ be their Fourier transforms. If $f(x)$ and $g(x)$ are also square-integrable, then the Parseval formula follows:[22]

$$\langle f, g\rangle_{L^2} = \int_{-\infty}^{\infty} f(x)\overline{g(x)}\, dx = \int_{-\infty}^{\infty} \hat{f}(\xi)\overline{\hat{g}(\xi)}\, d\xi,$$

where the bar denotes complex conjugation. The Plancherel theorem, which follows from the above, states that[23]

$$\|f\|_{L^2}^2 = \int_{-\infty}^{\infty} |f(x)|^2\, dx = \int_{-\infty}^{\infty} \left|\hat{f}(\xi)\right|^2\, d\xi.$$

Plancherel's theorem makes it possible to extend the Fourier transform, by a continuity argument, to a unitary operator on $L^2(\mathbb{R})$. On $L^1(\mathbb{R}) \cap L^2(\mathbb{R})$, this extension agrees with the original Fourier transform defined on $L^1(\mathbb{R})$, thus enlarging the domain of the Fourier transform to $L^1(\mathbb{R}) + L^2(\mathbb{R})$ (and consequently to $L^p(\mathbb{R})$ for $1 \leq p \leq 2$). Plancherel's theorem has the interpretation in the sciences that the Fourier transform preserves the energy of the original quantity.
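This energy preservation is easy to exhibit on a grid, where it reduces to the discrete Parseval identity. A minimal Python sketch (the grid-scaled FFT approximation and the modulated-Gaussian test function are my own choices):

```python
import numpy as np

N, dx = 4096, 0.01
x = (np.arange(N) - N // 2) * dx
f = np.exp(-np.pi * x**2) * np.exp(2j * np.pi * 3.0 * x)   # modulated Gaussian

F = dx * np.fft.fft(f)          # grid approximation of f_hat (up to a phase)
dxi = 1.0 / (N * dx)            # spacing of the frequency grid

print(np.sum(np.abs(f) ** 2) * dx)    # energy in the time domain ...
print(np.sum(np.abs(F) ** 2) * dxi)   # ... equals energy in the frequency domain
```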
The terminology of these formulas is not quite standardised. Parseval's theorem was originally proved only for Fourier series (first by Lyapunov). But Parseval's formula makes sense for the Fourier transform as well, and so even though in the context of the Fourier transform it was proved by Plancherel, it is still often referred to as Parseval's formula, or Parseval's relation, or even Parseval's theorem. See Pontryagin duality for a general formulation of this concept in the context of locally compact abelian groups.

The Fourier transform translates between convolution and multiplication of functions. If $f(x)$ and $g(x)$ are integrable functions with Fourier transforms $\hat{f}(\xi)$ and $\hat{g}(\xi)$ respectively, then the Fourier transform of the convolution is given by the product of the Fourier transforms $\hat{f}(\xi)$ and $\hat{g}(\xi)$ (under other conventions for the definition of the Fourier transform a constant factor may appear). This means that if

$$h(x) = (f * g)(x) = \int_{-\infty}^{\infty} f(y)g(x - y)\, dy,$$

where $*$ denotes the convolution operation, then

$$\hat{h}(\xi) = \hat{f}(\xi)\,\hat{g}(\xi).$$

In linear time-invariant (LTI) system theory, it is common to interpret $g(x)$ as the impulse response of an LTI system with input $f(x)$ and output $h(x)$, since substituting the unit impulse for $f(x)$ yields $h(x) = g(x)$. In this case, $\hat{g}(\xi)$ represents the frequency response of the system.

Conversely, if $f(x)$ can be decomposed as the product of two square-integrable functions $p(x)$ and $q(x)$, then the Fourier transform of $f(x)$ is given by the convolution of the respective Fourier transforms $\hat{p}(\xi)$ and $\hat{q}(\xi)$.

In an analogous manner, it can be shown that if $h(x)$ is the cross-correlation of $f(x)$ and $g(x)$:

$$h(x) = (f \star g)(x) = \int_{-\infty}^{\infty} \overline{f(y)}g(x + y)\, dy,$$

then the Fourier transform of $h(x)$ is

$$\hat{h}(\xi) = \overline{\hat{f}(\xi)}\,\hat{g}(\xi).$$

As a special case, the autocorrelation of the function $f(x)$ is

$$h(x) = (f \star f)(x) = \int_{-\infty}^{\infty} \overline{f(y)}f(x + y)\, dy,$$

for which

$$\hat{h}(\xi) = \overline{\hat{f}(\xi)}\hat{f}(\xi) = \left|\hat{f}(\xi)\right|^2.$$

Suppose $f(x)$ is an absolutely continuous differentiable function, and both $f$ and its derivative $f'$ are integrable. Then the Fourier transform of the derivative is given by

$$\widehat{f'}(\xi) = \mathcal{F}\left\{\frac{d}{dx}f(x)\right\} = i2\pi\xi\,\hat{f}(\xi).$$

More generally, the Fourier transformation of the $n$th derivative $f^{(n)}$ is given by

$$\widehat{f^{(n)}}(\xi) = \mathcal{F}\left\{\frac{d^n}{dx^n}f(x)\right\} = (i2\pi\xi)^n\hat{f}(\xi).$$

Analogously, $\mathcal{F}\left\{\frac{d^n}{d\xi^n}\hat{f}(\xi)\right\} = (i2\pi x)^n f(x)$, so

$$\mathcal{F}\left\{x^n f(x)\right\} = \left(\frac{i}{2\pi}\right)^n \frac{d^n}{d\xi^n}\hat{f}(\xi).$$

By applying the Fourier transform and using these formulas, some ordinary differential equations can be transformed into algebraic equations, which are much easier to solve. These formulas also give rise to the rule of thumb "$f(x)$ is smooth if and only if $\hat{f}(\xi)$ quickly falls to 0 for $|\xi| \to \infty$." By using the analogous rules for the inverse Fourier transform, one can also say "$f(x)$ quickly falls to 0 for $|x| \to \infty$ if and only if $\hat{f}(\xi)$ is smooth."
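The convolution theorem above can be demonstrated numerically. In this Python sketch (the Gaussian test functions, the grid, and the use of a discrete convolution scaled by the grid step are my own choices, not the source's), the transform of the convolution matches the product of the transforms:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
f = np.exp(-np.pi * x**2)
g = 0.5 * np.exp(-np.pi * (x / 2.0) ** 2)

h = np.convolve(f, g, mode="same") * dx    # (f * g)(x) sampled on the same grid

def ft(y, xi):
    return (y * np.exp(-2j * np.pi * xi * x)).sum() * dx

xi = 0.4
print(ft(h, xi))                # transform of the convolution ...
print(ft(f, xi) * ft(g, xi))    # ... equals the product of the transforms
```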
The Fourier transform is a linear transform which has eigenfunctions obeying $\mathcal{F}[\psi] = \lambda\psi$, with $\lambda \in \mathbb{C}$.

A set of eigenfunctions is found by noting that the homogeneous differential equation

$$\left[U\!\left(\frac{1}{2\pi}\frac{d}{dx}\right) + U(x)\right]\psi(x) = 0$$

leads to eigenfunctions $\psi(x)$ of the Fourier transform $\mathcal{F}$ as long as the form of the equation remains invariant under Fourier transform.[note 5] In other words, every solution $\psi(x)$ and its Fourier transform $\hat{\psi}(\xi)$ obey the same equation. Assuming uniqueness of the solutions, every solution $\psi(x)$ must therefore be an eigenfunction of the Fourier transform. The form of the equation remains unchanged under Fourier transform if $U(x)$ can be expanded in a power series in which, for all terms, the same factor of one of $\pm 1, \pm i$ arises from the factors $i^n$ introduced by the differentiation rules upon Fourier transforming the homogeneous differential equation, because this factor may then be cancelled. The simplest allowable $U(x) = x$ leads to the standard normal distribution.[24]

More generally, a set of eigenfunctions is also found by noting that the differentiation rules imply that the ordinary differential equation

$$\left[W\!\left(\frac{i}{2\pi}\frac{d}{dx}\right) + W(x)\right]\psi(x) = C\psi(x),$$

with $C$ constant and $W(x)$ a non-constant even function, remains invariant in form when applying the Fourier transform $\mathcal{F}$ to both sides of the equation. The simplest example is provided by $W(x) = x^2$, which is equivalent to considering the Schrödinger equation for the quantum harmonic oscillator.[25] The corresponding solutions provide an important choice of an orthonormal basis for $L^2(\mathbb{R})$ and are given by the "physicist's" Hermite functions. Equivalently one may use

$$\psi_n(x) = \frac{\sqrt[4]{2}}{\sqrt{n!}}\, e^{-\pi x^2}\, \mathrm{He}_n\!\left(2x\sqrt{\pi}\right),$$

where $\mathrm{He}_n(x)$ are the "probabilist's" Hermite polynomials, defined as

$$\mathrm{He}_n(x) = (-1)^n e^{\frac{1}{2}x^2}\left(\frac{d}{dx}\right)^n e^{-\frac{1}{2}x^2}.$$

Under this convention for the Fourier transform, we have that

$$\hat{\psi}_n(\xi) = (-i)^n \psi_n(\xi).$$

In other words, the Hermite functions form a complete orthonormal system of eigenfunctions for the Fourier transform on $L^2(\mathbb{R})$.[17][26] However, this choice of eigenfunctions is not unique. Because $\mathcal{F}^4 = \mathrm{id}$, there are only four different eigenvalues of the Fourier transform (the fourth roots of unity $\pm 1$ and $\pm i$), and any linear combination of eigenfunctions with the same eigenvalue gives another eigenfunction.[27] As a consequence of this, it is possible to decompose $L^2(\mathbb{R})$ as a direct sum of four spaces $H_0$, $H_1$, $H_2$, and $H_3$, where the Fourier transform acts on $H_k$ simply by multiplication by $i^k$. Since the complete set of Hermite functions $\psi_n$ provides a resolution of the identity, they diagonalize the Fourier operator, i.e.
the Fourier transform can be represented by such a sum of terms weighted by the above eigenvalues, and these sums can be explicitly summed:

$$\mathcal{F}[f](\xi) = \int dx\, f(x) \sum_{n\geq 0} (-i)^n \psi_n(x)\psi_n(\xi).$$

This approach to defining the Fourier transform was first proposed by Norbert Wiener.[28] Among other properties, Hermite functions decrease exponentially fast in both frequency and time domains, and they are thus used to define a generalization of the Fourier transform, namely the fractional Fourier transform used in time–frequency analysis.[29] In physics, this transform was introduced by Edward Condon.[30] This change of basis functions becomes possible because the Fourier transform is a unitary transform when using the right conventions. Consequently, under the proper conditions it may be expected to result from a self-adjoint generator $N$ via[31]

$$\mathcal{F}[\psi] = e^{-itN}\psi.$$

The operator $N$ is the number operator of the quantum harmonic oscillator, written as[32][33]

$$N \equiv \frac{1}{2}\left(x - \frac{\partial}{\partial x}\right)\left(x + \frac{\partial}{\partial x}\right) = \frac{1}{2}\left(-\frac{\partial^2}{\partial x^2} + x^2 - 1\right).$$

It can be interpreted as the generator of fractional Fourier transforms for arbitrary values of $t$, and of the conventional continuous Fourier transform $\mathcal{F}$ for the particular value $t = \pi/2$, with the Mehler kernel implementing the corresponding active transform. The eigenfunctions of $N$ are the Hermite functions $\psi_n(x)$, which are therefore also eigenfunctions of $\mathcal{F}$. Upon extending the Fourier transform to distributions, the Dirac comb is also an eigenfunction of the Fourier transform.
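As a numerical check of the eigenfunction relation $\hat{\psi}_n = (-i)^n\psi_n$, here is a small Python sketch (assuming NumPy and its numpy.polynomial.hermite_e module; the grid sizes and test frequency are my own choices) that builds the Hermite functions from the probabilist's polynomials and transforms them by quadrature:

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval

def psi(n, x):
    """Hermite function psi_n(x) = 2^(1/4)/sqrt(n!) e^{-pi x^2} He_n(2 x sqrt(pi))."""
    coef = np.zeros(n + 1)
    coef[n] = 1.0                                  # select He_n
    return (2.0 ** 0.25 / np.sqrt(factorial(n))
            * np.exp(-np.pi * x**2) * hermeval(2.0 * np.sqrt(np.pi) * x, coef))

x = np.linspace(-10.0, 10.0, 8001)
dx = x[1] - x[0]
xi = 0.6
for n in range(4):
    f_hat = (psi(n, x) * np.exp(-2j * np.pi * xi * x)).sum() * dx
    print(n, f_hat, (-1j) ** n * psi(n, xi))       # eigenvalue (-i)^n
```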
Under suitable conditions on the function $f$, it can be recovered from its Fourier transform $\hat{f}$. Indeed, denoting the Fourier transform operator by $\mathcal{F}$, so $\mathcal{F}f := \hat{f}$, then for suitable functions, applying the Fourier transform twice simply flips the function: $(\mathcal{F}^2 f)(x) = f(-x)$, which can be interpreted as "reversing time". Since reversing time is two-periodic, applying this twice yields $\mathcal{F}^4(f) = f$, so the Fourier transform operator is four-periodic, and similarly the inverse Fourier transform can be obtained by applying the Fourier transform three times: $\mathcal{F}^3(\hat{f}) = f$. In particular, the Fourier transform is invertible (under suitable conditions).

More precisely, defining the parity operator $\mathcal{P}$ such that $(\mathcal{P}f)(x) = f(-x)$, we have:

$$\mathcal{F}^0 = \mathrm{id}, \quad \mathcal{F}^1 = \mathcal{F}, \quad \mathcal{F}^2 = \mathcal{P}, \quad \mathcal{F}^3 = \mathcal{F}^{-1} = \mathcal{P}\circ\mathcal{F} = \mathcal{F}\circ\mathcal{P}, \quad \mathcal{F}^4 = \mathrm{id}.$$

These equalities of operators require careful definition of the space of functions in question, defining equality of functions (equality at every point? equality almost everywhere?) and defining equality of operators; that is, defining the topology on the function space and operator space in question. These equalities are not true for all functions, but are true under various conditions, which are the content of the various forms of the Fourier inversion theorem.

This fourfold periodicity of the Fourier transform is similar to a rotation of the plane by 90°, particularly as the two-fold iteration yields a reversal, and in fact this analogy can be made precise. While the Fourier transform can simply be interpreted as switching the time domain and the frequency domain, with the inverse Fourier transform switching them back, more geometrically it can be interpreted as a rotation by 90° in the time–frequency domain (considering time as the x-axis and frequency as the y-axis), and the Fourier transform can be generalized to the fractional Fourier transform, which involves rotations by other angles. This can be further generalized to linear canonical transformations, which can be visualized as the action of the special linear group $SL_2(\mathbb{R})$ on the time–frequency plane, with the preserved symplectic form corresponding to the uncertainty principle, below. This approach is particularly studied in signal processing, under time–frequency analysis.

The Heisenberg group is a certain group of unitary operators on the Hilbert space $L^2(\mathbb{R})$ of square-integrable complex-valued functions $f$ on the real line, generated by the translations $(T_y f)(x) = f(x + y)$ and multiplication by $e^{i2\pi\xi x}$, $(M_\xi f)(x) = e^{i2\pi\xi x}f(x)$. These operators do not commute, as their (group) commutator is

$$\left(M_\xi^{-1}T_y^{-1}M_\xi T_y f\right)(x) = e^{i2\pi\xi y}f(x),$$

which is multiplication by the constant (independent of $x$) $e^{i2\pi\xi y} \in U(1)$ (the circle group of unit modulus complex numbers). As an abstract group, the Heisenberg group is the three-dimensional Lie group of triples $(x, \xi, z) \in \mathbb{R}^2 \times U(1)$, with the group law

$$(x_1, \xi_1, t_1)\cdot(x_2, \xi_2, t_2) = \left(x_1 + x_2,\ \xi_1 + \xi_2,\ t_1 t_2 e^{i2\pi(x_1\xi_1 + x_2\xi_2 + x_1\xi_2)}\right).$$

Denote the Heisenberg group by $H_1$. The above procedure describes not only the group structure, but also a standard unitary representation of $H_1$ on a Hilbert space, which we denote by $\rho: H_1 \to B(L^2(\mathbb{R}))$. Define the linear automorphism of $\mathbb{R}^2$ by

$$J\begin{pmatrix} x \\ \xi \end{pmatrix} = \begin{pmatrix} -\xi \\ x \end{pmatrix},$$

so that $J^2 = -I$. This $J$ can be extended to a unique automorphism of $H_1$:

$$j(x, \xi, t) = \left(-\xi, x, t e^{-i2\pi\xi x}\right).$$

According to the Stone–von Neumann theorem, the unitary representations $\rho$ and $\rho\circ j$ are unitarily equivalent, so there is a unique intertwiner $W \in U(L^2(\mathbb{R}))$ such that

$$\rho\circ j = W\rho W^*.$$

This operator $W$ is the Fourier transform. Many of the standard properties of the Fourier transform are immediate consequences of this more general framework.[34] For example, the square of the Fourier transform, $W^2$, is an intertwiner associated with $J^2 = -I$, and so we have $(W^2 f)(x) = f(-x)$: the reflection of the original function $f$.

The integral for the Fourier transform

$$\hat{f}(\xi) = \int_{-\infty}^{\infty} e^{-i2\pi\xi t}f(t)\, dt$$

can be studied for complex values of its argument $\xi$.
Depending on the properties of $f$, this might not converge off the real axis at all, or it might converge to a complex analytic function for all values of $\xi = \sigma + i\tau$, or something in between.[35]

The Paley–Wiener theorem says that $f$ is smooth (i.e., $n$-times differentiable for all positive integers $n$) and compactly supported if and only if $\hat{f}(\sigma + i\tau)$ is a holomorphic function for which there exists a constant $a > 0$ such that for any integer $n \geq 0$,

$$\left|\xi^n\hat{f}(\xi)\right| \leq C e^{a|\tau|}$$

for some constant $C$. (In this case, $f$ is supported on $[-a, a]$.) This can be expressed by saying that $\hat{f}$ is an entire function which is rapidly decreasing in $\sigma$ (for fixed $\tau$) and of exponential growth in $\tau$ (uniformly in $\sigma$).[36] (If $f$ is not smooth, but only $L^2$, the statement still holds provided $n = 0$.[37]) The space of such functions of a complex variable is called the Paley–Wiener space. This theorem has been generalised to semisimple Lie groups.[38]

If $f$ is supported on the half-line $t \geq 0$, then $f$ is said to be "causal" because the impulse response function of a physically realisable filter must have this property, as no effect can precede its cause. Paley and Wiener showed that then $\hat{f}$ extends to a holomorphic function on the complex lower half-plane $\tau < 0$ which tends to zero as $\tau$ goes to infinity.[39] The converse is false, and it is not known how to characterise the Fourier transform of a causal function.[40]

The Fourier transform $\hat{f}(\xi)$ is related to the Laplace transform $F(s)$, which is also used for the solution of differential equations and the analysis of filters. It may happen that a function $f$ for which the Fourier integral does not converge on the real axis at all nevertheless has a complex Fourier transform defined in some region of the complex plane. For example, if $f(t)$ is of exponential growth, i.e.,

$$|f(t)| < C e^{a|t|}$$

for some constants $C, a \geq 0$, then[41]

$$\hat{f}(i\tau) = \int_{-\infty}^{\infty} e^{2\pi\tau t}f(t)\, dt,$$

convergent for all $2\pi\tau < -a$, is the two-sided Laplace transform of $f$. The more usual version ("one-sided") of the Laplace transform is

$$F(s) = \int_0^\infty f(t)e^{-st}\, dt.$$

If $f$ is also causal, and analytical, then $\hat{f}(i\tau) = F(-2\pi\tau)$. Thus, extending the Fourier transform to the complex domain means it includes the Laplace transform as a special case in the case of causal functions, but with the change of variable $s = i2\pi\xi$.

From another, perhaps more classical viewpoint, the Laplace transform by its form involves an additional exponential regulating term which lets it converge outside of the imaginary line where the Fourier transform is defined. As such it can converge for at most exponentially divergent series and integrals, whereas the original Fourier decomposition cannot, enabling analysis of systems with divergent or critical elements. Two particular examples from linear signal processing are the construction of allpass filter networks from critical comb and mitigating filters via exact pole-zero cancellation on the unit circle. Such designs are common in audio processing, where a highly nonlinear phase response is sought, as in reverb. Furthermore, when extended pulse-like impulse responses are sought for signal processing work, the easiest way to produce them is to have one circuit which produces a divergent time response, and then to cancel its divergence through a delayed opposite and compensatory response.
There, only the delay circuit in between admits a classical Fourier description, which is critical. Both the circuits to the side are unstable, and do not admit a convergent Fourier decomposition. However, they do admit a Laplace domain description, with identical half-planes of convergence in the complex plane (or in the discrete case, the Z-plane), wherein their effects cancel. In modern mathematics the Laplace transform is conventionally subsumed under the aegis of Fourier methods. Both of them are subsumed by the far more general, and more abstract, idea of harmonic analysis.

Still with $\xi = \sigma + i\tau$, if $\hat{f}$ is complex analytic for $a \leq \tau \leq b$, then

$$\int_{-\infty}^{\infty} \hat{f}(\sigma + ia)e^{i2\pi\xi t}\, d\sigma = \int_{-\infty}^{\infty} \hat{f}(\sigma + ib)e^{i2\pi\xi t}\, d\sigma$$

by Cauchy's integral theorem. Therefore, the Fourier inversion formula can use integration along different lines, parallel to the real axis.[42]

Theorem: If $f(t) = 0$ for $t < 0$, and $|f(t)| < Ce^{a|t|}$ for some constants $C, a > 0$, then

$$f(t) = \int_{-\infty}^{\infty} \hat{f}(\sigma + i\tau)e^{i2\pi\xi t}\, d\sigma,$$

for any $\tau < -a/2\pi$.

This theorem implies the Mellin inversion formula for the Laplace transformation,[41]

$$f(t) = \frac{1}{i2\pi}\int_{b - i\infty}^{b + i\infty} F(s)e^{st}\, ds,$$

for any $b > a$, where $F(s)$ is the Laplace transform of $f(t)$. The hypotheses can be weakened, as in the results of Carleson and Hunt, to $f(t)e^{-at}$ being $L^1$, provided that $f$ be of bounded variation in a closed neighborhood of $t$ (cf. Dini test), that the value of $f$ at $t$ be taken to be the arithmetic mean of the left and right limits, and that the integrals be taken in the sense of Cauchy principal values.[43] $L^2$ versions of these inversion formulas are also available.[44]

The Fourier transform can be defined in any arbitrary number of dimensions $n$. As with the one-dimensional case, there are many conventions. For an integrable function $f(\mathbf{x})$, this article takes the definition:

$$\hat{f}(\boldsymbol{\xi}) = \mathcal{F}(f)(\boldsymbol{\xi}) = \int_{\mathbb{R}^n} f(\mathbf{x})\, e^{-i2\pi\boldsymbol{\xi}\cdot\mathbf{x}}\, d\mathbf{x},$$

where $\mathbf{x}$ and $\boldsymbol{\xi}$ are $n$-dimensional vectors, and $\mathbf{x}\cdot\boldsymbol{\xi}$ is the dot product of the vectors. Alternatively, $\boldsymbol{\xi}$ can be viewed as belonging to the dual vector space $\mathbb{R}^{n\star}$, in which case the dot product becomes the contraction of $\mathbf{x}$ and $\boldsymbol{\xi}$, usually written as $\langle\mathbf{x},\boldsymbol{\xi}\rangle$.

All of the basic properties listed above hold for the $n$-dimensional Fourier transform, as do Plancherel's and Parseval's theorems. When the function is integrable, the Fourier transform is still uniformly continuous and the Riemann–Lebesgue lemma holds.[21]

Generally speaking, the more concentrated $f(x)$ is, the more spread out its Fourier transform $\hat{f}(\xi)$ must be. In particular, the scaling property of the Fourier transform may be seen as saying: if we squeeze a function in $x$, its Fourier transform stretches out in $\xi$. It is not possible to arbitrarily concentrate both a function and its Fourier transform.
The trade-off between the compaction of a function and its Fourier transform can be formalized in the form of an uncertainty principle by viewing a function and its Fourier transform as conjugate variables with respect to the symplectic form on the time–frequency domain: from the point of view of the linear canonical transformation, the Fourier transform is rotation by 90° in the time–frequency domain, and preserves the symplectic form.

Suppose $f(x)$ is an integrable and square-integrable function. Without loss of generality, assume that $f(x)$ is normalized:

$$\int_{-\infty}^{\infty} |f(x)|^2\, dx = 1.$$

It follows from the Plancherel theorem that $\hat{f}(\xi)$ is also normalized. The spread around $x = 0$ may be measured by the dispersion about zero, defined by[45]

$$D_0(f) = \int_{-\infty}^{\infty} x^2|f(x)|^2\, dx.$$

In probability terms, this is the second moment of $|f(x)|^2$ about zero. The uncertainty principle states that, if $f(x)$ is absolutely continuous and the functions $x\cdot f(x)$ and $f'(x)$ are square-integrable, then

$$D_0(f)\, D_0(\hat{f}) \geq \frac{1}{16\pi^2}.$$

The equality is attained only in the case

$$f(x) = C_1 e^{-\pi\frac{x^2}{\sigma^2}}, \qquad \hat{f}(\xi) = \sigma C_1 e^{-\pi\sigma^2\xi^2},$$

where $\sigma > 0$ is arbitrary and $C_1 = \sqrt[4]{2}/\sqrt{\sigma}$, so that $f$ is $L^2$-normalized. In other words, $f$ is a (normalized) Gaussian function with variance $\sigma^2/2\pi$, centered at zero, and its Fourier transform is a Gaussian function with variance $\sigma^{-2}/2\pi$. Gaussian functions are examples of Schwartz functions (see the discussion on tempered distributions below).

In fact, this inequality implies that:

$$\left(\int_{-\infty}^{\infty} (x - x_0)^2|f(x)|^2\, dx\right)\left(\int_{-\infty}^{\infty} (\xi - \xi_0)^2\left|\hat{f}(\xi)\right|^2\, d\xi\right) \geq \frac{1}{16\pi^2}, \quad \forall x_0, \xi_0 \in \mathbb{R}.$$

In quantum mechanics, the momentum and position wave functions are Fourier transform pairs, up to a factor of the Planck constant. With this constant properly taken into account, the inequality above becomes the statement of the Heisenberg uncertainty principle.[46]

A stronger uncertainty principle is the Hirschman uncertainty principle, which is expressed as:

$$H\left(|f|^2\right) + H\left(\left|\hat{f}\right|^2\right) \geq \log\left(\frac{e}{2}\right),$$

where $H(p)$ is the differential entropy of the probability density function $p(x)$:

$$H(p) = -\int_{-\infty}^{\infty} p(x)\log\bigl(p(x)\bigr)\, dx,$$

where the logarithms may be in any base that is consistent. The equality is attained for a Gaussian, as in the previous case.
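The equality case of the dispersion inequality is simple to check on a grid. A minimal Python sketch (the width parameter and grid are my own choices; the normalized Gaussian pair is taken from the formulas above) computes $D_0(f)\,D_0(\hat{f})$ and compares it with $1/16\pi^2$:

```python
import numpy as np

sigma = 1.3                                       # arbitrary width parameter
x = np.linspace(-30.0, 30.0, 20001)
dx = x[1] - x[0]

f = (2.0 ** 0.25 / np.sqrt(sigma)) * np.exp(-np.pi * x**2 / sigma**2)  # normalized
F = np.sqrt(sigma) * 2.0 ** 0.25 * np.exp(-np.pi * sigma**2 * x**2)    # its transform

D0_f = (x**2 * np.abs(f) ** 2).sum() * dx         # dispersion about zero of f
D0_F = (x**2 * np.abs(F) ** 2).sum() * dx         # dispersion about zero of f_hat
print(D0_f * D0_F, 1.0 / (16.0 * np.pi**2))       # equality for the Gaussian
```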
The coefficient functionsaandbcan be found by using variants of the Fourier cosine transform and the Fourier sine transform (the normalisations are, again, not standardised):a(λ)=2∫−∞∞f(t)cos⁡(2πλt)dt{\displaystyle a(\lambda )=2\int _{-\infty }^{\infty }f(t)\cos(2\pi \lambda t)\,dt}andb(λ)=2∫−∞∞f(t)sin⁡(2πλt)dt.{\displaystyle b(\lambda )=2\int _{-\infty }^{\infty }f(t)\sin(2\pi \lambda t)\,dt.} Older literature refers to the two transform functions, the Fourier cosine transform,a, and the Fourier sine transform,b. The functionfcan be recovered from the sine and cosine transform usingf(t)=2∫0∞∫−∞∞f(τ)cos⁡(2πλ(τ−t))dτdλ.{\displaystyle f(t)=2\int _{0}^{\infty }\int _{-\infty }^{\infty }f(\tau )\cos {\bigl (}2\pi \lambda (\tau -t){\bigr )}\,d\tau \,d\lambda .}together with trigonometric identities. This is referred to as Fourier's integral formula.[41][48][49][50] Let the set ofhomogeneousharmonicpolynomialsof degreekonRnbe denoted byAk. The setAkconsists of thesolid spherical harmonicsof degreek. The solid spherical harmonics play a similar role in higher dimensions to the Hermite polynomials in dimension one. Specifically, iff(x) =e−π|x|2P(x)for someP(x)inAk, thenf̂(ξ) =i−kf(ξ). Let the setHkbe the closure inL2(Rn)of linear combinations of functions of the formf(|x|)P(x)whereP(x)is inAk. The spaceL2(Rn)is then a direct sum of the spacesHkand the Fourier transform maps each spaceHkto itself, and it is possible to characterize the action of the Fourier transform on each spaceHk.[21] Letf(x) =f0(|x|)P(x)(withP(x)inAk), thenf^(ξ)=F0(|ξ|)P(ξ){\displaystyle {\hat {f}}(\xi )=F_{0}(|\xi |)P(\xi )}whereF0(r)=2πi−kr−n+2k−22∫0∞f0(s)Jn+2k−22(2πrs)sn+2k2ds.{\displaystyle F_{0}(r)=2\pi i^{-k}r^{-{\frac {n+2k-2}{2}}}\int _{0}^{\infty }f_{0}(s)J_{\frac {n+2k-2}{2}}(2\pi rs)s^{\frac {n+2k}{2}}\,ds.} HereJ(n+ 2k− 2)/2denotes theBessel functionof the first kind with order⁠n+ 2k− 2/2⁠. Whenk= 0this gives a useful formula for the Fourier transform of a radial function.[51]This is essentially theHankel transform. Moreover, there is a simple recursion relating the casesn+ 2andn[52]allowing one to compute, e.g., the three-dimensional Fourier transform of a radial function from the one-dimensional one. In higher dimensions it becomes interesting to studyrestriction problemsfor the Fourier transform. The Fourier transform of an integrable function is continuous and the restriction of this function to any set is defined. But for a square-integrable function the Fourier transform could be a generalclassof square integrable functions. As such, the restriction of the Fourier transform of anL2(Rn)function cannot be defined on sets of measure 0. It is still an active area of study to understand restriction problems inLpfor1 <p< 2. It is possible in some cases to define the restriction of a Fourier transform to a setS, providedShas non-zero curvature. The case whenSis the unit sphere inRnis of particular interest. In this case the Tomas–Stein restriction theorem states that the restriction of the Fourier transform to the unit sphere inRnis a bounded operator onLpprovided1 ≤p≤⁠2n+ 2/n+ 3⁠. One notable difference between the Fourier transform in 1 dimension versus higher dimensions concerns the partial sum operator. Consider an increasing collection of measurable setsERindexed byR∈ (0,∞): such as balls of radiusRcentered at the origin, or cubes of side2R.
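The sine and cosine transforms defined at the start of this passage are straightforward to evaluate numerically. The sketch below is illustrative only (it assumes NumPy); the test function e^(−πt²) is chosen because a(λ) = 2e^(−πλ²) and b ≡ 0 are known in closed form.

```python
import numpy as np

t = np.linspace(-10, 10, 400001)
dt = t[1] - t[0]
f = np.exp(-np.pi * t**2)                  # even test function

def a(lam):                                # Fourier cosine transform (doubled)
    return 2 * np.sum(f * np.cos(2 * np.pi * lam * t)) * dt

def b(lam):                                # Fourier sine transform (doubled)
    return 2 * np.sum(f * np.sin(2 * np.pi * lam * t)) * dt

for lam in [0.0, 0.5, 1.0, 2.0]:
    print(lam, a(lam), 2 * np.exp(-np.pi * lam**2), b(lam))
# a(lam) matches 2 exp(-pi lam^2); b vanishes because f is even.
```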
For a given integrable functionf, consider the functionfRdefined by:fR(x)=∫ERf^(ξ)ei2πx⋅ξdξ,x∈Rn.{\displaystyle f_{R}(x)=\int _{E_{R}}{\hat {f}}(\xi )e^{i2\pi x\cdot \xi }\,d\xi ,\quad x\in \mathbb {R} ^{n}.} Suppose in addition thatf∈Lp(Rn). Forn= 1and1 <p< ∞, if one takesER= (−R,R), thenfRconverges tofinLpasRtends to infinity, by the boundedness of theHilbert transform. Naively one may hope the same holds true forn> 1. IfERis taken to be a cube with side lengthR, then convergence still holds. Another natural candidate is the Euclidean ballER= {ξ: |ξ| <R}. In order for this partial sum operator to converge, it is necessary that the multiplier for the unit ball be bounded inLp(Rn). Forn≥ 2it is a celebrated theorem ofCharles Feffermanthat the multiplier for the unit ball is never bounded unlessp= 2.[28]In fact, whenp≠ 2, this shows that not only mayfRfail to converge tofinLp, but for some functionsf∈Lp(Rn),fRis not even an element ofLp. The definition of the Fourier transform naturally extends fromL1(R){\displaystyle L^{1}(\mathbb {R} )}toL1(Rn){\displaystyle L^{1}(\mathbb {R} ^{n})}. That is, iff∈L1(Rn){\displaystyle f\in L^{1}(\mathbb {R} ^{n})}then the Fourier transformF:L1(Rn)→L∞(Rn){\displaystyle {\mathcal {F}}:L^{1}(\mathbb {R} ^{n})\to L^{\infty }(\mathbb {R} ^{n})}is given byf(x)↦f^(ξ)=∫Rnf(x)e−i2πξ⋅xdx,∀ξ∈Rn.{\displaystyle f(x)\mapsto {\hat {f}}(\xi )=\int _{\mathbb {R} ^{n}}f(x)e^{-i2\pi \xi \cdot x}\,dx,\quad \forall \xi \in \mathbb {R} ^{n}.}This operator isboundedassupξ∈Rn|f^(ξ)|≤∫Rn|f(x)|dx,{\displaystyle \sup _{\xi \in \mathbb {R} ^{n}}\left\vert {\hat {f}}(\xi )\right\vert \leq \int _{\mathbb {R} ^{n}}\vert f(x)\vert \,dx,}which shows that itsoperator normis bounded by1. TheRiemann–Lebesgue lemmashows that iff∈L1(Rn){\displaystyle f\in L^{1}(\mathbb {R} ^{n})}then its Fourier transform actually belongs to thespace of continuous functions which vanish at infinity, i.e.,f^∈C0(Rn)⊂L∞(Rn){\displaystyle {\hat {f}}\in C_{0}(\mathbb {R} ^{n})\subset L^{\infty }(\mathbb {R} ^{n})}.[53]Furthermore, theimageofL1{\displaystyle L^{1}}underF{\displaystyle {\mathcal {F}}}is a strict subset ofC0(Rn){\displaystyle C_{0}(\mathbb {R} ^{n})}. Similarly to the case of one variable, the Fourier transform can be defined onL2(Rn){\displaystyle L^{2}(\mathbb {R} ^{n})}. The Fourier transform inL2(Rn){\displaystyle L^{2}(\mathbb {R} ^{n})}is no longer given by an ordinary Lebesgue integral, although it can be computed by animproper integral, i.e.,f^(ξ)=limR→∞∫|x|≤Rf(x)e−i2πξ⋅xdx{\displaystyle {\hat {f}}(\xi )=\lim _{R\to \infty }\int _{|x|\leq R}f(x)e^{-i2\pi \xi \cdot x}\,dx}where the limit is taken in theL2sense.[54][55] Furthermore,F:L2(Rn)→L2(Rn){\displaystyle {\mathcal {F}}:L^{2}(\mathbb {R} ^{n})\to L^{2}(\mathbb {R} ^{n})}is aunitary operator.[56]For an operator to be unitary it is sufficient to show that it is bijective and preserves the inner product, so in this case these follow from the Fourier inversion theorem combined with the fact that for anyf,g∈L2(Rn)we have∫Rnf(x)Fg(x)dx=∫RnFf(x)g(x)dx.{\displaystyle \int _{\mathbb {R} ^{n}}f(x){\mathcal {F}}g(x)\,dx=\int _{\mathbb {R} ^{n}}{\mathcal {F}}f(x)g(x)\,dx.} In particular, the image ofL2(Rn)under the Fourier transform isL2(Rn)itself. For1<p<2{\displaystyle 1<p<2}, the Fourier transform can be defined onLp(R){\displaystyle L^{p}(\mathbb {R} )}byMarcinkiewicz interpolation, which amounts to decomposing such functions into a fat tail part inL2plus a fat body part inL1.
In each of these spaces, the Fourier transform of a function inLp(Rn)is inLq(Rn), whereq=⁠p/p− 1⁠is theHölder conjugateofp(by theHausdorff–Young inequality). However, except forp= 2, the image is not easily characterized. Further extensions become more technical. The Fourier transform of functions inLpfor the range2 <p< ∞requires the study of distributions.[57]In fact, it can be shown that there are functions inLpwithp> 2so that the Fourier transform is not defined as a function.[21] One might consider enlarging the domain of the Fourier transform fromL1+L2{\displaystyle L^{1}+L^{2}}by consideringgeneralized functions, or distributions. A distribution onRn{\displaystyle \mathbb {R} ^{n}}is a continuous linear functional on the spaceCc∞(Rn){\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})}of compactly supported smooth functions (i.e.bump functions), equipped with a suitable topology. SinceCc∞(Rn){\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})}is dense inL2(Rn){\displaystyle L^{2}(\mathbb {R} ^{n})}, thePlancherel theoremallows one to extend the definition of the Fourier transform to general functions inL2(Rn){\displaystyle L^{2}(\mathbb {R} ^{n})}by continuity arguments. The strategy is then to consider the action of the Fourier transform onCc∞(Rn){\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})}and pass to distributions by duality. The obstruction to doing this is that the Fourier transform does not mapCc∞(Rn){\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})}toCc∞(Rn){\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})}. In fact the Fourier transform of an element inCc∞(Rn){\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})}can not vanish on an open set; see the above discussion on the uncertainty principle.[58][59] The Fourier transform can also be defined fortempered distributionsS′(Rn){\displaystyle {\mathcal {S}}'(\mathbb {R} ^{n})}, dual to the space ofSchwartz functionsS(Rn){\displaystyle {\mathcal {S}}(\mathbb {R} ^{n})}. A Schwartz function is a smooth function that decays at infinity, along with all of its derivatives, henceCc∞(Rn)⊂S(Rn){\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})\subset {\mathcal {S}}(\mathbb {R} ^{n})}and:F:Cc∞(Rn)→S(Rn)∖Cc∞(Rn).{\displaystyle {\mathcal {F}}:C_{c}^{\infty }(\mathbb {R} ^{n})\rightarrow S(\mathbb {R} ^{n})\setminus C_{c}^{\infty }(\mathbb {R} ^{n}).}The Fourier transform is anautomorphismof the Schwartz space and, by duality, also an automorphism of the space of tempered distributions.[21][60]The tempered distributions include well-behaved functions of polynomial growth, distributions of compact support as well as all the integrable functions mentioned above. For the definition of the Fourier transform of a tempered distribution, letf{\displaystyle f}andg{\displaystyle g}be integrable functions, and letf^{\displaystyle {\hat {f}}}andg^{\displaystyle {\hat {g}}}be their Fourier transforms respectively. 
Then the Fourier transform obeys the following multiplication formula,[21]∫Rnf^(x)g(x)dx=∫Rnf(x)g^(x)dx.{\displaystyle \int _{\mathbb {R} ^{n}}{\hat {f}}(x)g(x)\,dx=\int _{\mathbb {R} ^{n}}f(x){\hat {g}}(x)\,dx.} Every integrable functionf{\displaystyle f}defines (induces) a distributionTf{\displaystyle T_{f}}by the relationTf(ϕ)=∫Rnf(x)ϕ(x)dx,∀ϕ∈S(Rn).{\displaystyle T_{f}(\phi )=\int _{\mathbb {R} ^{n}}f(x)\phi (x)\,dx,\quad \forall \phi \in {\mathcal {S}}(\mathbb {R} ^{n}).}So it makes sense to define the Fourier transform of a tempered distributionTf∈S′(Rn){\displaystyle T_{f}\in {\mathcal {S}}'(\mathbb {R} ^{n})}by the duality:⟨T^f,ϕ⟩=⟨Tf,ϕ^⟩,∀ϕ∈S(Rn).{\displaystyle \langle {\widehat {T}}_{f},\phi \rangle =\langle T_{f},{\widehat {\phi }}\rangle ,\quad \forall \phi \in {\mathcal {S}}(\mathbb {R} ^{n}).}Extending this to all tempered distributionsT{\displaystyle T}gives the general definition of the Fourier transform. Distributions can be differentiated and the above-mentioned compatibility of the Fourier transform with differentiation and convolution remains true for tempered distributions. The Fourier transform of afiniteBorel measureμonRnis given by the continuous function:[61]μ^(ξ)=∫Rne−i2πx⋅ξdμ,{\displaystyle {\hat {\mu }}(\xi )=\int _{\mathbb {R} ^{n}}e^{-i2\pi x\cdot \xi }\,d\mu ,}and called theFourier–Stieltjes transformdue to its connection with theRiemann–Stieltjes integralrepresentation of(Radon) measures.[62]Ifμ{\displaystyle \mu }is theprobability distributionof arandom variableX{\displaystyle X}then its Fourier–Stieltjes transform is, by definition, acharacteristic function.[63]If, in addition, the probability distribution has aprobability density function, this definition reduces to the usual Fourier transform of the density.[64]Stated more generally, whenμ{\displaystyle \mu }isabsolutely continuouswith respect to the Lebesgue measure, i.e.,dμ=f(x)dx,{\displaystyle d\mu =f(x)dx,}thenμ^(ξ)=f^(ξ),{\displaystyle {\hat {\mu }}(\xi )={\hat {f}}(\xi ),}and the Fourier–Stieltjes transform reduces to the usual definition of the Fourier transform. However, the notable difference from the Fourier transform of integrable functions is that the Fourier–Stieltjes transform need not vanish at infinity, i.e., theRiemann–Lebesgue lemmafails for measures.[65] Bochner's theoremcharacterizes which functions may arise as the Fourier–Stieltjes transform of a positive measure on the circle. One example of a finite Borel measure that is not a function is theDirac measure.[66]Its Fourier transform is a constant function (whose value depends on the form of the Fourier transform used). The Fourier transform may be generalized to anylocally compact abelian group, i.e., anabelian groupthat is also alocally compact Hausdorff spacesuch that the group operation is continuous. IfGis a locally compact abelian group, it has a translation invariant measureμ, calledHaar measure. For a locally compact abelian groupG, the set of irreducible, i.e. one-dimensional, unitary representations are called itscharacters. With its natural group structure and the topology of uniform convergence on compact sets (that is, the topology induced by thecompact-open topologyon the space of all continuous functions fromG{\displaystyle G}to thecircle group), the set of charactersĜis itself a locally compact abelian group, called thePontryagin dualofG.
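As a concrete instance of the Fourier–Stieltjes transform of a probability measure described above, one can estimate the characteristic function of a standard normal variable by Monte Carlo and compare it with the transform of the Gaussian density, which under this article's sign convention is e^(−2π²ξ²). This is an added sketch assuming NumPy, not an excerpt from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.standard_normal(1_000_000)   # X ~ N(0, 1)

def mu_hat(xi):
    # Empirical Fourier-Stieltjes transform (characteristic function) with
    # this article's convention exp(-i 2 pi x xi).
    return np.mean(np.exp(-1j * 2 * np.pi * xi * samples))

for xi in [0.0, 0.1, 0.2, 0.3]:
    exact = np.exp(-2 * np.pi**2 * xi**2)  # transform of the N(0,1) density
    print(xi, mu_hat(xi).real, exact)      # agreement up to Monte Carlo noise
```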
For a functionfinL1(G), its Fourier transform is defined by[57]f^(ξ)=∫Gξ(x)f(x)dμfor anyξ∈G^.{\displaystyle {\hat {f}}(\xi )=\int _{G}\xi (x)f(x)\,d\mu \quad {\text{for any }}\xi \in {\hat {G}}.} The Riemann–Lebesgue lemma holds in this case;f̂(ξ)is a function vanishing at infinity onĜ. The Fourier transform onT= R/Zis an example; hereTis a locally compact abelian group, and the Haar measureμonTcan be thought of as the Lebesgue measure on [0,1). Consider the representation ofTon the complex planeCthat is a 1-dimensional complex vector space. This gives a family of representations (which are irreducible sinceCis 1-dimensional){ek:T→GL1(C)=C∗∣k∈Z}{\displaystyle \{e_{k}:T\rightarrow GL_{1}(C)=C^{*}\mid k\in Z\}}whereek(x)=ei2πkx{\displaystyle e_{k}(x)=e^{i2\pi kx}}forx∈T{\displaystyle x\in T}. The character of such a representation, that is the trace ofek(x){\displaystyle e_{k}(x)}for eachx∈T{\displaystyle x\in T}andk∈Z{\displaystyle k\in Z}, isei2πkx{\displaystyle e^{i2\pi kx}}itself. In the case of a representation of a finite group, the character table of the groupGconsists of rows of vectors such that each row is the character of one irreducible representation ofG, and these vectors form an orthonormal basis of the space of class functions that map fromGtoCby Schur's lemma. Now the groupTis no longer finite but still compact, and it preserves the orthonormality of the character table. Each row of the table is the functionek(x){\displaystyle e_{k}(x)}ofx∈T,{\displaystyle x\in T,}and the inner product between two class functions (all functions being class functions sinceTis abelian)f,g∈L2(T,dμ){\displaystyle f,g\in L^{2}(T,d\mu )}is defined as⟨f,g⟩=1|T|∫[0,1)f(y)g¯(y)dμ(y){\textstyle \langle f,g\rangle ={\frac {1}{|T|}}\int _{[0,1)}f(y){\overline {g}}(y)d\mu (y)}with the normalizing factor|T|=1{\displaystyle |T|=1}. The sequence{ek∣k∈Z}{\displaystyle \{e_{k}\mid k\in Z\}}is an orthonormal basis of the space of class functionsL2(T,dμ){\displaystyle L^{2}(T,d\mu )}. For any representationVof a finite groupG,χv{\displaystyle \chi _{v}}can be expressed as the span∑i⟨χv,χvi⟩χvi{\textstyle \sum _{i}\left\langle \chi _{v},\chi _{v_{i}}\right\rangle \chi _{v_{i}}}(Vi{\displaystyle V_{i}}are the irreps ofG), such that⟨χv,χvi⟩=1|G|∑g∈Gχv(g)χ¯vi(g){\textstyle \left\langle \chi _{v},\chi _{v_{i}}\right\rangle ={\frac {1}{|G|}}\sum _{g\in G}\chi _{v}(g){\overline {\chi }}_{v_{i}}(g)}. Similarly forG=T{\displaystyle G=T}andf∈L2(T,dμ){\displaystyle f\in L^{2}(T,d\mu )},f(x)=∑k∈Zf^(k)ek{\textstyle f(x)=\sum _{k\in Z}{\hat {f}}(k)e_{k}}. The Pontryagin dualT^{\displaystyle {\hat {T}}}is{ek}(k∈Z){\displaystyle \{e_{k}\}(k\in Z)}and forf∈L2(T,dμ){\displaystyle f\in L^{2}(T,d\mu )},f^(k)=1|T|∫[0,1)f(y)e−i2πkydy{\textstyle {\hat {f}}(k)={\frac {1}{|T|}}\int _{[0,1)}f(y)e^{-i2\pi ky}dy}is its Fourier transform forek∈T^{\displaystyle e_{k}\in {\hat {T}}}.
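The circle case T = R/Z just worked out is easy to exercise numerically: the transform is the familiar sequence of Fourier coefficients. A short sketch follows (illustrative, assuming NumPy; the test function and grid size are arbitrary choices).

```python
import numpy as np

# Fourier transform on T = R/Z with Haar (Lebesgue) measure on [0, 1):
#   f_hat(k) = integral over [0,1) of f(y) exp(-i 2 pi k y) dy,  k in Z.
y = np.linspace(0.0, 1.0, 10000, endpoint=False)
dy = y[1] - y[0]
f = np.cos(2 * np.pi * y) + 0.5 * np.sin(6 * np.pi * y)

def f_hat(k):
    return np.sum(f * np.exp(-1j * 2 * np.pi * k * y)) * dy

# cos(2 pi y) contributes 1/2 at k = +1 and k = -1; 0.5 sin(6 pi y)
# contributes -0.25i at k = 3 and +0.25i at k = -3; all others vanish.
for k in range(-4, 5):
    print(k, np.round(f_hat(k), 6))
```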
Given any abelianC*-algebraA, the Gelfand transform gives an isomorphism betweenAandC0(A^), whereA^is the set of multiplicative linear functionals, i.e. one-dimensional representations, onAwith the weak-* topology. The map is simply given bya↦(φ↦φ(a)){\displaystyle a\mapsto {\bigl (}\varphi \mapsto \varphi (a){\bigr )}}It turns out that the multiplicative linear functionals ofC*(G), after suitable identification, are exactly the characters ofG, and the Gelfand transform, when restricted to the dense subsetL1(G)is the Fourier–Pontryagin transform. The Fourier transform can also be defined for functions on a non-abelian group, provided that the group iscompact. Removing the assumption that the underlying group is abelian, irreducible unitary representations need not always be one-dimensional. This means that the Fourier transform on a non-abelian group takes its values in Hilbert space operators.[67]The Fourier transform on compact groups is a major tool inrepresentation theory[68]andnon-commutative harmonic analysis. LetGbe a compactHausdorfftopological group. LetΣdenote the collection of all isomorphism classes of finite-dimensional irreducibleunitary representations, along with a definite choice of representationU(σ)on theHilbert spaceHσof finite dimensiondσfor eachσ∈ Σ. Ifμis a finiteBorel measureonG, then the Fourier–Stieltjes transform ofμis the operator onHσdefined by⟨μ^ξ,η⟩Hσ=∫G⟨U¯g(σ)ξ,η⟩dμ(g){\displaystyle \left\langle {\hat {\mu }}\xi ,\eta \right\rangle _{H_{\sigma }}=\int _{G}\left\langle {\overline {U}}_{g}^{(\sigma )}\xi ,\eta \right\rangle \,d\mu (g)}whereU(σ)is the complex-conjugate representation ofU(σ)acting onHσ. Ifμisabsolutely continuouswith respect to theleft-invariant probability measureλonG,representedasdμ=fdλ{\displaystyle d\mu =f\,d\lambda }for somef∈L1(λ), one identifies the Fourier transform offwith the Fourier–Stieltjes transform ofμ. The mappingμ↦μ^{\displaystyle \mu \mapsto {\hat {\mu }}}defines an isomorphism between theBanach spaceM(G)of finite Borel measures (seerca space) and a closed subspace of the Banach spaceC∞(Σ)consisting of all sequencesE= (Eσ)indexed byΣof (bounded) linear operatorsEσ:Hσ→Hσfor which the norm‖E‖=supσ∈Σ‖Eσ‖{\displaystyle \|E\|=\sup _{\sigma \in \Sigma }\left\|E_{\sigma }\right\|}is finite. The "convolution theorem" asserts that, furthermore, this isomorphism of Banach spaces is in fact an isometric isomorphism ofC*-algebrasinto a subspace ofC∞(Σ). Multiplication onM(G)is given byconvolutionof measures and the involution * defined byf∗(g)=f(g−1)¯,{\displaystyle f^{*}(g)={\overline {f\left(g^{-1}\right)}},}andC∞(Σ)has a naturalC*-algebra structure as Hilbert space operators. ThePeter–Weyl theoremholds, and a version of the Fourier inversion formula (Plancherel's theorem) follows: iff∈L2(G), thenf(g)=∑σ∈Σdσtr⁡(f^(σ)Ug(σ)){\displaystyle f(g)=\sum _{\sigma \in \Sigma }d_{\sigma }\operatorname {tr} \left({\hat {f}}(\sigma )U_{g}^{(\sigma )}\right)}where the summation is understood as convergent in theL2sense. The generalization of the Fourier transform to the noncommutative situation has also in part contributed to the development ofnoncommutative geometry.[citation needed]In this context, a categorical generalization of the Fourier transform to noncommutative groups isTannaka–Krein duality, which replaces the group of characters with the category of representations. However, this loses the connection with harmonic functions.
Insignal processingterms, a function (of time) is a representation of a signal with perfecttime resolution, but no frequency information, while the Fourier transform has perfectfrequency resolution, but no time information: the magnitude of the Fourier transform at a point is how much frequency content there is, but location is only given by phase (argument of the Fourier transform at a point), andstanding wavesare not localized in time – a sine wave continues out to infinity, without decaying. This limits the usefulness of the Fourier transform for analyzing signals that are localized in time, notablytransients, or any signal of finite extent. As alternatives to the Fourier transform, intime–frequency analysis, one uses time–frequency transforms or time–frequency distributions to represent signals in a form that has some time information and some frequency information – by the uncertainty principle, there is a trade-off between these. These can be generalizations of the Fourier transform, such as theshort-time Fourier transform,fractional Fourier transform, Synchrosqueezing Fourier transform,[69]or other functions to represent signals, as inwavelet transformsandchirplet transforms, with the wavelet analog of the (continuous) Fourier transform being thecontinuous wavelet transform.[29] The following figures provide a visual illustration of how the Fourier transform's integral measures whether a frequency is present in a particular function. The first image depicts the functionf(t)=cos⁡(2π3t)e−πt2,{\displaystyle f(t)=\cos(2\pi \ 3t)\ e^{-\pi t^{2}},}which is a 3Hzcosine wave (the first term) shaped by aGaussianenvelope function(the second term) that smoothly turns the wave on and off. The next 2 images show the productf(t)e−i2π3t,{\displaystyle f(t)e^{-i2\pi 3t},}which must be integrated to calculate the Fourier transform at +3 Hz. The real part of the integrand has a non-negative average value, because the alternating signs off(t){\displaystyle f(t)}andRe⁡(e−i2π3t){\displaystyle \operatorname {Re} (e^{-i2\pi 3t})}oscillate at the same rate and in phase, whereasf(t){\displaystyle f(t)}andIm⁡(e−i2π3t){\displaystyle \operatorname {Im} (e^{-i2\pi 3t})}oscillate at the same rate but with orthogonal phase. The absolute value of the Fourier transform at +3 Hz is 0.5, which is relatively large. When added to the Fourier transform at -3 Hz (which is identical because we started with a real signal), we find that the amplitude of the 3 Hz frequency component is 1. However, when you try to measure a frequency that is not present, both the real and imaginary components of the integral vary rapidly between positive and negative values. For instance, the red curve is looking for 5 Hz. The absolute value of its integral is nearly zero, indicating that almost no 5 Hz component was in the signal. The general situation is usually more complicated than this, but heuristically this is how the Fourier transform measures how much of an individual frequency is present in a functionf(t).{\displaystyle f(t).} To reinforce an earlier point, the reason for the response atξ=−3{\displaystyle \xi =-3}Hz is thatcos⁡(2π3t){\displaystyle \cos(2\pi 3t)}andcos⁡(2π(−3)t){\displaystyle \cos(2\pi (-3)t)}are indistinguishable.
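The measurement described for the figures can be reproduced directly from the definition. The sketch below (added for illustration, assuming NumPy) integrates f(t)e^(−i2πξt) for the 3 Hz Gaussian-windowed cosine at ξ = 3, −3 and 5.

```python
import numpy as np

# The figures' example: a 3 Hz cosine under a Gaussian envelope.
t = np.linspace(-6, 6, 120001)
dt = t[1] - t[0]
f = np.cos(2 * np.pi * 3 * t) * np.exp(-np.pi * t**2)

def ft(xi):
    return np.sum(f * np.exp(-1j * 2 * np.pi * xi * t)) * dt

print(abs(ft(3.0)))    # ~0.5: the 3 Hz component is present
print(abs(ft(-3.0)))   # ~0.5: mirror response of a real signal
print(abs(ft(5.0)))    # ~0.0: almost no 5 Hz content
```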
The transform ofei2π3t⋅e−πt2{\displaystyle e^{i2\pi 3t}\cdot e^{-\pi t^{2}}}would have just one response, whose amplitude is the integral of the smooth envelope:e−πt2,{\displaystyle e^{-\pi t^{2}},}whereasRe⁡(f(t)⋅e−i2π3t){\displaystyle \operatorname {Re} (f(t)\cdot e^{-i2\pi 3t})}ise−πt2(1+cos⁡(2π6t))/2.{\displaystyle e^{-\pi t^{2}}(1+\cos(2\pi 6t))/2.} Linear operations performed in one domain (time or frequency) have corresponding operations in the other domain, which are sometimes easier to perform. The operation ofdifferentiationin the time domain corresponds to multiplication by the frequency,[note 6]so somedifferential equationsare easier to analyze in the frequency domain. Also,convolutionin the time domain corresponds to ordinary multiplication in the frequency domain (seeConvolution theorem). After performing the desired operations, transformation of the result can be made back to the time domain.Harmonic analysisis the systematic study of the relationship between the frequency and time domains, including the kinds of functions or operations that are "simpler" in one or the other, and has deep connections to many areas of modern mathematics. Perhaps the most important use of the Fourier transformation is to solvepartial differential equations. Many of the equations of the mathematical physics of the nineteenth century can be treated this way. Fourier studied the heat equation, which in one dimension and in dimensionless units is∂2y(x,t)∂x2=∂y(x,t)∂t.{\displaystyle {\frac {\partial ^{2}y(x,t)}{\partial x^{2}}}={\frac {\partial y(x,t)}{\partial t}}.}The example we will give, a slightly more difficult one, is the wave equation in one dimension,∂2y(x,t)∂x2=∂2y(x,t)∂t2.{\displaystyle {\frac {\partial ^{2}y(x,t)}{\partial x^{2}}}={\frac {\partial ^{2}y(x,t)}{\partial t^{2}}}.} As usual, the problem is not to find a solution: there are infinitely many. The problem is that of the so-called "boundary problem": find a solution which satisfies the "boundary conditions"y(x,0)=f(x),∂y(x,0)∂t=g(x).{\displaystyle y(x,0)=f(x),\qquad {\frac {\partial y(x,0)}{\partial t}}=g(x).} Here,fandgare given functions. For the heat equation, only one boundary condition can be required (usually the first one). But for the wave equation, there are still infinitely many solutionsywhich satisfy the first boundary condition. But when one imposes both conditions, there is only one possible solution. It is easier to find the Fourier transformŷof the solution than to find the solution directly. This is because the Fourier transformation takes differentiation into multiplication by the Fourier-dual variable, and so a partial differential equation applied to the original function is transformed into multiplication by polynomial functions of the dual variables applied to the transformed function. Afterŷis determined, we can apply the inverse Fourier transformation to findy. Fourier's method is as follows. First, note that any function of the formscos⁡(2πξ(x±t))orsin⁡(2πξ(x±t)){\displaystyle \cos {\bigl (}2\pi \xi (x\pm t){\bigr )}{\text{ or }}\sin {\bigl (}2\pi \xi (x\pm t){\bigr )}}satisfies the wave equation. These are called the elementary solutions.
Second, note that therefore any integraly(x,t)=∫0∞dξ[a+(ξ)cos⁡(2πξ(x+t))+a−(ξ)cos⁡(2πξ(x−t))+b+(ξ)sin⁡(2πξ(x+t))+b−(ξ)sin⁡(2πξ(x−t))]{\displaystyle {\begin{aligned}y(x,t)=\int _{0}^{\infty }d\xi {\Bigl [}&a_{+}(\xi )\cos {\bigl (}2\pi \xi (x+t){\bigr )}+a_{-}(\xi )\cos {\bigl (}2\pi \xi (x-t){\bigr )}+{}\\&b_{+}(\xi )\sin {\bigl (}2\pi \xi (x+t){\bigr )}+b_{-}(\xi )\sin \left(2\pi \xi (x-t)\right){\Bigr ]}\end{aligned}}}satisfies the wave equation for arbitrarya+,a−,b+,b−. This integral may be interpreted as a continuous linear combination of solutions for the linear equation. Now this resembles the formula for the Fourier synthesis of a function. In fact, this is the real inverse Fourier transform ofa±andb±in the variablex. The third step is to examine how to find the specific unknown coefficient functionsa±andb±that will lead toysatisfying the boundary conditions. We are interested in the values of these solutions att= 0. So we will sett= 0. Assuming that the conditions needed for Fourier inversion are satisfied, we can then find the Fourier sine and cosine transforms (in the variablex) of both sides and obtain2∫−∞∞y(x,0)cos⁡(2πξx)dx=a++a−{\displaystyle 2\int _{-\infty }^{\infty }y(x,0)\cos(2\pi \xi x)\,dx=a_{+}+a_{-}}and2∫−∞∞y(x,0)sin⁡(2πξx)dx=b++b−.{\displaystyle 2\int _{-\infty }^{\infty }y(x,0)\sin(2\pi \xi x)\,dx=b_{+}+b_{-}.} Similarly, taking the derivative ofywith respect totand then applying the Fourier sine and cosine transformations yields2∫−∞∞∂y(x,0)∂tsin⁡(2πξx)dx=(2πξ)(−a++a−){\displaystyle 2\int _{-\infty }^{\infty }{\frac {\partial y(x,0)}{\partial t}}\sin(2\pi \xi x)\,dx=(2\pi \xi )\left(-a_{+}+a_{-}\right)}and2∫−∞∞∂y(x,0)∂tcos⁡(2πξx)dx=(2πξ)(b+−b−).{\displaystyle 2\int _{-\infty }^{\infty }{\frac {\partial y(x,0)}{\partial t}}\cos(2\pi \xi x)\,dx=(2\pi \xi )\left(b_{+}-b_{-}\right).} These are four linear equations for the four unknownsa±andb±, in terms of the Fourier sine and cosine transforms of the boundary conditions, which are easily solved by elementary algebra, provided that these transforms can be found. In summary, we chose a set of elementary solutions, parametrized byξ, of which the general solution would be a (continuous) linear combination in the form of an integral over the parameterξ. But this integral was in the form of a Fourier integral. The next step was to express the boundary conditions in terms of these integrals, and set them equal to the given functionsfandg. But these expressions also took the form of a Fourier integral because of the properties of the Fourier transform of a derivative. The last step was to exploit Fourier inversion by applying the Fourier transformation to both sides, thus obtaining expressions for the coefficient functionsa±andb±in terms of the given boundary conditionsfandg. From a higher point of view, Fourier's procedure can be reformulated more conceptually. Since there are two variables, we will use the Fourier transformation in bothxandtrather than operate as Fourier did, who only transformed in the spatial variables. Note thatŷmust be considered in the sense of a distribution sincey(x,t)is not going to beL1: as a wave, it will persist through time and thus is not a transient phenomenon. But it will be bounded and so its Fourier transform can be defined as a distribution. The operational properties of the Fourier transformation that are relevant to this equation are that it takes differentiation inxto multiplication byi2πξand differentiation with respect totto multiplication byi2πfwherefis the frequency.
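Before continuing the derivation, the operational rules just listed (each frequency evolving independently under the transformed equation) can be exercised numerically on a periodic domain, with the FFT standing in for the continuous transform. This is an added sketch of the general idea under stated assumptions, not Fourier's procedure verbatim; the domain size, grid, and initial data are arbitrary choices.

```python
import numpy as np

# Spectral solution of the wave equation y_xx = y_tt on a periodic domain.
# In Fourier space each mode evolves independently:
#   y_hat(xi, t) = f_hat(xi) cos(2 pi xi t) + g_hat(xi) sin(2 pi xi t)/(2 pi xi)
# with f = y(., 0) and g = (dy/dt)(., 0) the two boundary conditions.
N, L = 1024, 16.0
dx = L / N
x = np.arange(N) * dx
xi = np.fft.fftfreq(N, d=dx)

f = np.exp(-np.pi * (x - L / 2) ** 2)        # initial displacement
g = np.zeros_like(x)                         # initial velocity

f_hat, g_hat = np.fft.fft(f), np.fft.fft(g)

def y(t):
    w = 2 * np.pi * xi
    # sin(w t)/w, with the limit value t at the zero mode:
    sin_term = np.where(w == 0, t, np.sin(w * t) / np.where(w == 0, 1, w))
    return np.fft.ifft(f_hat * np.cos(w * t) + g_hat * sin_term).real

# With g = 0, d'Alembert's formula gives two half-height traveling copies.
t0 = 2.0
shift = int(round(t0 / dx))                  # t0 corresponds to 128 grid steps
exact = 0.5 * (np.roll(f, shift) + np.roll(f, -shift))
print(np.max(np.abs(y(t0) - exact)))         # near machine precision
```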
Then the wave equation becomes an algebraic equation inŷ:ξ2y^(ξ,f)=f2y^(ξ,f).{\displaystyle \xi ^{2}{\hat {y}}(\xi ,f)=f^{2}{\hat {y}}(\xi ,f).}This is equivalent to requiringŷ(ξ,f) = 0unlessξ= ±f. Right away, this explains why the choice of elementary solutions we made earlier worked so well: obviouslyŷ=δ(ξ±f)will be solutions. Applying Fourier inversion to these delta functions, we obtain the elementary solutions we picked earlier. But from the higher point of view, one does not pick elementary solutions, but rather considers the space of all distributions which are supported on the (degenerate) conicξ2−f2= 0. We may as well consider the distributions supported on the conic that are given by distributions of one variable on the lineξ=fplus distributions on the lineξ= −fas follows: ifϕis any test function,∬y^ϕ(ξ,f)dξdf=∫s+ϕ(ξ,ξ)dξ+∫s−ϕ(ξ,−ξ)dξ,{\displaystyle \iint {\hat {y}}\phi (\xi ,f)\,d\xi \,df=\int s_{+}\phi (\xi ,\xi )\,d\xi +\int s_{-}\phi (\xi ,-\xi )\,d\xi ,}wheres+ands−are distributions of one variable. Then Fourier inversion gives, for the boundary conditions, something very similar to what we had more concretely above (putϕ(ξ,f) =ei2π(xξ+tf), which is clearly of polynomial growth):y(x,0)=∫{s+(ξ)+s−(ξ)}ei2πξx+0dξ{\displaystyle y(x,0)=\int {\bigl \{}s_{+}(\xi )+s_{-}(\xi ){\bigr \}}e^{i2\pi \xi x+0}\,d\xi }and∂y(x,0)∂t=∫{s+(ξ)−s−(ξ)}i2πξei2πξx+0dξ.{\displaystyle {\frac {\partial y(x,0)}{\partial t}}=\int {\bigl \{}s_{+}(\xi )-s_{-}(\xi ){\bigr \}}i2\pi \xi e^{i2\pi \xi x+0}\,d\xi .} Now, as before, applying the one-variable Fourier transformation in the variablexto these functions ofxyields two equations in the two unknown distributionss±(which can be taken to be ordinary functions if the boundary conditions areL1orL2). From a calculational point of view, the drawback of course is that one must first calculate the Fourier transforms of the boundary conditions, then assemble the solution from these, and then calculate an inverse Fourier transform. Closed-form formulas are rare, except when there is some geometric symmetry that can be exploited, and the numerical calculations are difficult because of the oscillatory nature of the integrals, which makes convergence slow and hard to estimate. For practical calculations, other methods are often used. The twentieth century has seen the extension of these methods to all linear partial differential equations with polynomial coefficients, and by extending the notion of Fourier transformation to include Fourier integral operators, some non-linear equations as well. The Fourier transform is also used innuclear magnetic resonance(NMR) and in other kinds ofspectroscopy, e.g. infrared (FTIR). In NMR an exponentially shaped free induction decay (FID) signal is acquired in the time domain and Fourier-transformed to a Lorentzian line-shape in the frequency domain. The Fourier transform is also used inmagnetic resonance imaging(MRI) andmass spectrometry. The Fourier transform is useful inquantum mechanicsin at least two different ways. To begin with, the basic conceptual structure of quantum mechanics postulates the existence of pairs ofcomplementary variables, connected by theHeisenberg uncertainty principle. For example, in one dimension, the spatial variableqof, say, a particle, can only be measured by the quantum mechanical "position operator" at the cost of losing information about the momentumpof the particle.
Therefore, the physical state of the particle can either be described by a function, called "the wave function", ofqor by a function ofpbut not by a function of both variables. The variablepis called the conjugate variable toq. In classical mechanics, the physical state of a particle (existing in one dimension, for simplicity of exposition) would be given by assigning definite values to bothpandqsimultaneously. Thus, the set of all possible physical states is the two-dimensional real vector space with ap-axis and aq-axis called thephase space. In contrast, quantum mechanics chooses a polarisation of this space in the sense that it picks a subspace of one-half the dimension, for example, theq-axis alone, but instead of considering only points, takes the set of all complex-valued "wave functions" on this axis. Nevertheless, choosing thep-axis is an equally valid polarisation, yielding a different representation of the set of possible physical states of the particle. Both representations of the wavefunction are related by a Fourier transform, such thatϕ(p)=∫dqψ(q)e−ipq/h,{\displaystyle \phi (p)=\int dq\,\psi (q)e^{-ipq/h},}or, equivalently,ψ(q)=∫dpϕ(p)eipq/h.{\displaystyle \psi (q)=\int dp\,\phi (p)e^{ipq/h}.} Physically realisable states areL2, and so by thePlancherel theorem, their Fourier transforms are alsoL2. (Note that sinceqis in units of distance andpis in units of momentum, the presence of the Planck constant in the exponent makes the exponentdimensionless, as it should be.) Therefore, the Fourier transform can be used to pass from one way of representing the state of the particle, by a wave function of position, to another way of representing the state of the particle: by a wave function of momentum. Infinitely many different polarisations are possible, and all are equally valid. Being able to transform states from one representation to another by the Fourier transform is not only convenient but also the underlying reason for the Heisenberguncertainty principle. The other use of the Fourier transform in both quantum mechanics andquantum field theoryis to solve the applicable wave equation. In non-relativistic quantum mechanics,Schrödinger's equationfor a time-varying wave function in one-dimension, not subject to external forces, is−∂2∂x2ψ(x,t)=ih2π∂∂tψ(x,t).{\displaystyle -{\frac {\partial ^{2}}{\partial x^{2}}}\psi (x,t)=i{\frac {h}{2\pi }}{\frac {\partial }{\partial t}}\psi (x,t).} This is the same as the heat equation except for the presence of the imaginary uniti. Fourier methods can be used to solve this equation. In the presence of a potential, given by the potential energy functionV(x), the equation becomes−∂2∂x2ψ(x,t)+V(x)ψ(x,t)=ih2π∂∂tψ(x,t).{\displaystyle -{\frac {\partial ^{2}}{\partial x^{2}}}\psi (x,t)+V(x)\psi (x,t)=i{\frac {h}{2\pi }}{\frac {\partial }{\partial t}}\psi (x,t).} The "elementary solutions", as we referred to them above, are the so-called "stationary states" of the particle, and Fourier's algorithm, as described above, can still be used to solve the boundary value problem of the future evolution ofψgiven its values fort= 0. Neither of these approaches is of much practical use in quantum mechanics. Boundary value problems and the time-evolution of the wave function are not of much practical interest: it is the stationary states that are most important. In relativistic quantum mechanics, Schrödinger's equation becomes a wave equation as was usual in classical physics, except that complex-valued waves are considered.
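Returning to the first use described above, the position–momentum pair can be checked numerically. This added sketch assumes NumPy and works in units where the constants in the exponent are absorbed into the article's unitary convention: a Gaussian packet with a momentum boost p0 transforms to the same Gaussian re-centered at p0, and Plancherel guarantees the momentum wave function is normalized as well.

```python
import numpy as np

# Position-space Gaussian wave packet carrying momentum p0.
q = np.linspace(-10, 10, 200001)
dq = q[1] - q[0]
p0 = 2.0
psi = 2**0.25 * np.exp(-np.pi * q**2) * np.exp(1j * 2 * np.pi * p0 * q)

def phi(p):
    # Momentum wave function as the (unitary-convention) Fourier transform.
    return np.sum(psi * np.exp(-1j * 2 * np.pi * p * q)) * dq

print(np.sum(np.abs(psi)**2) * dq)           # position-space norm: 1
for p in [1.0, 2.0, 3.0]:
    # |phi(p)| should be the same Gaussian shape, re-centered at p0 = 2.
    print(p, abs(phi(p)), 2**0.25 * np.exp(-np.pi * (p - p0)**2))
```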
A simple example, in the absence of interactions with other particles or fields, is the free one-dimensional Klein–Gordon–Schrödinger–Fock equation, this time in dimensionless units,(∂2∂x2+1)ψ(x,t)=∂2∂t2ψ(x,t).{\displaystyle \left({\frac {\partial ^{2}}{\partial x^{2}}}+1\right)\psi (x,t)={\frac {\partial ^{2}}{\partial t^{2}}}\psi (x,t).} This is, from the mathematical point of view, the same as the wave equation of classical physics solved above (but with a complex-valued wave, which makes no difference in the methods). This is of great use in quantum field theory: each separate Fourier component of a wave can be treated as a separate harmonic oscillator and then quantized, a procedure known as "second quantization". Fourier methods have been adapted to also deal with non-trivial interactions. Finally, thenumber operatorof thequantum harmonic oscillatorcan be interpreted, for example via theMehler kernel, as thegeneratorof theFourier transformF{\displaystyle {\mathcal {F}}}.[32] The Fourier transform is used for the spectral analysis of time-series. The subject of statistical signal processing does not, however, usually apply the Fourier transformation to the signal itself. Even if a real signal is indeed transient, it has been found in practice advisable to model a signal by a function (or, alternatively, a stochastic process) which is stationary in the sense that its characteristic properties are constant over all time. The Fourier transform of such a function does not exist in the usual sense, and it has been found more useful for the analysis of signals to instead take the Fourier transform of its autocorrelation function. The autocorrelation functionRof a functionfis defined byRf(τ)=limT→∞12T∫−TTf(t)f(t+τ)dt.{\displaystyle R_{f}(\tau )=\lim _{T\rightarrow \infty }{\frac {1}{2T}}\int _{-T}^{T}f(t)f(t+\tau )\,dt.} This function is a function of the time-lagτelapsing between the values offto be correlated. For most functionsfthat occur in practice,Ris a bounded even function of the time-lagτand for typical noisy signals it turns out to be uniformly continuous with a maximum atτ= 0. The autocorrelation function, more properly called the autocovariance function unless it is normalized in some appropriate fashion, measures the strength of the correlation between the values offseparated by a time lag. This is a way of searching for the correlation offwith its own past. It is useful even for other statistical tasks besides the analysis of signals. For example, iff(t)represents the temperature at timet, one expects a strong correlation with the temperature at a time lag of 24 hours. It possesses a Fourier transform,Pf(ξ)=∫−∞∞Rf(τ)e−i2πξτdτ.{\displaystyle P_{f}(\xi )=\int _{-\infty }^{\infty }R_{f}(\tau )e^{-i2\pi \xi \tau }\,d\tau .} This Fourier transform is called thepower spectral densityfunction off. (Unless all periodic components are first filtered out fromf, this integral will diverge, but it is easy to filter out such periodicities.) The power spectrum, as indicated by this density functionP, measures the amount of variance contributed to the data by the frequencyξ. In electrical signals, the variance is proportional to the average power (energy per unit time), and so the power spectrum describes how much the different frequencies contribute to the average power of the signal. This process is called the spectral analysis of time-series and is analogous to the usual analysis of variance of data that is not a time-series (ANOVA). 
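A finite-sample version of this recipe can be sketched in a few lines (added here for illustration, assuming NumPy; the signal, noise level, and lengths are arbitrary choices): estimate the autocorrelation of a noisy sinusoid, transform it, and read off the dominant frequency from the resulting power spectral density.

```python
import numpy as np

# Power spectral density via the Fourier transform of the autocorrelation
# (a finite-sample analogue of R_f and P_f above).
rng = np.random.default_rng(1)
n = 4096
f0 = 0.1                                     # cycles per sample
signal = np.sin(2 * np.pi * f0 * np.arange(n)) + rng.standard_normal(n)

# Biased autocorrelation estimate over all lags |tau| < n.
R = np.correlate(signal, signal, mode="full") / n
psd = np.abs(np.fft.rfft(R))                 # transform of the autocorrelation
freqs = np.fft.rfftfreq(R.size)

print(freqs[np.argmax(psd)])                 # ~0.1, the sinusoid's frequency
```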
Knowledge of which frequencies are "important" in this sense is crucial for the proper design of filters and for the proper evaluation of measuring apparatuses. It can also be useful for the scientific analysis of the phenomena responsible for producing the data. The power spectrum of a signal can also be approximately measured directly by measuring the average power that remains in a signal after all the frequencies outside a narrow band have been filtered out. Spectral analysis is carried out for visual signals as well. The power spectrum ignores all phase relations, which is good enough for many purposes, but for video signals other types of spectral analysis must also be employed, still using the Fourier transform as a tool. Other common notations forf^(ξ){\displaystyle {\hat {f}}(\xi )}include:f~(ξ),F(ξ),F(f)(ξ),(Ff)(ξ),F(f),F{f},F(f(t)),F{f(t)}.{\displaystyle {\tilde {f}}(\xi ),\ F(\xi ),\ {\mathcal {F}}\left(f\right)(\xi ),\ \left({\mathcal {F}}f\right)(\xi ),\ {\mathcal {F}}(f),\ {\mathcal {F}}\{f\},\ {\mathcal {F}}{\bigl (}f(t){\bigr )},\ {\mathcal {F}}{\bigl \{}f(t){\bigr \}}.} In the sciences and engineering it is also common to make substitutions like these:ξ→f,x→t,f→x,f^→X.{\displaystyle \xi \rightarrow f,\quad x\rightarrow t,\quad f\rightarrow x,\quad {\hat {f}}\rightarrow X.} So the transform pairf(x)⟺Ff^(ξ){\displaystyle f(x)\ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ {\hat {f}}(\xi )}can becomex(t)⟺FX(f){\displaystyle x(t)\ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ X(f)} A disadvantage of the capital letter notation appears when expressing a transform such asf⋅g^{\displaystyle {\widehat {f\cdot g}}}orf′^,{\displaystyle {\widehat {f'}},}which become the more awkwardF{f⋅g}{\displaystyle {\mathcal {F}}\{f\cdot g\}}andF{f′}.{\displaystyle {\mathcal {F}}\{f'\}.} In some contexts, such as particle physics, the same symbolf{\displaystyle f}may be used both for a function and for its Fourier transform, with the two distinguished only by their argument:f(k1+k2){\displaystyle f(k_{1}+k_{2})}would refer to the Fourier transform because of the momentum argument, whilef(x0+πr→){\displaystyle f(x_{0}+\pi {\vec {r}})}would refer to the original function because of the positional argument. Although tildes may be used as inf~{\displaystyle {\tilde {f}}}to indicate Fourier transforms, tildes may also be used to indicate a modification of a quantity with a moreLorentz invariantform, such asdk~=dk(2π)32ω{\displaystyle {\tilde {dk}}={\frac {dk}{(2\pi )^{3}2\omega }}}, so care must be taken. Similarly,f^{\displaystyle {\hat {f}}}often denotes theHilbert transformoff{\displaystyle f}. The interpretation of the complex functionf̂(ξ)may be aided by expressing it inpolar coordinateformf^(ξ)=A(ξ)eiφ(ξ){\displaystyle {\hat {f}}(\xi )=A(\xi )e^{i\varphi (\xi )}}in terms of the two real functionsA(ξ)andφ(ξ)where:A(ξ)=|f^(ξ)|,{\displaystyle A(\xi )=\left|{\hat {f}}(\xi )\right|,}is theamplitudeandφ(ξ)=arg⁡(f^(ξ)),{\displaystyle \varphi (\xi )=\arg \left({\hat {f}}(\xi )\right),}is thephase(seearg function). Then the inverse transform can be written:f(x)=∫−∞∞A(ξ)ei(2πξx+φ(ξ))dξ,{\displaystyle f(x)=\int _{-\infty }^{\infty }A(\xi )\ e^{i{\bigl (}2\pi \xi x+\varphi (\xi ){\bigr )}}\,d\xi ,}which is a recombination of all the frequency components off(x). Each component is a complexsinusoidof the forme2πixξwhose amplitude isA(ξ)and whose initialphase angle(atx= 0) isφ(ξ). The Fourier transform may be thought of as a mapping on function spaces.
This mapping is here denotedFandF(f)is used to denote the Fourier transform of the functionf. This mapping is linear, which means thatFcan also be seen as a linear transformation on the function space and implies that the standard notation in linear algebra of applying a linear transformation to a vector (here the functionf) can be used to writeFfinstead ofF(f). Since the result of applying the Fourier transform is again a function, we can be interested in the value of this function evaluated at the valueξfor its variable, and this is denoted either asFf(ξ)or as(Ff)(ξ). Notice that in the former case, it is implicitly understood thatFis applied first tofand then the resulting function is evaluated atξ, not the other way around. In mathematics and various applied sciences, it is often necessary to distinguish between a functionfand the value offwhen its variable equalsx, denotedf(x). This means that a notation likeF(f(x))can formally be interpreted as the Fourier transform of the values offatx. Despite this flaw, the previous notation appears frequently, often when a particular function or a function of a particular variable is to be transformed. For example,F(rect⁡(x))=sinc⁡(ξ){\displaystyle {\mathcal {F}}{\bigl (}\operatorname {rect} (x){\bigr )}=\operatorname {sinc} (\xi )}is sometimes used to express that the Fourier transform of arectangular functionis asinc function, orF(f(x+x0))=F(f(x))ei2πx0ξ{\displaystyle {\mathcal {F}}{\bigl (}f(x+x_{0}){\bigr )}={\mathcal {F}}{\bigl (}f(x){\bigr )}\,e^{i2\pi x_{0}\xi }}is used to express the shift property of the Fourier transform. Notice that the last example is only correct under the assumption that the transformed function is a function ofx, not ofx0. As discussed above, thecharacteristic functionof a random variable is the same as theFourier–Stieltjes transformof its distribution measure, but in this context it is typical to take a different convention for the constants. Typically the characteristic function is definedE(eit⋅X)=∫eit⋅xdμX(x).{\displaystyle E\left(e^{it\cdot X}\right)=\int e^{it\cdot x}\,d\mu _{X}(x).} As in the case of the "non-unitary angular frequency" convention above, the factor of 2πappears in neither the normalizing constant nor the exponent. Unlike any of the conventions appearing above, this convention takes the opposite sign in the exponent. The appropriate computation method largely depends on how the original mathematical function is represented and the desired form of the output function. In this section we consider both functions of a continuous variable,f(x),{\displaystyle f(x),}and functions of a discrete variable (i.e. ordered pairs ofx{\displaystyle x}andf{\displaystyle f}values). For discrete-valuedx,{\displaystyle x,}the transform integral becomes a summation of sinusoids, which is still a continuous function of frequency (ξ{\displaystyle \xi }orω{\displaystyle \omega }). When the sinusoids are harmonically related (i.e. when thex{\displaystyle x}-values are spaced at integer multiples of an interval), the transform is called thediscrete-time Fourier transform(DTFT). Sampling the DTFT at equally-spaced values of frequency is the most common modern method of computation. Efficient procedures, depending on the frequency resolution needed, are described atDiscrete-time Fourier transform § Sampling the DTFT. Thediscrete Fourier transform(DFT), used there, is usually computed by afast Fourier transform(FFT) algorithm.
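A common concrete realization of the sampling approach just mentioned is to approximate the continuous transform by an FFT of equally spaced samples. The sketch below is illustrative (it assumes NumPy; the phase correction accounts for the samples being centered at t = 0) and recovers the transform of the Gaussian e^(−πt²) on the FFT frequency grid.

```python
import numpy as np

# Approximate f_hat(nu) = integral of f(t) exp(-i 2 pi nu t) dt by an FFT.
n, dt = 4096, 0.01
t = (np.arange(n) - n // 2) * dt             # samples centered at t = 0
f = np.exp(-np.pi * t**2)                    # transform is exp(-pi nu^2)

f_hat = np.fft.fft(f) * dt
freqs = np.fft.fftfreq(n, d=dt)
# Undo the offset of the first sample from t = 0 (a pure phase factor):
f_hat *= np.exp(1j * 2 * np.pi * freqs * (n // 2) * dt)

for k in [0, 10, 50]:                        # freqs 0, ~0.244, ~1.221
    print(freqs[k], f_hat[k].real, np.exp(-np.pi * freqs[k]**2))
```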
Tables ofclosed-formFourier transforms, such as§ Square-integrable functions, one-dimensionaland§ Table of discrete-time Fourier transforms, are created by mathematically evaluating the Fourier analysis integral (or summation) into another closed-form function of frequency (ξ{\displaystyle \xi }orω{\displaystyle \omega }).[70]When mathematically possible, this provides a transform for a continuum of frequency values. Many computer algebra systems that are capable ofsymbolic integration, such asMatlabandMathematica, can compute Fourier transforms analytically. For example, to compute the Fourier transform ofcos(6πt)e−πt2one might enter the commandintegrate cos(6*pi*t) exp(−pi*t^2) exp(-i*2*pi*f*t) from -inf to infintoWolfram Alpha.[note 7] Discrete sampling of the Fourier transform can also be done bynumerical integrationof the definition at each value of frequency for which the transform is desired.[71][72][73]The numerical integration approach works on a much broader class of functions than the analytic approach. If the input function is a series of ordered pairs, numerical integration reduces to just a summation over the set of data pairs.[74]The DTFT is a common subcase of this more general situation. The following tables record some closed-form Fourier transforms. For functionsf(x)andg(x)denote their Fourier transforms byf̂andĝ. Only the three most common conventions are included. It may be useful to notice that entry 105 gives a relationship between the Fourier transform of a function and the original function, which can be seen as relating the Fourier transform and its inverse. The Fourier transforms in this table may be found inErdélyi (1954)orKammler (2000, appendix). The Fourier transforms in this table may be found inCampbell & Foster (1948),Erdélyi (1954), orKammler (2000, appendix). The Fourier transforms in this table may be found inErdélyi (1954)orKammler (2000, appendix).
https://en.wikipedia.org/wiki/Fourier_transform
Inmathematics, theharmonic seriesis theinfinite seriesformed by summing all positiveunit fractions:∑n=1∞1n=1+12+13+14+15+⋯.{\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n}}=1+{\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{4}}+{\frac {1}{5}}+\cdots .} The firstn{\displaystyle n}terms of the series sum to approximatelyln⁡n+γ{\displaystyle \ln n+\gamma }, whereln{\displaystyle \ln }is thenatural logarithmandγ≈0.577{\displaystyle \gamma \approx 0.577}is theEuler–Mascheroni constant. Because the logarithm has arbitrarily large values, the harmonic series does not have a finite limit: it is adivergent series. Its divergence was proven in the 14th century byNicole Oresmeusing a precursor to theCauchy condensation testfor the convergence of infinite series. It can also be proven to diverge by comparing the sum to anintegral, according to theintegral test for convergence. Applications of the harmonic series and its partial sums includeEuler's proof that there are infinitely many prime numbers, the analysis of thecoupon collector's problemon how many random trials are needed to provide a complete range of responses, theconnected componentsofrandom graphs, theblock-stacking problemon how far over the edge of a table a stack of blocks can becantilevered, and theaverage case analysisof thequicksortalgorithm. The name of the harmonic series derives from the concept ofovertonesor harmonicsin music: thewavelengthsof the overtones of a vibrating string are12{\displaystyle {\tfrac {1}{2}}},13{\displaystyle {\tfrac {1}{3}}},14{\displaystyle {\tfrac {1}{4}}},etc., of the string'sfundamental wavelength.[1][2]Every term of the harmonic series after the first is theharmonic meanof the neighboring terms, so the terms form aharmonic progression; the phrasesharmonic meanandharmonic progressionlikewise derive from music.[2]Beyond music, harmonic sequences have also had a certain popularity with architects. This was so particularly in theBaroqueperiod, when architects used them to establish theproportionsoffloor plans, ofelevations, and to establish harmonic relationships between both interior and exterior architectural details of churches and palaces.[3] The divergence of the harmonic series was first proven in 1350 byNicole Oresme.[2][4]Oresme's work, and the contemporaneous work ofRichard Swinesheadon a different series, marked the first appearance of infinite series other than thegeometric seriesin mathematics.[5]However, this achievement fell into obscurity.[6]Additional proofs were published in the 17th century byPietro Mengoli[2][7]and byJacob Bernoulli.[8][9][10]Bernoulli credited his brotherJohann Bernoullifor finding the proof,[10]and it was later included in Johann Bernoulli's collected works.[11] The partial sums of the harmonic series were namedharmonic numbers, and given their usual notationHn{\displaystyle H_{n}}, in 1968 byDonald Knuth.[12] The harmonic series is the infinite series∑n=1∞1n=1+12+13+14+15+⋯{\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n}}=1+{\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{4}}+{\frac {1}{5}}+\cdots }in which the terms are all of the positiveunit fractions. It is adivergent series: as more terms of the series are included inpartial sumsof the series, the values of these partial sums grow arbitrarily large, beyond any finite limit. Because it is a divergent series, it should be interpreted as a formal sum, an abstract mathematical expression combining the unit fractions, rather than as something that can be evaluated to a numeric value. 
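The ln n + γ approximation for the partial sums mentioned above is easy to observe numerically (a small added sketch in Python; the stopping points are arbitrary):

```python
import math

# Partial sums of the harmonic series against ln n + gamma.
gamma = 0.5772156649015329        # Euler-Mascheroni constant
H = 0.0
for n in range(1, 10**6 + 1):
    H += 1.0 / n
    if n in (10, 1000, 10**6):
        print(n, H, math.log(n) + gamma)   # the columns agree to about 1/(2n)
```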
There are many different proofs of the divergence of the harmonic series, surveyed in a 2006 paper by S. J. Kifowit and T. A. Stamps.[13]Two of the best-known[1][13]are listed below. One way to prove divergence is to compare the harmonic series with another divergent series, where each denominator is replaced with the next-largestpower of two:1+12+13+14+15+16+17+18+19+⋯≥1+12+14+14+18+18+18+18+116+⋯{\displaystyle {\begin{alignedat}{8}1&+{\frac {1}{2}}&&+{\frac {1}{3}}&&+{\frac {1}{4}}&&+{\frac {1}{5}}&&+{\frac {1}{6}}&&+{\frac {1}{7}}&&+{\frac {1}{8}}&&+{\frac {1}{9}}&&+\cdots \\[5pt]{}\geq 1&+{\frac {1}{2}}&&+{\frac {1}{\color {red}{\mathbf {4} }}}&&+{\frac {1}{4}}&&+{\frac {1}{\color {red}{\mathbf {8} }}}&&+{\frac {1}{\color {red}{\mathbf {8} }}}&&+{\frac {1}{\color {red}{\mathbf {8} }}}&&+{\frac {1}{8}}&&+{\frac {1}{\color {red}{\mathbf {16} }}}&&+\cdots \\[5pt]\end{alignedat}}}Grouping equal terms shows that the second series diverges (because any grouping of a convergent series is itself convergent, so a series with a divergent grouping must itself diverge):1+(12)+(14+14)+(18+18+18+18)+(116+⋯+116)+⋯=1+12+12+12+12+⋯.{\displaystyle {\begin{aligned}&1+\left({\frac {1}{2}}\right)+\left({\frac {1}{4}}+{\frac {1}{4}}\right)+\left({\frac {1}{8}}+{\frac {1}{8}}+{\frac {1}{8}}+{\frac {1}{8}}\right)+\left({\frac {1}{16}}+\cdots +{\frac {1}{16}}\right)+\cdots \\[5pt]{}={}&1+{\frac {1}{2}}+{\frac {1}{2}}+{\frac {1}{2}}+{\frac {1}{2}}+\cdots .\end{aligned}}}Because each term of the harmonic series is greater than or equal to the corresponding term of the second series (and the terms are all positive), and since the second series diverges, it follows (by thecomparison test) that the harmonic series diverges as well. The same argument proves more strongly that, for everypositiveintegerk{\displaystyle k},∑n=12k1n≥1+k2{\displaystyle \sum _{n=1}^{2^{k}}{\frac {1}{n}}\geq 1+{\frac {k}{2}}}This is the original proof given byNicole Oresmein around 1350.[13]TheCauchy condensation testis a generalization of this argument.[14] It is possible to prove that the harmonic series diverges by comparing its sum with animproper integral. Specifically, consider the arrangement of rectangles shown in the figure to the right. Each rectangle is 1 unit wide and1n{\displaystyle {\tfrac {1}{n}}}units high, so if the harmonic series converged then the total area of the rectangles would be the sum of the harmonic series. The curvey=1x{\displaystyle y={\tfrac {1}{x}}}stays entirely below the upper boundary of the rectangles, so the area under the curve (in the range ofx{\displaystyle x}from one to infinity that is covered by rectangles) would be less than the area of the union of the rectangles. However, the area under the curve is given by a divergentimproper integral,∫1∞1xdx=∞.{\displaystyle \int _{1}^{\infty }{\frac {1}{x}}\,dx=\infty .}Because this integral does not converge, the sum cannot converge either.[13] In the figure to the right, shifting each rectangle to the left by 1 unit would produce a sequence of rectangles whose boundary lies below the curve rather than above it.
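Before the integral comparison is pushed further below, Oresme's bound from the first proof can be watched in action (an added sketch in Python; the range of k is arbitrary):

```python
# Oresme's bound: H_{2^k} >= 1 + k/2, so the partial sums are unbounded.
H, n = 0.0, 0
for k in range(21):
    while n < 2**k:
        n += 1
        H += 1.0 / n
    print(k, H, 1 + k / 2)   # the partial sum always clears the bound
```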
This shifted arrangement of rectangles shows that the partial sums of the harmonic series differ from the integral by an amount that is bounded above and below by the unit area of the first rectangle:∫1N+11xdx<∑i=1N1i<∫1N1xdx+1.{\displaystyle \int _{1}^{N+1}{\frac {1}{x}}\,dx<\sum _{i=1}^{N}{\frac {1}{i}}<\int _{1}^{N}{\frac {1}{x}}\,dx+1.}Generalizing this argument, any infinite sum of values of a monotone decreasing positive functionofn{\displaystyle n}(like the harmonic series) has partial sums that are within a bounded distance of the values of the corresponding integrals. Therefore, the sum converges if and only if the integral over the same range of the same function converges. When this equivalence is used to check the convergence of a sum by replacing it with an easier integral, it is known as theintegral test for convergence.[15] Adding the firstn{\displaystyle n}terms of the harmonic series produces apartial sum, called aharmonic numberanddenotedHn{\displaystyle H_{n}}:[12]Hn=∑k=1n1k.{\displaystyle H_{n}=\sum _{k=1}^{n}{\frac {1}{k}}.} These numbers grow very slowly, withlogarithmic growth, as can be seen from the integral test.[15]More precisely, by theEuler–Maclaurin formula,Hn=ln⁡n+γ+12n−εn{\displaystyle H_{n}=\ln n+\gamma +{\frac {1}{2n}}-\varepsilon _{n}}whereγ≈0.5772{\displaystyle \gamma \approx 0.5772}is theEuler–Mascheroni constantand0≤εn≤1/(8n2){\displaystyle 0\leq \varepsilon _{n}\leq 1/(8n^{2})}which approaches 0 asn{\displaystyle n}goes to infinity.[16] No harmonic numbers are integers except forH1=1{\displaystyle H_{1}=1}.[17][18]One way to prove thatHn{\displaystyle H_{n}}is not an integer is to consider the highestpower of two2k{\displaystyle 2^{k}}in the range from1 ton{\displaystyle n}.IfM{\displaystyle M}is theleast common multipleof the numbers from1 ton{\displaystyle n},thenHn{\displaystyle H_{n}}can be rewritten as a sum of fractions with equal denominatorsHn=∑i=1nM/iM{\displaystyle H_{n}=\sum _{i=1}^{n}{\tfrac {M/i}{M}}}in which only one of the numerators,M/2k{\displaystyle M/2^{k}},is odd and the rest are even, and(whenn>1{\displaystyle n>1})M{\displaystyle M}is itself even. Therefore, the result is a fraction with an odd numerator and an even denominator, which cannot be an integer.[17]More generally, any sequence of consecutive integers has a unique member divisible by a greater power of two than all the other sequence members, from which it follows by the same argument that no two harmonic numbers differ by an integer.[18] Another proof that the harmonic numbers are not integers observes that the denominator ofHn{\displaystyle H_{n}}must be divisible by allprime numbersgreater thann/2{\displaystyle n/2}and less than or equal ton{\displaystyle n}, and usesBertrand's postulateto prove that this set of primes is non-empty.
The same prime-divisor argument implies more strongly that, except for H1=1{\displaystyle H_{1}=1}, H2=1.5{\displaystyle H_{2}=1.5}, and H6=2.45{\displaystyle H_{6}=2.45}, no harmonic number can have a terminating decimal representation.[17] It has been conjectured that every prime number divides the numerators of only a finite subset of the harmonic numbers, but this remains unproven.[19] The digamma function is defined as the logarithmic derivative of the gamma function ψ(x)=ddxln⁡(Γ(x))=Γ′(x)Γ(x).{\displaystyle \psi (x)={\frac {d}{dx}}\ln {\big (}\Gamma (x){\big )}={\frac {\Gamma '(x)}{\Gamma (x)}}.} Just as the gamma function provides a continuous interpolation of the factorials, the digamma function provides a continuous interpolation of the harmonic numbers, in the sense that ψ(n)=Hn−1−γ{\displaystyle \psi (n)=H_{n-1}-\gamma }.[20] This equation can be used to extend the definition to harmonic numbers with rational indices.[21] Many well-known mathematical problems have solutions involving the harmonic series and its partial sums. The jeep problem or desert-crossing problem is included in a 9th-century problem collection by Alcuin, Propositiones ad Acuendos Juvenes (formulated in terms of camels rather than jeeps), but with an incorrect solution.[22] The problem asks how far into the desert a jeep can travel and return, starting from a base with n{\displaystyle n} loads of fuel, by carrying some of the fuel into the desert and leaving it in depots. The optimal solution involves placing depots spaced at distances r2n,r2(n−1),r2(n−2),…{\displaystyle {\tfrac {r}{2n}},{\tfrac {r}{2(n-1)}},{\tfrac {r}{2(n-2)}},\dots } from the starting point and each other, where r{\displaystyle r} is the range of distance that the jeep can travel with a single load of fuel. On each trip out and back from the base, the jeep places one more depot, refueling at the other depots along the way, and placing as much fuel as it can in the newly placed depot while still leaving enough for itself to return to the previous depots and the base. Therefore, the total distance reached on the n{\displaystyle n}th trip is r2n+r2(n−1)+r2(n−2)+⋯=r2Hn,{\displaystyle {\frac {r}{2n}}+{\frac {r}{2(n-1)}}+{\frac {r}{2(n-2)}}+\cdots ={\frac {r}{2}}H_{n},} where Hn{\displaystyle H_{n}} is the n{\displaystyle n}th harmonic number. The divergence of the harmonic series implies that crossings of any length are possible with enough fuel.[23] For instance, for Alcuin's version of the problem, r=30{\displaystyle r=30}: a camel can carry 30 measures of grain and can travel one leuca while eating a single measure, where a leuca is a unit of distance roughly equal to 2.3 kilometres (1.4 mi). The problem has n=3{\displaystyle n=3}: there are 90 measures of grain, enough to supply three trips. For the standard formulation of the desert-crossing problem, it would be possible for the camel to travel 302(13+12+11)=27.5{\displaystyle {\tfrac {30}{2}}{\bigl (}{\tfrac {1}{3}}+{\tfrac {1}{2}}+{\tfrac {1}{1}}{\bigr )}=27.5} leucas and return, by placing a grain storage depot 5 leucas from the base on the first trip and 12.5 leucas from the base on the second trip.
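A short sketch reproducing Alcuin's numbers (the depot positions and the 27.5-leuca total come from the text above; the code itself is only illustrative):

from fractions import Fraction

def jeep_distance(n, r):
    """Optimal out-and-return distance (r/2) * H_n with n loads of fuel."""
    return Fraction(r, 2) * sum(Fraction(1, k) for k in range(1, n + 1))

r, n = 30, 3                       # camel range 30 leucas, 3 loads of grain
print(float(jeep_distance(n, r)))  # 27.5

# Depot positions measured from the base: r/(2n), then r/(2(n-1)) further, ...
pos = Fraction(0)
for k in range(n, 1, -1):          # the final leg needs no new depot
    pos += Fraction(r, 2 * k)
    print("depot at", float(pos), "leucas")   # 5.0, then 12.5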
However, Alcuin instead asks a slightly different question: how much grain can be transported a distance of 30 leucas without a final return trip. His solution either strands some camels in the desert or fails to account for the amount of grain consumed by a camel on its return trips.[22] In the block-stacking problem, one must place a pile of n{\displaystyle n} identical rectangular blocks, one per layer, so that they hang as far as possible over the edge of a table without falling. The top block can be placed with 12{\displaystyle {\tfrac {1}{2}}} of its length extending beyond the next lower block. If it is placed in this way, the next block down needs to be placed with at most 12⋅12{\displaystyle {\tfrac {1}{2}}\cdot {\tfrac {1}{2}}} of its length extending beyond the next lower block, so that the center of mass of the top two blocks is supported and they do not topple. The third block needs to be placed with at most 12⋅13{\displaystyle {\tfrac {1}{2}}\cdot {\tfrac {1}{3}}} of its length extending beyond the next lower block, so that the center of mass of the top three blocks is supported and they do not topple, and so on. In this way, it is possible to place the n{\displaystyle n} blocks in such a way that they extend 12Hn{\displaystyle {\tfrac {1}{2}}H_{n}} lengths beyond the table, where Hn{\displaystyle H_{n}} is the n{\displaystyle n}th harmonic number.[24][25] The divergence of the harmonic series implies that there is no limit on how far beyond the table the block stack can extend.[25] For stacks with one block per layer, no better solution is possible, but significantly more overhang can be achieved using stacks with more than one block per layer.[26] In 1737, Leonhard Euler observed that, as a formal sum, the harmonic series is equal to an Euler product in which each term comes from a prime number: ∑i=1∞1i=∏p∈P(1+1p+1p2+⋯)=∏p∈P11−1/p,{\displaystyle \sum _{i=1}^{\infty }{\frac {1}{i}}=\prod _{p\in \mathbb {P} }\left(1+{\frac {1}{p}}+{\frac {1}{p^{2}}}+\cdots \right)=\prod _{p\in \mathbb {P} }{\frac {1}{1-1/p}},} where P{\displaystyle \mathbb {P} } denotes the set of prime numbers. The left equality comes from applying the distributive law to the product and recognizing the resulting terms as the prime factorizations of the terms in the harmonic series, and the right equality uses the standard formula for a geometric series. The product is divergent, just like the sum, but if it converged one could take logarithms and obtain ln⁡∏p∈P11−1/p=∑p∈Pln⁡11−1/p=∑p∈P(1p+12p2+13p3+⋯)=∑p∈P1p+K.{\displaystyle \ln \prod _{p\in \mathbb {P} }{\frac {1}{1-1/p}}=\sum _{p\in \mathbb {P} }\ln {\frac {1}{1-1/p}}=\sum _{p\in \mathbb {P} }\left({\frac {1}{p}}+{\frac {1}{2p^{2}}}+{\frac {1}{3p^{3}}}+\cdots \right)=\sum _{p\in \mathbb {P} }{\frac {1}{p}}+K.} Here, each logarithm is replaced by its Taylor series, and the constant K{\displaystyle K} on the right is the evaluation of the convergent series of terms with exponent greater than one. It follows from these manipulations that the sum of reciprocals of primes, on the right hand of this equality, must diverge, for if it converged these steps could be reversed to show that the harmonic series also converges, which it does not.
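The slow divergence of the prime reciprocals can be seen numerically. A sketch, assuming sympy is available for prime generation and using the Mertens constant M ≈ 0.2615 (anticipating the Mertens-type growth discussed next):

import math
from sympy import primerange

M = 0.26149721  # Mertens constant

for N in (10**3, 10**4, 10**5, 10**6):
    s = sum(1.0 / p for p in primerange(2, N + 1))
    est = math.log(math.log(N)) + M
    print(f"N=10^{round(math.log10(N))}  sum 1/p = {s:.4f}   ln ln N + M = {est:.4f}")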
An immediate corollary is that there are infinitely many prime numbers, because a finite sum cannot diverge.[27] Although Euler's work is not considered adequately rigorous by the standards of modern mathematics, it can be made rigorous by taking more care with limits and error bounds.[28] Euler's conclusion that the partial sums of reciprocals of primes grow as a double logarithm of the number of terms has been confirmed by later mathematicians as one of Mertens' theorems,[29] and can be seen as a precursor to the prime number theorem.[28] Another problem in number theory closely related to the harmonic series concerns the average number of divisors of the numbers in a range from 1 to n{\displaystyle n}, formalized as the average order of the divisor function, 1n∑i=1n⌊ni⌋≤1n∑i=1nni=Hn.{\displaystyle {\frac {1}{n}}\sum _{i=1}^{n}\left\lfloor {\frac {n}{i}}\right\rfloor \leq {\frac {1}{n}}\sum _{i=1}^{n}{\frac {n}{i}}=H_{n}.} The operation of rounding each term in the harmonic series to the next smaller integer multiple of 1n{\displaystyle {\tfrac {1}{n}}} causes this average to differ from the harmonic numbers by a small constant, and Peter Gustav Lejeune Dirichlet showed more precisely that the average number of divisors is ln⁡n+2γ−1+O(1/√n){\displaystyle \ln n+2\gamma -1+O(1/{\sqrt {n}})} (expressed in big O notation). Bounding the final error term more precisely remains an open problem, known as Dirichlet's divisor problem.[30] Several common games or recreations involve repeating a random selection from a set of items until all possible choices have been selected; these include the collection of trading cards[31][32] and the completion of parkrun bingo, in which the goal is to obtain all 60 possible numbers of seconds in the times from a sequence of running events.[33] More serious applications of this problem include sampling all variations of a manufactured product for its quality control,[34] and the connectivity of random graphs.[35] In situations of this form, once there are k{\displaystyle k} items remaining to be collected out of a total of n{\displaystyle n} equally-likely items, the probability of collecting a new item in a single random choice is k/n{\displaystyle k/n} and the expected number of random choices needed until a new item is collected is n/k{\displaystyle n/k}. Summing over all values of k{\displaystyle k} from n{\displaystyle n} down to 1 shows that the total expected number of random choices needed to collect all items is nHn{\displaystyle nH_{n}}, where Hn{\displaystyle H_{n}} is the n{\displaystyle n}th harmonic number.[36] The quicksort algorithm for sorting a set of items can be analyzed using the harmonic numbers. The algorithm operates by choosing one item as a "pivot", comparing it to all the others, and recursively sorting the two subsets of items whose comparison places them before the pivot and after the pivot. In either its average-case complexity (with the assumption that all input permutations are equally likely) or in its expected time analysis of worst-case inputs with a random choice of pivot, all of the items are equally likely to be chosen as the pivot. For such cases, one can compute the probability that two items are ever compared with each other, throughout the recursion, as a function of the number of other items that separate them in the final sorted order.
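A simulation of the collection process is a quick sanity check of the nH_n formula; the quicksort analysis itself continues below. A minimal sketch (the choice n = 60 echoes the parkrun example; the trial count is arbitrary):

import random

def draws_to_collect_all(n):
    """Number of uniform random draws until all n items have appeared."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))
        draws += 1
    return draws

n = 60
H_n = sum(1.0 / k for k in range(1, n + 1))
trials = 2000
avg = sum(draws_to_collect_all(n) for _ in range(trials)) / trials
print(f"simulated average: {avg:.1f}   predicted n*H_n: {n * H_n:.1f}")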
Returning to quicksort: if items x{\displaystyle x} and y{\displaystyle y} are separated by k{\displaystyle k} other items, then the algorithm will make a comparison between x{\displaystyle x} and y{\displaystyle y} only when, as the recursion progresses, it picks x{\displaystyle x} or y{\displaystyle y} as a pivot before picking any of the other k{\displaystyle k} items between them. Because each of these k+2{\displaystyle k+2} items is equally likely to be chosen first, this happens with probability 2k+2{\displaystyle {\tfrac {2}{k+2}}}. The total expected number of comparisons, which controls the total running time of the algorithm, can then be calculated by summing these probabilities over all pairs, giving[37] ∑i=2n∑k=0i−22k+2=∑i=2n2(Hi−1)=2(n+1)Hn−4n=O(nlog⁡n).{\displaystyle \sum _{i=2}^{n}\sum _{k=0}^{i-2}{\frac {2}{k+2}}=\sum _{i=2}^{n}2(H_{i}-1)=2(n+1)H_{n}-4n=O(n\log n).} The divergence of the harmonic series corresponds in this application to the fact that, in the comparison model of sorting used for quicksort, it is not possible to sort in linear time.[38] The series ∑n=1∞(−1)n+1n=1−12+13−14+15−⋯{\displaystyle \sum _{n=1}^{\infty }{\frac {(-1)^{n+1}}{n}}=1-{\frac {1}{2}}+{\frac {1}{3}}-{\frac {1}{4}}+{\frac {1}{5}}-\cdots } is known as the alternating harmonic series. It is conditionally convergent by the alternating series test, but not absolutely convergent. Its sum is the natural logarithm of 2.[39] More precisely, the asymptotic expansion of the series begins as 11−12+⋯+12n−1−12n=H2n−Hn=ln⁡2−14n+O(n−2).{\displaystyle {\frac {1}{1}}-{\frac {1}{2}}+\cdots +{\frac {1}{2n-1}}-{\frac {1}{2n}}=H_{2n}-H_{n}=\ln 2-{\frac {1}{4n}}+O(n^{-2}).} This results from writing the alternating partial sum as H2n{\displaystyle H_{2n}} minus twice the sum of its even-indexed terms, using the equality Hn=2∑k=1n12k{\textstyle H_{n}=2\sum _{k=1}^{n}{\frac {1}{2k}}}, and then applying the Euler–Maclaurin formula. Using alternating signs with only odd unit fractions produces a related series, the Leibniz formula for π[40] ∑n=0∞(−1)n2n+1=1−13+15−17+⋯=π4.{\displaystyle \sum _{n=0}^{\infty }{\frac {(-1)^{n}}{2n+1}}=1-{\frac {1}{3}}+{\frac {1}{5}}-{\frac {1}{7}}+\cdots ={\frac {\pi }{4}}.} The Riemann zeta function is defined for real x>1{\displaystyle x>1} by the convergent series ζ(x)=∑n=1∞1nx=11x+12x+13x+⋯,{\displaystyle \zeta (x)=\sum _{n=1}^{\infty }{\frac {1}{n^{x}}}={\frac {1}{1^{x}}}+{\frac {1}{2^{x}}}+{\frac {1}{3^{x}}}+\cdots ,} which for x=1{\displaystyle x=1} would be the harmonic series. It can be extended by analytic continuation to a holomorphic function on all complex numbers except x=1{\displaystyle x=1}, where the extended function has a simple pole. Other important values of the zeta function include ζ(2)=π2/6{\displaystyle \zeta (2)=\pi ^{2}/6}, the solution to the Basel problem, Apéry's constant ζ(3){\displaystyle \zeta (3)}, proved by Roger Apéry to be an irrational number, and the "critical line" of complex numbers with real part 12{\displaystyle {\tfrac {1}{2}}}, conjectured by the Riemann hypothesis to be the only values other than negative integers where the function can be zero.[41] The random harmonic series is ∑n=1∞snn,{\displaystyle \sum _{n=1}^{\infty }{\frac {s_{n}}{n}},} where the values sn{\displaystyle s_{n}} are independent and identically distributed random variables that take the two values +1{\displaystyle +1} and −1{\displaystyle -1} with equal probability 12{\displaystyle {\tfrac {1}{2}}}. It converges with probability 1, as can be seen by using the Kolmogorov three-series theorem or the closely related Kolmogorov maximal inequality.
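Convergence with probability 1 can be illustrated by sampling truncated sums. A sketch (the truncation length and sample count are arbitrary choices) whose output anticipates the distribution described next:

import random

def truncated_random_harmonic(N=2000):
    """One sample of sum s_n / n over n <= N, with random signs s_n = +/-1."""
    return sum(random.choice((-1.0, 1.0)) / n for n in range(1, N + 1))

samples = [truncated_random_harmonic() for _ in range(2000)]
in_unit = sum(1 for s in samples if -1.0 <= s <= 1.0) / len(samples)
beyond3 = sum(1 for s in samples if abs(s) > 3.0) / len(samples)
print(f"fraction in [-1, 1]: {in_unit:.3f}   fraction beyond +/-3: {beyond3:.4f}")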
The sum of the series is a random variable whose probability density function is close to 14{\displaystyle {\tfrac {1}{4}}} for values between −1{\displaystyle -1} and 1{\displaystyle 1}, and decreases to near-zero for values greater than 3{\displaystyle 3} or less than −3{\displaystyle -3}. Intermediate between these ranges, at the values ±2{\displaystyle \pm 2}, the probability density is 18−ε{\displaystyle {\tfrac {1}{8}}-\varepsilon } for a nonzero but very small value ε<10−42{\displaystyle \varepsilon <10^{-42}}.[42][43] The depleted harmonic series, where all of the terms in which the digit 9 appears anywhere in the denominator are removed, can be shown to converge to the value 22.92067661926415034816....[44] In fact, when all the terms containing any particular string of digits (in any base) are removed, the series converges.[45]
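The convergence of the depleted series reflects how rare 9-free denominators become: only about (9/10)^d of the d-digit integers survive, so the surviving terms are dominated by a convergent geometric series. A counting sketch (direct summation of the depleted series itself converges far too slowly to be informative):

def nine_free_count(limit):
    """Count integers in [1, limit) whose decimal form avoids the digit 9."""
    return sum(1 for n in range(1, limit) if '9' not in str(n))

for k in range(1, 7):
    limit = 10 ** k
    c = nine_free_count(limit)
    print(f"below 10^{k}: {c:>6d} nine-free integers ({c / limit:.4f} of all)")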
https://en.wikipedia.org/wiki/Harmonic_series_(mathematics)
The harmonic series (also overtone series) is the sequence of harmonics, musical tones, or pure tones whose frequency is an integer multiple of a fundamental frequency. Pitched musical instruments are often based on an acoustic resonator such as a string or a column of air, which oscillates at numerous modes simultaneously. As waves travel in both directions along the string or air column, they reinforce and cancel one another to form standing waves. Interaction with the surrounding air produces audible sound waves, which travel away from the instrument. These frequencies are generally integer multiples, or harmonics, of the fundamental, and such multiples form the harmonic series. The fundamental, which is usually perceived as the lowest partial present, is generally perceived as the pitch of a musical tone. The musical timbre of a steady tone from such an instrument is strongly affected by the relative strength of each harmonic. A "complex tone" (the sound of a note with a timbre particular to the instrument playing the note) "can be described as a combination of many simple periodic waves (i.e., sine waves) or partials, each with its own frequency of vibration, amplitude, and phase".[1] (See also, Fourier analysis.) A partial is any of the sine waves (or "simple tones", as Ellis calls them[2] when translating Helmholtz) of which a complex tone is composed, not necessarily with an integer multiple of the lowest harmonic. A harmonic is any member of the harmonic series, an ideal set of frequencies that are positive integer multiples of a common fundamental frequency. The fundamental is a harmonic because it is one times itself. A harmonic partial is any real partial component of a complex tone that matches (or nearly matches) an ideal harmonic.[3] An inharmonic partial is any partial that does not match an ideal harmonic. Inharmonicity is a measure of the deviation of a partial from the closest ideal harmonic, typically measured in cents for each partial.[4] Many pitched acoustic instruments are designed to have partials that are close to being whole-number ratios with very low inharmonicity; therefore, in music theory, and in instrument design, it is convenient, although not strictly accurate, to speak of the partials in those instruments' sounds as "harmonics", even though they may have some degree of inharmonicity. The piano, one of the most important instruments of western tradition, contains a certain degree of inharmonicity among the frequencies generated by each string. Other pitched instruments, especially certain percussion instruments, such as marimba, vibraphone, tubular bells, timpani, and singing bowls contain mostly inharmonic partials, yet may give the ear a good sense of pitch because of a few strong partials that resemble harmonics. Unpitched, or indefinite-pitched instruments, such as cymbals and tam-tams make sounds (produce spectra) that are rich in inharmonic partials and may give no impression of implying any particular pitch. An overtone is any partial above the lowest partial. The term overtone does not imply harmonicity or inharmonicity and has no other special meaning other than to exclude the fundamental. It is mostly the relative strength of the different overtones that give an instrument its particular timbre, tone color, or character.
When writing or speaking of overtones and partials numerically, care must be taken to designate each correctly to avoid any confusion of one for the other, so the second overtone may not be the third partial, because it is the second sound in a series.[5] Some electronic instruments, such as synthesizers, can play a pure frequency with no overtones (a sine wave). Synthesizers can also combine pure frequencies into more complex tones, such as to simulate other instruments. Certain flutes and ocarinas are very nearly without overtones. One of the simplest cases to visualise is a vibrating string, as in the illustration; the string has fixed points at each end, and each harmonic mode divides it into an integer number (1, 2, 3, 4, etc.) of equal-sized sections resonating at increasingly higher frequencies.[6][failed verification] Similar arguments apply to vibrating air columns in wind instruments (for example, "the French horn was originally a valveless instrument that could play only the notes of the harmonic series"[7]), although these are complicated by having the possibility of anti-nodes (that is, the air column is closed at one end and open at the other), conical as opposed to cylindrical bores, or end-openings that run the gamut from no flare, cone flare, or exponentially shaped flares (such as in various bells). In most pitched musical instruments, the fundamental (first harmonic) is accompanied by other, higher-frequency harmonics. Thus shorter-wavelength, higher-frequency waves occur with varying prominence and give each instrument its characteristic tone quality. The fact that a string is fixed at each end means that the longest allowed wavelength on the string (which gives the fundamental frequency) is twice the length of the string (one round trip, with a half cycle fitting between the nodes at the two ends). Other allowed wavelengths are reciprocal multiples (e.g. 1⁄2, 1⁄3, 1⁄4 times) of the fundamental wavelength. Theoretically, these shorter wavelengths correspond to vibrations at frequencies that are integer multiples of (e.g. 2, 3, 4 times) the fundamental frequency. Physical characteristics of the vibrating medium and/or the resonator it vibrates against often alter these frequencies. (See inharmonicity and stretched tuning for alterations specific to wire-stringed instruments and certain electric pianos.) However, those alterations are small, and except for precise, highly specialized tuning, it is reasonable to think of the frequencies of the harmonic series as integer multiples of the fundamental frequency. The harmonic series is an arithmetic progression (f, 2f, 3f, 4f, 5f, ...). In terms of frequency (measured in cycles per second, or hertz, where f is the fundamental frequency), the difference between consecutive harmonics is therefore constant and equal to the fundamental. But because human ears respond to sound nonlinearly, higher harmonics are perceived as "closer together" than lower ones. On the other hand, the octave series is a geometric progression (2f, 4f, 8f, 16f, ...), and people perceive these distances as "the same" in the sense of musical interval. In terms of what one hears, each successively higher octave in the harmonic series is divided into increasingly "smaller" and more numerous intervals. The second harmonic, whose frequency is twice the fundamental, sounds an octave higher; the third harmonic, three times the frequency of the fundamental, sounds a perfect fifth above the second harmonic.
The fourth harmonic vibrates at four times the frequency of the fundamental and sounds a perfect fourth above the third harmonic (two octaves above the fundamental). Double the harmonic number means double the frequency (which sounds an octave higher). Marin Mersenne wrote: "The order of the Consonances is natural, and ... the way we count them, starting from unity up to the number six and beyond is founded in nature."[9] However, to quote Carl Dahlhaus, "the interval-distance of the natural-tone-row [overtones] [...], counting up to 20, includes everything from the octave to the quarter tone, (and) useful and useless musical tones. The natural-tone-row [harmonic series] justifies everything, that means, nothing."[10] If the harmonics are octave displaced and compressed into the span of one octave, some of them are approximated by the notes of what the West has adopted as the chromatic scale based on the fundamental tone. The Western chromatic scale has been modified into twelve equal semitones, which is slightly out of tune with many of the harmonics, especially the 7th, 11th, and 13th harmonics. In the late 1930s, composer Paul Hindemith ranked musical intervals according to their relative dissonance based on these and similar harmonic relationships.[11] Below is a comparison between the first 31 harmonics and the intervals of 12-tone equal temperament (12TET), octave displaced and compressed into the span of one octave. Tinted fields highlight differences greater than 5 cents (1⁄20 of a semitone), which is the human ear's "just noticeable difference" for notes played one after the other (smaller differences are noticeable with notes played simultaneously). The frequencies of the harmonic series, being integer multiples of the fundamental frequency, are naturally related to each other by whole-numbered ratios, and small whole-numbered ratios are likely the basis of the consonance of musical intervals (see just intonation). This objective structure is augmented by psychoacoustic phenomena. For example, a perfect fifth, say 200 and 300 Hz (cycles per second), causes a listener to perceive a combination tone of 100 Hz (the difference between 300 Hz and 200 Hz); that is, an octave below the lower (actual sounding) note. This 100 Hz first-order combination tone then interacts with both notes of the interval to produce second-order combination tones of 200 (300 − 100) and 100 (200 − 100) Hz, and all further nth-order combination tones are the same, being formed from various subtractions of 100, 200, and 300. When one contrasts this with a dissonant interval such as a tritone (not tempered) with a frequency ratio of 7:5, one gets, for example, 700 − 500 = 200 (1st order combination tone) and 500 − 200 = 300 (2nd order). The rest of the combination tones are octaves of 100 Hz, so the 7:5 interval actually contains four notes: 100 Hz (and its octaves), 300 Hz, 500 Hz and 700 Hz. The lowest combination tone (100 Hz) is a seventeenth (two octaves and a major third) below the lower (actual sounding) note of the tritone. All the intervals succumb to similar analysis, as has been demonstrated by Paul Hindemith in his book The Craft of Musical Composition, although he rejected the use of harmonics from the seventh and beyond.[11] The Mixolydian mode is consonant with the first 10 harmonics of the harmonic series (the 11th harmonic, a tritone, is not in the Mixolydian mode). The Ionian mode is consonant with only the first 6 harmonics of the series (the seventh harmonic, a minor seventh, is not in the Ionian mode).
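The 12TET comparison described above did not survive extraction as a table, but it is straightforward to recompute. The following sketch octave-reduces each harmonic and reports its deviation from the nearest equal-tempered step in cents, flagging the 5-cent just-noticeable difference:

import math

def octave_reduced_cents(h):
    """Interval of harmonic h above the fundamental, folded into one octave."""
    return (1200.0 * math.log2(h)) % 1200.0

for h in range(1, 17):
    c = octave_reduced_cents(h)
    deviation = c - round(c / 100.0) * 100.0   # distance to nearest semitone
    mark = "  <-- more than 5 cents off" if abs(deviation) > 5 else ""
    print(f"harmonic {h:2d}: {c:7.1f} cents, deviation {deviation:+6.1f}{mark}")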
Likewise, the Rishabhapriya ragam is consonant with the first 14 harmonics of the series. The relative amplitudes (strengths) of the various harmonics primarily determine the timbre of different instruments and sounds, though onset transients, formants, noises, and inharmonicities also play a role. For example, the clarinet and saxophone have similar mouthpieces and reeds, and both produce sound through resonance of air inside a chamber whose mouthpiece end is considered closed. Because the clarinet's resonator is cylindrical, the even-numbered harmonics are less present. The saxophone's resonator is conical, which allows the even-numbered harmonics to sound more strongly and thus produces a more complex tone. The inharmonic ringing of the instrument's metal resonator is even more prominent in the sounds of brass instruments. Human ears tend to group phase-coherent, harmonically-related frequency components into a single sensation. Rather than perceiving the individual partials (harmonic and inharmonic) of a musical tone, humans perceive them together as a tone color or timbre, and the overall pitch is heard as the fundamental of the harmonic series being experienced. If a sound is heard that is made up of even just a few simultaneous sine tones, and if the intervals among those tones form part of a harmonic series, the brain tends to group this input into a sensation of the pitch of the fundamental of that series, even if the fundamental is not present. Variations in the frequency of harmonics can also affect the perceived fundamental pitch. These variations, most clearly documented in the piano and other stringed instruments but also apparent in brass instruments, are caused by a combination of metal stiffness and the interaction of the vibrating air or string with the resonating body of the instrument. David Cope (1997) suggests the concept of interval strength,[12] in which an interval's strength, consonance, or stability (see consonance and dissonance) is determined by its approximation to a lower and stronger, or higher and weaker, position in the harmonic series. See also: Lipps–Meyer law. Thus, an equal-tempered perfect fifth is stronger than an equal-tempered minor third, since they approximate a just perfect fifth and just minor third, respectively. The just minor third appears between harmonics 5 and 6 while the just fifth appears lower, between harmonics 2 and 3.
https://en.wikipedia.org/wiki/Harmonic_series_(music)
In mathematics, the Helmholtz equation is the eigenvalue problem for the Laplace operator. It corresponds to the elliptic partial differential equation: ∇2f=−k2f,{\displaystyle \nabla ^{2}f=-k^{2}f,} where ∇2 is the Laplace operator, k2 is the eigenvalue, and f is the (eigen)function. When the equation is applied to waves, k is known as the wave number. The Helmholtz equation has a variety of applications in physics and other sciences, including the wave equation, the diffusion equation, and the Schrödinger equation for a free particle. In optics, the Helmholtz equation is the wave equation for the electric field.[1] The equation is named after Hermann von Helmholtz, who studied it in 1860.[2] The Helmholtz equation often arises in the study of physical problems involving partial differential equations (PDEs) in both space and time. The Helmholtz equation, which represents a time-independent form of the wave equation, results from applying the technique of separation of variables to reduce the complexity of the analysis. For example, consider the wave equation (∇2−1c2∂2∂t2)u(r,t)=0.{\displaystyle \left(\nabla ^{2}-{\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}\right)u(\mathbf {r} ,t)=0.} Separation of variables begins by assuming that the wave function u(r,t) is in fact separable: u(r,t)=A(r)T(t).{\displaystyle u(\mathbf {r} ,t)=A(\mathbf {r} )T(t).} Substituting this form into the wave equation and then simplifying, we obtain the following equation: ∇2AA=1c2Td2Tdt2.{\displaystyle {\frac {\nabla ^{2}A}{A}}={\frac {1}{c^{2}T}}{\frac {\mathrm {d} ^{2}T}{\mathrm {d} t^{2}}}.} Notice that the expression on the left side depends only on r, whereas the right expression depends only on t. As a result, this equation is valid in the general case if and only if both sides of the equation are equal to the same constant value. This argument is key in the technique of solving linear partial differential equations by separation of variables. From this observation, we obtain two equations, one for A(r), the other for T(t): ∇2AA=−k2{\displaystyle {\frac {\nabla ^{2}A}{A}}=-k^{2}} 1c2Td2Tdt2=−k2,{\displaystyle {\frac {1}{c^{2}T}}{\frac {\mathrm {d} ^{2}T}{\mathrm {d} t^{2}}}=-k^{2},} where we have chosen, without loss of generality, the expression −k2 for the value of the constant. (It is equally valid to use any constant k as the separation constant; −k2 is chosen only for convenience in the resulting solutions.) Rearranging the first equation, we obtain the (homogeneous) Helmholtz equation: ∇2A+k2A=(∇2+k2)A=0.{\displaystyle \nabla ^{2}A+k^{2}A=(\nabla ^{2}+k^{2})A=0.} Likewise, after making the substitution ω=kc, where k is the wave number, and ω is the angular frequency (assuming a monochromatic field), the second equation becomes d2Tdt2+ω2T=(d2dt2+ω2)T=0.{\displaystyle {\frac {\mathrm {d} ^{2}T}{\mathrm {d} t^{2}}}+\omega ^{2}T=\left({\frac {\mathrm {d} ^{2}}{\mathrm {d} t^{2}}}+\omega ^{2}\right)T=0.} We now have Helmholtz's equation for the spatial variable r and a second-order ordinary differential equation in time. The solution in time will be a linear combination of sine and cosine functions, whose exact form is determined by initial conditions, while the form of the solution in space will depend on the boundary conditions.
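The separation argument can be checked symbolically. A minimal sketch (assuming sympy; the particular separated solution cos(kx)·cos(ckt) is an arbitrary choice) confirms that A(r)T(t) with ω = kc solves the one-dimensional wave equation:

import sympy as sp

x, t, k, c = sp.symbols('x t k c', positive=True)

A = sp.cos(k * x)          # spatial factor: solves A'' = -k^2 A
T = sp.cos(c * k * t)      # temporal factor: solves T'' = -(kc)^2 T
u = A * T

residual = sp.diff(u, x, 2) - sp.diff(u, t, 2) / c**2
print(sp.simplify(residual))   # prints 0: u satisfies the wave equation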
Alternatively, integral transforms, such as the Laplace or Fourier transform, are often used to transform a hyperbolic PDE into a form of the Helmholtz equation.[3] Because of its relationship to the wave equation, the Helmholtz equation arises in problems in such areas of physics as the study of electromagnetic radiation, seismology, and acoustics. The solution to the spatial Helmholtz equation: ∇2A=−k2A{\displaystyle \nabla ^{2}A=-k^{2}A} can be obtained for simple geometries using separation of variables. The two-dimensional analogue of the vibrating string is the vibrating membrane, with the edges clamped to be motionless. The Helmholtz equation was solved for many basic shapes in the 19th century: the rectangular membrane by Siméon Denis Poisson in 1829, the equilateral triangle by Gabriel Lamé in 1852, and the circular membrane by Alfred Clebsch in 1862. The elliptical drumhead was studied by Émile Mathieu, leading to Mathieu's differential equation. If the edges of a shape are straight line segments, then a solution is integrable or knowable in closed form only if it is expressible as a finite linear combination of plane waves that satisfy the boundary conditions (zero at the boundary, i.e., membrane clamped). If the domain is a circle of radius a, then it is appropriate to introduce polar coordinates r and θ. The Helmholtz equation takes the form Arr+1rAr+1r2Aθθ+k2A=0.{\displaystyle \ A_{rr}+{\frac {1}{r}}A_{r}+{\frac {1}{r^{2}}}A_{\theta \theta }+k^{2}A=0~.} We may impose the boundary condition that A vanishes if r=a; thus A(a,θ)=0.{\displaystyle \ A(a,\theta )=0~.} The method of separation of variables leads to trial solutions of the form A(r,θ)=R(r)Θ(θ),{\displaystyle \ A(r,\theta )=R(r)\Theta (\theta )\ ,} where Θ must be periodic of period 2π. This leads to Θ″+n2Θ=0,{\displaystyle \ \Theta ''+n^{2}\Theta =0\ ,} r2R″+rR′+r2k2R−n2R=0.{\displaystyle \ r^{2}R''+rR'+r^{2}k^{2}R-n^{2}R=0~.} It follows from the periodicity condition that Θ=αcos⁡nθ+βsin⁡nθ,{\displaystyle \ \Theta =\alpha \cos n\theta +\beta \sin n\theta \ ,} and that n must be an integer. The radial component R has the form R=γJn(ρ),{\displaystyle \ R=\gamma \ J_{n}(\rho )\ ,} where the Bessel function Jn(ρ) satisfies Bessel's equation z2Jn″+zJn′+(z2−n2)Jn=0,{\displaystyle \ z^{2}J_{n}''+zJ_{n}'+(z^{2}-n^{2})J_{n}=0\ ,} and z=kr. The radial function Jn has infinitely many roots for each value of n, denoted by ρm,n. The boundary condition that A vanishes where r=a will be satisfied if the corresponding wavenumbers are given by km,n=1aρm,n.{\displaystyle \ k_{m,n}={\frac {1}{a}}\rho _{m,n}~.} The general solution A then takes the form of a generalized Fourier series of terms involving products of Jn(km,nr) and the sine (or cosine) of nθ. These solutions are the modes of vibration of a circular drumhead. In spherical coordinates, the solution is: A(r,θ,φ)=∑ℓ=0∞∑m=−ℓ+ℓ(aℓmjℓ(kr)+bℓmyℓ(kr))Yℓm(θ,φ).{\displaystyle \ A(r,\theta ,\varphi )=\sum _{\ell =0}^{\infty }\sum _{m=-\ell }^{+\ell }{\bigl (}\ a_{\ell m}\ j_{\ell }(kr)+b_{\ell m}\ y_{\ell }(kr)\ {\bigr )}\ Y_{\ell }^{m}(\theta ,\varphi )~.} This solution arises from the spatial solution of the wave equation and diffusion equation. Here jℓ(kr) and yℓ(kr) are the spherical Bessel functions, and Ymℓ(θ,φ) are the spherical harmonics (Abramowitz and Stegun, 1964). Note that these forms are general solutions, and require boundary conditions to be specified to be used in any specific case. For infinite exterior domains, a radiation condition may also be required (Sommerfeld, 1949).
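Returning to the circular drumhead: the allowed wavenumbers k_{m,n} = ρ_{m,n}/a can be computed directly from Bessel-function roots. A sketch assuming scipy is available (the unit radius and wave speed are arbitrary normalizations):

import math
from scipy.special import jn_zeros

a = 1.0   # membrane radius (normalized)
c = 1.0   # wave speed (normalized)

for n in range(3):                    # angular index of J_n
    for m, rho in enumerate(jn_zeros(n, 3), start=1):   # first 3 roots of J_n
        k = rho / a                   # allowed wavenumber k_{m,n}
        f = c * k / (2 * math.pi)     # vibration frequency, via omega = c k
        print(f"mode (m={m}, n={n}): k = {k:.4f}, f = {f:.4f}")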
Writing r0 = (x, y, z), the function A(r0) has the asymptotic form A(r0)=eikr0r0f(r0r0,k,u0)+o(1r0)asr0→∞{\displaystyle A(r_{0})={\frac {e^{ikr_{0}}}{r_{0}}}f\left({\frac {\mathbf {r} _{0}}{r_{0}}},k,u_{0}\right)+o\left({\frac {1}{r_{0}}}\right){\text{ as }}r_{0}\to \infty } where the function f is called the scattering amplitude and u0(r0) is the value of A at each boundary point r0. Given a 2-dimensional plane where A is known, the solution to the Helmholtz equation is given by:[4] A(x,y,z)=−12π∬−∞+∞A′(x′,y′)eikrrzr(ik−1r)d⁡x′d⁡y′,{\displaystyle A(x,y,z)=-{\frac {1}{2\pi }}\iint _{-\infty }^{+\infty }A'(x',y')\ {\frac {~~e^{ikr}\ }{r}}\ {\frac {\ z\ }{r}}\left(\ i\ k-{\frac {1}{r}}\ \right)\ \operatorname {d} x'\ \operatorname {d} y'\ ,} where A′(x′, y′) is the known value of A on the plane z = 0, and r denotes the distance from the point (x′, y′, 0) to the observation point (x, y, z). As z approaches zero, all contributions from the integral vanish except for r = 0. Thus A(x,y,0)=A′(x,y){\displaystyle \ A(x,y,0)=A'(x,y)\ } up to a numerical factor, which can be verified to be 1 by transforming the integral to polar coordinates (ρ,θ).{\displaystyle \ \left(\rho ,\theta \right)~.} This solution is important in diffraction theory, e.g. in deriving Fresnel diffraction. In the paraxial approximation of the Helmholtz equation,[5] the complex amplitude A is expressed as A(r)=u(r)eikz{\displaystyle A(\mathbf {r} )=u(\mathbf {r} )e^{ikz}} where u represents the complex-valued amplitude which modulates the sinusoidal plane wave represented by the exponential factor. Then under a suitable assumption, u approximately solves ∇⊥2u+2ik∂u∂z=0,{\displaystyle \nabla _{\perp }^{2}u+2ik{\frac {\partial u}{\partial z}}=0,} where ∇⊥2=def∂2∂x2+∂2∂y2{\textstyle \nabla _{\perp }^{2}{\overset {\text{ def }}{=}}{\frac {\partial ^{2}}{\partial x^{2}}}+{\frac {\partial ^{2}}{\partial y^{2}}}} is the transverse part of the Laplacian. This equation has important applications in the science of optics, where it provides solutions that describe the propagation of electromagnetic waves (light) in the form of either paraboloidal waves or Gaussian beams. Most lasers emit beams that take this form. The assumption under which the paraxial approximation is valid is that the z derivative of the amplitude function u is a slowly varying function of z: |∂2u∂z2|≪|k∂u∂z|.{\displaystyle \left|{\frac {\partial ^{2}u}{\partial z^{2}}}\right|\ll \left|k{\frac {\partial u}{\partial z}}\right|.} This condition is equivalent to saying that the angle θ between the wave vector k and the optical axis z is small: θ≪ 1. The paraxial form of the Helmholtz equation is found by substituting the above-stated expression for the complex amplitude into the general form of the Helmholtz equation as follows: ∇2(u(x,y,z)eikz)+k2u(x,y,z)eikz=0.{\displaystyle \nabla ^{2}(u\left(x,y,z\right)e^{ikz})+k^{2}u\left(x,y,z\right)e^{ikz}=0.} Expansion and cancellation yields the following: (∂2∂x2+∂2∂y2)u(x,y,z)eikz+(∂2∂z2u(x,y,z))eikz+2(∂∂zu(x,y,z))ikeikz=0.{\displaystyle \left({\frac {\partial ^{2}}{\partial x^{2}}}+{\frac {\partial ^{2}}{\partial y^{2}}}\right)u(x,y,z)e^{ikz}+\left({\frac {\partial ^{2}}{\partial z^{2}}}u(x,y,z)\right)e^{ikz}+2\left({\frac {\partial }{\partial z}}u(x,y,z)\right)ik{e^{ikz}}=0.} Because of the paraxial inequality stated above, the ∂2u/∂z2 term is neglected in comparison with the k·∂u/∂z term. This yields the paraxial Helmholtz equation.
Substituting u(r) = A(r)e−ikz then gives the paraxial equation for the original complex amplitude A: ∇⊥2A+2ik∂A∂z+2k2A=0.{\displaystyle \nabla _{\perp }^{2}A+2ik{\frac {\partial A}{\partial z}}+2k^{2}A=0.} The Fresnel diffraction integral is an exact solution to the paraxial Helmholtz equation.[6] The inhomogeneous Helmholtz equation is the equation ∇2A(x)+k2A(x)=−f(x),∀x∈Rn,{\displaystyle \nabla ^{2}A(\mathbf {x} )+k^{2}A(\mathbf {x} )=-f(\mathbf {x} ),\quad \forall \mathbf {x} \in \mathbb {R} ^{n},} where ƒ:Rn→C is a function with compact support, and n= 1, 2, 3. This equation is very similar to the screened Poisson equation, and would be identical if the plus sign (in front of the k term) were switched to a minus sign. In order to solve this equation uniquely, one needs to specify a boundary condition at infinity, which is typically the Sommerfeld radiation condition limr→∞rn−12(∂∂r−ik)A(x)=0,{\displaystyle \lim _{r\to \infty }r^{\frac {n-1}{2}}\left({\frac {\partial }{\partial r}}-ik\right)A(\mathbf {x} )=0,} in n{\displaystyle n} spatial dimensions, for all angles (i.e. any value of θ,ϕ{\displaystyle \theta ,\phi }). Here r=∑i=1nxi2{\displaystyle r={\sqrt {\sum _{i=1}^{n}x_{i}^{2}}}} where xi{\displaystyle x_{i}} are the coordinates of the vector x{\displaystyle \mathbf {x} }. With this condition, the solution to the inhomogeneous Helmholtz equation is A(x)=∫RnG(x,x′)f(x′)dx′{\displaystyle A(\mathbf {x} )=\int _{\mathbb {R} ^{n}}\!G(\mathbf {x} ,\mathbf {x'} )f(\mathbf {x'} )\,\mathrm {d} \mathbf {x'} } (notice this integral is actually over a finite region, since f has compact support). Here, G is the Green's function of this equation, that is, the solution to the inhomogeneous Helmholtz equation with f equaling the Dirac delta function, so G satisfies ∇2G(x,x′)+k2G(x,x′)=−δ(x,x′),x∈Rn.{\displaystyle \nabla ^{2}G(\mathbf {x} ,\mathbf {x'} )+k^{2}G(\mathbf {x} ,\mathbf {x'} )=-\delta (\mathbf {x} ,\mathbf {x'} ),\quad \mathbf {x} \in \mathbb {R} ^{n}.} The expression for the Green's function depends on the dimension n of the space. One has G(x,x′)=ieik|x−x′|2k{\displaystyle G(x,x')={\frac {ie^{ik|x-x'|}}{2k}}} for n= 1, G(x,x′)=i4H0(1)(k|x−x′|){\displaystyle G(\mathbf {x} ,\mathbf {x'} )={\frac {i}{4}}H_{0}^{(1)}(k|\mathbf {x} -\mathbf {x'} |)} for n= 2, where H(1)0 is a Hankel function, and G(x,x′)=eik|x−x′|4π|x−x′|{\displaystyle G(\mathbf {x} ,\mathbf {x'} )={\frac {e^{ik|\mathbf {x} -\mathbf {x'} |}}{4\pi |\mathbf {x} -\mathbf {x'} |}}} for n= 3. Note that we have chosen the boundary condition that the Green's function is an outgoing wave for |x| → ∞. Finally, for general n, G(x,x′)=cdkpHp(1)(k|x−x′|)|x−x′|p{\displaystyle G(\mathbf {x} ,\mathbf {x'} )=c_{d}k^{p}{\frac {H_{p}^{(1)}(k|\mathbf {x} -\mathbf {x'} |)}{|\mathbf {x} -\mathbf {x'} |^{p}}}} where p=n−22{\displaystyle p={\frac {n-2}{2}}} and cd=i4(2π)p{\displaystyle c_{d}={\frac {i}{4(2\pi )^{p}}}}.[7]
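For reference, the three low-dimensional Green's functions above translate directly into code. A sketch assuming numpy and scipy (r is the distance |x − x′| > 0):

import numpy as np
from scipy.special import hankel1

def helmholtz_green(r, k, n):
    """Outgoing-wave Green's function of the Helmholtz operator, n = 1, 2, 3."""
    if n == 1:
        return 1j * np.exp(1j * k * r) / (2 * k)
    if n == 2:
        return 0.25j * hankel1(0, k * r)
    if n == 3:
        return np.exp(1j * k * r) / (4 * np.pi * r)
    raise ValueError("only n = 1, 2, 3 are implemented here")

print(helmholtz_green(1.0, 2.0, 3))   # 3-D kernel e^{ikr}/(4 pi r) at r = 1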
https://en.wikipedia.org/wiki/Helmholtz_equation
Instantaneous phase and frequency are important concepts in signal processing that occur in the context of the representation and analysis of time-varying functions.[1] The instantaneous phase (also known as local phase or simply phase) of a complex-valued function s(t) is the real-valued function: φ(t) = arg[s(t)], where arg is the complex argument function. The instantaneous frequency is the temporal rate of change of the instantaneous phase. For a real-valued function s(t), it is determined from the function's analytic representation, sa(t):[2] φ(t) = arg[sa(t)] = arg[s(t) + j·ŝ(t)], where ŝ(t) represents the Hilbert transform of s(t). When φ(t) is constrained to its principal value, either the interval (−π, π] or [0, 2π), it is called wrapped phase. Otherwise it is called unwrapped phase, which is a continuous function of argument t, assuming sa(t) is a continuous function of t. Unless otherwise indicated, the continuous form should be inferred. Example 1: s(t) = A cos(ωt + θ), where ω > 0; its analytic representation is sa(t) = A e^{j(ωt+θ)}, so the instantaneous phase is φ(t) = ωt + θ. In this simple sinusoidal example, the constant θ is also commonly referred to as phase or phase offset. φ(t) is a function of time; θ is not. In the next example, we also see that the phase offset of a real-valued sinusoid is ambiguous unless a reference (sin or cos) is specified. φ(t) is unambiguously defined. Example 2: s(t) = A sin(ωt + θ) = A cos(ωt + θ − π/2), giving φ(t) = ωt + θ − π/2, where ω > 0. In both examples the local maxima of s(t) correspond to φ(t) = 2πN for integer values of N. This has applications in the field of computer vision. Instantaneous angular frequency is defined as: ω(t) = dφ(t)/dt, and instantaneous (ordinary) frequency is defined as: f(t) = (1/2π)·dφ(t)/dt, where φ(t) must be the unwrapped phase; otherwise, if φ(t) is wrapped, discontinuities in φ(t) will result in Dirac delta impulses in f(t). The inverse operation, which always unwraps phase, is: φ(t) = ∫ ω(τ) dτ, integrated over τ from −∞ up to t. This instantaneous frequency, ω(t), can be derived directly from the real and imaginary parts of sa(t) = x(t) + j·y(t), instead of the complex arg, without concern of phase unwrapping. Here 2m₁π and m₂π are the integer multiples of π necessary to add to unwrap the phase in an arctangent-based expression for φ(t). At values of time, t, where there is no change to the integer m₂, the derivative of φ(t) is ω(t) = (x(t)·y′(t) − y(t)·x′(t)) / (x(t)² + y(t)²). For discrete-time functions, this can be written as a recursion on the wrapped phase differences: Δφ[n] = arg(sa[n]) − arg(sa[n−1]). Discontinuities can then be removed by adding 2π whenever Δφ[n] ≤ −π, and subtracting 2π whenever Δφ[n] > π. That allows φ[n] to accumulate without limit and produces an unwrapped instantaneous phase. An equivalent formulation that replaces the modulo 2π operation with a complex multiplication is: Δφ[n] = arg(sa[n]·sa*[n−1]), where the asterisk denotes complex conjugate. The discrete-time instantaneous frequency (in units of radians per sample) is simply the advancement of phase for that sample: ω[n] = Δφ[n]. In some applications, such as averaging the values of phase at several moments of time, it may be useful to convert each value to a complex number, or vector representation:[3] e^{jφ(t)} = (cos φ(t), sin φ(t)). This representation is similar to the wrapped phase representation in that it does not distinguish between multiples of 2π in the phase, but similar to the unwrapped phase representation since it is continuous. A vector-average phase can be obtained as the arg of the sum of the complex numbers without concern about wrap-around.
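In practice the analytic representation, unwrapped phase, and instantaneous frequency take only a few lines of numerical code. A sketch assuming numpy and scipy (the 50 Hz test tone and sample rate are arbitrary):

import numpy as np
from scipy.signal import hilbert

fs = 1000.0                              # sample rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
s = np.cos(2 * np.pi * 50.0 * t + 0.3)   # real signal with phase offset 0.3

s_a = hilbert(s)                         # analytic representation s + j*Hilbert(s)
phase = np.unwrap(np.angle(s_a))         # unwrapped instantaneous phase
freq = np.diff(phase) * fs / (2 * np.pi) # instantaneous frequency in Hz

print(freq[100:105])                     # approximately 50 Hz mid-record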
https://en.wikipedia.org/wiki/Instantaneous_phase
A sinusoid with modulation can be decomposed into, or synthesized from, two amplitude-modulated sinusoids that are in quadrature phase, i.e., with a phase offset of one-quarter cycle (90 degrees or π/2 radians). All three sinusoids have the same center frequency. The two amplitude-modulated sinusoids are known as the in-phase (I) and quadrature (Q) components, which describe their relationships with the amplitude- and phase-modulated carrier.[A][2] In other words, it is possible to create an arbitrarily phase-shifted sine wave by mixing together two sine waves that are 90° out of phase in different proportions. The implication is that the modulations in some signal can be treated separately from the carrier wave of the signal. This has extensive use in many radio and signal processing applications.[3] I/Q data is used to represent the modulations of some carrier, independent of that carrier's frequency. In vector analysis, a vector with polar coordinates A, φ and Cartesian coordinates x=Acos(φ), y=Asin(φ), can be represented as the sum of orthogonal components: [x, 0] + [0, y]. Similarly in trigonometry, the angle sum identity expresses: sin(x + φ) = sin(x)cos(φ) + sin(x + π/2)sin(φ). And in functional analysis, when x is a linear function of some variable, such as time, these components are sinusoids, and they are orthogonal functions. A phase-shift of x → x + π/2 changes the identity to: cos(x + φ) = cos(x)cos(φ) + cos(x + π/2)sin(φ), in which case cos(x)cos(φ) is the in-phase component. In both conventions cos(φ) is the in-phase amplitude modulation, which explains why some authors refer to it as the actual in-phase component. In an angle modulation application, with carrier frequency f, φ is also a time-variant function, giving:[1]: eqs.(4.45)&(7.64) A(t)⋅cos⁡[2πft+φ(t)]=cos⁡(2πft)⋅A(t)cos⁡[φ(t)]+cos⁡(2πft+π2)⋅A(t)sin⁡[φ(t)]=cos⁡(2πft)⋅A(t)cos⁡[φ(t)]⏟in-phase−sin⁡(2πft)⋅A(t)sin⁡[φ(t)]⏟quadrature.{\displaystyle {\begin{aligned}A(t)\cdot \cos[2\pi ft+\varphi (t)]\ &=\cos(2\pi ft)\cdot A(t)\cos[\varphi (t)]\ +\ \cos \left(2\pi ft+{\tfrac {\pi }{2}}\right)\cdot A(t)\sin[\varphi (t)]\\[8pt]&=\underbrace {\cos(2\pi ft)\cdot A(t)\cos[\varphi (t)]} _{\text{in-phase}}\ \underbrace {\ -\ \sin(2\pi ft)\cdot A(t)\sin[\varphi (t)]} _{\text{quadrature}}.\end{aligned}}} When all three terms above are multiplied by an optional amplitude function, A(t) > 0, the left-hand side of the equality is known as the amplitude/phase form, and the right-hand side is the quadrature-carrier or IQ form.[B] Because of the modulation, the components are no longer completely orthogonal functions. But when A(t) and φ(t) are slowly varying functions compared to 2πft, the assumption of orthogonality is a common one.[C] Authors often call it a narrowband assumption, or a narrowband signal model.[4][5] A stream of information about how to amplitude-modulate the I and Q phases of a sine wave is known as the I/Q data.[6] By just amplitude-modulating these two 90°-out-of-phase sine waves and adding them, it is possible to produce the effect of arbitrarily modulating some carrier: amplitude and phase. And if the I/Q data itself has some frequency (e.g. a phasor) then the carrier also can be frequency modulated. So I/Q data is a complete representation of how a carrier is modulated: amplitude, phase and frequency. For received signals, by determining how much in-phase carrier and how much quadrature carrier is present in the signal, it is possible to represent that signal using in-phase and quadrature components, so I/Q data can be generated from a signal with reference to a carrier sine wave.
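The quadrature-carrier identity above is easy to demonstrate numerically: building the passband signal from I = A cos(φ) and Q = A sin(φ) reproduces the amplitude/phase form exactly. A sketch assuming numpy (the carrier, rates, and modulating waveforms are arbitrary choices):

import numpy as np

fs, f = 48000.0, 1000.0                       # sample rate and carrier
t = np.arange(0, 0.01, 1.0 / fs)
A = 1.0 + 0.5 * np.cos(2 * np.pi * 50.0 * t)  # slow amplitude modulation
phi = 0.8 * np.sin(2 * np.pi * 30.0 * t)      # slow phase modulation

I = A * np.cos(phi)                           # in-phase component
Q = A * np.sin(phi)                           # quadrature component
iq_form = I * np.cos(2 * np.pi * f * t) - Q * np.sin(2 * np.pi * f * t)
direct = A * np.cos(2 * np.pi * f * t + phi)  # amplitude/phase form

print(np.allclose(iq_form, direct))           # True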
I/Q data has extensive use in many signal processing contexts, including for radio modulation, software-defined radio, audio signal processing and electrical engineering. I/Q data is a two-dimensional stream. Some sources treat I/Q as a complex number,[1] with the I and Q components corresponding to the real and imaginary parts, respectively. Others treat it as distinct pairs of values, as a 2D vector, or as separate streams. When called "I/Q data" the information is likely digital. However, I/Q may be represented as analog signals.[7] The concepts are applicable to both the analog and digital representations of I/Q. This technique of using I/Q data to represent the modulations of a signal separate from the signal's frequency is known as equivalent baseband signal, supported by the § Narrowband signal model. It is sometimes referred to as vector modulation. The data rate of I/Q is largely independent of the frequency of the signal being modulated. I/Q data can be generated at a relatively slow rate (e.g. millions of bits per second), perhaps generated by software in part of the physical layer of a protocol stack. I/Q data is used to modulate a carrier frequency, which may be faster (e.g. Gigahertz, perhaps an intermediate frequency).[8] As well as within a transmitter, I/Q data is also a common means to represent the signal from some receiver. Designs such as the Digital down converter allow the input signal to be represented as streams of I/Q data, likely for further processing and symbol extraction in a DSP. Analog systems may suffer from issues, such as IQ imbalance. I/Q data may also be used as a means to capture and store data used in spectrum monitoring.[3] Since I/Q allows the representation of the modulation separate from the actual carrier frequency, it is possible to represent a capture of all the radio traffic in some RF band or section thereof, with a reasonable amount of data, irrespective of the frequency being monitored. E.g. if there is a capture of 100 MHz of Wi-Fi channels within the 5 GHz U-NII band, that I/Q capture can be sampled at 200 million samples per second (according to Nyquist) as opposed to the 10,000 million samples per second required to sample directly at 5 GHz. A vector signal generator will typically use I/Q data alongside some programmed frequency to generate its signal.[8] And similarly a vector signal analyser can provide a stream of I/Q data in its output. Many modulation schemes, e.g. quadrature amplitude modulation, rely heavily on I/Q. The term alternating current applies to a voltage vs. time function that is sinusoidal with a frequency f. When it is applied to a typical (linear time-invariant) circuit or device, it causes a current that is also sinusoidal. In general there is a constant phase difference φ between any two sinusoids. The input sinusoidal voltage is usually defined to have zero phase, meaning that it is arbitrarily chosen as a convenient time reference. So the phase difference is attributed to the current function, e.g. sin(2πft + φ), whose orthogonal components are sin(2πft)cos(φ) and sin(2πft + π/2)sin(φ), as we have seen. When φ happens to be such that the in-phase component is zero, the current and voltage sinusoids are said to be in quadrature, which means they are orthogonal to each other. In that case, no average (active) electrical power is consumed. Rather, power is temporarily stored by the device and given back, once every 1/(2f) seconds. Note that the term in quadrature only implies that two sinusoids are orthogonal, not that they are components of another sinusoid.
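The zero-average-power claim for quadrature can also be checked directly: the mean of v(t)·i(t) over whole cycles is (1/2)cos(φ), which vanishes at 90°. A sketch assuming numpy (frequency and sample rate are arbitrary, chosen so the record spans whole cycles):

import numpy as np

f, fs = 50.0, 50000.0
t = np.arange(0, 1.0, 1.0 / fs)      # exactly 50 cycles at 50 Hz
v = np.sin(2 * np.pi * f * t)        # voltage, zero phase by convention

for phi_deg in (0, 45, 90):
    i = np.sin(2 * np.pi * f * t + np.radians(phi_deg))
    p = np.mean(v * i)               # average power, proportional to cos(phi)/2
    print(f"phase {phi_deg:2d} deg: average power = {p:.4f}")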
https://en.wikipedia.org/wiki/In-phase_and_quadrature_components
An oscilloscope (formerly known as an oscillograph, informally scope or O-scope) is a type of electronic test instrument that graphically displays varying voltages of one or more signals as a function of time. Their main purpose is capturing information on electrical signals for debugging, analysis, or characterization. The displayed waveform can then be analyzed for properties such as amplitude, frequency, rise time, time interval, distortion, and others. Originally, calculation of these values required manually measuring the waveform against the scales built into the screen of the instrument.[1] Modern digital instruments may calculate and display these properties directly. Oscilloscopes are used in the sciences, engineering, biomedical, automotive and the telecommunications industry. General-purpose instruments are used for maintenance of electronic equipment and laboratory work. Special-purpose oscilloscopes may be used to analyze an automotive ignition system or to display the waveform of the heartbeat as an electrocardiogram, for instance. Early high-speed visualisations of electrical voltages were made with an electro-mechanical oscillograph,[2][3] invented by André Blondel in 1893. These gave valuable insights into high-speed voltage changes, but had a frequency response of only a few kilohertz, and were superseded by the oscilloscope, which used a cathode-ray tube (CRT) as its display element. The Braun tube, forerunner of the CRT, was known in 1897, and in 1899 Jonathan Zenneck equipped it with beam-forming plates and a magnetic field for deflecting the trace, and this formed the basis of the CRT.[4] Early CRTs had been applied experimentally to laboratory measurements as early as the 1920s, but suffered from poor stability of the vacuum and the cathode emitters. V. K. Zworykin described a permanently sealed, high-vacuum CRT with a thermionic emitter in 1931. This stable and reproducible component allowed General Radio to manufacture an oscilloscope that was usable outside a laboratory setting.[1] After World War II, surplus electronic parts became the basis for the revival of Heathkit Corporation, and a $50 oscilloscope kit made from such parts proved to be its premier market success. An analog oscilloscope is typically divided into four sections: the display, vertical controls, horizontal controls and trigger controls. The display is usually a CRT with horizontal and vertical reference lines called the graticule. CRT displays also have controls for focus, intensity, and beam finder. The vertical section controls the amplitude of the displayed signal. This section has a volts-per-division (Volts/Div) selector knob, an AC/DC/Ground selector switch, and the vertical (primary) input for the instrument. Additionally, this section is typically equipped with the vertical beam position knob. The horizontal section controls the time base or sweep of the instrument. The primary control is the Seconds-per-Division (Sec/Div) selector switch. Also included is a horizontal input for plotting dual X-Y axis signals. The horizontal beam position knob is generally located in this section. The trigger section controls the start event of the sweep. The trigger can be set to automatically restart after each sweep or can be configured to respond to an internal or external event. The principal controls of this section are the source and coupling selector switches, and an external trigger input (EXT Input) and level adjustment. In addition to the basic instrument, most oscilloscopes are supplied with a probe.
The probe connects to any input on the instrument and typically contains a series resistance of nine times the oscilloscope's input impedance, so that probe and input together form a 10:1 voltage divider. This results in a 0.1 (10:1) attenuation factor; this helps to isolate the capacitive load presented by the probe cable from the signal being measured. Some probes have a switch allowing the operator to bypass the resistor when appropriate.[1] Most modern oscilloscopes are lightweight, portable instruments compact enough for a single person to carry. In addition to portable units, the market offers a number of miniature battery-powered instruments for field service applications. Laboratory grade oscilloscopes, especially older units that use vacuum tubes, are generally bench-top devices or are mounted on dedicated carts. Special-purpose oscilloscopes may be rack-mounted or permanently mounted into a custom instrument housing. The signal to be measured is fed to one of the input connectors, which is usually a coaxial connector such as a BNC or UHF type. Binding posts or banana plugs may be used for lower frequencies. If the signal source has its own coaxial connector, then a simple coaxial cable is used; otherwise, a specialized cable called a "scope probe", supplied with the oscilloscope, is used. In general, for routine use, an open wire test lead for connecting to the point being observed is not satisfactory, and a probe is generally necessary. General-purpose oscilloscopes usually present an input impedance of 1 megohm in parallel with a small but known capacitance such as 20 picofarads.[5] This allows the use of standard oscilloscope probes.[6] Scopes for use with very high frequencies may have 50 Ω inputs. These must be either connected directly to a 50 Ω signal source or used with Z0 or active probes. Less-frequently-used inputs include one (or two) for triggering the sweep, horizontal deflection for X-Y mode displays, and trace brightening/darkening, sometimes called z-axis inputs. Open wire test leads (flying leads) are likely to pick up interference, so they are not suitable for low level signals. Furthermore, the leads have a high inductance, so they are not suitable for high frequencies. Using a shielded cable (i.e., coaxial cable) is better for low level signals. Coaxial cable also has lower inductance, but it has higher capacitance: a typical 50 ohm cable has about 90 pF per meter. Consequently, a one-meter direct (1×) coaxial probe loads a circuit with a capacitance of about 110 pF and a resistance of 1 megohm. To minimize loading, attenuator probes (e.g., 10× probes) are used. A typical probe uses a 9 megohm series resistor shunted by a low-value capacitor to make an RC compensated divider with the cable capacitance and scope input. The RC time constants are adjusted to match. For example, the 9 megohm series resistor is shunted by a 12.2 pF capacitor for a time constant of 110 microseconds. The cable capacitance of 90 pF in parallel with the scope input of 20 pF and 1 megohm (total capacitance 110 pF) also gives a time constant of 110 microseconds. In practice, there is an adjustment so the operator can precisely match the low frequency time constant (called compensating the probe). Matching the time constants makes the attenuation independent of frequency. At low frequencies (where the resistance of R is much less than the reactance of C), the circuit looks like a resistive divider; at high frequencies (resistance much greater than reactance), the circuit looks like a capacitive divider.[7] The result is a frequency compensated probe for modest frequencies.
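Checking the compensation arithmetic from this example takes only a few lines; a sketch using the component values quoted above:

R_probe, C_probe = 9e6, 12.2e-12   # 9 megohm series resistor, 12.2 pF trimmer
R_scope = 1e6                      # scope input resistance
C_load = 90e-12 + 20e-12           # 1 m of cable (~90 pF) + scope input (20 pF)

tau_probe = R_probe * C_probe      # ~110 microseconds
tau_load = R_scope * C_load        # ~110 microseconds
print(f"probe RC = {tau_probe * 1e6:.1f} us, load RC = {tau_load * 1e6:.1f} us")
# Matched time constants make the 10:1 attenuation independent of frequency;
# the trimmer capacitor is what the operator adjusts when compensating a probe.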
Such a compensated probe presents a load of about 10 megohms shunted by 12 pF. It is an improvement, but does not work well when the time scale shrinks to several cable transit times or less (transit time is typically 5 ns).[clarification needed] In that time frame, the cable looks like its characteristic impedance, and reflections from the transmission line mismatch at the scope input and the probe cause ringing.[8] The modern scope probe uses lossy low-capacitance transmission lines and sophisticated frequency-shaping networks to make the 10× probe perform well at several hundred megahertz. Consequently, there are other adjustments for completing the compensation.[9][10] Probes with 10:1 attenuation are by far the most common; for large signals (and slightly-less capacitive loading), 100:1 probes may be used. There are also probes that contain switches to select 10:1 or direct (1:1) ratios, but the latter setting has significant capacitance (tens of pF) at the probe tip, because the whole cable's capacitance is then directly connected. Most oscilloscopes provide for probe attenuation factors, displaying the effective sensitivity at the probe tip. Historically, some auto-sensing circuitry used indicator lamps behind translucent windows in the panel to illuminate different parts of the sensitivity scale. To do so, the probe connectors (modified BNCs) had an extra contact to define the probe's attenuation. (A certain value of resistor, connected to ground, "encodes" the attenuation.) Because probes wear out, and because the auto-sensing circuitry is not compatible between different oscilloscope makes, auto-sensing probe scaling is not foolproof. Likewise, manually setting the probe attenuation is prone to user error. Setting the probe scaling incorrectly is a common error, and throws the reading off by a factor of 10. Special high voltage probes form compensated attenuators with the oscilloscope input. These have a large probe body, and some require partly filling a canister surrounding the series resistor with volatile liquid fluorocarbon to displace air. The oscilloscope end has a box with several waveform-trimming adjustments. For safety, a barrier disc keeps the user's fingers away from the point being examined. Maximum voltage is in the low tens of kV. (Observing a high voltage ramp can create a staircase waveform with steps at different points every repetition, until the probe tip is in contact. Until then, a tiny arc charges the probe tip, and its capacitance holds the voltage (open circuit). As the voltage continues to climb, another tiny arc charges the tip further.) There are also current probes, with cores that surround the conductor carrying current to be examined. One type has a hole for the conductor, and requires that the wire be passed through the hole for semi-permanent or permanent mounting. However, other types, used for temporary testing, have a two-part core that can be clamped around a wire. Inside the probe, a coil wound around the core provides a current into an appropriate load, and the voltage across that load is proportional to current. This type of probe only senses AC. A more-sophisticated probe includes a magnetic flux sensor (Hall effect sensor) in the magnetic circuit. The probe connects to an amplifier, which feeds (low frequency) current into the coil to cancel the sensed field; the magnitude of the current provides the low-frequency part of the current waveform, right down to DC. The coil still picks up high frequencies.
A number of front-panel controls manage the display. The focus control adjusts CRT focus to obtain the sharpest, most detailed trace. In practice, focus must be adjusted slightly when observing very different signals, so it must be an external control. The control varies the voltage applied to a focusing anode within the CRT. Flat-panel displays do not need this control.

The intensity control adjusts trace brightness. Slow traces on CRT oscilloscopes need less brightness, and fast ones, especially if not often repeated, require more. On flat panels, however, trace brightness is essentially independent of sweep speed, because the internal signal processing effectively synthesizes the display from the digitized data.

The astigmatism control may instead be called "shape" or "spot shape". It adjusts the voltage on the last CRT anode (immediately next to the Y deflection plates). For a circular spot, the final anode must be at the same potential as both of the Y-plates (for a centred spot the Y-plate voltages must be the same). If the anode is made more positive, the spot becomes elliptical in the X-plane, as the more negative Y-plates will repel the beam. If the anode is made more negative, the spot becomes elliptical in the Y-plane, as the more positive Y-plates will attract the beam. This control may be absent from simpler oscilloscope designs or may even be an internal control. It is not necessary with flat-panel displays.

Modern oscilloscopes have direct-coupled deflection amplifiers, which means the trace could be deflected off-screen. The beam might also be blanked without the operator knowing it. To help in restoring a visible display, the beam finder circuit overrides any blanking and limits the beam deflection to the visible portion of the screen. Beam-finder circuits often distort the trace while activated.

The graticule is a grid of lines that serve as reference marks for measuring the displayed trace. These markings, whether located directly on the screen or on a removable plastic filter, usually consist of a 1 cm grid with closer tick marks (often at 2 mm) on the centre vertical and horizontal axes. One expects to see ten major divisions across the screen; the number of vertical major divisions varies. Comparing the grid markings with the waveform permits one to measure both voltage (vertical axis) and time (horizontal axis). Frequency can also be determined by measuring the waveform period and calculating its reciprocal.

On old and lower-cost CRT oscilloscopes the graticule is a sheet of plastic, often with light-diffusing markings and concealed lamps at the edge; the lamps had a brightness control. Higher-cost instruments have the graticule marked on the inside face of the CRT, to eliminate parallax errors; better ones also had adjustable edge illumination with diffusing markings. (Diffusing markings appear bright.) Digital oscilloscopes, however, generate the graticule markings on the display in the same way as the trace. External graticules also protect the glass face of the CRT from accidental impact. Some CRT oscilloscopes with internal graticules have an unmarked tinted sheet-plastic light filter to enhance trace contrast; this also serves to protect the faceplate of the CRT.

Accuracy and resolution of measurements using a graticule are relatively limited; better instruments sometimes have movable bright markers on the trace, which permit internal circuits to make more refined measurements.
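Converting a graticule reading into engineering units is plain multiplication; a minimal sketch, with assumed (illustrative) control settings and division counts:

```python
# Sketch: converting graticule readings to voltage, time and frequency.
# Assumed example settings: 2 V/div vertical, 0.5 ms/div horizontal.
volts_per_div = 2.0
time_per_div = 0.5e-3

peak_to_peak_divs = 3.4   # divisions spanned by the waveform vertically
period_divs = 4.0         # divisions spanned by one full cycle

v_pp = peak_to_peak_divs * volts_per_div   # 6.8 V peak-to-peak
period = period_divs * time_per_div        # 2.0 ms
frequency = 1.0 / period                   # 500 Hz, the reciprocal of the period
print(v_pp, period, frequency)
```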
Both calibrated vertical sensitivity and calibrated horizontal time are set in 1–2–5–10 steps. This leads, however, to some awkward interpretations of minor divisions. Digital oscilloscopes generate the graticule digitally, so the scale, spacing, etc., of the graticule can be varied, and the accuracy of readings may be improved.

The timebase controls select the horizontal speed of the CRT's spot as it creates the trace; this process is commonly referred to as the sweep. In all but the least costly modern oscilloscopes, the sweep speed is selectable and calibrated in units of time per major graticule division. Quite a wide range of sweep speeds is generally provided, from seconds per division down to picoseconds per division in the fastest instruments. Usually, a continuously variable control (often a knob in front of the calibrated selector knob) offers uncalibrated speeds, typically slower than calibrated. This control provides a range somewhat greater than that of the calibrated steps, making any speed between the steps available.

Some higher-end analog oscilloscopes have a holdoff control. This sets a time after a trigger during which the sweep circuit cannot be triggered again. It helps provide a stable display of repetitive events in which some triggers would create confusing displays. It is usually set to minimum, because a longer time decreases the number of sweeps per second, resulting in a dimmer trace. Holdoff is described in more detail below.

To accommodate a wide range of input amplitudes, a switch selects calibrated sensitivity of the vertical deflection. Another control, often in front of the calibrated selector knob, offers a continuously variable sensitivity over a limited range, from calibrated to less-sensitive settings.

Often the observed signal is offset by a steady component, and only the changes are of interest. An input coupling switch in the "AC" position connects a capacitor in series with the input that blocks low-frequency signals and DC. However, when the signal has a fixed offset of interest, or changes slowly, the user will usually prefer "DC" coupling, which bypasses any such capacitor. Most oscilloscopes offer the DC input option. For convenience, to see where zero volts input currently shows on the screen, many oscilloscopes have a third switch position (usually labeled "GND" for ground) that disconnects the input and grounds it. Often, in this case, the user centers the trace with the vertical position control. Better oscilloscopes have a polarity selector. Normally, a positive input moves the trace upward; the polarity selector offers an "inverting" option, in which a positive-going signal deflects the trace downward.

The vertical position control moves the whole displayed trace up and down. It is used to set the no-input trace exactly on the center line of the graticule, but also permits offsetting vertically by a limited amount. With direct coupling, adjustment of this control can compensate for a limited DC component of an input.

A horizontal sensitivity control is found only on more elaborate oscilloscopes; it offers adjustable sensitivity for external horizontal inputs. It is only active when the instrument is in X-Y mode, i.e. when the internal horizontal sweep is turned off. The horizontal position control moves the display sideways. It usually sets the left end of the trace at the left edge of the graticule, but it can displace the whole trace when desired. This control also moves the X-Y mode traces sideways in some instruments, and can compensate for a limited DC component, as for vertical position.
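The AC-coupling capacitor described above forms a high-pass RC filter with the input resistance. A minimal sketch of the resulting low-frequency cutoff, assuming a hypothetical 0.1 µF coupling capacitor feeding the usual 1 megohm input:

```python
import math

# Sketch: low-frequency cutoff introduced by AC coupling. The 0.1 uF
# series capacitor is an assumed, illustrative value.
R_in = 1e6         # scope input resistance (ohms)
C_couple = 0.1e-6  # hypothetical series coupling capacitor (farads)

f_cutoff = 1.0 / (2 * math.pi * R_in * C_couple)  # -3 dB point of the RC high-pass
print(f"{f_cutoff:.2f} Hz")  # about 1.6 Hz: DC is blocked, faster changes pass
```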
Each input channel usually has its own set of sensitivity, coupling, and position controls, though some four-trace oscilloscopes have only minimal controls for their third and fourth channels. Dual-trace oscilloscopes have a mode switch to select either channel alone, both channels, or (in some) an X‑Y display, which uses the second channel for X deflection. When both channels are displayed, the type of channel switching can be selected on some oscilloscopes; on others, the type depends upon the timebase setting. If manually selectable, channel switching can be free-running (asynchronous) or between consecutive sweeps. Some Philips dual-trace analog oscilloscopes had a fast analog multiplier, and provided a display of the product of the input channels. Multiple-trace oscilloscopes have a switch for each channel to enable or disable display of that channel's trace.

Delayed-sweep controls include those for the delayed-sweep timebase, which is calibrated, and often also variable. The slowest speed is several steps faster than the slowest main sweep speed, though the fastest is generally the same. A calibrated multi-turn delay-time control offers wide-range, high-resolution delay settings; it spans the full duration of the main sweep, and its reading corresponds to graticule divisions (but with much finer precision). Its accuracy is also superior to that of the display. A switch selects display modes: main sweep only (with a brightened region showing when the delayed sweep is advancing), delayed sweep only, or (on some) a combination mode. Good CRT oscilloscopes include a delayed-sweep intensity control, to allow for the dimmer trace of a much faster delayed sweep which nevertheless occurs only once per main sweep. Such oscilloscopes also are likely to have a trace-separation control for multiplexed display of both the main and delayed sweeps together.

A switch selects the trigger source. It can be an external input, one of the vertical channels of a dual- or multiple-trace oscilloscope, or the AC line (mains) frequency. Another switch enables or disables auto-trigger mode, or selects single sweep, if provided in the oscilloscope. Either a spring-return switch position or a pushbutton arms single sweeps. A trigger-level control varies the voltage required to generate a trigger, and the slope switch selects positive-going or negative-going polarity at the selected trigger level.

To display events with unchanging or slowly (visibly) changing waveforms, but occurring at times that may not be evenly spaced, modern oscilloscopes have triggered sweeps. Compared to older, simpler oscilloscopes with continuously running sweep oscillators, triggered-sweep oscilloscopes are markedly more versatile. A triggered sweep starts at a selected point on the signal, providing a stable display. In this way, triggering allows the display of periodic signals such as sine waves and square waves, as well as nonperiodic signals such as single pulses, or pulses that do not recur at a fixed rate. With triggered sweeps, the scope blanks the beam and starts to reset the sweep circuit each time the beam reaches the extreme right side of the screen. For a period of time, called holdoff (extendable by a front-panel control on some better oscilloscopes), the sweep circuit resets completely and ignores triggers. Once holdoff expires, the next trigger starts a sweep.
The trigger event is usually the input waveform reaching some user-specified threshold voltage (trigger level) in the specified direction (going positive or going negative—trigger polarity). In some cases, a variable holdoff time can be useful to make the sweep ignore interfering triggers that occur before the events to be observed. In the case of repetitive but complex waveforms, variable holdoff can provide a stable display that could not otherwise be achieved.

Trigger holdoff defines a certain period following a trigger during which the sweep cannot be triggered again. This makes it easier to establish a stable view of a waveform with multiple edges, which would otherwise cause additional triggers.[11] Imagine a repeating waveform in which each cycle crosses the trigger level on three separate rising edges. If the scope were simply set to trigger on every rising edge, this waveform would cause three triggers per cycle and, assuming the signal is of fairly high frequency, the display would show several superimposed, offset copies of the waveform. (On an actual scope, each trigger would be on the same channel, so all would be the same color.) It is desirable for the scope to trigger on only one edge per cycle, so it is necessary to set the holdoff to slightly less than the period of the waveform. This prevents triggering from occurring more than once per cycle, but still lets it trigger on the first edge of the next cycle; a simulation of this behavior appears below.

Triggered sweeps can display a blank screen if there are no triggers. To avoid this, these sweeps include a timing circuit that generates free-running triggers so a trace is always visible. This is referred to as "auto sweep" or "automatic sweep" in the controls. Once triggers arrive, the timer stops providing pseudo-triggers. The user will usually disable automatic sweep when observing low repetition rates. If the input signal is periodic, the sweep repetition rate can be adjusted to display a few cycles of the waveform.

Early (tube) oscilloscopes and the lowest-cost oscilloscopes have sweep oscillators that run continuously and are uncalibrated. Such oscilloscopes are very simple, comparatively inexpensive, and were useful in radio servicing and some TV servicing. Measuring voltage or time is possible, but only with extra equipment, and is quite inconvenient. They are primarily qualitative instruments. They have a few (widely spaced) frequency ranges, and relatively wide-range continuous frequency control within a given range. In use, the sweep frequency is set to slightly lower than some submultiple of the input frequency, to display typically at least two cycles of the input signal (so all details are visible). A very simple control feeds an adjustable amount of the vertical signal (or possibly, a related external signal) to the sweep oscillator. The signal triggers beam blanking and a sweep retrace sooner than they would occur free-running, and the display becomes stable.

Some oscilloscopes offer single sweeps: the user manually arms the sweep circuit (typically by a pushbutton or equivalent). "Armed" means it is ready to respond to a trigger. Once the sweep completes, it resets, and does not sweep again until re-armed. This mode, combined with an oscilloscope camera, captures single-shot events. Common types of trigger include edge triggering at a set level and slope, triggering from an external signal, and triggering from the AC line; some recent designs of oscilloscopes include more sophisticated triggering schemes, described toward the end of this article.
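A minimal simulation of the holdoff behavior just described, with illustrative timings: three rising edges per cycle and a holdoff set just under the period.

```python
# Sketch: how trigger holdoff yields one trigger per cycle of a waveform
# with three rising edges per period (all timings are illustrative).
period = 10.0                      # waveform period, arbitrary units
edges_in_cycle = [0.0, 2.0, 3.5]   # rising-edge times within one cycle
edge_times = [n * period + e for n in range(4) for e in edges_in_cycle]

def triggers(edge_times, holdoff):
    accepted, ready_at = [], 0.0
    for t in edge_times:
        if t >= ready_at:           # sweep circuit has re-armed
            accepted.append(t)
            ready_at = t + holdoff  # ignore edges during the holdoff interval
    return accepted

print(triggers(edge_times, holdoff=0.0))  # every edge triggers: unstable display
print(triggers(edge_times, holdoff=9.0))  # just under the period: one per cycle
```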
More sophisticated analog oscilloscopes contain a second timebase for a delayed sweep. A delayed sweep provides a very detailed look at some small selected portion of the main timebase. The main timebase serves as a controllable delay, after which the delayed timebase starts. This can start when the delay expires, or can be triggered (only) after the delay expires. Ordinarily, the delayed timebase is set for a faster sweep, sometimes much faster, such as 1000:1. At extreme ratios, jitter in the delays on consecutive main sweeps degrades the display, but delayed-sweep triggers can overcome this.

The display shows the vertical signal in one of several modes: the main timebase, the delayed timebase only, or a combination thereof. When the delayed sweep is active, the main sweep trace brightens while the delayed sweep is advancing. In one combination mode, provided only on some oscilloscopes, the trace changes from the main sweep to the delayed sweep once the delayed sweep starts, though less of the delayed fast sweep is visible for longer delays. Another combination mode multiplexes (alternates) the main and delayed sweeps so that both appear at once; a trace-separation control displaces them. DSOs can display waveforms this way, without offering a delayed timebase as such.

Oscilloscopes with two vertical inputs, referred to as dual-trace oscilloscopes, are extremely useful and commonplace. Using a single-beam CRT, they multiplex the inputs, usually switching between them fast enough to display two traces apparently at once. Less common are oscilloscopes with more traces; four inputs are common among these, but a few (Kikusui, for one) offered a display of the sweep trigger signal if desired. Some multi-trace oscilloscopes use the external trigger input as an optional vertical input, and some have third and fourth channels with only minimal controls. In all cases, the inputs, when independently displayed, are time-multiplexed, but dual-trace oscilloscopes often can add their inputs to display a real-time analog sum. (Inverting one channel while adding them together results in a display of the differences between them, provided neither channel is overloaded. This difference mode can provide a moderate-performance differential input.)

Switching channels can be asynchronous, i.e. free-running, with respect to the sweep frequency, or it can be done after each horizontal sweep is complete. Asynchronous switching is usually designated "chopped", while sweep-synchronized switching is designated "alt[ernate]". In chopped mode a given channel is alternately connected and disconnected, leading to the term "chopped". Multi-trace oscilloscopes also switch channels either in chopped or alternate modes. In general, chopped mode is better for slower sweeps. It is possible for the internal chopping rate to be a multiple of the sweep repetition rate, creating blanks in the traces, but in practice this is rarely a problem; the gaps in one trace are overwritten by traces of the following sweep. A few oscilloscopes had a modulated chopping rate to avoid this occasional problem. Alternate mode, however, is better for faster sweeps.

True dual-beam CRT oscilloscopes did exist, but were not common. One type (Cossor, U.K.) had a beam-splitter plate in its CRT, and single-ended deflection following the splitter. Others had two complete electron guns, requiring tight control of axial (rotational) mechanical alignment in manufacturing the CRT.
Beam-splitter types had horizontal deflection common to both vertical channels, but dual-gun oscilloscopes could have separate timebases, or use one timebase for both channels. Multiple-gun CRTs (up to ten guns) were made in past decades. With ten guns, the envelope (bulb) was cylindrical throughout its length. (Also see "CRT Invention" in Oscilloscope history.)

In an analog oscilloscope, the vertical amplifier acquires the signal[s] to be displayed and provides a signal large enough to deflect the CRT's beam. In better oscilloscopes, it delays the signal by a fraction of a microsecond. The maximum deflection is at least somewhat beyond the edges of the graticule, and more typically some distance off-screen. The amplifier has to have low distortion to display its input accurately (it must be linear), and it has to recover quickly from overloads. Its time-domain response must also represent transients accurately, with minimal overshoot, rounding, and tilt of a flat pulse top.

A vertical input goes to a frequency-compensated step attenuator to reduce large signals and prevent overload. The attenuator feeds one or more low-level stages, which in turn feed gain stages (and a delay-line driver if there is a delay). Subsequent gain stages lead to the final output stage, which develops a large signal swing (tens of volts, sometimes over 100 volts) for CRT electrostatic deflection. In dual- and multiple-trace oscilloscopes, an internal electronic switch selects the relatively low-level output of one channel's early-stage amplifier and sends it to the following stages of the vertical amplifier. In free-running ("chopped") mode, the oscillator (which may be simply a different operating mode of the switch driver) blanks the beam before switching, and unblanks it only after the switching transients have settled.

Part way through the amplifier is a feed to the sweep trigger circuits, for internal triggering from the signal. This feed would be from an individual channel's amplifier in a dual- or multi-trace oscilloscope, the channel depending upon the setting of the trigger-source selector. This feed precedes the delay (if there is one), which allows the sweep circuit to unblank the CRT and start the forward sweep, so the CRT can show the triggering event. High-quality analog delays add a modest cost to an oscilloscope, and are omitted in cost-sensitive oscilloscopes. The delay itself comes from a special cable with a pair of conductors wound around a flexible, magnetically soft core. The coiling provides distributed inductance, while a conductive layer close to the wires provides distributed capacitance. The combination is a wideband transmission line with considerable delay per unit length. Both ends of the delay cable require matched impedances to avoid reflections.

Most modern oscilloscopes have several inputs for voltages, and thus can be used to plot one varying voltage versus another. This is especially useful for graphing I-V curves (current versus voltage characteristics) for components such as diodes, as well as Lissajous figures. Lissajous figures are an example of how an oscilloscope can be used to track phase differences between multiple input signals. This is very frequently used in broadcast engineering to plot the left and right stereophonic channels, to ensure that the stereo generator is calibrated properly. Historically, stable Lissajous figures were used to show that two sine waves had a relatively simple frequency relationship, a numerically small ratio; a sketch of such a figure follows.
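A minimal sketch of the X-Y idea: the points an X-Y display would trace for an assumed 3:2 frequency ratio between the channels (matplotlib is used purely for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

# Sketch: a Lissajous figure as seen in X-Y mode, for an assumed 3:2
# frequency ratio between the two channels and an arbitrary phase offset.
t = np.linspace(0, 2 * np.pi, 2000)
x = np.sin(3 * t)              # signal fed to the horizontal input
y = np.sin(2 * t + np.pi / 2)  # signal fed to the vertical input

plt.plot(x, y)                 # a stationary closed curve implies a simple ratio
plt.gca().set_aspect("equal")
plt.savefig("lissajous.png")
```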
Stable Lissajous figures also indicated the phase difference between two sine waves of the same frequency. The X-Y mode also lets the oscilloscope serve as a vector monitor to display images or user interfaces. Many early games, such as Tennis for Two, used an oscilloscope as an output device.[12]

Complete loss of signal in an X-Y CRT display means that the beam is stationary, striking a small spot. This risks burning the phosphor if the brightness is too high. Such damage was more common in older scopes, as the phosphors previously used burned more easily. Some dedicated X-Y displays reduce beam current greatly, or blank the display entirely, if there are no inputs present.

Some analog oscilloscopes feature a Z input. This is generally an input terminal that connects directly to the CRT grid (usually via a coupling capacitor). This allows an external signal to either increase (if positive) or decrease (if negative) the brightness of the trace, even allowing it to be totally blanked. The voltage range from cut-off to a brightened display is on the order of 10–20 volts, depending on the CRT characteristics. A practical application: if a pair of sine waves of known frequency is used to generate a circular Lissajous figure and a higher unknown frequency is applied to the Z input, the continuous circle becomes a circle of dots. The number of dots multiplied by the X-Y frequency gives the Z frequency. This technique only works if the Z frequency is an integer multiple of the X-Y frequency, and only if it is not so large that the dots become too numerous to count.

As with all practical instruments, oscilloscopes do not respond equally to all possible input frequencies. The range of sinusoid frequencies an oscilloscope can usefully display is referred to as its bandwidth. Bandwidth applies primarily to the Y-axis, though the X-axis sweeps must be fast enough to show the highest-frequency waveforms. The bandwidth is defined as the frequency at which the sensitivity is 0.707 of the sensitivity at DC or at the lowest AC frequency (a drop of 3 dB).[13] The oscilloscope's response drops off rapidly as the input frequency rises above that point. Within the stated bandwidth the response is not necessarily exactly uniform (or "flat"), but should always fall within a +0 to −3 dB range. One source[13] says there is a noticeable effect on the accuracy of voltage measurements at only 20 percent of the stated bandwidth. Some oscilloscopes' specifications do include a narrower tolerance range within the stated bandwidth.

Probes also have bandwidth limits and must be chosen and used to handle the frequencies of interest properly. To achieve the flattest response, most probes must be "compensated" (an adjustment performed using a test signal from the oscilloscope) to allow for the reactance of the probe's cable.

Another related specification is rise time. This is the time taken between 10% and 90% of the maximum amplitude response at the leading edge of a pulse. It is related to the bandwidth approximately by: bandwidth in Hz × rise time in seconds = 0.35.[14] For example, an oscilloscope with a rise time of 1 nanosecond would have a bandwidth of 350 MHz. In analog instruments, the bandwidth of the oscilloscope is limited by the vertical amplifiers and the CRT or other display subsystem.
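The 0.35 rule of thumb quoted above is straightforward to apply; a brief sketch:

```python
# Sketch: the rule-of-thumb relation bandwidth x rise time ~ 0.35.
def bandwidth_hz(rise_time_s: float) -> float:
    return 0.35 / rise_time_s

print(f"{bandwidth_hz(1e-9) / 1e6:.0f} MHz")    # 1 ns rise time  -> 350 MHz
print(f"{bandwidth_hz(3.5e-9) / 1e6:.0f} MHz")  # 3.5 ns rise time -> 100 MHz
```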
In digital instruments, the sampling rate of the analog-to-digital converter (ADC) is a factor, but the stated analog bandwidth (and therefore the overall bandwidth of the instrument) is usually less than the ADC's Nyquist frequency. This is due to limitations in the analog signal amplifier, deliberate design of the anti-aliasing filter that precedes the ADC, or both. For a digital oscilloscope, a rule of thumb is that the continuous sampling rate should be ten times the highest frequency desired to resolve; for example, a 20 megasample/second rate would be applicable for measuring signals up to about 2 MHz. This lets the anti-aliasing filter be designed with a 3 dB down point of 2 MHz and an effective cutoff at 10 MHz (the Nyquist frequency), avoiding the artifacts of a very steep ("brick-wall") filter.

A sampling oscilloscope can display signals of considerably higher frequency than the sampling rate if the signals are exactly, or nearly, repetitive. It does this by taking one sample from each successive repetition of the input waveform, each sample being at an increased time interval from the trigger event. The waveform is then displayed from these collected samples. This mechanism is referred to as "equivalent-time sampling".[15] Some oscilloscopes can operate in either this mode or in the more traditional "real-time" mode at the operator's choice.

For digital oscilloscopes, the waveform interval is defined as the time interval between adjacent points of a displayed waveform, while the sampling interval is defined as the time interval between adjacent gathered samples (= 1 / sampling frequency); the waveform interval is usually longer than the sample interval.[16] In other words, the displayed waveform is an aggregation of the gathered samples (e.g., each displayed point is the average over each waveform interval).

Some oscilloscopes have cursors. These are lines that can be moved about the screen to measure the time interval between two points, or the difference between two voltages. A few older oscilloscopes simply brightened the trace at movable locations. Cursors are more accurate than visual estimates referring to graticule lines.[17][18]

Better-quality general-purpose oscilloscopes include a calibration signal for setting up the compensation of test probes; this is (often) a 1 kHz square-wave signal of a definite peak-to-peak voltage available at a test terminal on the front panel. Some better oscilloscopes also have a squared-off loop for checking and adjusting current probes.

Sometimes a user wants to see an event that happens only occasionally. To catch these events, some oscilloscopes—called storage scopes—preserve the most recent sweep on the screen. This was originally achieved with a special CRT, a storage tube, which retained the image of even a very brief event for a long time.

Some digital oscilloscopes can sweep at speeds as slow as once per hour, emulating a strip-chart recorder: the signal scrolls across the screen from right to left. Most oscilloscopes with this facility switch from a sweep to a strip-chart mode at about one sweep per ten seconds. This is because otherwise the scope looks broken: it is collecting data, but the dot cannot be seen.

All but the simplest models of current oscilloscopes use digital signal sampling. Samples feed fast analog-to-digital converters, following which all signal processing (and storage) is digital.
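A numeric sketch of the equivalent-time sampling mechanism described above: a slow sampler walks across a fast but exactly repetitive signal by adding a small extra delay on each repetition (all values are illustrative):

```python
import numpy as np

# Sketch of equivalent-time sampling: take one sample per repetition of a
# repetitive waveform, each sample delayed a little more after the trigger.
f_signal = 100e6   # 100 MHz repetitive input
f_sample = 1e6     # the sampler itself runs at only 1 MS/s
step = 50e-12      # extra delay added per repetition (50 ps)

n = np.arange(200)
sample_times = n / f_sample + n * step   # trigger-to-sample delay grows each time
samples = np.sin(2 * np.pi * f_signal * sample_times)

# Because the signal repeats exactly, these samples trace out one full period
# as if it had been sampled at an effective 1/step = 20 GS/s.
equivalent_time = n * step
print(equivalent_time[-1], samples[:5])
```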
Many oscilloscopes accommodate plug-in modules for different purposes, e.g., high-sensitivity amplifiers of relatively narrow bandwidth, differential amplifiers, amplifiers with four or more channels, sampling plug-ins for repetitive signals of very high frequency, and special-purpose plug-ins, including audio/ultrasonic spectrum analyzers and stable-offset-voltage direct-coupled channels with relatively high gain.

One of the most frequent uses of scopes is troubleshooting malfunctioning electronic equipment. For example, where a voltmeter may show a totally unexpected voltage, a scope may reveal that the circuit is oscillating. In other cases the precise shape or timing of a pulse is important. In a piece of electronic equipment, for example, the connections between stages (e.g., electronic mixers, electronic oscillators, amplifiers) may be "probed" for the expected signal, using the scope as a simple signal tracer. If the expected signal is absent or incorrect, some preceding stage of the electronics is not operating correctly. Since most failures occur because of a single faulty component, each measurement can show that some stages of a complex piece of equipment either work, or probably did not cause the fault. Once the faulty stage is found, further probing can usually tell a skilled technician exactly which component has failed. Once the component is replaced, the unit can be restored to service, or at least the next fault can be isolated. This sort of troubleshooting is typical of radio and TV receivers, as well as audio amplifiers, but can apply to quite different devices such as electronic motor drives.

Another use is to check newly designed circuitry. Often, a newly designed circuit misbehaves because of design errors, bad voltage levels, electrical noise, etc. Digital electronics usually operate from a clock, so a dual-trace scope showing both the clock signal and a test signal dependent upon the clock is useful. Storage scopes are helpful for "capturing" rare electronic events that cause defective operation. Oscilloscopes are often used during real-time software development to check, among other things, missed deadlines and worst-case latencies.[19]

First appearing in the 1970s for ignition system analysis, automotive oscilloscopes are becoming an important workshop tool for testing sensors and output signals on electronic engine management systems, and on braking and stability systems. Some oscilloscopes can trigger on and decode serial bus messages, such as the CAN bus commonly used in automotive applications.

Many oscilloscopes today provide one or more external interfaces to allow remote instrument control by external software. These interfaces (or buses) include GPIB, Ethernet, serial port, USB and Wi-Fi.

The following is a brief summary of the various types and models available; for a detailed discussion, refer to the dedicated articles on each type. The earliest and simplest type of oscilloscope consisted of a CRT, a vertical amplifier, a timebase, a horizontal amplifier and a power supply. These are now called "analog" scopes to distinguish them from the "digital" scopes that became common in the 1990s and later. Analog scopes do not necessarily include a calibrated reference grid for size measurement of waves, and they may not display waves in the traditional sense of a line segment sweeping from left to right. Instead, they could be used for signal analysis by feeding a reference signal into one axis and the signal to measure into the other axis.
For an oscillating reference and measurement signal, this results in a complex looping pattern referred to as a Lissajous figure. The shape of the curve can be interpreted to identify properties of the measurement signal in relation to the reference signal, and is useful across a wide range of oscillation frequencies.

The dual-beam analog oscilloscope can display two signals simultaneously. A special dual-beam CRT generates and deflects two separate beams. Multi-trace analog oscilloscopes can simulate a dual-beam display with chop and alternate sweeps, but those features do not provide truly simultaneous displays. (Real-time digital oscilloscopes offer the same benefits as a dual-beam oscilloscope, but they do not require a dual-beam display.) The disadvantages of the dual-trace oscilloscope are that it cannot switch quickly between traces and cannot capture two fast transient events; a dual-beam oscilloscope avoids those problems.

Trace storage is an extra feature available on some analog scopes, which used direct-view storage CRTs. Storage allows a trace pattern that normally would decay in a fraction of a second to remain on the screen for several minutes or longer. An electrical circuit can then be deliberately activated to store and erase the trace on the screen.

While analog devices use continually varying voltages, digital devices use numbers that correspond to samples of the voltage. In the case of digital oscilloscopes, an analog-to-digital converter (ADC) changes the measured voltages into digital information. The digital storage oscilloscope, or DSO for short, is the standard type of oscilloscope today for the majority of industrial applications and, thanks to the low costs of entry-level oscilloscopes, even for hobbyists. It replaces the electrostatic storage method of analog storage scopes with digital memory, which stores sample data as long as required without degradation and displays it without the brightness issues of storage-type CRTs. It also allows complex processing of the signal by high-speed digital signal processing circuits.[1]

A standard DSO is limited to capturing signals with a bandwidth of less than half the sampling rate of the ADC (called the Nyquist limit). There is a variation of the DSO called the digital sampling oscilloscope which can exceed this limit for certain types of signal, such as high-speed communications signals, where the waveform consists of repeating pulses. This type of DSO deliberately samples at a much lower frequency than the Nyquist limit and then uses signal processing to reconstruct a composite view of a typical pulse.[20]

A logic analyzer is similar to an oscilloscope, but for each input signal it only provides the logic level, without the shape of its analog waveform. A mixed-signal oscilloscope (or MSO), meanwhile, has two kinds of inputs: a small number of analog channels (typically two or four), and a larger number of logic channels (typically sixteen). It provides the ability to accurately time-correlate analog and logic signals, thus offering a distinct advantage over a separate oscilloscope and logic analyzer. Typically, logic channels may be grouped and displayed as a bus, with each bus value displayed at the bottom of the display in hexadecimal or binary. On most MSOs, the trigger can be set across both analog and logic channels.

A mixed-domain oscilloscope (MDO) is an oscilloscope that comes with an additional RF input which is solely used for dedicated FFT-based spectrum analyzer functionality.
Often, this RF input offers a higher bandwidth than the conventional analog input channels. This is in contrast to the FFT functionality of conventional digital oscilloscopes, which use the normal analog inputs. Some MDOs allow time-correlation of events in the time domain (such as a specific serial data package) with events happening in the frequency domain (such as RF transmissions).

Handheld oscilloscopes are useful for many test and field service applications. Today, a handheld oscilloscope is usually a digital storage oscilloscope with a liquid-crystal display.

Many handheld and bench oscilloscopes have the ground reference voltage common to all input channels. If more than one measurement channel is used at the same time, all the input signals must have the same voltage reference, and the shared default reference is the earth. If there is no differential preamplifier or external signal isolator, a traditional desktop oscilloscope is not suitable for floating measurements. (Occasionally an oscilloscope user breaks the ground pin in the power supply cord of a bench-top oscilloscope in an attempt to isolate the signal common from the earth ground. This practice is unreliable, since the entire stray capacitance of the instrument cabinet connects into the circuit. Breaking a safety ground connection is also a hazard, and instruction manuals strongly advise against it.)

Some models of oscilloscope have isolated inputs, where the signal reference level terminals are not connected together. Each input channel can be used to make a "floating" measurement with an independent signal reference level. Measurements can be made without tying one side of the oscilloscope input to the circuit signal common or ground reference. The isolation available varies by model and is specified by the manufacturer.

Some digital oscilloscopes rely on a PC platform for display and control of the instrument. This can be in the form of a standalone oscilloscope with an internal PC platform (PC mainboard), or as an external oscilloscope which connects through USB or LAN to a separate PC or laptop.

A large number of instruments used in a variety of technical fields are really oscilloscopes with inputs, calibration, controls, display calibration, etc., specialized and optimized for a particular application. Examples of such oscilloscope-based instruments include waveform monitors for analyzing video levels in television productions and medical devices such as vital-function monitors and electrocardiogram and electroencephalogram instruments. In automobile repair, an ignition analyzer is used to show the spark waveforms for each cylinder. All of these are essentially oscilloscopes, performing the basic task of showing the changes in one or more input signals over time in an X‑Y display.

Other instruments convert the results of their measurements to a repetitive electrical signal, and incorporate an oscilloscope as a display element. Such complex measurement systems include spectrum analyzers, transistor analyzers, and time-domain reflectometers (TDRs). Unlike an oscilloscope, these instruments automatically generate stimulus or sweep a measurement parameter.
https://en.wikipedia.org/wiki/Oscilloscope
In physics and engineering, a phasor (a portmanteau of phase vector[1][2]) is a complex number representing a sinusoidal function whose amplitude A and initial phase θ are time-invariant and whose angular frequency ω is fixed. It is related to a more general concept called analytic representation,[3] which decomposes a sinusoid into the product of a complex constant and a factor depending on time and frequency. The complex constant, which depends on amplitude and phase, is known as a phasor, or complex amplitude,[4][5] and (in older texts) sinor[6] or even complexor.[6]

A common application is in the steady-state analysis of an electrical network powered by time-varying current, where all signals are assumed to be sinusoidal with a common frequency. Phasor representation allows the analyst to represent the amplitude and phase of the signal using a single complex number. The only difference in the analytic representations of such signals is their complex amplitude (phasor). A linear combination of such functions can be represented as a linear combination of phasors (known as phasor arithmetic or phasor algebra[7]: 53) multiplied by the time/frequency-dependent factor that they all have in common.

The origin of the term phasor rightfully suggests that a (diagrammatic) calculus somewhat similar to that possible for vectors is possible for phasors as well.[6] An important additional feature of the phasor transform is that differentiation and integration of sinusoidal signals (having constant amplitude, period and phase) correspond to simple algebraic operations on the phasors; the phasor transform thus allows the analysis (calculation) of the AC steady state of RLC circuits by solving simple algebraic equations (albeit with complex coefficients) in the phasor domain instead of solving differential equations (with real coefficients) in the time domain.[8][9][a] The originator of the phasor transform was Charles Proteus Steinmetz, working at General Electric in the late 19th century.[10][11] He got his inspiration from Oliver Heaviside: Heaviside's operational calculus was modified so that the variable p becomes jω. The complex number j has a simple meaning: a phase shift.[12]

Glossing over some mathematical details, the phasor transform can also be seen as a particular case of the Laplace transform (limited to a single frequency), which, in contrast to phasor representation, can be used to (simultaneously) derive the transient response of an RLC circuit.[9][11] However, the Laplace transform is mathematically more difficult to apply and the effort may be unjustified if only steady-state analysis is required.[11]

Phasor notation (also known as angle notation) is a mathematical notation used in electronics engineering and electrical engineering. A vector whose polar coordinates are magnitude $A$ and angle $\theta$ is written $A\angle\theta$.[13] $1\angle\theta$ can represent either the vector $(\cos\theta,\,\sin\theta)$ or the complex number $\cos\theta + i\sin\theta = e^{i\theta}$, according to Euler's formula with $i^2 = -1$, both of which have magnitudes of 1. The angle may be stated in degrees with an implied conversion from degrees to radians. For example, $1\angle 90$ would be assumed to be $1\angle 90^\circ$, which is the vector $(0,\,1)$ or the number $e^{i\pi/2} = i$. Multiplication and division of complex numbers become straightforward through phasor notation.
Given the vectors $v_1 = A_1\angle\theta_1$ and $v_2 = A_2\angle\theta_2$, the following is true:[14]

$$v_1 v_2 = A_1 A_2 \angle(\theta_1 + \theta_2), \qquad \frac{v_1}{v_2} = \frac{A_1}{A_2}\angle(\theta_1 - \theta_2).$$

A real-valued sinusoid with constant amplitude, frequency, and phase has the form

$$A\cos(\omega t + \theta),$$

where only the parameter $t$ is time-variant. The inclusion of an imaginary component,

$$i\,A\sin(\omega t + \theta),$$

gives it, in accordance with Euler's formula, the factoring property described in the lead paragraph:

$$Ae^{i(\omega t + \theta)} = Ae^{i\theta}\cdot e^{i\omega t},$$

whose real part is the original sinusoid. The benefit of the complex representation is that linear operations with other complex representations produce a complex result whose real part reflects the same linear operations with the real parts of the other complex sinusoids. Furthermore, all the mathematics can be done with just the phasors $Ae^{i\theta}$, and the common factor $e^{i\omega t}$ is reinserted prior to taking the real part of the result.

The function $Ae^{i(\omega t + \theta)}$ is an analytic representation of $A\cos(\omega t + \theta)$; it can be depicted as a vector rotating in the complex plane. It is sometimes convenient to refer to the entire function as a phasor,[15] as we do in the next section.

Multiplication of the phasor $Ae^{i\theta}e^{i\omega t}$ by a complex constant $Be^{i\phi}$ produces another phasor. That means its only effect is to change the amplitude and phase of the underlying sinusoid:

$$\operatorname{Re}\left(\left(Ae^{i\theta}\cdot Be^{i\phi}\right)\cdot e^{i\omega t}\right) = \operatorname{Re}\left(\left(ABe^{i(\theta+\phi)}\right)\cdot e^{i\omega t}\right) = AB\cos(\omega t + (\theta + \phi)).$$

In electronics, $Be^{i\phi}$ would represent an impedance, which is independent of time. In particular it is not the shorthand notation for another phasor. Multiplying a phasor current by an impedance produces a phasor voltage. But the product of two phasors (or squaring a phasor) would represent the product of two sinusoids, which is a non-linear operation that produces new frequency components. Phasor notation can only represent systems with one frequency, such as a linear system stimulated by a sinusoid. The sum of multiple phasors produces another phasor.
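A minimal numeric illustration of the phasor rules above, using Python complex numbers (the helper function and values are illustrative): multiplication scales the amplitude and adds the angles, exactly as when an impedance multiplies a phasor current.

```python
import cmath
import math

# Sketch: phasor arithmetic with Python complex numbers. A∠θ is simply
# the complex number A * exp(i*θ).
def phasor(amplitude, angle_deg):
    return cmath.rect(amplitude, math.radians(angle_deg))

v = phasor(2.0, 30.0)   # 2∠30°, e.g. a phasor current
z = phasor(4.0, 45.0)   # 4∠45°, e.g. an impedance (a constant, not a phasor)

product = v * z         # 8∠75°: amplitudes multiply, angles add
quotient = v / z        # 0.5∠-15°: amplitudes divide, angles subtract

for w in (product, quotient):
    print(f"{abs(w):.3f} at {math.degrees(cmath.phase(w)):.1f} degrees")
```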
The sum of phasors is another phasor because the sum of sinusoids with the same frequency is also a sinusoid with that frequency:

$$\begin{aligned}&A_{1}\cos(\omega t+\theta _{1})+A_{2}\cos(\omega t+\theta _{2})\\[3pt]={}&\operatorname {Re} \left(A_{1}e^{i\theta _{1}}e^{i\omega t}\right)+\operatorname {Re} \left(A_{2}e^{i\theta _{2}}e^{i\omega t}\right)\\[3pt]={}&\operatorname {Re} \left(A_{1}e^{i\theta _{1}}e^{i\omega t}+A_{2}e^{i\theta _{2}}e^{i\omega t}\right)\\[3pt]={}&\operatorname {Re} \left(\left(A_{1}e^{i\theta _{1}}+A_{2}e^{i\theta _{2}}\right)e^{i\omega t}\right)\\[3pt]={}&\operatorname {Re} \left(\left(A_{3}e^{i\theta _{3}}\right)e^{i\omega t}\right)\\[3pt]={}&A_{3}\cos(\omega t+\theta _{3}),\end{aligned}$$

where

$$A_{3}^{2}=(A_{1}\cos \theta _{1}+A_{2}\cos \theta _{2})^{2}+(A_{1}\sin \theta _{1}+A_{2}\sin \theta _{2})^{2},$$

and, if we take $\theta_3 \in \left[-\frac{\pi}{2}, \frac{3\pi}{2}\right]$, then $\theta_3$ is

$$\theta_3 = \arctan\!\left(\frac{A_1\sin\theta_1 + A_2\sin\theta_2}{A_1\cos\theta_1 + A_2\cos\theta_2}\right),$$

with $\pi$ added when the denominator is negative, so that $\theta_3$ falls in the stated interval; or, via the law of cosines on the complex plane (or the trigonometric identity for angle differences),

$$A_{3}^{2}=A_{1}^{2}+A_{2}^{2}-2A_{1}A_{2}\cos(180^{\circ }-\Delta \theta )=A_{1}^{2}+A_{2}^{2}+2A_{1}A_{2}\cos(\Delta \theta ),$$

where $\Delta\theta = \theta_1 - \theta_2$.

A key point is that $A_3$ and $\theta_3$ do not depend on $\omega$ or $t$, which is what makes phasor notation possible. The time and frequency dependence can be suppressed and re-inserted into the outcome as long as the only operations used in between are ones that produce another phasor. In angle notation, the operation shown above is written

$$A_1\angle\theta_1 + A_2\angle\theta_2 = A_3\angle\theta_3.$$

Another way to view addition is that two vectors with coordinates $[A_1\cos(\omega t+\theta_1),\,A_1\sin(\omega t+\theta_1)]$ and $[A_2\cos(\omega t+\theta_2),\,A_2\sin(\omega t+\theta_2)]$ are added vectorially to produce a resultant vector with coordinates $[A_3\cos(\omega t+\theta_3),\,A_3\sin(\omega t+\theta_3)]$.

In physics, this sort of addition occurs when sinusoids interfere with each other, constructively or destructively. The static vector concept provides useful insight into questions like this: "What phase difference would be required between three identical sinusoids for perfect cancellation?" In this case, simply imagine taking three vectors of equal length and placing them head to tail such that the last head matches up with the first tail. Clearly, the shape which satisfies these conditions is an equilateral triangle, so the angle between each phasor and the next is 120° (2π⁄3 radians), or one third of a wavelength, λ⁄3. So the phase difference between each wave must also be 120°, as is the case in three-phase power. In other words, this shows that

$$\cos(\omega t)+\cos \left(\omega t+{\frac {2\pi }{3}}\right)+\cos \left(\omega t-{\frac {2\pi }{3}}\right)=0.$$

In the example of three waves, the phase difference between the first and the last wave was 240°, while for two waves destructive interference happens at 180°. In the limit of many waves, the phasors must form a circle for destructive interference, so that the first phasor is nearly parallel with the last.
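A minimal check of the three-phase cancellation identity above:

```python
import cmath
import math

# Sketch: three unit phasors spaced 120 degrees apart sum to zero, the
# perfect-cancellation condition discussed above (three-phase power).
total = sum(cmath.rect(1.0, math.radians(a)) for a in (0.0, 120.0, -120.0))
print(abs(total))   # ~0, up to floating-point rounding
```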
In the limit of many sources, then, destructive interference happens when the first and last wave differ by 360 degrees, a full wavelength $\lambda$. This is why in single-slit diffraction, the minima occur when light from the far edge travels a full wavelength further than the light from the near edge.

As a single vector rotates in an anti-clockwise direction, its tip at point A rotates one complete revolution of 360° or 2π radians, representing one complete cycle. If the length of its moving tip is transferred at different angular intervals in time to a graph, a sinusoidal waveform is drawn, starting at the left with zero time. Each position along the horizontal axis indicates the time that has elapsed since zero time, t = 0. When the vector is horizontal, the tip of the vector represents the angles at 0°, 180°, and 360°. Likewise, when the tip of the vector is vertical, it represents the positive peak value (+Amax) at 90° or π⁄2 and the negative peak value (−Amax) at 270° or 3π⁄2. The time axis of the waveform then represents the angle, either in degrees or radians, through which the phasor has moved. So a phasor can be said to represent a scaled voltage or current value of a rotating vector which is "frozen" at some point in time, t.

Sometimes, when analysing alternating waveforms, we may need to know the position of the phasor representing the alternating quantity at some particular instant in time, especially when we want to compare two different waveforms on the same axis, for example voltage and current. We have assumed above that the waveform starts at time t = 0 with a corresponding phase angle in either degrees or radians. But if a second waveform starts to the left or to the right of this zero point, or if we want to represent in phasor notation the relationship between the two waveforms, then we need to take this phase difference, Φ, of the waveform into account.

The time derivative or integral of a phasor produces another phasor.[b] For example:

$$\begin{aligned}&\operatorname {Re} \left({\frac {\mathrm {d} }{\mathrm {d} t}}{\mathord {\left(Ae^{i\theta }\cdot e^{i\omega t}\right)}}\right)\\={}&\operatorname {Re} \left(Ae^{i\theta }\cdot i\omega e^{i\omega t}\right)\\={}&\operatorname {Re} \left(Ae^{i\theta }\cdot e^{i\pi /2}\omega e^{i\omega t}\right)\\={}&\operatorname {Re} \left(\omega Ae^{i(\theta +\pi /2)}\cdot e^{i\omega t}\right)\\={}&\omega A\cdot \cos \left(\omega t+\theta +{\frac {\pi }{2}}\right).\end{aligned}$$

Therefore, in phasor representation, the time derivative of a sinusoid becomes just multiplication by the constant $i\omega = e^{i\pi/2}\cdot\omega$. Similarly, integrating a phasor corresponds to multiplication by $\frac{1}{i\omega} = \frac{e^{-i\pi/2}}{\omega}$. The time-dependent factor, $e^{i\omega t}$, is unaffected. When we solve a linear differential equation with phasor arithmetic, we are merely factoring $e^{i\omega t}$ out of all terms of the equation, and reinserting it into the answer.
For example, consider the following differential equation for the voltage across the capacitor in an RC circuit:

$${\frac {\mathrm {d} \,v_{\text{C}}(t)}{\mathrm {d} t}}+{\frac {1}{RC}}v_{\text{C}}(t)={\frac {1}{RC}}v_{\text{S}}(t).$$

When the voltage source in this circuit is sinusoidal,

$$v_{\text{S}}(t)=V_{\text{P}}\cdot \cos(\omega t+\theta ),$$

we may substitute $v_{\text{S}}(t)=\operatorname{Re}\left(V_{\text{s}}\cdot e^{i\omega t}\right)$ and write the solution in the same form, $v_{\text{C}}(t)=\operatorname{Re}\left(V_{\text{c}}\cdot e^{i\omega t}\right)$, where phasor $V_{\text{s}} = V_{\text{P}}e^{i\theta}$, and phasor $V_{\text{c}}$ is the unknown quantity to be determined. In the phasor shorthand notation, the differential equation reduces to

$$i\omega V_{\text{c}}+{\frac {1}{RC}}V_{\text{c}}={\frac {1}{RC}}V_{\text{s}}.$$

To see why, note that with the substitutions above the original equation reads

$${\frac {\mathrm {d} }{\mathrm {d} t}}\operatorname {Re} \left(V_{\text{c}}\cdot e^{i\omega t}\right)+{\frac {1}{RC}}\operatorname {Re} \left(V_{\text{c}}\cdot e^{i\omega t}\right)={\frac {1}{RC}}\operatorname {Re} \left(V_{\text{s}}\cdot e^{i\omega t}\right).\tag{Eq.1}$$

Since this must hold for all $t$, specifically at $t-{\frac {\pi }{2\omega }}$, and since $e^{i\omega (t-\pi /2\omega )}=-i\,e^{i\omega t}$ with $\operatorname{Re}(-iz)=\operatorname{Im}(z)$, it follows that the same relation holds with the real parts replaced by imaginary parts:

$${\frac {\mathrm {d} }{\mathrm {d} t}}\operatorname {Im} \left(V_{\text{c}}\cdot e^{i\omega t}\right)+{\frac {1}{RC}}\operatorname {Im} \left(V_{\text{c}}\cdot e^{i\omega t}\right)={\frac {1}{RC}}\operatorname {Im} \left(V_{\text{s}}\cdot e^{i\omega t}\right).\tag{Eq.2}$$

It is also readily seen that

$$\begin{aligned}{\frac {\mathrm {d} }{\mathrm {d} t}}\operatorname {Re} \left(V_{\text{c}}\cdot e^{i\omega t}\right)&=\operatorname {Re} \left({\frac {\mathrm {d} }{\mathrm {d} t}}{\mathord {\left(V_{\text{c}}\cdot e^{i\omega t}\right)}}\right)=\operatorname {Re} \left(i\omega V_{\text{c}}\cdot e^{i\omega t}\right)\\{\frac {\mathrm {d} }{\mathrm {d} t}}\operatorname {Im} \left(V_{\text{c}}\cdot e^{i\omega t}\right)&=\operatorname {Im} \left({\frac {\mathrm {d} }{\mathrm {d} t}}{\mathord {\left(V_{\text{c}}\cdot e^{i\omega t}\right)}}\right)=\operatorname {Im} \left(i\omega V_{\text{c}}\cdot e^{i\omega t}\right).\end{aligned}$$

Substituting these into Eq.1 and Eq.2, multiplying Eq.2 by $i$, and adding both equations gives

$$\begin{aligned}i\omega V_{\text{c}}\cdot e^{i\omega t}+{\frac {1}{RC}}V_{\text{c}}\cdot e^{i\omega t}&={\frac {1}{RC}}V_{\text{s}}\cdot e^{i\omega t}\\\left(i\omega V_{\text{c}}+{\frac {1}{RC}}V_{\text{c}}\right)\!\cdot e^{i\omega t}&=\left({\frac {1}{RC}}V_{\text{s}}\right)\cdot e^{i\omega t}\\i\omega V_{\text{c}}+{\frac {1}{RC}}V_{\text{c}}&={\frac {1}{RC}}V_{\text{s}}.\end{aligned}$$

Solving for the phasor capacitor voltage gives

$$V_{\text{c}}={\frac {1}{1+i\omega RC}}\cdot V_{\text{s}}={\frac {1-i\omega RC}{1+(\omega RC)^{2}}}\cdot V_{\text{P}}e^{i\theta }.$$

As we have seen, the factor multiplying $V_{\text{s}}$ represents the differences of the amplitude and phase of $v_{\text{C}}(t)$ relative to $V_{\text{P}}$ and $\theta$. In polar coordinate form, the first term of the last expression is

$${\frac {1-i\omega RC}{1+(\omega RC)^{2}}}={\frac {1}{\sqrt {1+(\omega RC)^{2}}}}\cdot e^{-i\phi (\omega )},$$

where $\phi(\omega) = \arctan(\omega RC)$.
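A brief numeric cross-check of this result, with assumed component values (R = 1 kΩ, C = 1 µF, f = 500 Hz):

```python
import numpy as np

# Sketch: numeric check of the phasor solution above. Component values
# and drive frequency are assumed purely for illustration.
R, C = 1e3, 1e-6
omega = 2 * np.pi * 500.0
Vs = 1.0 * np.exp(1j * 0.0)        # Vp = 1 V, theta = 0

Vc = Vs / (1 + 1j * omega * R * C)   # phasor form of the result

gain = 1 / np.sqrt(1 + (omega * R * C) ** 2)  # predicted magnitude
phi = np.arctan(omega * R * C)                # predicted phase lag

print(abs(Vc), gain)        # both ~0.303
print(np.angle(Vc), -phi)   # both ~-1.263 rad
```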
Therefore, restoring the factor $e^{i\omega t}$ and taking the real part,

$$v_{\text{C}}(t)=\operatorname {Re} \left(V_{\text{c}}\cdot e^{i\omega t}\right)={\frac {1}{\sqrt {1+(\omega RC)^{2}}}}\cdot V_{\text{P}}\cos(\omega t+\theta -\phi (\omega )).$$

A quantity called complex impedance is the ratio of two phasors; it is not itself a phasor, because it does not correspond to a sinusoidally varying function. With phasors, the techniques for solving DC circuits can be applied to solve linear AC circuits.[a]

In an AC circuit we have real power (P), which is a representation of the average power into the circuit, and reactive power (Q), which indicates power flowing back and forth. We can also define the complex power S = P + jQ, and the apparent power, which is the magnitude of S. The power law for an AC circuit expressed in phasors is then S = VI* (where I* is the complex conjugate of I, and the magnitudes of the voltage and current phasors V and I are the RMS values of the voltage and current, respectively). Given this, we can apply the techniques of analysis of resistive circuits with phasors to analyze single-frequency linear AC circuits containing resistors, capacitors, and inductors. Multiple-frequency linear AC circuits and AC circuits with different waveforms can be analyzed to find voltages and currents by transforming all waveforms to sine-wave components (using Fourier series) with magnitude and phase, then analyzing each frequency separately, as allowed by the superposition theorem. This solution method applies only to inputs that are sinusoidal and for solutions that are in steady state, i.e., after all transients have died out.[16]

The concept is frequently involved in representing an electrical impedance. In this case, the phase angle is the phase difference between the voltage applied to the impedance and the current driven through it.

In analysis of three-phase AC power systems, usually a set of phasors is defined as the three complex cube roots of unity, graphically represented as unit magnitudes at angles of 0, 120 and 240 degrees. By treating polyphase AC circuit quantities as phasors, balanced circuits can be simplified and unbalanced circuits can be treated as an algebraic combination of symmetrical components. This approach greatly simplifies the work required in electrical calculations of voltage drop, power flow, and short-circuit currents. In the context of power systems analysis, the phase angle is often given in degrees, and the magnitude in RMS value rather than the peak amplitude of the sinusoid. The technique of synchrophasors uses digital instruments to measure the phasors representing transmission-system voltages at widespread points in a transmission network. Differences among the phasors indicate power flow and system stability.

The rotating-frame picture using phasors can be a powerful tool for understanding analog modulations such as amplitude modulation (and its variants[17]) and frequency modulation. Consider the carrier

$$x(t)=\operatorname {Re} \left(Ae^{i\theta }\cdot e^{i2\pi f_{0}t}\right),$$

where the term in brackets is viewed as a rotating vector in the complex plane. The phasor has length $A$, rotates anti-clockwise at a rate of $f_0$ revolutions per second, and at time $t = 0$ makes an angle of $\theta$ with respect to the positive real axis. The waveform $x(t)$ can then be viewed as a projection of this vector onto the real axis.
A modulated waveform is represented by such a carrier phasor together with two additional phasors (the modulation phasors). If the modulating signal is a single tone of the form $Am\cos(2\pi f_m t)$, where $m$ is the modulation depth and $f_m$ is the frequency of the modulating signal, then for amplitude modulation the two modulation phasors are given by $\tfrac{1}{2}Am\,e^{i\theta}\cdot e^{i2\pi(f_0+f_m)t}$ and $\tfrac{1}{2}Am\,e^{i\theta}\cdot e^{i2\pi(f_0-f_m)t}.$ The two modulation phasors are phased such that their vector sum is always in phase with the carrier phasor. An alternative representation is two phasors counter-rotating around the end of the carrier phasor at a rate $f_m$ relative to the carrier phasor, that is, $\tfrac{1}{2}Am\,e^{i\theta}\cdot e^{i2\pi f_m t}$ and $\tfrac{1}{2}Am\,e^{i\theta}\cdot e^{-i2\pi f_m t}.$ Frequency modulation is a similar representation, except that the modulating phasors are not in phase with the carrier. In this case the vector sum of the modulating phasors is shifted 90° from the carrier phase. Strictly, frequency modulation requires additional small modulation phasors at $2f_m, 3f_m$, etc., but for most practical purposes these are ignored because their effect is very small.
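The sideband decomposition can be verified numerically: the real part of the carrier phasor plus the two modulation phasors reproduces the textbook AM waveform $A(1 + m\cos 2\pi f_m t)\cos(2\pi f_0 t + \theta)$. The parameters below are assumed for illustration.

```python
import numpy as np

# Assumed illustrative parameters.
A, theta, m = 1.0, 0.4, 0.5      # carrier amplitude, phase, modulation depth
f0, fm = 1000.0, 50.0            # carrier and modulating frequencies, Hz

t = np.linspace(0, 0.1, 50_000)
carrier  = A * np.exp(1j * theta) * np.exp(2j * np.pi * f0 * t)
upper_sb = 0.5 * A * m * np.exp(1j * theta) * np.exp(2j * np.pi * (f0 + fm) * t)
lower_sb = 0.5 * A * m * np.exp(1j * theta) * np.exp(2j * np.pi * (f0 - fm) * t)

x_phasors = np.real(carrier + upper_sb + lower_sb)
x_am = A * (1 + m * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * f0 * t + theta)

assert np.allclose(x_phasors, x_am)   # the three phasors sum to the AM waveform
```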
https://en.wikipedia.org/wiki/Phasor
In psychoacoustics, a pure tone is a sound with a sinusoidal waveform; that is, a sine wave of constant frequency, phase-shift, and amplitude.[1] By extension, in signal processing a single-frequency tone or pure tone is a purely sinusoidal signal (e.g., a voltage). A pure tone has the property – unique among real-valued wave shapes – that its wave shape is unchanged by linear time-invariant systems; that is, only the phase and amplitude change between such a system's pure-tone input and its output. Sine and cosine waves can be used as basic building blocks of more complex waves. As additional sine waves having different frequencies are combined, the waveform transforms from a sinusoidal shape into a more complex shape. When considered as part of a whole spectrum, a pure tone may also be called a spectral component. In clinical audiology, pure tones are used for pure-tone audiometry to characterize hearing thresholds at different frequencies. Sound localization is often more difficult with pure tones than with other sounds.[2][3] Pure tones were used by 19th-century physicists like Georg Ohm and Hermann von Helmholtz to support theories asserting that the ear functions in a way equivalent to a Fourier frequency analysis.[4][5] In Ohm's acoustic law, later further elaborated by Helmholtz, musical tones are perceived as a set of pure tones. The percept of pitch depends on the frequency of the most prominent tone, and the phases of the individual components are discarded. This theory has often been blamed for creating a confusion between pitch, frequency and pure tones.[6] Unlike musical tones, which are composed of the sum of a number of harmonically related sinusoidal components, pure tones contain only one such sinusoidal waveform. When presented in isolation, and when its frequency pertains to a certain range, a pure tone gives rise to a single pitch percept, which can be characterized by its frequency. In this situation, the instantaneous phase of the pure tone varies linearly with time. If a pure tone gives rise to a constant, steady-state percept, then it can be concluded that its phase does not influence this percept. However, when multiple pure tones are presented at once, as in musical tones, their relative phase plays a role in the resulting percept. In such a situation, the perceived pitch is not determined by the frequency of any individual component, but by the frequency relationship between these components (see missing fundamental).
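Returning to the invariance property noted above: passing a sampled sine wave through an arbitrary discrete linear time-invariant filter changes only its amplitude and phase, exactly as predicted by the filter's frequency response. The sample rate, tone frequency, and filter taps below are assumed for illustration.

```python
import numpy as np

fs = 8000.0                       # sample rate, Hz (assumed)
f = 440.0                         # pure-tone frequency, Hz (assumed)
n = np.arange(4000)
x = np.sin(2 * np.pi * f / fs * n)

# An arbitrary LTI system: a short FIR filter (weighted moving average).
h = np.array([0.2, 0.5, 0.3])
y = np.convolve(x, h)[:len(x)]    # filter output (full convolution, truncated)

# The filter's frequency response at the tone frequency predicts the only
# changes an LTI system can make: an amplitude scaling and a phase shift.
w = 2 * np.pi * f / fs
H = np.sum(h * np.exp(-1j * w * np.arange(len(h))))
y_pred = np.abs(H) * np.sin(w * n + np.angle(H))

# Ignore the first few samples (filter start-up transient).
assert np.allclose(y[10:], y_pred[10:])
```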
https://en.wikipedia.org/wiki/Pure_tone
In mechanics and physics, simple harmonic motion (sometimes abbreviated as SHM) is a special type of periodic motion an object experiences by means of a restoring force whose magnitude is directly proportional to the distance of the object from an equilibrium position and acts towards the equilibrium position. It results in an oscillation that is described by a sinusoid which continues indefinitely (if uninhibited by friction or any other dissipation of energy).[1] Simple harmonic motion can serve as a mathematical model for a variety of motions, but is typified by the oscillation of a mass on a spring when it is subject to the linear elastic restoring force given by Hooke's law. The motion is sinusoidal in time and demonstrates a single resonant frequency. Other phenomena can be modeled by simple harmonic motion, including the motion of a simple pendulum, although for it to be an accurate model, the net force on the object at the end of the pendulum must be proportional to the displacement (and even so, it is only a good approximation when the angle of the swing is small; see small-angle approximation). Simple harmonic motion can also be used to model molecular vibration. Simple harmonic motion provides a basis for the characterization of more complicated periodic motion through the techniques of Fourier analysis. The motion of a particle moving along a straight line with an acceleration whose direction is always toward a fixed point on the line and whose magnitude is proportional to the displacement from the fixed point is called simple harmonic motion.[2] In the diagram, a simple harmonic oscillator, consisting of a weight attached to one end of a spring, is shown. The other end of the spring is connected to a rigid support such as a wall. If the system is left at rest at the equilibrium position, then there is no net force acting on the mass. However, if the mass is displaced from the equilibrium position, the spring exerts a restoring elastic force that obeys Hooke's law. Mathematically, $\mathbf{F} = -k\mathbf{x},$ where F is the restoring elastic force exerted by the spring (in SI units: N), k is the spring constant (N·m−1), and x is the displacement from the equilibrium position (in metres). For any simple mechanical harmonic oscillator, the motion proceeds as follows: once the mass is displaced from its equilibrium position, it experiences a net restoring force. As a result, it accelerates and starts going back to the equilibrium position. When the mass moves closer to the equilibrium position, the restoring force decreases. At the equilibrium position, the net restoring force vanishes. However, at x = 0, the mass has momentum because of the acceleration that the restoring force has imparted. Therefore, the mass continues past the equilibrium position, compressing the spring. A net restoring force then slows it down until its velocity reaches zero, whereupon it is accelerated back to the equilibrium position again. As long as the system has no energy loss, the mass continues to oscillate. Thus simple harmonic motion is a type of periodic motion. If energy is lost in the system, then the mass exhibits damped oscillation. Note that if the real-space and phase-space plots are not co-linear, the phase-space motion becomes elliptical. The area enclosed depends on the amplitude and the maximum momentum. In Newtonian mechanics, for one-dimensional simple harmonic motion, the equation of motion, which is a second-order linear ordinary differential equation with constant coefficients, can be obtained by means of Newton's second law and Hooke's law for a mass on a spring.
$F_{\mathrm{net}} = m\frac{\mathrm{d}^2 x}{\mathrm{d}t^2} = -kx,$ where m is the inertial mass of the oscillating body, x is its displacement from the equilibrium (or mean) position, and k is a constant (the spring constant for a mass on a spring). Therefore, $\frac{\mathrm{d}^2 x}{\mathrm{d}t^2} = -\frac{k}{m}x.$ Solving the differential equation above produces a solution that is a sinusoidal function: $x(t) = c_1\cos(\omega t) + c_2\sin(\omega t),$ where $\omega = \sqrt{k/m}$. The meaning of the constants $c_1$ and $c_2$ can be easily found: setting $t = 0$ in the equation above, we see that $x(0) = c_1$, so that $c_1$ is the initial position of the particle, $c_1 = x_0$; taking the derivative of that equation and evaluating at zero, we get that $\dot{x}(0) = \omega c_2$, so that $c_2$ is the initial speed of the particle divided by the angular frequency, $c_2 = \frac{v_0}{\omega}$. Thus we can write $x(t) = x_0\cos\left(\sqrt{\tfrac{k}{m}}\,t\right) + \frac{v_0}{\sqrt{k/m}}\sin\left(\sqrt{\tfrac{k}{m}}\,t\right).$ This equation can also be written in the form $x(t) = A\cos(\omega t - \varphi),$ where $A = \sqrt{c_1^2 + c_2^2}$ and $\tan\varphi = \frac{c_2}{c_1}$, or equivalently $c_1 = A\cos\varphi$ and $c_2 = A\sin\varphi$. In the solution, c1 and c2 are two constants determined by the initial conditions (specifically, the initial position at time t = 0 is c1, while the initial velocity is c2ω), and the origin is set to be the equilibrium position.[A] Each of these constants carries a physical meaning of the motion: A is the amplitude (maximum displacement from the equilibrium position), ω = 2πf is the angular frequency, and φ is the initial phase.[B] Using the techniques of calculus, the velocity and acceleration as a function of time can be found: $v(t) = \frac{\mathrm{d}x}{\mathrm{d}t} = -A\omega\sin(\omega t - \varphi)$ and $a(t) = \frac{\mathrm{d}^2 x}{\mathrm{d}t^2} = -A\omega^2\cos(\omega t - \varphi).$ By definition, if a mass m is under SHM its acceleration is directly proportional to displacement: $a(x) = -\omega^2 x,$ where $\omega^2 = \frac{k}{m}$. Since ω = 2πf, $f = \frac{1}{2\pi}\sqrt{\frac{k}{m}},$ and, since T = 1/f where T is the time period, $T = 2\pi\sqrt{\frac{m}{k}}.$ These equations demonstrate that simple harmonic motion is isochronous (the period and frequency are independent of the amplitude and the initial phase of the motion). Substituting ω2 with k/m, the kinetic energy K of the system at time t is $K(t) = \tfrac{1}{2}mv^2(t) = \tfrac{1}{2}m\omega^2 A^2\sin^2(\omega t - \varphi) = \tfrac{1}{2}kA^2\sin^2(\omega t - \varphi),$ and the potential energy is $U(t) = \tfrac{1}{2}kx^2(t) = \tfrac{1}{2}kA^2\cos^2(\omega t - \varphi).$ In the absence of friction and other energy loss, the total mechanical energy has a constant value $E = K + U = \tfrac{1}{2}kA^2.$
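The closed-form solution $x(t) = x_0\cos(\omega t) + (v_0/\omega)\sin(\omega t)$ derived above can be cross-checked against a direct numerical integration of $F = -kx$. A minimal sketch with assumed illustrative values for m, k, x0, and v0:

```python
import numpy as np

# Assumed parameters for illustration.
m, k = 0.5, 8.0                  # mass (kg), spring constant (N/m)
x0, v0 = 0.1, 0.0                # initial position (m) and velocity (m/s)
omega = np.sqrt(k / m)

dt, steps = 1e-4, 50_000         # integrate for 5 seconds
x, v = x0, v0
for _ in range(steps):           # semi-implicit Euler integration of F = -kx
    v += (-k / m) * x * dt
    x += v * dt

t = steps * dt
x_exact = x0 * np.cos(omega * t) + (v0 / omega) * np.sin(omega * t)
print(x, x_exact)                # numerical and closed-form positions agree closely
```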
The following physical systems are some examples of simple harmonic oscillators. A mass m attached to a spring of spring constant k exhibits simple harmonic motion in closed space. The equation for the period, $T = 2\pi\sqrt{\frac{m}{k}},$ shows that the period of oscillation is independent of the amplitude, though in practice the amplitude should be small. The above equation is also valid in the case when an additional constant force is applied to the mass; i.e., an additional constant force cannot change the period of oscillation. Simple harmonic motion can be considered the one-dimensional projection of uniform circular motion. If an object moves with angular speed ω around a circle of radius r centered at the origin of the xy-plane, then its motion along each coordinate is simple harmonic motion with amplitude r and angular frequency ω. The motion of a body in which it moves to and from a definite point is also called oscillatory motion or vibratory motion; its time period can be calculated as $T = 2\pi\sqrt{\frac{l}{g}},$ where l is the distance from the rotation axis to the center of mass of the object undergoing SHM, and g is the gravitational acceleration. This is analogous to the mass-spring system. In the small-angle approximation, the motion of a simple pendulum is approximated by simple harmonic motion. The period of a mass attached to a pendulum of length l with gravitational acceleration g is given by $T = 2\pi\sqrt{\frac{l}{g}}.$ This shows that the period of oscillation is independent of the amplitude and mass of the pendulum but not of the acceleration due to gravity, g; therefore a pendulum of the same length on the Moon would swing more slowly due to the Moon's lower gravitational field strength. Because the value of g varies slightly over the surface of the Earth, the time period will vary slightly from place to place and will also vary with height above sea level. This approximation is accurate only for small angles because the expression for the angular acceleration α is proportional to the sine of the displacement angle: $-mgl\sin\theta = I\alpha,$ where I is the moment of inertia. When θ is small, sin θ ≈ θ, and therefore the expression becomes $-mgl\theta = I\alpha,$ which makes angular acceleration directly proportional and opposite to θ, satisfying the definition of simple harmonic motion (that net force is directly proportional to the displacement from the mean position and is directed towards the mean position). A Scotch yoke mechanism can be used to convert between rotational motion and linear reciprocating motion. The linear motion can take various forms depending on the shape of the slot, but the basic yoke with a constant rotation speed produces a linear motion that is simple harmonic in form. The solution of the equation of motion can equivalently be written as $x(t) = A\sin(\omega t + \varphi'),$ where $\tan\varphi' = \frac{c_1}{c_2}.$
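Returning to the pendulum period formula above: plugging in numbers (an assumed length close to the classical "seconds pendulum") illustrates both the period and the slower swing on the Moon.

```python
import numpy as np

g = 9.81                          # gravitational acceleration on Earth, m/s^2
l = 0.994                         # pendulum length, m (assumed; near a "seconds pendulum")

T = 2 * np.pi * np.sqrt(l / g)    # small-angle period, independent of amplitude and mass
print(f"T = {T:.3f} s")           # ~2.0 s

# On the Moon (g ~ 1.62 m/s^2), the same pendulum swings more slowly:
print(f"T_moon = {2 * np.pi * np.sqrt(l / 1.62):.3f} s")
```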
https://en.wikipedia.org/wiki/Simple_harmonic_motion
In physics, mathematics, engineering, and related fields, a wave is a propagating dynamic disturbance (change from equilibrium) of one or more quantities. Periodic waves oscillate repeatedly about an equilibrium (resting) value at some frequency. When the entire waveform moves in one direction, it is said to be a travelling wave; by contrast, a pair of superimposed periodic waves traveling in opposite directions makes a standing wave. In a standing wave, the amplitude of vibration has nulls at some positions where the wave amplitude appears smaller or even zero. There are two types of waves that are most commonly studied in classical physics: mechanical waves and electromagnetic waves. In a mechanical wave, stress and strain fields oscillate about a mechanical equilibrium. A mechanical wave is a local deformation (strain) in some physical medium that propagates from particle to particle by creating local stresses that cause strain in neighboring particles too. For example, sound waves are variations of the local pressure and particle motion that propagate through the medium. Other examples of mechanical waves are seismic waves, gravity waves, surface waves and string vibrations. In an electromagnetic wave (such as light), coupling between the electric and magnetic fields sustains propagation of waves involving these fields according to Maxwell's equations. Electromagnetic waves can travel through a vacuum and through some dielectric media (at wavelengths where they are considered transparent). Electromagnetic waves, as determined by their frequencies (or wavelengths), have more specific designations including radio waves, infrared radiation, terahertz waves, visible light, ultraviolet radiation, X-rays and gamma rays. Other types of waves include gravitational waves, which are disturbances in spacetime that propagate according to general relativity; heat diffusion waves; plasma waves that combine mechanical deformations and electromagnetic fields; reaction–diffusion waves, such as in the Belousov–Zhabotinsky reaction; and many more. Mechanical and electromagnetic waves transfer energy,[1] momentum, and information, but they do not transfer particles in the medium. In mathematics and electronics waves are studied as signals.[2] On the other hand, some waves have envelopes which do not move at all, such as standing waves (which are fundamental to music) and hydraulic jumps. A physical wave field is almost always confined to some finite region of space, called its domain. For example, the seismic waves generated by earthquakes are significant only in the interior and surface of the planet, so they can be ignored outside it. However, waves with infinite domain, which extend over the whole space, are commonly studied in mathematics, and are very valuable tools for understanding physical waves in finite domains. A plane wave is an important mathematical idealization where the disturbance is identical along any (infinite) plane normal to a specific direction of travel. Mathematically, the simplest wave is a sinusoidal plane wave in which at any point the field experiences simple harmonic motion at one frequency. In linear media, complicated waves can generally be decomposed as the sum of many sinusoidal plane waves having different directions of propagation and/or different frequencies. A plane wave is classified as a transverse wave if the field disturbance at each point is described by a vector perpendicular to the direction of propagation (also the direction of energy transfer), or a longitudinal wave if those vectors are aligned with the propagation direction.
Mechanical waves include both transverse and longitudinal waves; on the other hand, electromagnetic plane waves are strictly transverse, while sound waves in fluids (such as air) can only be longitudinal. The physical direction of an oscillating field relative to the propagation direction is also referred to as the wave's polarization, which can be an important attribute. A wave can be described just like a field, namely as a function $F(x,t)$ where $x$ is a position and $t$ is a time. The value of $x$ is a point of space, specifically in the region where the wave is defined. In mathematical terms, it is usually a vector in the Cartesian three-dimensional space $\mathbb{R}^3$. However, in many cases one can ignore one dimension, and let $x$ be a point of the Cartesian plane $\mathbb{R}^2$. This is the case, for example, when studying vibrations of a drum skin. One may even restrict $x$ to a point of the Cartesian line $\mathbb{R}$ – that is, the set of real numbers. This is the case, for example, when studying vibrations in a violin string or recorder. The time $t$, on the other hand, is always assumed to be a scalar; that is, a real number. The value of $F(x,t)$ can be any physical quantity of interest assigned to the point $x$ that may vary with time. For example, if $F$ represents the vibrations inside an elastic solid, the value of $F(x,t)$ is usually a vector that gives the current displacement from $x$ of the material particles that would be at the point $x$ in the absence of vibration. For an electromagnetic wave, the value of $F$ can be the electric field vector $E$, or the magnetic field vector $H$, or any related quantity, such as the Poynting vector $E\times H$. In fluid dynamics, the value of $F(x,t)$ could be the velocity vector of the fluid at the point $x$, or any scalar property like pressure, temperature, or density. In a chemical reaction, $F(x,t)$ could be the concentration of some substance in the neighborhood of point $x$ of the reaction medium. For any dimension $d$ (1, 2, or 3), the wave's domain is then a subset $D$ of $\mathbb{R}^d$, such that the function value $F(x,t)$ is defined for any point $x$ in $D$. For example, when describing the motion of a drum skin, one can consider $D$ to be a disk (circle) on the plane $\mathbb{R}^2$ with center at the origin $(0,0)$, and let $F(x,t)$ be the vertical displacement of the skin at the point $x$ of $D$ and at time $t$. Waves of the same type are often superposed and encountered simultaneously at a given point in space and time. The properties at that point are the sum of the properties of each component wave at that point. In general, the velocities are not the same, so the wave form will change over time and space. Sometimes one is interested in a single specific wave.
More often, however, one needs to understand a large set of possible waves, like all the ways that a drum skin can vibrate after being struck once with a drum stick, or all the possible radar echoes one could get from an airplane that may be approaching an airport. In some of those situations, one may describe such a family of waves by a function $F(A,B,\ldots;x,t)$ that depends on certain parameters $A, B, \ldots$, besides $x$ and $t$. Then one can obtain different waves – that is, different functions of $x$ and $t$ – by choosing different values for those parameters. For example, the sound pressure inside a recorder that is playing a "pure" note is typically a standing wave, which (with the phase conventions stated below) can be written as $F(A,L,n,c;x,t) = A\cos(2\pi x/\lambda)\cos(2\pi f t)$ (a short numerical sketch of this family follows this passage). The parameter $A$ defines the amplitude of the wave (that is, the maximum sound pressure in the bore, which is related to the loudness of the note); $c$ is the speed of sound; $L$ is the length of the bore; and $n$ is a positive integer (1, 2, 3, ...) that specifies the number of nodes in the standing wave. (The position $x$ should be measured from the mouthpiece, and the time $t$ from any moment at which the pressure at the mouthpiece is maximum. The quantity $\lambda = 4L/(2n-1)$ is the wavelength of the emitted note, and $f = c/\lambda$ is its frequency.) Many general properties of these waves can be inferred from this general equation, without choosing specific values for the parameters. As another example, it may be that the vibrations of a drum skin after a single strike depend only on the distance $r$ from the center of the skin to the strike point, and on the strength $s$ of the strike. Then the vibration for all possible strikes can be described by a function $F(r,s;x,t)$. Sometimes the family of waves of interest has infinitely many parameters. For example, one may want to describe what happens to the temperature in a metal bar when it is initially heated at various temperatures at different points along its length, and then allowed to cool by itself in vacuum. In that case, instead of a scalar or vector, the parameter would have to be a function $h$ such that $h(x)$ is the initial temperature at each point $x$ of the bar. Then the temperatures at later times can be expressed by a function $F$ that depends on the function $h$ (that is, a functional operator), so that the temperature at a later time is $F(h;x,t)$. Another way to describe and study a family of waves is to give a mathematical equation that, instead of explicitly giving the value of $F(x,t)$, only constrains how those values can change with time. Then the family of waves in question consists of all functions $F$ that satisfy those constraints – that is, all solutions of the equation. This approach is extremely important in physics, because the constraints usually are a consequence of the physical processes that cause the wave to evolve.
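Picking up the recorder example above: since $\lambda = 4L/(2n-1)$ and $f = c/\lambda$, the note frequencies of the whole family follow directly from the two parameters $L$ and $c$. A small sketch with an assumed bore length and sound speed:

```python
import numpy as np

c = 343.0        # speed of sound in air, m/s (assumed room temperature)
L = 0.30         # bore length of a hypothetical recorder, m (assumed)

for n in (1, 2, 3):
    lam = 4 * L / (2 * n - 1)    # wavelength of the n-th standing-wave mode
    f = c / lam                  # corresponding note frequency
    print(f"n={n}: wavelength {lam:.3f} m, frequency {f:.1f} Hz")
```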
For example, if $F(x,t)$ is the temperature inside a block of some homogeneous and isotropic solid material, its evolution is constrained by a partial differential equation of the form $\frac{\partial F}{\partial t}(x,t) = \alpha\left(\frac{\partial^2 F}{\partial x_1^2} + \frac{\partial^2 F}{\partial x_2^2} + \frac{\partial^2 F}{\partial x_3^2}\right) + \beta Q(x,t),$ where $Q(x,t)$ is the heat that is being generated per unit of volume and time in the neighborhood of $x$ at time $t$ (for example, by chemical reactions happening there); $x_1, x_2, x_3$ are the Cartesian coordinates of the point $x$; $\partial F/\partial t$ is the (first) derivative of $F$ with respect to $t$; $\partial^2 F/\partial x_i^2$ is the second derivative of $F$ relative to $x_i$; and $\alpha$ and $\beta$ are coefficients that depend on the material. (The symbol "$\partial$" is meant to signify that, in the derivative with respect to some variable, all other variables must be considered fixed.) This equation can be derived from the laws of physics that govern the diffusion of heat in solid media. For that reason, it is called the heat equation in mathematics, even though it applies to many other physical quantities besides temperatures. For another example, we can describe all possible sounds echoing within a container of gas by a function $F(x,t)$ that gives the pressure at a point $x$ and time $t$ within that container. If the gas was initially at uniform temperature and composition, the evolution of $F$ is constrained by a formula of the form $\frac{\partial^2 F}{\partial t^2}(x,t) = \alpha\left(\frac{\partial^2 F}{\partial x_1^2} + \frac{\partial^2 F}{\partial x_2^2} + \frac{\partial^2 F}{\partial x_3^2}\right) + \beta P(x,t).$ Here $P(x,t)$ is some extra compression force that is being applied to the gas near $x$ by some external process, such as a loudspeaker or piston right next to $x$. This same differential equation describes the behavior of mechanical vibrations and electromagnetic fields in a homogeneous isotropic non-conducting solid. Note that this equation differs from that of heat flow only in that the left-hand side is $\partial^2 F/\partial t^2$, the second derivative of $F$ with respect to time, rather than the first derivative $\partial F/\partial t$. Yet this small change makes a huge difference in the set of solutions $F$. This differential equation is called "the" wave equation in mathematics, even though it describes only one very special kind of wave.
That is, the wave shaped like the function $F$ will move in the positive $x$-direction at velocity $v$ (and $G$ will propagate at the same speed in the negative $x$-direction).[9] In the case of a periodic function $F$ with period $\lambda$, that is, $F(x + \lambda - vt) = F(x - vt)$, the periodicity of $F$ in space means that a snapshot of the wave at a given time $t$ finds the wave varying periodically in space with period $\lambda$ (the wavelength of the wave). In a similar fashion, this periodicity of $F$ implies a periodicity in time as well: $F(x - v(t + T)) = F(x - vt)$ provided $vT = \lambda$, so an observation of the wave at a fixed location $x$ finds the wave undulating periodically in time with period $T = \lambda/v$.[10] The amplitude of a wave may be constant (in which case the wave is a c.w. or continuous wave), or may be modulated so as to vary with time and/or position. The outline of the variation in amplitude is called the envelope of the wave. Mathematically, the modulated wave can be written in the form[11][12][13] $u(x,t) = A(x,t)\sin(kx - \omega t + \phi),$ where $A(x,t)$ is the amplitude envelope of the wave, $k$ is the wavenumber and $\phi$ is the phase. If the group velocity $v_g$ (see below) is wavelength-independent, this equation can be simplified as[14] $u(x,t) = A(x - v_g t)\sin(kx - \omega t + \phi),$ showing that the envelope moves with the group velocity and retains its shape. Otherwise, in cases where the group velocity varies with wavelength, the pulse shape changes in a manner often described using an envelope equation.[14][15] There are two velocities that are associated with waves, the phase velocity and the group velocity. Phase velocity is the rate at which the phase of the wave propagates in space: any given phase of the wave (for example, the crest) will appear to travel at the phase velocity. The phase velocity is given in terms of the wavelength $\lambda$ (lambda) and period $T$ as $v_{\mathrm{p}} = \frac{\lambda}{T}.$ Group velocity is a property of waves that have a defined envelope, measuring propagation through space (that is, phase velocity) of the overall shape of the waves' amplitudes – the modulation or envelope of the wave.
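The envelope picture is easiest to see in the two-wave special case: the identity sin A + sin B = 2 sin((A+B)/2) cos((A−B)/2) splits the sum of two nearby sinusoids into a fast carrier and a slow envelope that moves at Δω/Δk, the group velocity in this limit. A numerical check with assumed wavenumbers and frequencies:

```python
import numpy as np

# Two waves with nearby wavenumbers and frequencies (assumed values).
k1, w1 = 10.0, 20.0
k2, w2 = 11.0, 21.5

x = np.linspace(0, 50, 5000)
t = 1.0

u_sum = np.sin(k1 * x - w1 * t) + np.sin(k2 * x - w2 * t)

# Product form: slow envelope times fast carrier.
dk, dw = (k2 - k1) / 2, (w2 - w1) / 2
kbar, wbar = (k1 + k2) / 2, (w1 + w2) / 2
u_prod = 2 * np.cos(dk * x - dw * t) * np.sin(kbar * x - wbar * t)

assert np.allclose(u_sum, u_prod)

v_group = (w2 - w1) / (k2 - k1)   # envelope speed (1.5 here)
v_phase = wbar / kbar             # carrier speed (~1.976 here)
print(v_group, v_phase)
```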
A sine wave, sinusoidal wave, or sinusoid (symbol: ∿) is a periodic wave whose waveform (shape) is the trigonometric sine function. In mechanics, as a linear motion over time, this is simple harmonic motion; as rotation, it corresponds to uniform circular motion. Sine waves occur often in physics, including wind waves, sound waves, and light waves, such as monochromatic radiation. In engineering, signal processing, and mathematics, Fourier analysis decomposes general functions into a sum of sine waves of various frequencies, relative phases, and magnitudes. A plane wave is a kind of wave whose value varies only in one spatial direction. That is, its value is constant on a plane that is perpendicular to that direction. Plane waves can be specified by a vector of unit length $\hat{n}$ indicating the direction that the wave varies in, and a wave profile describing how the wave varies as a function of the displacement along that direction ($\hat{n}\cdot\vec{x}$) and time ($t$). Since the wave profile only depends on the position $\vec{x}$ in the combination $\hat{n}\cdot\vec{x}$, any displacement in directions perpendicular to $\hat{n}$ cannot affect the value of the field. Plane waves are often used to model electromagnetic waves far from a source. For electromagnetic plane waves, the electric and magnetic fields themselves are transverse to the direction of propagation, and also perpendicular to each other. A standing wave, also known as a stationary wave, is a wave whose envelope remains in a constant position. This phenomenon arises as a result of interference between two waves traveling in opposite directions. The sum of two counter-propagating waves (of equal amplitude and frequency) creates a standing wave. Standing waves commonly arise when a boundary blocks further propagation of the wave, thus causing wave reflection, and therefore introducing a counter-propagating wave. For example, when a violin string is displaced, transverse waves propagate out to where the string is held in place at the bridge and the nut, where the waves are reflected back. At the bridge and nut, the two opposed waves are in antiphase and cancel each other, producing a node. Halfway between two nodes there is an antinode, where the two counter-propagating waves enhance each other maximally. There is no net propagation of energy over time. A soliton or solitary wave is a self-reinforcing wave packet that maintains its shape while it propagates at a constant velocity. Solitons are caused by a cancellation of nonlinear and dispersive effects in the medium. (Dispersive effects are a property of certain systems where the speed of a wave depends on its frequency.) Solitons are the solutions of a widespread class of weakly nonlinear dispersive partial differential equations describing physical systems. Wave propagation is any of the ways in which waves travel. With respect to the direction of the oscillation relative to the propagation direction, we can distinguish between longitudinal and transverse waves. Electromagnetic waves propagate in vacuum as well as in material media. Propagation of other wave types such as sound may occur only in a transmission medium. The propagation and reflection of plane waves – e.g. pressure waves (P waves) or shear waves (SH or SV waves) – are phenomena that were first characterized within the field of classical seismology, and are now considered fundamental concepts in modern seismic tomography. The analytical solution to this problem exists and is well known. The frequency-domain solution can be obtained by first finding the Helmholtz decomposition of the displacement field, which is then substituted into the wave equation. From here, the plane-wave eigenmodes can be calculated. The analytical solution of the SV wave in a half-space indicates that, leaving out special cases, the plane SV wave reflects back into the domain as P and SV waves. The angle of the reflected SV wave is identical to that of the incident wave, while the angle of the reflected P wave is greater than the SV angle. For the same wave frequency, the SV wavelength is smaller than the P wavelength.[16] Similar to the SV wave, the P incidence, in general, reflects as P and SV waves; there are some special cases where the regime is different. Wave velocity is a general concept covering the various kinds of velocities associated with a wave's phase and with the propagation of its energy (and information). The phase velocity is given as $v_{\rm p} = \frac{\omega}{k},$ where $\omega$ is the angular frequency and $k$ the wavenumber. The phase speed gives the speed at which a point of constant phase of the wave will travel for a discrete frequency.
The angular frequency $\omega$ cannot be chosen independently from the wavenumber $k$; both are related through the dispersion relationship $\omega = \Omega(k).$ In the special case $\Omega(k) = ck$, with $c$ a constant, the waves are called non-dispersive, since all frequencies travel at the same phase speed $c$. For instance, electromagnetic waves in vacuum are non-dispersive. In the case of other forms of the dispersion relation, we have dispersive waves. The dispersion relationship depends on the medium through which the waves propagate and on the type of waves (for instance electromagnetic, sound or water waves). The speed at which a resultant wave packet from a narrow range of frequencies will travel is called the group velocity and is determined from the gradient of the dispersion relation: $v_{\rm g} = \frac{\partial\omega}{\partial k}$ (a numerical example appears at the end of this passage). In almost all cases, a wave is mainly a movement of energy through a medium. Most often, the group velocity is the velocity at which the energy moves through this medium. Waves exhibit common behaviors under a number of standard situations. Waves normally move in a straight line (that is, rectilinearly) through a transmission medium; such media can be classified according to several properties, for example whether they are bounded or unbounded, linear or nonlinear, and uniform or varying in space. Waves are usually defined in media which allow most or all of a wave's energy to propagate without loss. However, materials may be characterized as "lossy" if they remove energy from a wave, usually converting it into heat. This is termed "absorption." A material which absorbs a wave's energy, either in transmission or reflection, is characterized by a refractive index which is complex. The amount of absorption will generally depend on the frequency (wavelength) of the wave, which, for instance, explains why objects may appear colored. When a wave strikes a reflective surface, it changes direction, such that the angle made by the incident wave and the line normal to the surface equals the angle made by the reflected wave and the same normal line. Refraction is the phenomenon of a wave changing its speed. Mathematically, this means that the size of the phase velocity changes. Typically, refraction occurs when a wave passes from one medium into another. The amount by which a wave is refracted by a material is given by the refractive index of the material. The directions of incidence and refraction are related to the refractive indices of the two materials by Snell's law. A wave exhibits diffraction when it encounters an obstacle that bends the wave or when it spreads after emerging from an opening. Diffraction effects are more pronounced when the size of the obstacle or opening is comparable to the wavelength of the wave. When waves in a linear medium (the usual case) cross each other in a region of space, they do not actually interact with each other, but continue on as if the other one were not present. However, at any point in that region the field quantities describing those waves add according to the superposition principle. If the waves are of the same frequency in a fixed phase relationship, then there will generally be positions at which the two waves are in phase and their amplitudes add, and other positions where they are out of phase and their amplitudes (partially or fully) cancel. This is called an interference pattern.
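Returning to the dispersion relation discussed above: a standard dispersive example (not taken from the text) is deep-water gravity waves, which obey $\omega = \sqrt{gk}$, so the group velocity works out to half the phase velocity. A quick numerical check of $v_g = \partial\omega/\partial k$:

```python
import numpy as np

g = 9.81
k = np.linspace(0.5, 10, 2000)          # wavenumbers, rad/m
omega = np.sqrt(g * k)                  # deep-water dispersion relation

v_phase = omega / k
v_group = np.gradient(omega, k)         # numerical derivative d(omega)/dk

# For this dispersion relation, the group velocity is half the phase velocity.
assert np.allclose(v_group, v_phase / 2, rtol=1e-3)
```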
The phenomenon of polarization arises when wave motion can occur simultaneously in two orthogonal directions. Transverse waves can be polarized, for instance. When polarization is used as a descriptor without qualification, it usually refers to the special, simple case of linear polarization. A transverse wave is linearly polarized if it oscillates in only one direction or plane. In the case of linear polarization, it is often useful to add the relative orientation of that plane, perpendicular to the direction of travel, in which the oscillation occurs, such as "horizontal", for instance, if the plane of polarization is parallel to the ground. Electromagnetic waves propagating in free space, for instance, are transverse; they can be polarized by the use of a polarizing filter. Longitudinal waves, such as sound waves, do not exhibit polarization. For these waves there is only one direction of oscillation, that is, along the direction of travel. Dispersion is the frequency dependence of the refractive index, a consequence of the atomic nature of materials.[17]: 67 A wave undergoes dispersion when either the phase velocity or the group velocity depends on the wave frequency. Dispersion is seen by letting white light pass through a prism, the result of which is to produce the spectrum of colors of the rainbow. Isaac Newton was the first to recognize that this meant that white light was a mixture of light of different colors.[17]: 190 The Doppler effect or Doppler shift is the change in frequency of a wave in relation to an observer who is moving relative to the wave source.[18] It is named after the Austrian physicist Christian Doppler, who described the phenomenon in 1842. A mechanical wave is an oscillation of matter, and therefore transfers energy through a medium.[19] While waves can move over long distances, the movement of the medium of transmission – the material – is limited. Therefore, the oscillating material does not move far from its initial position. Mechanical waves can be produced only in media which possess elasticity and inertia. There are three types of mechanical waves: transverse waves, longitudinal waves, and surface waves. The transverse vibration of a string is a function of tension and inertia, and is constrained by the length of the string as the ends are fixed. This constraint limits the steady-state modes that are possible, and thereby the frequencies. The speed of a transverse wave traveling along a vibrating string ($v$) is directly proportional to the square root of the tension of the string ($T$) over the linear mass density ($\mu$): $v = \sqrt{\frac{T}{\mu}},$ where the linear density $\mu$ is the mass per unit length of the string. Acoustic or sound waves are compression waves which travel as body waves at the speed $v = \sqrt{\frac{B}{\rho}},$ the square root of the adiabatic bulk modulus $B$ divided by the ambient density $\rho$ of the medium (see speed of sound). Body waves travel through the interior of the medium along paths controlled by the material properties in terms of density and modulus (stiffness). The density and modulus, in turn, vary according to temperature, composition, and material phase. This effect resembles the refraction of light waves. Two types of particle motion result in two types of body waves: primary and secondary waves. Seismic waves are waves of energy that travel through the Earth's layers, and are a result of earthquakes, volcanic eruptions, magma movement, large landslides and large man-made explosions that give out low-frequency acoustic energy. They include body waves – the primary (P waves) and secondary (S waves) – and surface waves, such as Rayleigh waves, Love waves, and Stoneley waves.
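Returning to the two speed formulas above: plugging in representative numbers for both (the tension, linear density, bulk modulus, and air density below are assumed, roughly those of a steel guitar string and of air at room temperature) gives speeds of the expected order.

```python
import numpy as np

# Transverse wave speed on a string: v = sqrt(T / mu).
T = 70.0                      # string tension, N (assumed)
mu = 0.6e-3                   # linear mass density, kg/m (assumed)
v_string = np.sqrt(T / mu)    # ~342 m/s

# Sound speed in a fluid: v = sqrt(B / rho), adiabatic bulk modulus over density.
B_air = 1.42e5                # adiabatic bulk modulus of air, Pa (approximate)
rho_air = 1.204               # density of air at ~20 degrees C, kg/m^3
v_sound = np.sqrt(B_air / rho_air)   # ~343 m/s

print(f"string: {v_string:.1f} m/s, sound in air: {v_sound:.1f} m/s")
```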
A shock wave is a type of propagating disturbance. When a wave moves faster than the local speed of sound in a fluid, it is a shock wave. Like an ordinary wave, a shock wave carries energy and can propagate through a medium; however, it is characterized by an abrupt, nearly discontinuous change in the pressure, temperature and density of the medium.[20] Shear waves are body waves due to shear rigidity and inertia. They can only be transmitted through solids and, to a lesser extent, through liquids with a sufficiently high viscosity. An electromagnetic wave consists of two waves that are oscillations of the electric and magnetic fields. An electromagnetic wave travels in a direction that is at right angles to the oscillation direction of both fields. In the 19th century, James Clerk Maxwell showed that, in vacuum, the electric and magnetic fields satisfy the wave equation, both with speed equal to the speed of light. From this emerged the idea that light is an electromagnetic wave. The unification of light and electromagnetic waves was experimentally confirmed by Hertz at the end of the 1880s. Electromagnetic waves can have different frequencies (and thus wavelengths), and are classified accordingly in wavebands, such as radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays. The range of frequencies in each of these bands is continuous, and the limits of each band are mostly arbitrary, with the exception of visible light, which must be visible to the normal human eye. The Schrödinger equation describes the wave-like behavior of particles in quantum mechanics. Solutions of this equation are wave functions which can be used to describe the probability density of a particle. The Dirac equation is a relativistic wave equation detailing electromagnetic interactions. Dirac waves accounted for the fine details of the hydrogen spectrum in a completely rigorous way. The wave equation also implied the existence of a new form of matter, antimatter, previously unsuspected and unobserved and since experimentally confirmed. In the context of quantum field theory, the Dirac equation is reinterpreted to describe quantum fields corresponding to spin-1⁄2 particles. Louis de Broglie postulated that all particles with momentum have a wavelength $\lambda = \frac{h}{p},$ where $h$ is the Planck constant and $p$ is the magnitude of the momentum of the particle. This hypothesis was at the basis of quantum mechanics. Nowadays, this wavelength is called the de Broglie wavelength. For example, the electrons in a CRT display have a de Broglie wavelength of about 10−13 m. A wave representing such a particle traveling in the k-direction is expressed by the wave function $\psi(\mathbf{r},t) = Ae^{i(\mathbf{k}\cdot\mathbf{r} - \omega t)},$ where the wavelength is determined by the wave vector $\mathbf{k}$ as $\lambda = \frac{2\pi}{k},$ and the momentum by $\mathbf{p} = \hbar\mathbf{k}.$ However, a wave like this with definite wavelength is not localized in space, and so cannot represent a particle localized in space. To localize a particle, de Broglie proposed a superposition of different wavelengths ranging around a central value in a wave packet,[24] a waveform often used in quantum mechanics to describe the wave function of a particle. In a wave packet, the wavelength of the particle is not precise, and the local wavelength deviates on either side of the main wavelength value.
In representing the wave function of a localized particle, the wave packet is often taken to have a Gaussian shape and is called a Gaussian wave packet.[25][26][27] Gaussian wave packets also are used to analyze water waves.[28] For example, a Gaussian wavefunction ψ might take the form[29] $\psi(x) = Ae^{-x^2/(2\sigma^2)}\,e^{ik_0 x}$ at some initial time t = 0, where the central wavelength is related to the central wave vector $k_0$ as $\lambda_0 = 2\pi/k_0$. It is well known from the theory of Fourier analysis,[30] or from the Heisenberg uncertainty principle (in the case of quantum mechanics), that a narrow range of wavelengths is necessary to produce a localized wave packet, and the more localized the envelope, the larger the spread in required wavelengths. The Fourier transform of a Gaussian is itself a Gaussian.[31] Given the Gaussian $f(x) = e^{-x^2/(2\sigma^2)},$ the Fourier transform is, up to the normalization convention chosen for the transform, $\tilde{f}(k) = \sigma e^{-\sigma^2 k^2/2}.$ The Gaussian in space therefore is made up of waves, $f(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\tilde{f}(k)\,e^{ikx}\,dk;$ that is, a number of waves of wavelengths λ such that kλ = 2π. The parameter σ decides the spatial spread of the Gaussian along the x-axis, while the Fourier transform shows a spread in wave vector k determined by 1/σ. That is, the smaller the extent in space, the larger the extent in k, and hence in λ = 2π/k (a numerical check of this transform pair appears at the end of this section). Gravity waves are waves generated in a fluid medium or at the interface between two media when the force of gravity or buoyancy works to restore equilibrium. Surface waves on water are the most familiar example. Gravitational waves also travel through space. The first observation of gravitational waves was announced on 11 February 2016.[32] Gravitational waves are disturbances in the curvature of spacetime, predicted by Einstein's theory of general relativity.
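Returning to the Gaussian transform pair above: the reciprocal-width property can be checked with a discrete Fourier transform. The sketch below uses the unitary convention $\tilde{f}(k) = \frac{1}{\sqrt{2\pi}}\int f(x)e^{-ikx}\,dx$ matching the expression given above; the grid parameters are arbitrary.

```python
import numpy as np

sigma = 0.5
x = np.linspace(-20, 20, 4096, endpoint=False)
dx = x[1] - x[0]
f = np.exp(-x**2 / (2 * sigma**2))

# Discrete approximation of the continuous (unitary) Fourier transform.
k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(len(x), d=dx))
F = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(f))) * dx / np.sqrt(2 * np.pi)

# A Gaussian of width sigma transforms to a Gaussian of width 1/sigma.
F_expected = sigma * np.exp(-sigma**2 * k**2 / 2)
assert np.allclose(np.abs(F), F_expected, atol=1e-6)
```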
https://en.wikipedia.org/wiki/Wave_(physics)
The wave equation is a second-order linear partial differential equation for the description of waves or standing wave fields such as mechanical waves (e.g. water waves, sound waves and seismic waves) or electromagnetic waves (including light waves). It arises in fields like acoustics, electromagnetism, and fluid dynamics. This article focuses on waves in classical physics. Quantum physics uses an operator-based wave equation, often as a relativistic wave equation. The wave equation is a hyperbolic partial differential equation describing waves, including traveling and standing waves; the latter can be considered as linear superpositions of waves traveling in opposite directions. This article mostly focuses on the scalar wave equation describing waves in scalars by scalar functions u = u(x, y, z, t) of a time variable t and one or more spatial variables x, y, z (representing a position in the space under discussion). At the same time, there are vector wave equations describing waves in vectors, such as waves for an electrical field, magnetic field, and magnetic vector potential, and elastic waves. By comparison with vector wave equations, the scalar wave equation can be seen as a special case of the vector wave equations; in the Cartesian coordinate system, the scalar wave equation is the equation to be satisfied by each component (for each coordinate axis, such as the x component for the x axis) of a vector wave without sources of waves in the considered domain (i.e., space and time). For example, in the Cartesian coordinate system, for $(E_x, E_y, E_z)$ as the representation of an electric vector field wave $\vec{E}$ in the absence of wave sources, each coordinate axis component $E_i$ (i = x, y, z) must satisfy the scalar wave equation. Other scalar wave equation solutions u are for physical quantities in scalars such as pressure in a liquid or gas, or the displacement along some specific direction of particles of a vibrating solid away from their resting (equilibrium) positions. The scalar wave equation is $\frac{\partial^2 u}{\partial t^2} = c^2\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2}\right),$ where c is the propagation speed of the wave. The equation states that, at any given point, the second derivative of $u$ with respect to time is proportional to the sum of the second derivatives of $u$ with respect to space, with the constant of proportionality being the square of the speed of the wave. Using notations from vector calculus, the wave equation can be written compactly as $u_{tt} = c^2\Delta u,$ or $\Box u = 0,$ where the double subscript denotes the second-order partial derivative with respect to time, $\Delta$ is the Laplace operator and $\Box$ the d'Alembert operator, defined as $u_{tt} = \frac{\partial^2 u}{\partial t^2}, \qquad \Delta = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}, \qquad \Box = \frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \Delta.$ A solution to this (two-way) wave equation can be quite complicated. Still, it can be analyzed as a linear combination of simple solutions that are sinusoidal plane waves with various directions of propagation and wavelengths, but all with the same propagation speed c.
This analysis is possible because the wave equation is linear and homogeneous, so that any multiple of a solution is also a solution, and the sum of any two solutions is again a solution. This property is called the superposition principle in physics. The wave equation alone does not specify a physical solution; a unique solution is usually obtained by setting a problem with further conditions, such as initial conditions, which prescribe the amplitude and phase of the wave. Another important class of problems occurs in enclosed spaces specified by boundary conditions, for which the solutions represent standing waves, or harmonics, analogous to the harmonics of musical instruments. The wave equation in one spatial dimension can be written as follows: $\frac{\partial^2 u}{\partial t^2} = c^2\frac{\partial^2 u}{\partial x^2}.$ This equation is typically described as having only one spatial dimension x, because the only other independent variable is the time t. The wave equation in one space dimension can be derived in a variety of different physical settings. Most famously, it can be derived for the case of a string vibrating in a two-dimensional plane, with each of its elements being pulled in opposite directions by the force of tension.[2] Another physical setting for derivation of the wave equation in one space dimension uses Hooke's law. In the theory of elasticity, Hooke's law is an approximation for certain materials, stating that the amount by which a material body is deformed (the strain) is linearly related to the force causing the deformation (the stress). The wave equation in the one-dimensional case can be derived from Hooke's law in the following way: imagine an array of little weights of mass m interconnected with massless springs of length h, the springs having a spring constant of k. Here the dependent variable u(x) measures the distance from the equilibrium of the mass situated at x, so that u(x) essentially measures the magnitude of a disturbance (i.e. strain) that is traveling in an elastic material. The resulting force exerted on the mass m at the location x + h is $F_{\text{Hooke}} = F_{x+2h} - F_x = k[u(x+2h,t) - u(x+h,t)] - k[u(x+h,t) - u(x,t)].$ By equating the latter equation with $F_{\text{Newton}} = m\,a(t) = m\,\frac{\partial^2}{\partial t^2}u(x+h,t),$ the equation of motion for the weight at the location x + h is obtained: $\frac{\partial^2}{\partial t^2}u(x+h,t) = \frac{k}{m}\left[u(x+2h,t) - 2u(x+h,t) + u(x,t)\right].$ If the array of weights consists of N weights spaced evenly over the length L = Nh of total mass M = Nm, and the total spring constant of the array is K = k/N, we can write the above equation as $\frac{\partial^2}{\partial t^2}u(x+h,t) = \frac{KL^2}{M}\,\frac{u(x+2h,t) - 2u(x+h,t) + u(x,t)}{h^2}.$ Taking the limit N → ∞, h → 0 and assuming smoothness, one gets $\frac{\partial^2 u(x,t)}{\partial t^2} = \frac{KL^2}{M}\,\frac{\partial^2 u(x,t)}{\partial x^2},$ where the spatial factor follows from the definition of a second derivative. KL²/M is the square of the propagation speed in this particular case.
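The discrete chain just described is easy to simulate directly: the update below is exactly the second-difference equation of motion above, with $c^2 = KL^2/M$. A minimal sketch under assumed parameters, on a periodic chain, with the initial velocity chosen as $-c\,\partial u/\partial x$ so that (per the d'Alembert solution discussed below) a single right-moving pulse is launched:

```python
import numpy as np

# Discrete chain: N masses over length L, total mass M, total stiffness K.
N, L = 400, 1.0
h = L / N
M, K = 1.0, 100.0
c = np.sqrt(K * L**2 / M)               # predicted propagation speed (10.0 here)

x = np.arange(N) * h
u = np.exp(-((x - 0.2) / 0.02) ** 2)    # initial Gaussian pulse
v = -c * np.gradient(u, h)              # launch a pure right-moving pulse

dt = 0.2 * h / c                        # time step well under the stability limit
steps = int(0.05 / dt)                  # simulate for 0.05 s
for _ in range(steps):
    # Second-difference acceleration from the derivation (periodic boundaries).
    acc = c**2 * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / h**2
    v += acc * dt
    u += v * dt

print(x[np.argmax(u)])   # pulse peak near 0.2 + c * 0.05 = 0.7, as predicted
```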
In the case of a stress pulse propagating longitudinally through a bar, the bar acts much like an infinite number of springs in series and can be taken as an extension of the equation derived for Hooke's law. A uniform bar, i.e. of constant cross-section, made from a linear elastic material has a stiffness K given by $K = \frac{EA}{L},$ where A is the cross-sectional area and E is the Young's modulus of the material. The wave equation becomes $\frac{\partial^2 u(x,t)}{\partial t^2} = \frac{EAL}{M}\,\frac{\partial^2 u(x,t)}{\partial x^2}.$ AL is equal to the volume of the bar, and therefore $\frac{AL}{M} = \frac{1}{\rho},$ where ρ is the density of the material. The wave equation reduces to $\frac{\partial^2 u(x,t)}{\partial t^2} = \frac{E}{\rho}\,\frac{\partial^2 u(x,t)}{\partial x^2}.$ The speed of a stress wave in a bar is therefore $\sqrt{E/\rho}$. For the one-dimensional wave equation a relatively simple general solution may be found. Defining new variables[3] $\xi = x - ct$ and $\eta = x + ct$ changes the wave equation into $\frac{\partial^2 u}{\partial\xi\,\partial\eta}(x,t) = 0,$ which leads to the general solution $u(x,t) = F(\xi) + G(\eta) = F(x - ct) + G(x + ct).$ In other words, the solution is the sum of a right-traveling function F and a left-traveling function G. "Traveling" means that the shape of these individual arbitrary functions with respect to x stays constant; however, the functions are translated left and right with time at the speed c. This was derived by Jean le Rond d'Alembert.[4] Another way to arrive at this result is to factor the wave equation using two first-order differential operators: $\left[\frac{\partial}{\partial t} - c\frac{\partial}{\partial x}\right]\left[\frac{\partial}{\partial t} + c\frac{\partial}{\partial x}\right]u = 0.$ Then, for our original equation, we can define $v \equiv \frac{\partial u}{\partial t} + c\frac{\partial u}{\partial x},$ and find that we must have $\frac{\partial v}{\partial t} - c\frac{\partial v}{\partial x} = 0.$ This advection equation can be solved by interpreting it as telling us that the directional derivative of v in the (1, −c) direction is 0. This means that the value of v is constant on characteristic lines of the form x + ct = x_0, and thus that v must depend only on x + ct, that is, have the form H(x + ct). Then, to solve the first (inhomogeneous) equation relating v to u, we can note that its homogeneous solution must be a function of the form F(x − ct), by logic similar to the above. Guessing a particular solution of the form G(x + ct), we find that $\left[\frac{\partial}{\partial t} + c\frac{\partial}{\partial x}\right]G(x + ct) = H(x + ct).$ Expanding out the left side, rearranging terms, then using the change of variables s = x + ct simplifies the equation to $G'(s) = \frac{H(s)}{2c}.$ This means we can find a particular solution G of the desired form by integration.
Thus, we have again shown that u obeys u(x,t) = F(x − ct) + G(x + ct).[5] For an initial-value problem, the arbitrary functions F and G can be determined to satisfy initial conditions $u(x,0) = f(x)$ and $u_t(x,0) = g(x).$ The result is d'Alembert's formula: $u(x,t) = \frac{f(x-ct) + f(x+ct)}{2} + \frac{1}{2c}\int_{x-ct}^{x+ct} g(s)\,ds$ (a direct numerical evaluation appears at the end of this passage). In the classical sense, if f(x) ∈ Ck and g(x) ∈ Ck−1, then u(t,x) ∈ Ck. However, the waveforms F and G may also be generalized functions, such as the delta function. In that case, the solution may be interpreted as an impulse that travels to the right or the left. The basic wave equation is a linear differential equation, and so it will adhere to the superposition principle. This means that the net displacement caused by two or more waves is the sum of the displacements which would have been caused by each wave individually. In addition, the behavior of a wave can be analyzed by breaking up the wave into components, e.g. the Fourier transform breaks up a wave into sinusoidal components. Another way to solve the one-dimensional wave equation is to first analyze its frequency eigenmodes. A so-called eigenmode is a solution that oscillates in time with a well-defined constant angular frequency ω, so that the temporal part of the wave function takes the form e−iωt = cos(ωt) − i sin(ωt), and the amplitude is a function f(x) of the spatial variable x, giving a separation of variables for the wave function: $u_\omega(x,t) = e^{-i\omega t}f(x).$ This produces an ordinary differential equation for the spatial part f(x): $\frac{\partial^2 u_\omega}{\partial t^2} = \frac{\partial^2}{\partial t^2}\left(e^{-i\omega t}f(x)\right) = -\omega^2 e^{-i\omega t}f(x) = c^2\frac{\partial^2}{\partial x^2}\left(e^{-i\omega t}f(x)\right).$ Therefore, $\frac{d^2}{dx^2}f(x) = -\left(\frac{\omega}{c}\right)^2 f(x),$ which is precisely an eigenvalue equation for f(x), hence the name eigenmode. Known as the Helmholtz equation, it has the well-known plane-wave solutions $f(x) = Ae^{\pm ikx},$ with wave number k = ω/c. The total wave function for this eigenmode is then the linear combination $u_\omega(x,t) = e^{-i\omega t}\left(Ae^{-ikx} + Be^{ikx}\right) = Ae^{-i(kx+\omega t)} + Be^{i(kx-\omega t)},$ where the complex numbers A, B depend in general on any initial and boundary conditions of the problem. Eigenmodes are useful in constructing a full solution to the wave equation, because each of them evolves in time trivially with the phase factor $e^{-i\omega t},$ so that a full solution can be decomposed into an eigenmode expansion $u(x,t) = \int_{-\infty}^{\infty} s(\omega)\,u_\omega(x,t)\,d\omega,$ or, in terms of the plane waves, $u(x,t) = \int_{-\infty}^{\infty} s_+(\omega)\,e^{-ik(x+ct)}\,d\omega + \int_{-\infty}^{\infty} s_-(\omega)\,e^{ik(x-ct)}\,d\omega = F(x-ct) + G(x+ct),$ which is exactly the same form as in the algebraic approach. The functions s±(ω) are known as the Fourier components and are determined by initial and boundary conditions.
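Returning to d'Alembert's formula above: it translates directly into code. The sketch below, with an assumed Gaussian initial displacement and zero initial velocity, evaluates the solution at one point and shows the familiar splitting into two half-height pulses.

```python
import numpy as np

c = 2.0

def f(x):                         # initial displacement u(x, 0)
    return np.exp(-x**2)

def g(s):                         # initial velocity u_t(x, 0) (zero here)
    return np.zeros_like(s)

def d_alembert(x, t, n=2001):
    """u(x,t) = [f(x-ct) + f(x+ct)]/2 + (1/2c) * integral of g over [x-ct, x+ct]."""
    s = np.linspace(x - c * t, x + c * t, n)
    integral = np.trapz(g(s), s)
    return 0.5 * (f(x - c * t) + f(x + c * t)) + integral / (2 * c)

# The right-moving half of the pulse arrives at x = 3 at time t = 1.5:
print(d_alembert(3.0, t=1.5))     # ~0.5, i.e. half the initial peak height
```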
Functionss±(ω)are known as theFourier componentsand are determined by initial and boundary conditions. This is a so-calledfrequency-domainmethod, an alternative to directtime-domainpropagation methods, such as theFDTDmethod, for thewave packetu(x,t); it is complete for representing waves in the absence of time dilations. Completeness of the Fourier expansion for representing waves in the presence of time dilations has been challenged bychirpwave solutions allowing for time variation ofω.[6]The chirp wave solutions seem particularly implied by very large but previously inexplicable radar residuals in theflyby anomalyand differ from the sinusoidal solutions in being receivable at any distance only at proportionally shifted frequencies and time dilations, corresponding to past chirp states of the source. The vectorial wave equation (from which the scalar wave equation can be directly derived) can be obtained by applying a force equilibrium to an infinitesimal volume element. If the medium has a modulus of elasticityE{\displaystyle E}that is homogeneous (i.e. independent ofx{\displaystyle \mathbf {x} }) within the volume element, then its stress tensor is given byT=E∇u{\displaystyle \mathbf {T} =E\nabla \mathbf {u} }, for a vectorial elastic deflectionu(x,t){\displaystyle \mathbf {u} (\mathbf {x} ,t)}. The local equilibrium of forces can be written asρ∂2u∂t2−EΔu=0.{\displaystyle \rho {\frac {\partial ^{2}\mathbf {u} }{\partial t^{2}}}-E\Delta \mathbf {u} =\mathbf {0} .} Combining the densityρ{\displaystyle \rho }and the elastic modulusE{\displaystyle E}gives the sound velocityc=E/ρ{\displaystyle c={\sqrt {E/\rho }}}(material law). After inserting this, the well-known governing wave equation for a homogeneous medium follows:[7]∂2u∂t2−c2Δu=0.{\displaystyle {\frac {\partial ^{2}\mathbf {u} }{\partial t^{2}}}-c^{2}\Delta \mathbf {u} ={\boldsymbol {0}}.}(Note: Instead of vectorialu(x,t),{\displaystyle \mathbf {u} (\mathbf {x} ,t),}only scalaru(x,t){\displaystyle u(x,t)}can be used, i.e. waves are travelling only along thex{\displaystyle x}axis, and the scalar wave equation follows as∂2u∂t2−c2∂2u∂x2=0{\displaystyle {\frac {\partial ^{2}u}{\partial t^{2}}}-c^{2}{\frac {\partial ^{2}u}{\partial x^{2}}}=0}.) The above vectorial partial differential equation of the 2nd order delivers two mutually independent solutions. From the quadratic velocity termc2=(+c)2=(−c)2{\displaystyle c^{2}=(+c)^{2}=(-c)^{2}}it can be seen that two waves travelling in opposite directions,+c{\displaystyle +c}and−c{\displaystyle -c}, are possible; hence the designation “two-way wave equation”. It can be shown for plane longitudinal wave propagation that the synthesis of twoone-way wave equationsleads to a general two-way wave equation.
For∇c=0,{\displaystyle \nabla \mathbf {c} =\mathbf {0} ,}a special two-way wave equation with the d'Alembert operator results:[8](∂∂t−c⋅∇)(∂∂t+c⋅∇)u=(∂2∂t2−(c⋅∇)c⋅∇)u=(∂2∂t2−(c⋅∇)2)u=0.{\displaystyle \left({\frac {\partial }{\partial t}}-\mathbf {c} \cdot \nabla \right)\left({\frac {\partial }{\partial t}}+\mathbf {c} \cdot \nabla \right)\mathbf {u} =\left({\frac {\partial ^{2}}{\partial t^{2}}}-(\mathbf {c} \cdot \nabla )\mathbf {c} \cdot \nabla \right)\mathbf {u} =\left({\frac {\partial ^{2}}{\partial t^{2}}}-(\mathbf {c} \cdot \nabla )^{2}\right)\mathbf {u} =\mathbf {0} .}For plane waves travelling along the direction ofc{\displaystyle \mathbf {c} }, for which(c⋅∇)2u=c2Δu, this simplifies to(∂2∂t2−c2Δ)u=0.{\displaystyle \left({\frac {\partial ^{2}}{\partial t^{2}}}-c^{2}\Delta \right)\mathbf {u} =\mathbf {0} .}Therefore, the vectorial 1st-orderone-way wave equationwith waves travelling in a pre-defined propagation directionc{\displaystyle \mathbf {c} }results[9]as∂u∂t−c⋅∇u=0.{\displaystyle {\frac {\partial \mathbf {u} }{\partial t}}-\mathbf {c} \cdot \nabla \mathbf {u} =\mathbf {0} .} A solution of the initial-value problem for the wave equation in three space dimensions can be obtained from the corresponding solution for a spherical wave. The result can then also be used to obtain the same solution in two space dimensions. To obtain a solution with constant frequencies, apply theFourier transformΨ(r,t)=∫−∞∞Ψ(r,ω)e−iωtdω,{\displaystyle \Psi (\mathbf {r} ,t)=\int _{-\infty }^{\infty }\Psi (\mathbf {r} ,\omega )e^{-i\omega t}\,d\omega ,}which transforms the wave equation into anelliptic partial differential equationof the form:(∇2+ω2c2)Ψ(r,ω)=0.{\displaystyle \left(\nabla ^{2}+{\frac {\omega ^{2}}{c^{2}}}\right)\Psi (\mathbf {r} ,\omega )=0.} This is theHelmholtz equationand can be solved usingseparation of variables. Inspherical coordinatesthis leads to a separation of the radial and angular variables, writing the solution as:[10]Ψ(r,ω)=∑l,mflm(r)Ylm(θ,ϕ).{\displaystyle \Psi (\mathbf {r} ,\omega )=\sum _{l,m}f_{lm}(r)Y_{lm}(\theta ,\phi ).}The angular part of the solution takes the form ofspherical harmonicsand the radial function satisfies:[d2dr2+2rddr+k2−l(l+1)r2]fl(r)=0.{\displaystyle \left[{\frac {d^{2}}{dr^{2}}}+{\frac {2}{r}}{\frac {d}{dr}}+k^{2}-{\frac {l(l+1)}{r^{2}}}\right]f_{l}(r)=0.}independent ofm{\displaystyle m}, withk2=ω2/c2{\displaystyle k^{2}=\omega ^{2}/c^{2}}. Substitutingfl(r)=1rul(r),{\displaystyle f_{l}(r)={\frac {1}{\sqrt {r}}}u_{l}(r),}transforms the equation into[d2dr2+1rddr+k2−(l+12)2r2]ul(r)=0,{\displaystyle \left[{\frac {d^{2}}{dr^{2}}}+{\frac {1}{r}}{\frac {d}{dr}}+k^{2}-{\frac {(l+{\frac {1}{2}})^{2}}{r^{2}}}\right]u_{l}(r)=0,}which is theBessel equation. Consider the casel= 0. Then there is no angular dependence and the amplitude depends only on the radial distance, i.e.,Ψ(r,t) →u(r,t). In this case, the wave equation reduces to(∇2−1c2∂2∂t2)Ψ(r,t)=0,{\displaystyle \left(\nabla ^{2}-{\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}\right)\Psi (\mathbf {r} ,t)=0,}or(∂2∂r2+2r∂∂r−1c2∂2∂t2)u(r,t)=0.{\displaystyle \left({\frac {\partial ^{2}}{\partial r^{2}}}+{\frac {2}{r}}{\frac {\partial }{\partial r}}-{\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}\right)u(r,t)=0.} This equation can be rewritten as∂2(ru)∂t2−c2∂2(ru)∂r2=0,{\displaystyle {\frac {\partial ^{2}(ru)}{\partial t^{2}}}-c^{2}{\frac {\partial ^{2}(ru)}{\partial r^{2}}}=0,}where the quantityrusatisfies the one-dimensional wave equation.
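The radial equation above is easy to verify numerically for its solutions regular at the origin, the spherical Bessel functions jl(kr). A short sketch in Python (SciPy assumed; the values of k, l and the grid are illustrative choices):

    import numpy as np
    from scipy.special import spherical_jn

    # Verify that f_l(r) = j_l(k r) satisfies
    # f'' + (2/r) f' + (k^2 - l(l+1)/r^2) f = 0, up to discretization error.
    k, l = 2.0, 1
    r = np.linspace(0.5, 10.0, 20001)
    f = spherical_jn(l, k * r)
    fp = np.gradient(f, r)                      # first derivative
    fpp = np.gradient(fp, r)                    # second derivative
    residual = fpp + (2 / r) * fp + (k ** 2 - l * (l + 1) / r ** 2) * f
    print(np.max(np.abs(residual[100:-100])))   # ~0; shrinks as the grid is refined

For l = 0, j0(kr) = sin(kr)/(kr), consistent with the observation that ru satisfies the one-dimensional wave equation.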
Therefore, there are solutions in the formu(r,t)=1rF(r−ct)+1rG(r+ct),{\displaystyle u(r,t)={\frac {1}{r}}F(r-ct)+{\frac {1}{r}}G(r+ct),}whereFandGare general solutions to the one-dimensional wave equation and can be interpreted as an outgoing and an incoming spherical wave, respectively. The outgoing wave can be generated by apoint source, and it makes possible sharp signals whose form is altered only by a decrease in amplitude asrincreases. Such waves exist only in spaces with an odd number of dimensions. For physical examples of solutions to the 3D wave equation that possess angular dependence, seedipole radiation. Although the word "monochromatic" is not exactly accurate, since it refers to light orelectromagnetic radiationwith well-defined frequency, the spirit is to discover the eigenmode of the wave equation in three dimensions. Following the derivation in the previous section onplane-wave eigenmodes, if we again restrict our solutions to spherical waves that oscillate in time with well-definedconstantangular frequencyω, then the transformed functionru(r,t)has simply plane-wave solutions:ru(r,t)=Aei(ωt±kr),{\displaystyle ru(r,t)=Ae^{i(\omega t\pm kr)},}oru(r,t)=Arei(ωt±kr).{\displaystyle u(r,t)={\frac {A}{r}}e^{i(\omega t\pm kr)}.} From this we can observe that the peak intensity of the spherical-wave oscillation, characterized as the squared wave amplitudeI=|u(r,t)|2=|A|2r2,{\displaystyle I=|u(r,t)|^{2}={\frac {|A|^{2}}{r^{2}}},}drops at the rate proportional to1/r2, an example of theinverse-square law. The wave equation is linear inuand is left unaltered by translations in space and time. Therefore, we can generate a great variety of solutions by translating and summing spherical waves. Letφ(ξ,η,ζ)be an arbitrary function of three independent variables, and let the spherical wave formFbe adelta function. Let a family of spherical waves have center at(ξ,η,ζ), and letrbe the radial distance from that point. Thus r2=(x−ξ)2+(y−η)2+(z−ζ)2.{\displaystyle r^{2}=(x-\xi )^{2}+(y-\eta )^{2}+(z-\zeta )^{2}.} Ifuis a superposition of such waves with weighting functionφ, thenu(t,x,y,z)=14πc∭φ(ξ,η,ζ)δ(r−ct)rdξdηdζ;{\displaystyle u(t,x,y,z)={\frac {1}{4\pi c}}\iiint \varphi (\xi ,\eta ,\zeta ){\frac {\delta (r-ct)}{r}}\,d\xi \,d\eta \,d\zeta ;}the denominator4πcis a convenience. From the definition of the delta function,umay also be written asu(t,x,y,z)=t4π∬Sφ(x+ctα,y+ctβ,z+ctγ)dω,{\displaystyle u(t,x,y,z)={\frac {t}{4\pi }}\iint _{S}\varphi (x+ct\alpha ,y+ct\beta ,z+ct\gamma )\,d\omega ,}whereα,β, andγare coordinates on the unit sphereS, andωis the area element onS. This result has the interpretation thatu(t,x)isttimes the mean value ofφon a sphere of radiusctcentered atx:u(t,x,y,z)=tMct[φ].{\displaystyle u(t,x,y,z)=tM_{ct}[\varphi ].} It follows thatu(0,x,y,z)=0,ut(0,x,y,z)=φ(x,y,z).{\displaystyle u(0,x,y,z)=0,\quad u_{t}(0,x,y,z)=\varphi (x,y,z).} The mean value is an even function oft, and hence ifv(t,x,y,z)=∂∂t(tMct[φ]),{\displaystyle v(t,x,y,z)={\frac {\partial }{\partial t}}{\big (}tM_{ct}[\varphi ]{\big )},}thenv(0,x,y,z)=φ(x,y,z),vt(0,x,y,z)=0.{\displaystyle v(0,x,y,z)=\varphi (x,y,z),\quad v_{t}(0,x,y,z)=0.} These formulas provide the solution for the initial-value problem for the wave equation. They show that the solution at a given pointP, given(t,x,y,z)depends only on the data on the sphere of radiusctthat is intersected by thelight conedrawn backwards fromP.
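The mean-value representation invites a direct Monte Carlo evaluation. In the Python sketch below (NumPy assumed; the bump width, observation point, and sample count are illustrative choices, with c = 1), the initial displacement is zero and the initial velocity φ is a narrow Gaussian at the origin; u is evaluated at a point at distance 5:

    import numpy as np

    rng = np.random.default_rng(0)

    def u(t, x, phi, samples=200_000):
        """t times the mean value of phi over the sphere of radius t about x (c = 1)."""
        v = rng.normal(size=(samples, 3))
        v /= np.linalg.norm(v, axis=1, keepdims=True)       # uniform directions
        return t * phi(x + t * v).mean()

    phi = lambda p: np.exp(-np.sum(p ** 2, axis=-1) / 0.1)  # narrow bump at the origin
    x = np.array([5.0, 0.0, 0.0])

    print(u(2.0, x, phi))   # sphere of radius 2 misses the bump: ~0
    print(u(5.0, x, phi))   # sphere passes through the bump: clearly nonzero
    print(u(8.0, x, phi))   # bump now lies strictly inside the sphere: ~0 again

The third evaluation previews the point made next: data strictly inside the sphere of radius ct do not contribute, so the disturbance passes and the solution returns to zero.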
It doesnotdepend upon data on the interior of this sphere. Thus the interior of the sphere is alacunafor the solution. This phenomenon is calledHuygens' principle. It is true only for odd numbers of space dimensions, where for one dimension the integration is performed over the boundary of an interval with respect to the Dirac measure.[11][12] In two space dimensions, the wave equation is utt=c2(uxx+uyy).{\displaystyle u_{tt}=c^{2}\left(u_{xx}+u_{yy}\right).} We can use the three-dimensional theory to solve this problem if we regarduas a function in three dimensions that is independent of the third dimension. If u(0,x,y)=0,ut(0,x,y)=ϕ(x,y),{\displaystyle u(0,x,y)=0,\quad u_{t}(0,x,y)=\phi (x,y),} then the three-dimensional solution formula becomes u(t,x,y)=tMct[ϕ]=t4π∬Sϕ(x+ctα,y+ctβ)dω,{\displaystyle u(t,x,y)=tM_{ct}[\phi ]={\frac {t}{4\pi }}\iint _{S}\phi (x+ct\alpha ,\,y+ct\beta )\,d\omega ,} whereαandβare the first two coordinates on the unit sphere, anddωis the area element on the sphere. This integral may be rewritten as a double integral over the discDwith center(x,y)and radiusct: u(t,x,y)=12πc∬Dϕ(x+ξ,y+η)(ct)2−ξ2−η2dξdη.{\displaystyle u(t,x,y)={\frac {1}{2\pi c}}\iint _{D}{\frac {\phi (x+\xi ,y+\eta )}{\sqrt {(ct)^{2}-\xi ^{2}-\eta ^{2}}}}d\xi \,d\eta .} It is apparent that the solution at(t,x,y)depends not only on the data on the light cone where(x−ξ)2+(y−η)2=c2t2,{\displaystyle (x-\xi )^{2}+(y-\eta )^{2}=c^{2}t^{2},}but also on data that are interior to that cone. We want to find solutions toutt− Δu= 0foru:Rn× (0, ∞) →Rwithu(x, 0) =g(x)andut(x, 0) =h(x).[13] Assumen≥ 3is an odd integer, andg∈Cm+1(Rn),h∈Cm(Rn)form= (n+ 1)/2. Letγn= 1 × 3 × 5 × ⋯ × (n− 2)and let u(x,t)=1γn[∂t(1t∂t)n−32(tn−21|∂Bt(x)|∫∂Bt(x)gdS)+(1t∂t)n−32(tn−21|∂Bt(x)|∫∂Bt(x)hdS)]{\displaystyle u(x,t)={\frac {1}{\gamma _{n}}}\left[\partial _{t}\left({\frac {1}{t}}\partial _{t}\right)^{\frac {n-3}{2}}\left(t^{n-2}{\frac {1}{|\partial B_{t}(x)|}}\int _{\partial B_{t}(x)}g\,dS\right)+\left({\frac {1}{t}}\partial _{t}\right)^{\frac {n-3}{2}}\left(t^{n-2}{\frac {1}{|\partial B_{t}(x)|}}\int _{\partial B_{t}(x)}h\,dS\right)\right]} Thenusolves the wave equation and attains the initial data:u→g,ut→hast→ 0+. Assumen≥ 2is an even integer andg∈Cm+1(Rn),h∈Cm(Rn), form= (n+ 2)/2. Letγn= 2 × 4 × ⋯ ×nand let u(x,t)=1γn[∂t(1t∂t)n−22(tn1|Bt(x)|∫Bt(x)g(t2−|y−x|2)12dy)+(1t∂t)n−22(tn1|Bt(x)|∫Bt(x)h(t2−|y−x|2)12dy)]{\displaystyle u(x,t)={\frac {1}{\gamma _{n}}}\left[\partial _{t}\left({\frac {1}{t}}\partial _{t}\right)^{\frac {n-2}{2}}\left(t^{n}{\frac {1}{|B_{t}(x)|}}\int _{B_{t}(x)}{\frac {g}{(t^{2}-|y-x|^{2})^{\frac {1}{2}}}}dy\right)+\left({\frac {1}{t}}\partial _{t}\right)^{\frac {n-2}{2}}\left(t^{n}{\frac {1}{|B_{t}(x)|}}\int _{B_{t}(x)}{\frac {h}{(t^{2}-|y-x|^{2})^{\frac {1}{2}}}}dy\right)\right]} thenuagain solves the wave equation and attains the initial datagandh. Consider the inhomogeneous wave equation in1+D{\displaystyle 1+D}dimensions(∂tt−c2∇2)u=s(t,x){\displaystyle (\partial _{tt}-c^{2}\nabla ^{2})u=s(t,x)}By rescaling time, we can set wave speedc=1{\displaystyle c=1}. Since the wave equation(∂tt−∇2)u=s(t,x){\displaystyle (\partial _{tt}-\nabla ^{2})u=s(t,x)}has order 2 in time, there are twoimpulse responses: an acceleration impulse and a velocity impulse. The effect of inflicting an acceleration impulse is to suddenly change the wave velocity∂tu{\displaystyle \partial _{t}u}. The effect of inflicting a velocity impulse is to suddenly change the wave displacementu{\displaystyle u}. For acceleration impulse,s(t,x)=δD+1(t,x){\displaystyle s(t,x)=\delta ^{D+1}(t,x)}whereδ{\displaystyle \delta }is theDirac delta function.
The solution to this case is called theGreen's functionG{\displaystyle G}for the wave equation. For velocity impulse,s(t,x)=∂tδD+1(t,x){\displaystyle s(t,x)=\partial _{t}\delta ^{D+1}(t,x)}, so if we solve the Green functionG{\displaystyle G}, the solution for this case is just∂tG{\displaystyle \partial _{t}G}. The main use of Green's functions is to solveinitial value problemsbyDuhamel's principle, both for the homogeneous and the inhomogeneous case. Given the Green functionG{\displaystyle G}, and initial conditionsu(0,x),∂tu(0,x){\displaystyle u(0,x),\partial _{t}u(0,x)}, the solution to the homogeneous wave equation is[14]u=(∂tG)∗u+G∗∂tu{\displaystyle u=(\partial _{t}G)\ast u+G\ast \partial _{t}u}where the asterisk isconvolutionin space. More explicitly,u(t,x)=∫(∂tG)(t,x−x′)u(0,x′)dx′+∫G(t,x−x′)(∂tu)(0,x′)dx′.{\displaystyle u(t,x)=\int (\partial _{t}G)(t,x-x')u(0,x')dx'+\int G(t,x-x')(\partial _{t}u)(0,x')dx'.}For the inhomogeneous case, the solution has one additional term by convolution over spacetime:∬t′<tG(t−t′,x−x′)s(t′,x′)dt′dx′.{\displaystyle \iint _{t'<t}G(t-t',x-x')s(t',x')dt'dx'.} By aFourier transform,G^(ω)=1−ω02+ω12+⋯+ωD2,G(t,x)=1(2π)D+1∫G^(ω)e+iω0t+iω→⋅x→dω0dω→.{\displaystyle {\hat {G}}(\omega )={\frac {1}{-\omega _{0}^{2}+\omega _{1}^{2}+\cdots +\omega _{D}^{2}}},\quad G(t,x)={\frac {1}{(2\pi )^{D+1}}}\int {\hat {G}}(\omega )e^{+i\omega _{0}t+i{\vec {\omega }}\cdot {\vec {x}}}d\omega _{0}d{\vec {\omega }}.}Theω0{\displaystyle \omega _{0}}term can be integrated by theresidue theorem. It would require us to perturb the integral slightly either by+iϵ{\displaystyle +i\epsilon }or by−iϵ{\displaystyle -i\epsilon }, because it is an improper integral. One perturbation gives the forward solution, and the other the backward solution.[15]The forward solution givesG(t,x)=1(2π)D∫sin⁡(‖ω→‖t)‖ω→‖eiω→⋅x→dω→,∂tG(t,x)=1(2π)D∫cos⁡(‖ω→‖t)eiω→⋅x→dω→.{\displaystyle G(t,x)={\frac {1}{(2\pi )^{D}}}\int {\frac {\sin(\|{\vec {\omega }}\|t)}{\|{\vec {\omega }}\|}}e^{i{\vec {\omega }}\cdot {\vec {x}}}d{\vec {\omega }},\quad \partial _{t}G(t,x)={\frac {1}{(2\pi )^{D}}}\int \cos(\|{\vec {\omega }}\|t)e^{i{\vec {\omega }}\cdot {\vec {x}}}d{\vec {\omega }}.}The integral can be solved by analytically continuing thePoisson kernel, giving[14][16]G(t,x)=limϵ→0+CDD−1Im⁡[‖x‖2−(t−iϵ)2]−(D−1)/2{\displaystyle G(t,x)=\lim _{\epsilon \rightarrow 0^{+}}{\frac {C_{D}}{D-1}}\operatorname {Im} \left[\|x\|^{2}-(t-i\epsilon )^{2}\right]^{-(D-1)/2}}whereCD=π−(D+1)/2Γ((D+1)/2){\displaystyle C_{D}=\pi ^{-(D+1)/2}\Gamma ((D+1)/2)}is half the surface area of a(D+1){\displaystyle (D+1)}-dimensionalhypersphere.[16] We can relate the Green's function inD{\displaystyle D}dimensions to the Green's function inD+n{\displaystyle D+n}dimensions.[17] Given a functions(t,x){\displaystyle s(t,x)}and a solutionu(t,x){\displaystyle u(t,x)}of a differential equation in(1+D){\displaystyle (1+D)}dimensions, we can trivially extend it to(1+D+n){\displaystyle (1+D+n)}dimensions by setting the additionaln{\displaystyle n}dimensions to be constant:s(t,x1:D,xD+1:D+n)=s(t,x1:D),u(t,x1:D,xD+1:D+n)=u(t,x1:D).{\displaystyle s(t,x_{1:D},x_{D+1:D+n})=s(t,x_{1:D}),\quad u(t,x_{1:D},x_{D+1:D+n})=u(t,x_{1:D}).}Since the Green's function is constructed froms{\displaystyle s}andu{\displaystyle u}, the Green's function in(1+D+n){\displaystyle (1+D+n)}dimensions integrates to the Green's function in(1+D){\displaystyle (1+D)}dimensions:GD(t,x1:D)=∫RnGD+n(t,x1:D,xD+1:D+n)dnxD+1:D+n.{\displaystyle G_{D}(t,x_{1:D})=\int _{\mathbb {R} ^{n}}G_{D+n}(t,x_{1:D},x_{D+1:D+n})d^{n}x_{D+1:D+n}.} The Green's function inD{\displaystyle D}dimensions can be related to the Green's function inD+2{\displaystyle D+2}dimensions. By spherical symmetry,GD(t,r)=∫R2GD+2(t,r2+y2+z2)dydz.{\displaystyle G_{D}(t,r)=\int _{\mathbb {R} ^{2}}G_{D+2}(t,{\sqrt {r^{2}+y^{2}+z^{2}}})dydz.}Integrating in polar coordinates,GD(t,r)=2π∫0∞GD+2(t,r2+q2)qdq=2π∫r∞GD+2(t,q′)q′dq′,{\displaystyle G_{D}(t,r)=2\pi \int _{0}^{\infty }G_{D+2}(t,{\sqrt {r^{2}+q^{2}}})qdq=2\pi \int _{r}^{\infty }G_{D+2}(t,q')q'dq',}where in the last equality we made the change of variablesq′=r2+q2{\displaystyle q'={\sqrt {r^{2}+q^{2}}}}. Thus, we obtain the recurrence relationGD+2(t,r)=−12πr∂rGD(t,r).{\displaystyle G_{D+2}(t,r)=-{\frac {1}{2\pi r}}\partial _{r}G_{D}(t,r).} WhenD=1{\displaystyle D=1}, the integrand in the Fourier transform is thesinc functionG1(t,x)=12π∫Rsin⁡(|ω|t)|ω|eiωxdω=12π∫sinc⁡(ω)eiωxtdω=sgn⁡(t−x)+sgn⁡(t+x)4={12θ(t−|x|)t>0−12θ(−t−|x|)t<0{\displaystyle {\begin{aligned}G_{1}(t,x)&={\frac {1}{2\pi }}\int _{\mathbb {R} }{\frac {\sin(|\omega |t)}{|\omega |}}e^{i\omega x}d\omega \\&={\frac {1}{2\pi }}\int \operatorname {sinc} (\omega )e^{i\omega {\frac {x}{t}}}d\omega \\&={\frac {\operatorname {sgn}(t-x)+\operatorname {sgn}(t+x)}{4}}\\&={\begin{cases}{\frac {1}{2}}\theta (t-|x|)\quad t>0\\-{\frac {1}{2}}\theta (-t-|x|)\quad t<0\end{cases}}\end{aligned}}}wheresgn{\displaystyle \operatorname {sgn} }is thesign functionandθ{\displaystyle \theta }is theunit step function. One solution is the forward solution, the other is the backward solution. The dimension can be raised to give theD=3{\displaystyle D=3}caseG3(t,r)=δ(t−r)4πr{\displaystyle G_{3}(t,r)={\frac {\delta (t-r)}{4\pi r}}}and similarly for the backward solution. This can be integrated down by one dimension to give theD=2{\displaystyle D=2}caseG2(t,r)=∫Rδ(t−r2+z2)4πr2+z2dz=θ(t−r)2πt2−r2{\displaystyle G_{2}(t,r)=\int _{\mathbb {R} }{\frac {\delta (t-{\sqrt {r^{2}+z^{2}}})}{4\pi {\sqrt {r^{2}+z^{2}}}}}dz={\frac {\theta (t-r)}{2\pi {\sqrt {t^{2}-r^{2}}}}}} In theD=1{\displaystyle D=1}case, the Green's function solution is the sum of two wavefrontssgn⁡(t−x)4+sgn⁡(t+x)4{\displaystyle {\frac {\operatorname {sgn}(t-x)}{4}}+{\frac {\operatorname {sgn}(t+x)}{4}}}moving in opposite directions. In odd dimensions, the forward solution is nonzero only att=r{\displaystyle t=r}. As the dimensions increase, the shape of the wavefront becomes increasingly complex, involving higher derivatives of the Dirac delta function. For example,[17]G1=12cθ(τ)G3=14πc2δ(τ)rG5=18π2c2(δ(τ)r3+δ′(τ)cr2)G7=116π3c2(3δ(τ)r4+3δ′(τ)cr3+δ′′(τ)c2r2){\displaystyle {\begin{aligned}&G_{1}={\frac {1}{2c}}\theta (\tau )\\&G_{3}={\frac {1}{4\pi c^{2}}}{\frac {\delta (\tau )}{r}}\\&G_{5}={\frac {1}{8\pi ^{2}c^{2}}}\left({\frac {\delta (\tau )}{r^{3}}}+{\frac {\delta ^{\prime }(\tau )}{cr^{2}}}\right)\\&G_{7}={\frac {1}{16\pi ^{3}c^{2}}}\left(3{\frac {\delta (\tau )}{r^{4}}}+3{\frac {\delta ^{\prime }(\tau )}{cr^{3}}}+{\frac {\delta ^{\prime \prime }(\tau )}{c^{2}r^{2}}}\right)\end{aligned}}}whereτ=t−r{\displaystyle \tau =t-r}, and the wave speedc{\displaystyle c}is restored. In even dimensions, the forward solution is nonzero forr≤t{\displaystyle r\leq t}: the entire region behind the wavefront is nonzero, forming what is called awake.
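Before turning to the wake's explicit form, the D = 1 forward Green's function G(t, x) = θ(t − |x|)/2 (with c = 1) can be checked on a grid against d'Alembert's formula. A minimal discrete sketch in Python (NumPy assumed; the grid and pulse are illustrative, and the delta function in ∂tG is smeared over one grid cell):

    import numpy as np

    n, L, t = 2001, 40.0, 5.0
    x = np.linspace(-L / 2, L / 2, n)
    dx = x[1] - x[0]

    u0 = np.exp(-x ** 2)       # u(0, x)
    v0 = np.zeros_like(x)      # u_t(0, x)

    G = 0.5 * (np.abs(x) <= t)                             # G(t, x) on the grid
    dGdt = 0.5 * (np.abs(np.abs(x) - t) <= dx / 2) / dx    # 0.5*delta(t - |x|), smeared

    u = (np.convolve(dGdt, u0, mode='same') + np.convolve(G, v0, mode='same')) * dx
    # u now shows two half-height pulses centred at x = -5 and x = +5,
    # i.e. (u0(x - t) + u0(x + t)) / 2, matching d'Alembert's formula.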
The wake has the equation:[17]GD(t,x)=(−1)1+D/21(2π)D/21cDθ(t−r/c)(t2−r2/c2)(D−1)/2{\displaystyle G_{D}(t,x)=(-1)^{1+D/2}{\frac {1}{(2\pi )^{D/2}}}{\frac {1}{c^{D}}}{\frac {\theta (t-r/c)}{\left(t^{2}-r^{2}/c^{2}\right)^{(D-1)/2}}}}The wavefront itself also involves increasingly higher derivatives of the Dirac delta function. This means that a generalHuygens' principle– the wave displacement at a point(t,x){\displaystyle (t,x)}in spacetime depends only on the state at points oncharacteristic rayspassing through(t,x){\displaystyle (t,x)}– only holds in odd dimensions. A physical interpretation is that signals transmitted by waves remain undistorted in odd dimensions, but distorted in even dimensions.[18]: 698 Hadamard's conjecturestates that this generalized Huygens' principle still holds in all odd dimensions even when the coefficients in the wave equation are no longer constant. It is not strictly correct, but it is correct for certain families of coefficients.[18]: 765 For an incident wave traveling from one medium (where the wave speed isc1) to another medium (where the wave speed isc2), one part of the wave is transmitted into the second medium, while another part is reflected back in the other direction and stays in the first medium. The amplitude of the transmitted wave and the reflected wave can be calculated by using the continuity condition at the boundary. Consider the component of the incident wave with anangular frequencyofω, which has the waveformuinc(x,t)=Aei(k1x−ωt),A∈C.{\displaystyle u^{\text{inc}}(x,t)=Ae^{i(k_{1}x-\omega t)},\quad A\in \mathbb {C} .}Att= 0, the incident wave reaches the boundary between the two media atx= 0. Therefore, the corresponding reflected wave and the transmitted wave will have the waveformsurefl(x,t)=Bei(−k1x−ωt),utrans(x,t)=Cei(k2x−ωt),B,C∈C.{\displaystyle u^{\text{refl}}(x,t)=Be^{i(-k_{1}x-\omega t)},\quad u^{\text{trans}}(x,t)=Ce^{i(k_{2}x-\omega t)},\quad B,C\in \mathbb {C} .}The continuity condition at the boundary isuinc(0,t)+urefl(0,t)=utrans(0,t),uxinc(0,t)+uxrefl(0,t)=uxtrans(0,t).{\displaystyle u^{\text{inc}}(0,t)+u^{\text{refl}}(0,t)=u^{\text{trans}}(0,t),\quad u_{x}^{\text{inc}}(0,t)+u_{x}^{\text{refl}}(0,t)=u_{x}^{\text{trans}}(0,t).}This gives the equationsA+B=C,A−B=k2k1C=c1c2C,{\displaystyle A+B=C,\quad A-B={\frac {k_{2}}{k_{1}}}C={\frac {c_{1}}{c_{2}}}C,}and we have the reflectivity and transmissivityBA=c2−c1c2+c1,CA=2c2c2+c1.{\displaystyle {\frac {B}{A}}={\frac {c_{2}-c_{1}}{c_{2}+c_{1}}},\quad {\frac {C}{A}}={\frac {2c_{2}}{c_{2}+c_{1}}}.}Whenc2<c1, the reflected wave has areflection phase changeof 180°, sinceB/A< 0. Energy conservation can be verified byB2c1+C2c2=A2c1.{\displaystyle {\frac {B^{2}}{c_{1}}}+{\frac {C^{2}}{c_{2}}}={\frac {A^{2}}{c_{1}}}.}The above discussion holds true for any component, regardless of its angular frequencyω. The limiting case ofc2= 0corresponds to a "fixed end" that does not move, whereas the limiting case ofc2→ ∞corresponds to a "free end". A flexible string that is stretched between two pointsx= 0andx=Lsatisfies the wave equation fort> 0and0 <x<L. On the boundary points,umay satisfy a variety of boundary conditions. A general form that is appropriate for applications is −ux(t,0)+au(t,0)=0,ux(t,L)+bu(t,L)=0,{\displaystyle {\begin{aligned}-u_{x}(t,0)+au(t,0)&=0,\\u_{x}(t,L)+bu(t,L)&=0,\end{aligned}}} whereaandbare non-negative. The case whereuis required to vanish at an endpoint (i.e. "fixed end") is the limit of this condition when the respectiveaorbapproaches infinity.
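Before solving the string problem just posed, the interface coefficients derived above can be checked numerically. A minimal sketch in Python (the wave speeds are arbitrary illustrative values):

    # Reflected and transmitted amplitudes at an interface, together with
    # the energy balance B^2/c1 + C^2/c2 = A^2/c1.
    def interface(c1, c2, A=1.0):
        B = A * (c2 - c1) / (c2 + c1)    # reflected amplitude
        C = A * 2 * c2 / (c2 + c1)       # transmitted amplitude
        return B, C

    B, C = interface(c1=2.0, c2=1.0)
    print(B, C)                          # -1/3 and 2/3
    print(B ** 2 / 2.0 + C ** 2 / 1.0)   # 0.5 = A^2/c1: energy is conserved

Since c2 < c1 here, B is negative: the reflected wave undergoes the 180° phase change noted above.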
The method ofseparation of variablesconsists in looking for solutions of this problem in the special formu(t,x)=T(t)v(x).{\displaystyle u(t,x)=T(t)v(x).} A consequence is thatT″c2T=v″v=−λ.{\displaystyle {\frac {T''}{c^{2}T}}={\frac {v''}{v}}=-\lambda .} Theeigenvalueλmust be determined so that there is a non-trivial solution of the boundary-value problemv″+λv=0,−v′(0)+av(0)=0,v′(L)+bv(L)=0.{\displaystyle {\begin{aligned}v''+\lambda v=0,&\\-v'(0)+av(0)&=0,\\v'(L)+bv(L)&=0.\end{aligned}}} This is a special case of the general problem ofSturm–Liouville theory. Ifaandbare positive, the eigenvalues are all positive, and the solutions are trigonometric functions. A solution that satisfies square-integrable initial conditions foruandutcan be obtained from expansion of these functions in the appropriate trigonometric series. The one-dimensional initial-boundary value theory may be extended to an arbitrary number of space dimensions. Consider a domainDinm-dimensionalxspace, with boundaryB. Then the wave equation is to be satisfied ifxis inD, andt> 0. On the boundary ofD, the solutionushall satisfy ∂u∂n+au=0,{\displaystyle {\frac {\partial u}{\partial n}}+au=0,} wherenis the unit outward normal toB, andais a non-negative function defined onB. The case whereuvanishes onBis a limiting case foraapproaching infinity. The initial conditions are u(0,x)=f(x),ut(0,x)=g(x),{\displaystyle u(0,x)=f(x),\quad u_{t}(0,x)=g(x),} wherefandgare defined inD. This problem may be solved by expandingfandgin the eigenfunctions of the Laplacian inD, which satisfy the boundary conditions. Thus the eigenfunctionvsatisfies ∇⋅∇v+λv=0{\displaystyle \nabla \cdot \nabla v+\lambda v=0} inD, and ∂v∂n+av=0{\displaystyle {\frac {\partial v}{\partial n}}+av=0} onB. In the case of two space dimensions, the eigenfunctions may be interpreted as the modes of vibration of a drumhead stretched over the boundaryB. IfBis a circle, then these eigenfunctions have an angular component that is a trigonometric function of the polar angleθ, multiplied by aBessel function(of integer order) of the radial component. Further details are inHelmholtz equation. If the boundary is a sphere in three space dimensions, the angular components of the eigenfunctions arespherical harmonics, and the radial components areBessel functionsof half-integer order. The inhomogeneous wave equation in one dimension isutt(x,t)−c2uxx(x,t)=s(x,t){\displaystyle u_{tt}(x,t)-c^{2}u_{xx}(x,t)=s(x,t)}with initial conditionsu(x,0)=f(x),{\displaystyle u(x,0)=f(x),}ut(x,0)=g(x).{\displaystyle u_{t}(x,0)=g(x).} The functions(x,t)is often called the source function because in practice it describes the effects of the sources of waves on the medium carrying them. Physical examples of source functions include the force driving a wave on a string, or the charge or current density in theLorenz gaugeofelectromagnetism. One method to solve the initial-value problem (with the initial values as posed above) is to take advantage of a special property of the wave equation in an odd number of space dimensions, namely that its solutions respect causality. That is, for any point(xi,ti), the value ofu(xi,ti)depends only on the values off(xi+cti)andf(xi−cti)and the values of the functiong(x)between(xi−cti)and(xi+cti). This can be seen ind'Alembert's formula, stated above, where these quantities are the only ones that show up in it. 
Physically, if the maximum propagation speed isc, then no part of the wave that cannot reach a given point by a given time can affect the amplitude at that point and time. In terms of finding a solution, this causality property means that, for any given point on the line, the only region that needs to be considered is the region encompassing all the points that could causally affect that point. Denote the area that causally affects point(xi,ti)asRC. Suppose we integrate the inhomogeneous wave equation over this region:∬RC(c2uxx(x,t)−utt(x,t))dxdt=∬RCs(x,t)dxdt.{\displaystyle \iint _{R_{C}}{\big (}c^{2}u_{xx}(x,t)-u_{tt}(x,t){\big )}\,dx\,dt=\iint _{R_{C}}s(x,t)\,dx\,dt.} The left side can be simplified greatly usingGreen's theorem, giving the following:∫L0+L1+L2(−c2ux(x,t)dt−ut(x,t)dx)=∬RCs(x,t)dxdt.{\displaystyle \int _{L_{0}+L_{1}+L_{2}}{\big (}{-}c^{2}u_{x}(x,t)\,dt-u_{t}(x,t)\,dx{\big )}=\iint _{R_{C}}s(x,t)\,dx\,dt.} The left side is now the sum of three line integrals along the bounds of the causality region. These turn out to be fairly easy to compute:∫xi−ctixi+cti−ut(x,0)dx=−∫xi−ctixi+ctig(x)dx.{\displaystyle \int _{x_{i}-ct_{i}}^{x_{i}+ct_{i}}-u_{t}(x,0)\,dx=-\int _{x_{i}-ct_{i}}^{x_{i}+ct_{i}}g(x)\,dx.} In the above, the term to be integrated with respect to time disappears because the time interval involved is zero, thusdt= 0. For the other two sides of the region, it is worth noting thatx±ctis a constant, namelyxi±cti, where the sign is chosen appropriately. Using this, we can get the relationdx±cdt= 0, again choosing the right sign:∫L1(−c2ux(x,t)dt−ut(x,t)dx)=∫L1(cux(x,t)dx+cut(x,t)dt)=c∫L1du(x,t)=cu(xi,ti)−cf(xi+cti).{\displaystyle {\begin{aligned}\int _{L_{1}}{\big (}{-}c^{2}u_{x}(x,t)\,dt-u_{t}(x,t)\,dx{\big )}&=\int _{L_{1}}{\big (}cu_{x}(x,t)\,dx+cu_{t}(x,t)\,dt{\big )}\\&=c\int _{L_{1}}\,du(x,t)\\&=cu(x_{i},t_{i})-cf(x_{i}+ct_{i}).\end{aligned}}} And similarly for the final boundary segment:∫L2(−c2ux(x,t)dt−ut(x,t)dx)=−∫L2(cux(x,t)dx+cut(x,t)dt)=−c∫L2du(x,t)=cu(xi,ti)−cf(xi−cti).{\displaystyle {\begin{aligned}\int _{L_{2}}{\big (}{-}c^{2}u_{x}(x,t)\,dt-u_{t}(x,t)\,dx{\big )}&=-\int _{L_{2}}{\big (}cu_{x}(x,t)\,dx+cu_{t}(x,t)\,dt{\big )}\\&=-c\int _{L_{2}}\,du(x,t)\\&=cu(x_{i},t_{i})-cf(x_{i}-ct_{i}).\end{aligned}}} Adding the three results together and putting them back in the original integral gives∬RCs(x,t)dxdt=−∫xi−ctixi+ctig(x)dx+cu(xi,ti)−cf(xi+cti)+cu(xi,ti)−cf(xi−cti)=2cu(xi,ti)−cf(xi+cti)−cf(xi−cti)−∫xi−ctixi+ctig(x)dx.{\displaystyle {\begin{aligned}\iint _{R_{C}}s(x,t)\,dx\,dt&=-\int _{x_{i}-ct_{i}}^{x_{i}+ct_{i}}g(x)\,dx+cu(x_{i},t_{i})-cf(x_{i}+ct_{i})+cu(x_{i},t_{i})-cf(x_{i}-ct_{i})\\&=2cu(x_{i},t_{i})-cf(x_{i}+ct_{i})-cf(x_{i}-ct_{i})-\int _{x_{i}-ct_{i}}^{x_{i}+ct_{i}}g(x)\,dx.\end{aligned}}} Solving foru(xi,ti), we arrive atu(xi,ti)=f(xi+cti)+f(xi−cti)2+12c∫xi−ctixi+ctig(x)dx+12c∫0ti∫xi−c(ti−t)xi+c(ti−t)s(x,t)dxdt.{\displaystyle u(x_{i},t_{i})={\frac {f(x_{i}+ct_{i})+f(x_{i}-ct_{i})}{2}}+{\frac {1}{2c}}\int _{x_{i}-ct_{i}}^{x_{i}+ct_{i}}g(x)\,dx+{\frac {1}{2c}}\int _{0}^{t_{i}}\int _{x_{i}-c(t_{i}-t)}^{x_{i}+c(t_{i}-t)}s(x,t)\,dx\,dt.} In the last equation of the sequence, the bounds of the integral over the source function have been made explicit. Looking at this solution, which is valid for all choices(xi,ti)compatible with the wave equation, it is clear that the first two terms are simply d'Alembert's formula, stated above as the solution of the homogeneous wave equation in one dimension.
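The formula just derived, including its third, source term, lends itself to direct numerical quadrature. A sketch in Python (SciPy assumed; c = 1 and the data and source are illustrative choices):

    import numpy as np
    from scipy.integrate import quad, dblquad

    c = 1.0
    f = lambda x: 0.0                      # initial displacement u(x, 0)
    g = lambda x: 0.0                      # initial velocity u_t(x, 0)
    s = lambda x, t: np.exp(-x ** 2 - t)   # example source term

    def u(xi, ti):
        homogeneous = 0.5 * (f(xi + c * ti) + f(xi - c * ti)) \
            + quad(g, xi - c * ti, xi + c * ti)[0] / (2 * c)
        # source integrated over the backward causal triangle R_C
        source = dblquad(lambda x, t: s(x, t),          # inner variable first
                         0.0, ti,                       # t runs from 0 to t_i
                         lambda t: xi - c * (ti - t),   # lower x bound
                         lambda t: xi + c * (ti - t))[0] / (2 * c)
        return homogeneous + source

    print(u(0.0, 1.0))   # ~0.31 for this source; here only the third term contributes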
The difference is in the third term, the integral over the source. The elastic wave equation (also known as theNavier–Cauchy equation) in three dimensions describes the propagation of waves in anisotropichomogeneouselasticmedium. Most solid materials are elastic, so this equation describes such phenomena asseismic wavesin theEarthandultrasonicwaves used to detect flaws in materials. While linear, this equation has a more complex form than the equations given above, as it must account for both longitudinal and transverse motion:ρu¨=f+(λ+2μ)∇(∇⋅u)−μ∇×(∇×u),{\displaystyle \rho {\ddot {\mathbf {u} }}=\mathbf {f} +(\lambda +2\mu )\nabla (\nabla \cdot \mathbf {u} )-\mu \nabla \times (\nabla \times \mathbf {u} ),}where:λandμare theLamé parametersdescribing the elastic properties of the medium,ρis the density,fis the source function (driving force), anduis the displacement vector. By using∇ × (∇ ×u) = ∇(∇ ⋅u) − ∇ ⋅ ∇u= ∇(∇ ⋅u) − ∆u, the elastic wave equation can be rewritten into the more common form of the Navier–Cauchy equation. Note that in the elastic wave equation, both force and displacement arevectorquantities. Thus, this equation is sometimes known as the vector wave equation. As an aid to understanding, the reader will observe that iffand∇ ⋅uare set to zero, this becomes (effectively) Maxwell's equation for the propagation of the electric fieldE, which has only transverse waves. Indispersivewave phenomena, the speed of wave propagation varies with the wavelength of the wave, which is reflected by adispersion relation ω=ω(k),{\displaystyle \omega =\omega (\mathbf {k} ),} whereωis theangular frequency, andkis thewavevectordescribingplane-wavesolutions. For light waves, the dispersion relation isω= ±c|k|, but in general, the constant speedcgets replaced by a variablephase velocity: vp=ω(k)k.{\displaystyle v_{\text{p}}={\frac {\omega (k)}{k}}.}
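As a worked example, the sketch below (Python with NumPy) evaluates the phase velocity ω(k)/k, together with the group velocity dω/dk, for an illustrative dispersive relation, the deep-water gravity-wave relation ω(k) = √(gk) (an assumed example, unlike the nondispersive wave equation, for which ω = ck and both velocities equal c):

    import numpy as np

    g = 9.81                            # gravitational acceleration [m/s^2]
    k = np.linspace(0.1, 10.0, 500)     # wave numbers [1/m]
    omega = np.sqrt(g * k)              # illustrative dispersion relation

    v_phase = omega / k                 # phase velocity omega(k)/k
    v_group = np.gradient(omega, k)     # group velocity d omega / d k

    print(v_phase[0] / v_group[0])      # ~2: crests travel twice as fast as the energy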
https://en.wikipedia.org/wiki/Wave_equation
Thetilde(/ˈtɪldə/, also/ˈtɪld,-di,-deɪ/)[1]is agrapheme⟨˜⟩or⟨~⟩with a number of uses. The name of the character came intoEnglishfromSpanishtilde, which in turn came from theLatintitulus, meaning 'title' or 'superscription'.[2]Its primary use is as adiacritic(accent) in combination with a base letter. Its freestanding form is used in modern texts mainly to indicateapproximation. The tilde was originally one of a variety of marks written over an omitted letter or several letters as ascribal abbreviation(a "mark of contraction").[3]Thus, the commonly used wordsAnno Dominiwere frequently abbreviated toAoDñi, with an elevated terminal letter and a contraction mark placed over the "n". Such a mark could denote the omission of one letter or several letters. This saved on the expense of the scribe's labor and the cost of vellum and ink. Medieval European charters written in Latin are largely made up of such abbreviated words with contraction marks and other abbreviations; only uncommon words were given in full. The text of theDomesday Bookof 1086, relating, for example, to themanor of MollandinDevon, is highlyabbreviatedas indicated by numerous tildes. The text with abbreviations expanded is as follows: Mollande tempore regis Eduardi geldabat pro quattuor hidis et uno ferling. Terra est quadraginta carucae. In dominio sunt tres carucae et decem servi et triginta villani et viginti bordarii cum sedecim carucis. Ibi duodecim acrae prati et quindecim acrae silvae. Pastura tres leugae in longitudine et latitudine. Reddit quattuor et viginti libras ad pensam. Huic manerio est adjuncta Blachepole. Elwardus tenebat tempore regis Edwardi pro manerio et geldabat pro dimidia hida. Terra est duae carucae. Ibi sunt quinque villani cum uno servo. Valet viginti solidos ad pensam et arsuram. Eidem manerio est injuste adjuncta Nimete et valet quindecim solidos. Ipsi manerio pertinet tercius denarius de Hundredis Nortmoltone et Badentone et Brantone et tercium animal pasturae morarum. Ontypewritersdesigned for languages that routinely usediacritics(accent marks), there are two possible solutions. Keys can be dedicated toprecomposed charactersor alternatively adead keymechanism can be provided. With the latter, a mark is made when a dead key is typed, but unlike normal keys, the paper carriage does not move on and thus the next letter to be typed is printed under that accent. Typewriters forSpanishtypically have a dedicated key forÑ/ñ but, asPortugueseusesÃ/ã andÕ/õ, a single dead key (rather than dedicating two keys) is the most practical solution. The tilde symbol did not exist independently as amovable typeorhot-leadprinting character since thetype casesfor Spanish or Portuguese would includesortsfor the accented forms. The firstASCIIstandard (X3.4-1963) did not have a tilde.[4]: 246Like Portuguese and Spanish, the French, German andScandinavianlanguages also needed symbols in excess of the basic 26 needed for English. TheASAworked with and through theCCITTto internationalize the code-set, to meet the basic needs of at least the Western European languages. It appears to have been at their May 13–15, 1963 meeting that the CCITT decided that the proposed ISO 7-bit code standard would be suitable for their needs if a lower case alphabet and five diacritical marks [...] 
were added to it.[5]At the October 29–31 meeting, then, the ISO subcommittee altered the ISO draft to meet the CCITT requirements, replacing the up-arrow and left-arrow with diacriticals, adding diacritical meanings to the apostrophe and quotation mark, and making thenumber signa dual[a]for the tilde.[6] Thus ISO 646 was born (and the ASCII standard updated to X3.4-1967), providing the tilde and other symbols as optional characters.[4]: 247[b] ISO 646 and ASCII incorporated many of the overprinting lower-case diacritics from typewriters, including tilde. Overprinting was intended to work by putting abackspacecode between the codes for letter and diacritic.[8]However, even at that time, mechanisms that could do this or any other overprinting were not widely available, did not work for capital letters, and were impossible on video displays, with the result that this concept failed to gain significant acceptance. Consequently, many of these free-standing diacritics (and theunderscore) were quickly reused by software as additional syntax, basically becoming new types of syntactic symbols that a programming language could use. As this usage became predominant,type designgradually evolved so these diacritic characters became larger and more vertically centered, making them useless as overprinted diacritics but much easier to read as free-standing characters that had come to be used for entirely different and novel purposes. Most modern fonts align the plain ASCII "spacing" (free-standing) tilde at the same level asdashes, or only slightly higher. The free-standing tilde is at code 126 in ASCII, where it was inherited into Unicode as U+007E. A similarly shaped mark (⁓) is known in typography andlexicographyas aswung dash: these are used in dictionaries to indicate the omission of the entry word.[9] As indicated by the etymological origin of the word "tilde" in English, this symbol has been closely associated with theSpanish language. The connection stems from the use of the tilde above the letter⟨n⟩to form the (different) letter⟨ñ⟩in Spanish, a feature shared by onlya few other languages, most of which are historically connected to Spanish. This peculiarity can help non-native speakers quickly identify a text as being written in Spanish with little chance of error. Particularly during the 1990s, Spanish-speaking intellectuals and news outlets demonstrated support for the language and the culture by defending this letter againstglobalisationandcomputerisationtrends that threatened to remove it from keyboards and other standardised products and codes.[10][11]TheInstituto Cervantes, founded bySpain's governmentto promote the Spanish language internationally, chose as its logo a highly stylisedÑwith a large tilde. The 24-hour news channelCNNin the US later adopted a similar strategy on its existing logo for the launch of itsSpanish-language version, therefore being written as CN͠N. And similarly to theNational Basketball Association(NBA), theSpain men's national basketball teamis nicknamed "ÑBA". In Spanish itself the wordtildeis used more generally for diacritics, including the stress-marking acute accent.[12]The diacritic~is more commonly calledvirgulillaorla tilde de la eñe, and is not considered an accent mark in Spanish, but rather simply a part of the letterñ(much likethe dotoverımakes anicharacter that is familiar to readers of English). TheEnglish languagedoes not use the tilde as a diacritic, though it is used in someloanwords. 
The standalone form of the symbol is used more widely. Informally,[13]it means"approximately", "about", or "around", such as "~30 minutes before", meaning "approximately30 minutes before".[14][15]It may also mean "similar to",[16]including "of the sameorder of magnitudeas",[13]such as "x~y" meaning thatxandyare of the same order of magnitude. Another approximation symbol is thedouble tilde≈, meaning "approximately/almost equal to".[14][16][17]The tilde is also used to indicatecongruenceof shapes by placing it over an=symbol, thus≅. In more recent digital usage, tildes on either side of a word or phrase have sometimes come to convey a particular tone that "let[s] the enclosed words perform both sincerity and irony", which can pre-emptively defuse a negative reaction.[18]For example,BuzzFeedjournalist Joseph Bernstein interprets the tildes in one suchtweetas a way of making it clear that both the author and reader are aware that the enclosed phrase – "spirit of the season" – "is cliche and we know this quality is beneath our author, and we don't want you to think our author is a cliche person generally".[18][c]The tilde is also used beside a username in the text messaging appWhatsApp. Among other uses, the symbol has been used onsocial mediatoindicate sarcasm.[19]It may also be used online, especially in informal writing such asfanfiction, to convey a cutesy, playful, or flirtatious tone.[20] In some languages, the tilde is adiacriticmark placed over aletterto indicate a change in its pronunciation: The tilde was first used in thepolytonic orthographyofAncient Greek, as a variant of thecircumflex, representing a rise inpitchfollowed by a return to standard pitch.[21] Later, it was used to makeabbreviationsin medievalLatindocuments. When an⟨n⟩or⟨m⟩followed a vowel, it was often omitted, and a tilde (physically, a small⟨N⟩) was placed over the preceding vowel to indicate the missing letter; this is the origin of the use of tilde to indicate nasalization (comparethe development of the umlautas an abbreviation of⟨e⟩.) A tilde represented an omitted⟨a⟩or a syllable containing it.[22]The practice of using the tilde over a vowel to indicate omission of an⟨n⟩or⟨m⟩continued in printed books inFrenchas a means of reducing text length until the 17th century. It was also used inPortugueseandSpanish. The tilde was also used occasionally to make other abbreviations, such as over the letter⟨q⟩, makingq̃, to signify the wordque("that"). It also appears forquaand together with the letter⟨p⟩to formp̃forpra.[22] It is also as a small⟨n⟩that the tilde originated when written above other letters, marking aLatin⟨n⟩which had beenelidedin old Galician-Portuguese. In modernPortugueseit indicatesnasalizationof the base vowel:mão"hand", from Lat.manu-;razões"reasons", from Lat.rationes.This usage has been adopted in the orthographies of severalnative languages of South America, such asGuaraniandNheengatu, as well as in theInternational Phonetic Alphabet(IPA) and many other phonetic alphabets. For example,[ljɔ̃]is the IPA transcription of the pronunciation of the French place-nameLyon. InBreton, the symbol⟨ñ⟩after a vowel means that the letter⟨n⟩serves only to give the vowel a nasalised pronunciation, without being itself pronounced, as it normally is. For example,⟨an⟩gives the pronunciation[ãn]whereas⟨añ⟩gives[ã]. In theDMGromanization ofTunisian Arabic, the tilde is used for nasal vowels õ and ṏ. 
The tilded⟨n⟩(⟨ñ⟩,⟨Ñ⟩) developed from the digraph⟨nn⟩in Spanish. In this language,⟨ñ⟩is considered a separate letter calledeñe(IPA:[ˈeɲe]), rather than a letter-diacritic combination; it is placed in Spanish dictionaries between the letters⟨n⟩and⟨o⟩. In Spanish, the wordtildeactually refers to diacritics in general, e.g. the acute accent inJosé,[23]while the diacritic in⟨ñ⟩is called "virgulilla" (IPA:[birɣuˈliʝa]) (yeísta) or (IPA:[birɣuˈliʎa]) (non-yeísta).[24]Current languages in which the tilded⟨n⟩(⟨ñ⟩) is used for thepalatal nasalconsonant/ɲ/include Spanish, Galician, Asturian, Basque, Filipino, and Guarani. InVietnamese, a tilde over a vowel represents a creaky risingtone(ngã). Letters with the tilde are not considered separate letters of theVietnamese alphabet. Inphonetics, a tilde is used as adiacritic that is placedabove a letter, below it orsuperimposedonto the middle of it: A tilde between twophonemesindicates optionality, or "alternates with". E.g. ⟨ɕ~ʃ⟩ could indicate that the sounds may alternate depending on context (free variation), or that they vary based on region or speaker, or some other variation. InEstonian, the symbol⟨õ⟩stands for theclose-mid back unrounded vowel, and it is considered an independent letter. Some languages and alphabets use the tilde for other purposes, such as: The tilde is used in various ways in punctuation, including: In some languages (such as French), a tilde or a tilde-likewave dash(Unicode:U+301C〜WAVE DASH) may be used as apunctuationmark (instead of an unspacedhyphen,en dashorem dash) between twonumbers, to indicate arange. Doing so avoids the risk of confusion withsubtractionor a hyphenated number (such as a part number or model number). For example, "12~15" means "12 to 15", "~3" means "up to three", and "100~" means "100 and greater".East Asian languagesalmost always use this convention, but it is sometimes done for clarity in some other languages as well.Chineseuses the wave dash andfull-widthem dash interchangeably for this purpose. In English, the tilde is often used to express ranges and model numbers inelectronics, but rarely in formal grammar or in type-set documents, as a wavy dash preceding a number sometimes represents an approximation (see below). Therange tildeis used inFrenchonly to denote ranges of numbers (e.g.,« 21~32 degrés Celsius »means "21 to 32 degrees Celsius"). (The symbolU+2248≈ALMOST EQUAL TO(adouble tilde) is also used in French, for example,« ≈400 mètres »means "approximately 400 meters".) Before a number the tilde can mean 'approximately'; '~42' means 'approximately 42'.[28]When used withcurrency symbolsthat precede the number (national conventions differ), the tilde precedes the symbol, thus for example '~$10' means 'about ten dollars'.[29] The symbols≈(almost equal to) and≅(approximately equal to) are among the othersymbols used to express approximation. Thewave dash(波ダッシュ,nami dasshu)is used for various purposes in Japanese, including to denote ranges of numbers (e.g.,5〜10means between 5 and 10) in place of dashes or brackets, and to indicate origin. The wave dash is also used to separate a title and a subtitle in the same line, as acolonis used in English. When used in conversations via email or instant messenger it may be used as asarcasm mark. The sign is used as a replacement for thechōon, katakana character, in Japanese, extending the final syllable. WeChatusers frequently replace final punctuation with tildes in messages. 
An analysis of such "innovative uses" of tildes found that final tildes are most often used to make the message friendlier and more polite. They make expressives more sincere and directives less abrupt. Less commonly, final tildes imply sounds, i.e. onomatopoeias and sound extensions. This use is compared tosajiao(Chinese:撒娇), a child-like manner of acting seen in East Asian cultures that is also vocalized by raising or extending tone.[30] A tilde in front of a single quantity can mean "approximately", "about"[14]or "of the sameorder of magnitudeas." In writtenmathematical logic, the tilde representsnegation: "~p" means "notp", where "p" is aproposition. Modern use often replaces the tilde with the negation symbol (¬) for this purpose, to avoid confusion withequivalence relations. Inmathematics, the tilde operator (which can be represented by a tilde or the dedicated characterU+223C∼TILDE OPERATOR), sometimes called "twiddle", is often used to denote anequivalence relationbetween two objects. Thus "x~y" means "xisequivalenttoy". It is a weaker statement than stating thatxequalsy. The expression "x~y" is sometimes read aloud as "xtwiddlesy", perhaps as an analogue to the verbal expression of "x=y".[31] The tilde can indicate approximate equality in a variety of ways. It can be used to denote theasymptotic equalityof two functions. For example,f(x) ~g(x)means thatlimx→∞f(x)g(x)=1{\displaystyle \lim _{x\to \infty }{\frac {f(x)}{g(x)}}=1}.[13] A tilde is also used to indicate "approximatelyequal to" (e.g. 1.902 ~= 2). This usage probably developed as a typed alternative to thelibra symbolused for the same purpose in written mathematics, which is an equal sign with the upper bar replaced by a bar with an upward hump, bump, or loop in the middle (︍︍♎︎) or, sometimes, a tilde (≃).The symbol "≈" is also used for this purpose. Inphysicsandastronomy, a tilde can be used between two expressions (e.g.h~ 10−34J s) to state that the two are of the sameorder of magnitude.[13] Instatisticsandprobability theory, the tilde means "is distributed as";[13]seerandom variable(e.g.X~B(n,p)for abinomial distribution). A tilde can also be used to represent geometricsimilarity(e.g.∆ABC~ ∆DEF, meaningtriangleABCis similar toDEF). A triple tilde (≋) is often used to showcongruence, an equivalence relation in geometry. Ingraph theory, the tilde can be used to represent adjacency between vertices. The edge(x,y){\displaystyle (x,y)}connects verticesx{\displaystyle x}andy{\displaystyle y}which can be said to be adjacent, and this adjacency can be denotedx∼y{\displaystyle x\sim y}. The symbol "f~{\displaystyle {\tilde {f}}}" is pronounced as "eff tilde" or, informally, as "eff twiddle".[32][33]This can be used to denote theFourier transformoff, or aliftoff, and can have a variety of other meanings depending on the context. A tilde placed below a letter in mathematics can represent avectorquantity (e.g.(x1,x2,x3,…,xn)=x∼{\displaystyle (x_{1},x_{2},x_{3},\ldots ,x_{n})={\underset {^{\sim }}{\mathbf {x} }}}). Instatisticsandprobability theory, a tilde placed on top of a variable is sometimes used to represent themedianof that variable; thusy~{\displaystyle {\tilde {\mathbf {y} }}}would indicate the median of the variabley{\displaystyle \mathbf {y} }. A tilde over the letter n (n~{\displaystyle {\tilde {n}}}) is sometimes used to indicate theharmonic mean. In machine learning, a tilde may represent a candidate value for a cell state inGRUsorLSTMunits (e.g.c̃). 
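As a small worked example of the asymptotic-equality notation above (a Python sketch; the functions are arbitrary illustrative choices), f(x) = x² + x and g(x) = x² satisfy f ~ g even though their difference grows without bound, which is why the notation is weaker than equality:

    f = lambda x: x ** 2 + x
    g = lambda x: x ** 2
    for x in (1e1, 1e3, 1e6):
        print(x, f(x) / g(x), f(x) - g(x))   # the ratio tends to 1; the difference grows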
Often inphysics, one can consider anequilibrium solutionto an equation, and then a perturbation to that equilibrium. For the variables in the original equation (for instanceX{\displaystyle X}) a substitutionX→x+x~{\displaystyle X\to x+{\tilde {x}}}can be made, wherex{\displaystyle x}is the equilibrium part andx~{\displaystyle {\tilde {x}}}is the perturbed part. A tilde is also used inparticle physicsto denote the hypotheticalsupersymmetricpartner. For example, anelectronis referred to by the lettere, and itssuperpartnertheselectronis writtenẽ. In multibody mechanics, the tilde operator maps three-dimensional vectorsω∈R3{\displaystyle {\boldsymbol {\omega }}\in \mathbb {R} ^{3}}to skew-symmetrical matricesω~=[0−ω3ω2ω30−ω1−ω2ω10]{\displaystyle {\tilde {\boldsymbol {\omega }}}={\begin{bmatrix}0&-\omega _{3}&\omega _{2}\\\omega _{3}&0&-\omega _{1}\\-\omega _{2}&\omega _{1}&0\end{bmatrix}}}(see[34]or[35]). For relations involving preference,economistssometimes use the tilde to representindifferencebetween two or more bundles of goods. For example, to say that a consumer is indifferent between bundlesxandy, an economist would writex~y. It can approximate the sine wave symbol (∿,U+223F), which is used inelectronicsto indicatealternating current, in place of +, −, or ⎓ fordirect current. The tilde may indicate alternatingallomorphsormorphological alternation, as in//ˈniː~ɛl+t//forkneel~knelt(theplus sign'+' indicates a morpheme boundary).[36][37] The tilde may represent some sort of phonetic or phonemic variation between two sounds, which might beallophonesor infree variation. For example,[χ~x]can represent "either[χ]or[x]". In formalsemantics, it is also used as a notation for thesquiggle operatorwhich plays a key role in many theories offocus.[38] Ininterlinear gloss, a tilde sets off an element added to a word byreduplication; were a hyphen or double hyphen used instead, confusion would arise because that element would be notated in the same way as an independent morpheme requiring an independent gloss. Computer programmersuse the tilde in various ways and sometimes call the symbol (as opposed to the diacritic) asquiggle,squiggly,swiggle, ortwiddle. According to theJargon File, other synonyms sometimes used in programming includenot,approx,wiggle,enyay(aftereñe) and (humorously)sqiggle/ˈskɪɡəl/.[39] OnUnix-likeoperating systems(includingAIX,BSD,LinuxandmacOS), tilde normally indicates the current user'shome directory. For example, if the current user's home directory is/home/user, then the commandcd ~is equivalent tocd /home/user,cd $HOME, orcd.[39]This convention derives from theLear-SieglerADM-3Aterminal in common use during the 1970s, which happened to have the tilde symbol and the word "Home" (for moving the cursor to the upper left) on the same key.[40]When prepended to a particular username, the tilde indicates that user's home directory (e.g.,~janedoefor the home directory of userjanedoe, such as/home/janedoe).[41] Used inURLson theWorld Wide Web, it often denotes a personal website on aUnix-based server. For example,http://www.example.com/~johndoe/might be the personal website of John Doe. This mimics the Unix shell usage of the tilde. 
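The multibody-mechanics tilde operator above is simple to implement and check against the cross product, which the skew-symmetric matrix encodes; a minimal Python sketch with NumPy:

    import numpy as np

    def tilde(w):
        """Skew-symmetric matrix with tilde(w) @ v == np.cross(w, v)."""
        return np.array([[0.0,  -w[2],  w[1]],
                         [w[2],  0.0,  -w[0]],
                         [-w[1], w[0],  0.0]])

    w = np.array([1.0, 2.0, 3.0])
    v = np.array([4.0, 5.0, 6.0])
    print(tilde(w) @ v)      # [-3.  6. -3.]
    print(np.cross(w, v))    # the same vector: the matrix represents the cross product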
However, when accessed from the web, file access is usually directed to asubdirectoryin the user's home directory, such as/home/username/public_htmlor/home/username/www.[42] In URLs, the characters%7E(or%7e) may substitute for a tilde if an input device lacks a tilde key.[43]Thus,http://www.example.com/~johndoe/andhttp://www.example.com/%7Ejohndoe/will behave in the same manner. The tilde is used in theAWKprogramming languageas part of the pattern match operators forregular expressions:[44] The operators are also used in theSQLvariant of the databasePostgreSQL.[45] A variant of this, with the plain tilde replaced with=~, was adopted inPerl.[46]Rubyalso uses this variant, without the negated operator.[47] InAPL[48]: 68andMATLAB,[49]tilde represents the monadic logical function NOT, and in APL it additionally represents the dyadicmultisetfunctionwithout(set difference).[48]: 258 InCthe tilde character is used as thebitwise NOTunaryoperator, following the notation in logic (an!causes a logical NOT instead).[50]This is also used by many languages based on or influenced by C, such asC++,C#,D,Java,JavaScript,Perl,PHP, andPython.[51]TheMySQL databasealso uses tilde as bitwise invert,[52]as does Microsoft's SQL ServerTransact-SQL (T-SQL)language. JavaScriptalso uses tilde as bitwise NOT. Because bitwise operators work on integers, and numbers in JavaScript are 64 bit floating point numbers, the operator converts numbers to a 32-bit signed integer before performing the negation.[53]The conversion truncates the fractional part and most significant bits. This lets two tildes,~~x, be used as a short syntax to cast to an integer. However, using it for truncation is not recommended. In contrast, it does not truncate BigInts, which are arbitrarily large integers.[54] In C++[55]and C#,[56]the tilde is also used as the first character in aclass'smethodname (where the rest of the name must be the same name as the class) to indicate adestructor– a special method which is called at the end of theobject's life. In ASP.NET applications, tilde ('~') is used as a shortcut to the root of the application's virtual directory.[57] In theCSSstylesheet language, the tilde (the general sibling combinator) selects elements matched by the right-hand side that share a parent with, and follow, an element matched by the left-hand side.[58] In theD programming language, the tilde is used as the bitwise not operator, as aconcatenationoperator, e.g. forarrays,[59]and to indicate an object destructor.[60][61]The tilde operator can be overloaded for user types,[62]and the binary tilde operator is mostly used for merging two objects or adding objects to a set of objects. It was introduced because the plus operator can have different meanings in many situations. For example, "120" + "14" may produce "134" (addition of two numbers), "12014" (concatenation of strings), or something else.[63]D disallows the + operator for arrays (and strings) and provides a separate operator for concatenation (similarly, thePHPprogramming language solved this problem by using the dot operator for concatenation and + for numeric addition, which will also work on strings containing numbers). InEiffel, the tilde is used for object comparison. 
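Python, one of the C-influenced languages mentioned above, exposes several of these tilde conventions directly; a short sketch (the paths in the comments are hypothetical examples):

    import os
    import re

    # Unary ~ is bitwise NOT on integers; on two's-complement values ~x == -x - 1.
    print(~5)     # -6
    print(~~5)    # 5: double application restores the value

    # The shell's home-directory tilde, expanded explicitly in library code:
    print(os.path.expanduser("~/notes.txt"))   # e.g. /home/janedoe/notes.txt
    print(os.path.expanduser("~janedoe"))      # a named user's home, if that user exists

    # Python has no =~ operator; the re module plays the role of AWK/Perl matching.
    print(bool(re.search(r"til", "tilde")))    # True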
Ifaandbdenote objects, the Boolean expressiona~bhas value true if and only if these objects are equal, as defined by the applicable version of the library routineis_equal, which by default denotes field-by-field object equality but can be redefined in any class to support a specific notion of equality.[64]: 114–115Ifaandbare references, the object equality expressiona~bis to be contrasted witha=bwhich denotes reference equality. Unlike the calla.is_equal(b), the expressiona~bistype-safeeven in the presence ofcovariance. In theApache Groovy programming languagethe tilde character is used as an operator mapped to the bitwiseNegate() method.[65]Given a String the method will produce a java.util.regex.Pattern. Given an integer it will negate the integer bitwise like in C.=~and==~can in Groovy be used to match a regular expression.[66][67] InHaskell, the tilde is used in type constraints to indicate type equality.[68]Also, in pattern-matching, the tilde is used to indicate a lazy pattern match.[69] In theInform6 programming language, the tilde is used to indicate a quotation mark inside a quoted string. The tilde itself is created by@@126.[70] In "text mode" of theLaTeXtypesetting language a tilde diacritic can be obtained using, e.g.,\~{n}, yielding "ñ". A stand-alone tilde can be obtained by using\textasciitildeor\string~. In "math mode" a tilde diacritic can be written as, e.g.,\tilde{x}. For a wider tilde\widetildecan be used. The\simcommand produces a tilde-like binary relation symbol that is often used in mathematical expressions, and the double-tilde≈is obtained with\approx. In both text and math mode, a tilde on its own (~) renders a white space with no line breaking.[71]Theurlpackage also supports entering tildes directly, e.g.,\url{http://server/~name}. InMediaWikisyntax, four tildes are a shortcut for a user's signature. Three and five tildes put the signature without timestamp and only the timestamp, respectively.[72] InCommon Lisp, the tilde is used as the prefix for format specifiers in format strings.[73] InMax/MSP, MSP objects have names ending with a tilde. MSP objects process at the computer's sampling rate and mainly deal with sound.[74] InStandard ML, the tilde is used as the prefix for negative numbers and as the unary negation operator.[75] InOCaml, the tilde is used to specify the label for a labeled parameter.[76] InR, the tilde operator is used to separate the left- and right-hand sides in a model formula.[77] InObject REXX, the twiddle is used as a "message send" symbol. For example,Employee.name~lower()would cause thelower()method to act on the objectEmployee'snameattribute, returning the result of the operation.~~returns the object that received the method rather than the result produced. Thus, it can be used when the result need not be returned or when cascading methods are to be used.team~~insert("Jane")~~insert("Joe")~~insert("Steve")would send multipleinsertmessages, invoking theinsertmethod three consecutive times on theteamobject.[78] InRaku, a prefixing tildeconvertsa value to a string. An infix tildeconcatenatesstrings,[79]taking the place of the dot operator in Perl, as the dot is used for member access instead of->.[80]~~is called "the smartmatch operator" and its semantics depend on the type of the right-side argument. 
Namely, it checks numeric and string equalities, performs regular expression match tests (as opposed to =~ in Perl[80]), and performs type checking.[79]

In YAML, the "Core schema", a set of aliases that processors are recommended to use, resolves a tilde as null.[81]

The presence (or absence) of a tilde engraved on the keyboard depends on the territory where it was sold. In either case, the computer's system settings determine the keyboard mapping, and the default setting will match the engravings on the keys. Even so, it is certainly possible to configure a keyboard for a different locale than that supplied by the retailer.

On American and British keyboards, the tilde is a standard keytop and pressing it produces a free-standing "ASCII tilde". To generate a letter with a tilde diacritic requires the US international or UK extended keyboard setting. Instructions for other national languages and keyboards are beyond the scope of this article.

The dominant Unix convention for naming backup copies of files is appending a tilde to the original file name. It originated with the Emacs text editor[82] and was adopted by many other editors and some command-line tools. Emacs also introduced an elaborate numbered backup scheme, with files named filename.~1~, filename.~2~ and so on.[83] It did not catch on, as the rise of version control software eliminated the need for this usage.[citation needed]

The tilde was part of Microsoft's filename mangling scheme when it extended the FAT file system standard to support long filenames for Microsoft Windows. Programs written prior to this development could only access filenames in the so-called 8.3 format: the filenames consisted of a maximum of eight characters from a restricted character set (e.g. no spaces), followed by a period, followed by three more characters. In order to permit these legacy programs to access files in the FAT file system, each file had to be given two names: one long, more descriptive one, and one that conformed to the 8.3 format. This was accomplished with a name-mangling scheme in which the first six characters of the filename are followed by a tilde and a digit. For example, "Program Files" might become "PROGRA~1".[84]

The tilde symbol is also often used to prefix hidden temporary files that are created when a document is opened in Windows.[citation needed] For example, when a document "Document1.doc" is opened in Word, a file called "~$cument1.doc" is created in the same directory. This file contains information about which user has the file open, to prevent multiple users from attempting to change a document at the same time.[85]

In the juggling notation system Beatmap, a tilde can be added to either "hand" in a pair of fields to say "cross the arms with this hand on top". Mills' Mess is thus represented as (~2x,1)(1,2x)(2x,~1)*.[86]

Unicode encodes a number of cases of "letter with tilde" as precomposed characters, and these are displayed below. In addition, many more symbols may be composed using the combining character facility (U+0303 ◌̃ COMBINING TILDE, U+0330 ◌̰ COMBINING TILDE BELOW and others) that may be used with any letter or other diacritic to create a customised symbol, but this does not mean that the result has any real-world application, and such combinations are not shown in the table. A tilde diacritic can be added to almost any character by using a combining tilde. Greek and Cyrillic letters with tilde (Α͂ ᾶ, Η͂ ῆ, Ι͂ ῖ, ῗ, Υ͂ ῦ, ῧ and А̃ а̃, Ә̃ ә̃, Е̃ е̃, И̃ и̃, О̃ о̃, У̃ у̃, Ј̃ j̃) are formed using this method.
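The composition mechanism can be exercised with Python's standard unicodedata module; the snippet below is an illustrative sketch, not part of the original text.

```python
import unicodedata

# "n" followed by U+0303 COMBINING TILDE; NFC normalization folds the pair
# into the precomposed character U+00F1 ("ñ") where one exists.
decomposed = "n\u0303"
composed = unicodedata.normalize("NFC", decomposed)
print(composed, hex(ord(composed)))        # ñ 0xf1

# For a base letter with no precomposed "letter with tilde", the sequence
# simply remains a base character plus a combining mark (two code points).
print(len(unicodedata.normalize("NFC", "x\u0303")))  # 2
```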
In practice the full-width tilde(全角チルダ,zenkaku chiruda)(UnicodeU+FF5E~FULLWIDTH TILDE), is often used instead of the wave dash(波ダッシュ,nami dasshu)(UnicodeU+301C〜WAVE DASH), because theShift JIScode for the wave dash, 0x8160, which should be mapped to U+301C,[87][88]is instead mapped to U+FF5E[89]inWindows code page 932(Microsoft'scode pagefor Japanese), a widely used extension of Shift JIS. This decision avoided a shape definition error in the original (6.2) Unicode code charts:[90]the wave dash reference glyph in JIS / Shift JIS[91][92]matches the Unicode reference glyph for U+FF5EFULLWIDTH TILDE,[93]while the original reference glyph for U+301C[90]was reflected, incorrectly,[94]when Unicode imported the JIS wave dash. In other platforms such as theclassic Mac OSandmacOS, 0x8160 is correctly mapped to U+301C. It is generally difficult, if not impossible, for users of Japanese Windows to type U+301C, especially in legacy, non-Unicode applications. A similar situation exists regarding the KoreanKS X 1001character set, in which Microsoft maps theEUC-KRorUHCcode for the wave dash (0xA1AD) toU+223C∼TILDE OPERATOR,[95][96]whileIBMandApplemap it to U+301C.[97][98][99]Microsoft also uses U+FF5E to map the KS X 1001 raised tilde (0xA2A6),[96]while Apple usesU+02DC˜SMALL TILDE.[99] The current Unicode reference glyph for U+301C has been corrected[94]to match the JIS standard[100]in response to a 2014 proposal, which noted that while the existing Unicode reference glyph had been matched by fonts from the discontinuedWindows XP, all other major platforms including later versions of Microsoft Windows shipped with fonts matching the JIS reference glyph for U+301C.[101] The JIS / Shift JIS wave dash is still formally mapped to U+301C as ofJIS X 0213,[102]whereas theWHATWGEncoding Standard used byHTML5follows Microsoft in mapping 0x8160 to U+FF5E.[103]These two code points have a similar or identical glyph in severalcomputer fonts, reducing the confusion and incompatibility.
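The mapping discrepancy can be observed directly with Python's codecs, which expose both the JIS-style and the Microsoft-style decoding of the same byte pair; a minimal sketch:

```python
# 0x81 0x60 is the Shift JIS wave dash. Python's "shift_jis" codec follows
# the JIS mapping (U+301C WAVE DASH), while "cp932" follows Windows code
# page 932 (U+FF5E FULLWIDTH TILDE), mirroring the incompatibility above.
raw = b"\x81\x60"
print(hex(ord(raw.decode("shift_jis"))))  # 0x301c
print(hex(ord(raw.decode("cp932"))))      # 0xff5e
```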
https://en.wikipedia.org/wiki/Tilde#Electronics
In physics, theJosephson effectis a phenomenon that occurs when twosuperconductorsare placed in proximity, with some barrier or restriction between them. The effect is named after the British physicistBrian Josephson, who predicted in 1962 the mathematical relationships for the current and voltage across the weak link.[1][2]It is an example of amacroscopic quantum phenomenon, where the effects of quantum mechanics are observable at ordinary, rather than atomic, scale. The Josephson effect has many practical applications because it exhibits a precise relationship between different physical measures, such as voltage and frequency, facilitating highly accurate measurements. The Josephson effect produces a current, known as asupercurrent, that flows continuously without any voltage applied, across a device known as aJosephson junction(JJ).[clarification needed]These consist of two or more superconductors coupled by a weak link. The weak link can be a thin insulating barrier (known as asuperconductor–insulator–superconductor junction, or S-I-S), a short section of non-superconducting metal (S-N-S), or a physical constriction that weakens the superconductivity at the point of contact (S-c-S). Josephson junctions have important applications inquantum-mechanical circuits, such asSQUIDs,superconducting qubits, andRSFQdigital electronics. TheNISTstandard for onevoltis achieved byan array of 20,208 Josephson junctions in series.[3] The DC Josephson effect had been seen in experiments prior to 1962,[4]but had been attributed to "super-shorts" or breaches in the insulating barrier leading to the direct conduction of electrons between the superconductors. In 1962, Brian Josephson became interested in superconducting tunneling. He was then 23 years old and a second-year graduate student ofBrian Pippardat theMond Laboratoryof theUniversity of Cambridge. That year, Josephson took a many-body theory course withPhilip W. Anderson, aBell Labsemployee on sabbatical leave for the 1961–1962 academic year. The course introduced Josephson to the idea of broken symmetry in superconductors, and he "was fascinated by the idea of broken symmetry, and wondered whether there could be any way of observing it experimentally". Josephson studied the experiments byIvar Giaeverand Hans Meissner, and theoretical work by Robert Parmenter. Pippard initially believed that the tunneling effect was possible but that it would be too small to be noticeable, but Josephson did not agree, especially after Anderson introduced him to a preprint of "Superconductive Tunneling" byCohen,Falicov, and Phillips about the superconductor-barrier-normal metal system.[5][6]: 223–224 Josephson and his colleagues were initially unsure about the validity of Josephson's calculations. Anderson later remembered: We were all—Josephson, Pippard and myself, as well as various other people who also habitually sat at the Mond tea and participated in the discussions of the next few weeks—very much puzzled by the meaning of the fact that the current depends on the phase. After further review, they concluded that Josephson's results were valid. Josephson then submitted "Possible new effects in superconductive tunnelling" toPhysics Lettersin June 1962[1]. 
The newer journal Physics Letters was chosen instead of the better established Physical Review Letters due to their uncertainty about the results. John Bardeen, by then already a Nobel Prize winner, was initially publicly skeptical of Josephson's theory in 1962, but came to accept it after further experiments and theoretical clarifications.[6]: 222–227 See also: John Bardeen § Josephson effect controversy.

In January 1963, Anderson and his Bell Labs colleague John Rowell submitted the first paper to Physical Review Letters to claim the experimental observation of Josephson's effect, "Probable Observation of the Josephson Superconducting Tunneling Effect".[7] These authors were awarded patents[8] on the effects; the patents were never enforced, but also never challenged.[citation needed]

Before Josephson's prediction, it was only known that single (i.e., non-paired) electrons could flow through an insulating barrier, by means of quantum tunneling. Josephson was the first to predict the tunneling of superconducting Cooper pairs. For this work, Josephson received the Nobel Prize in Physics in 1973.[9] John Bardeen was one of the nominators.[6]: 230

Types of Josephson junction include the φ Josephson junction (of which the π Josephson junction is a special example), the long Josephson junction, and the superconducting tunnel junction.

The Josephson effect can be calculated using the laws of quantum mechanics. A diagram of a single Josephson junction is shown at right. Assume that superconductor A has Ginzburg–Landau order parameter {\displaystyle \psi _{A}={\sqrt {n_{A}}}e^{i\phi _{A}}}, and superconductor B {\displaystyle \psi _{B}={\sqrt {n_{B}}}e^{i\phi _{B}}}, which can be interpreted as the wave functions of Cooper pairs in the two superconductors. If the electric potential difference across the junction is V, then the energy difference between the two superconductors is 2eV, since each Cooper pair has twice the charge of one electron. The Schrödinger equation for this two-state quantum system is therefore:[15]

{\displaystyle i\hbar {\frac {\partial }{\partial t}}{\begin{pmatrix}{\sqrt {n_{A}}}e^{i\phi _{A}}\\{\sqrt {n_{B}}}e^{i\phi _{B}}\end{pmatrix}}={\begin{pmatrix}eV&K\\K&-eV\end{pmatrix}}{\begin{pmatrix}{\sqrt {n_{A}}}e^{i\phi _{A}}\\{\sqrt {n_{B}}}e^{i\phi _{B}}\end{pmatrix}},}

where the constant K is a characteristic of the junction.
To solve the above equation, first calculate the time derivative of the order parameter in superconductor A: ∂∂t(nAeiϕA)=nA˙eiϕA+nA(iϕ˙AeiϕA)=(nA˙+inAϕ˙A)eiϕA,{\displaystyle {\frac {\partial }{\partial t}}({\sqrt {n_{A}}}e^{i\phi _{A}})={\dot {\sqrt {n_{A}}}}e^{i\phi _{A}}+{\sqrt {n_{A}}}(i{\dot {\phi }}_{A}e^{i\phi _{A}})=({\dot {\sqrt {n_{A}}}}+i{\sqrt {n_{A}}}{\dot {\phi }}_{A})e^{i\phi _{A}},} and therefore the Schrödinger equation gives: (nA˙+inAϕ˙A)eiϕA=1iℏ(eVnAeiϕA+KnBeiϕB).{\displaystyle ({\dot {\sqrt {n_{A}}}}+i{\sqrt {n_{A}}}{\dot {\phi }}_{A})e^{i\phi _{A}}={\frac {1}{i\hbar }}(eV{\sqrt {n_{A}}}e^{i\phi _{A}}+K{\sqrt {n_{B}}}e^{i\phi _{B}}).} The phase difference of Ginzburg–Landau order parameters across the junction is called theJosephson phase: φ=ϕB−ϕA.{\displaystyle \varphi =\phi _{B}-\phi _{A}.}The Schrödinger equation can therefore be rewritten as: nA˙+inAϕ˙A=1iℏ(eVnA+KnBeiφ),{\displaystyle {\dot {\sqrt {n_{A}}}}+i{\sqrt {n_{A}}}{\dot {\phi }}_{A}={\frac {1}{i\hbar }}(eV{\sqrt {n_{A}}}+K{\sqrt {n_{B}}}e^{i\varphi }),} and itscomplex conjugateequation is: nA˙−inAϕ˙A=1−iℏ(eVnA+KnBe−iφ).{\displaystyle {\dot {\sqrt {n_{A}}}}-i{\sqrt {n_{A}}}{\dot {\phi }}_{A}={\frac {1}{-i\hbar }}(eV{\sqrt {n_{A}}}+K{\sqrt {n_{B}}}e^{-i\varphi }).} Add the two conjugate equations together to eliminateϕ˙A{\displaystyle {\dot {\phi }}_{A}}: 2nA˙=1iℏ(KnBeiφ−KnBe−iφ)=KnBℏ⋅2sin⁡φ.{\displaystyle 2{\dot {\sqrt {n_{A}}}}={\frac {1}{i\hbar }}(K{\sqrt {n_{B}}}e^{i\varphi }-K{\sqrt {n_{B}}}e^{-i\varphi })={\frac {K{\sqrt {n_{B}}}}{\hbar }}\cdot 2\sin \varphi .} SincenA˙=n˙A2nA{\displaystyle {\dot {\sqrt {n_{A}}}}={\frac {{\dot {n}}_{A}}{2{\sqrt {n_{A}}}}}}, we have: n˙A=2KnAnBℏsin⁡φ.{\displaystyle {\dot {n}}_{A}={\frac {2K{\sqrt {n_{A}n_{B}}}}{\hbar }}\sin \varphi .} Now, subtract the two conjugate equations to eliminatenA˙{\displaystyle {\dot {\sqrt {n_{A}}}}}: 2inAϕ˙A=1iℏ(2eVnA+KnBeiφ+KnBe−iφ),{\displaystyle 2i{\sqrt {n_{A}}}{\dot {\phi }}_{A}={\frac {1}{i\hbar }}(2eV{\sqrt {n_{A}}}+K{\sqrt {n_{B}}}e^{i\varphi }+K{\sqrt {n_{B}}}e^{-i\varphi }),} which gives: ϕ˙A=−1ℏ(eV+KnBnAcos⁡φ).{\displaystyle {\dot {\phi }}_{A}=-{\frac {1}{\hbar }}(eV+K{\sqrt {\frac {n_{B}}{n_{A}}}}\cos \varphi ).} Similarly, for superconductor B we can derive that: n˙B=−2KnAnBℏsin⁡φ,ϕ˙B=1ℏ(eV−KnAnBcos⁡φ).{\displaystyle {\dot {n}}_{B}=-{\frac {2K{\sqrt {n_{A}n_{B}}}}{\hbar }}\sin \varphi ,\,{\dot {\phi }}_{B}={\frac {1}{\hbar }}(eV-K{\sqrt {\frac {n_{A}}{n_{B}}}}\cos \varphi ).} Noting that the evolution of Josephson phase isφ˙=ϕ˙B−ϕ˙A{\displaystyle {\dot {\varphi }}={\dot {\phi }}_{B}-{\dot {\phi }}_{A}}and the time derivative ofcharge carrier densityn˙A{\displaystyle {\dot {n}}_{A}}is proportional to currentI{\displaystyle I}, whennA≈nB{\displaystyle n_{A}\approx n_{B}}, the above solution yields theJosephson equations:[16] I(t)=Icsin⁡(φ(t)){\displaystyle I(t)=I_{c}\sin(\varphi (t))}(1) ∂φ∂t=2eV(t)ℏ{\displaystyle {\frac {\partial \varphi }{\partial t}}={\frac {2eV(t)}{\hbar }}}(2) whereV(t){\displaystyle V(t)}andI(t){\displaystyle I(t)}are the voltage across and the current through the Josephson junction, andIc{\displaystyle I_{c}}is a parameter of the junction named thecritical current. Equation (1) is called thefirst Josephson relationorweak-link current-phase relation, and equation (2) is called thesecond Josephson relationorsuperconducting phase evolution equation. 
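A minimal numerical sketch of the two relations, for a junction biased at a constant voltage, is given below; the junction parameters (Ic, V) are arbitrary illustrative values, not taken from the text.

```python
import numpy as np

# Physical constants (SI)
e = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J*s

# Assumed, illustrative junction parameters
Ic = 1e-6                # critical current: 1 uA
V = 1e-6                 # fixed DC voltage across the junction: 1 uV

# Second Josephson relation: dphi/dt = 2eV/hbar; for constant V the phase
# grows linearly in time, so the supercurrent oscillates (AC Josephson effect).
t = np.linspace(0.0, 5e-9, 1001)      # 5 ns window
phi = 2 * e * V * t / hbar
I = Ic * np.sin(phi)                  # first Josephson relation

f_J = 2 * e * V / (2 * np.pi * hbar)  # oscillation frequency, f = (2e/h) V
print(f"Josephson frequency at 1 uV: {f_J/1e6:.1f} MHz")  # ~483.6 MHz
```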
The critical current of the Josephson junction depends on the properties of the superconductors, and can also be affected by environmental factors like temperature and an externally applied magnetic field. The Josephson constant is defined as:

{\displaystyle K_{J}={\frac {2e}{h}}\,,}

and its inverse is the magnetic flux quantum:

{\displaystyle \Phi _{0}={\frac {h}{2e}}=2\pi {\frac {\hbar }{2e}}\,.}

The superconducting phase evolution equation can be reexpressed as:

{\displaystyle {\frac {\partial \varphi }{\partial t}}=2\pi [K_{J}V(t)]={\frac {2\pi }{\Phi _{0}}}V(t)\,.}

If we define:

{\displaystyle \Phi =\Phi _{0}{\frac {\varphi }{2\pi }}\,,}

then the voltage across the junction is:

{\displaystyle V={\frac {\Phi _{0}}{2\pi }}{\frac {\partial \varphi }{\partial t}}={\frac {d\Phi }{dt}}\,,}

which is very similar to Faraday's law of induction. But note that this voltage does not come from magnetic energy, since there is no magnetic field in the superconductors; instead, this voltage comes from the kinetic energy of the carriers (i.e. the Cooper pairs). This phenomenon is also known as kinetic inductance.

There are three main effects predicted by Josephson that follow directly from the Josephson equations:

The DC Josephson effect is a direct current crossing the insulator in the absence of any external electromagnetic field, owing to tunneling. This DC Josephson current is proportional to the sine of the Josephson phase (the phase difference across the insulator, which stays constant over time), and may take values between −I_c and I_c.

With a fixed voltage V_DC across the junction, the phase will vary linearly with time and the current will be a sinusoidal AC (alternating current) with amplitude I_c and frequency K_J V_DC. This means a Josephson junction can act as a perfect voltage-to-frequency converter.

Microwave radiation of a single (angular) frequency ω can induce quantized DC voltages[17] across the Josephson junction, in which case the Josephson phase takes the form {\displaystyle \varphi (t)=\varphi _{0}+n\omega t+a\sin(\omega t)}, and the voltage and current across the junction will be:

{\displaystyle V(t)={\frac {\hbar }{2e}}\omega (n+a\cos(\omega t)),{\text{ and }}I(t)=I_{c}\sum _{m=-\infty }^{\infty }J_{m}(a)\sin(\varphi _{0}+(n+m)\omega t).}

The DC components are:

{\displaystyle V_{\text{DC}}=n{\frac {\hbar }{2e}}\omega ,{\text{ and }}I_{\text{DC}}=I_{c}J_{-n}(a)\sin \varphi _{0}.}

This means a Josephson junction can act like a perfect frequency-to-voltage converter,[18] which is the theoretical basis for the Josephson voltage standard.
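As a quick numerical illustration of the frequency-to-voltage relation, the sketch below evaluates the quantized voltages V_n = n·h·f/(2e); the 70 GHz drive frequency is an assumed, illustrative value, not one taken from the text.

```python
h = 6.62607015e-34    # Planck constant, J*s
e = 1.602176634e-19   # elementary charge, C

f = 70e9              # assumed microwave drive frequency, Hz
for n in range(1, 4):
    V_n = n * h * f / (2 * e)
    print(f"step n={n}: {V_n*1e6:.2f} uV")   # ~144.76 uV per step at n=1
# Large series arrays of junctions stack such steps up to the 1 V scale.
```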
When the current and Josephson phase vary over time, the voltage drop across the junction will also vary accordingly; as shown in the derivation below, the Josephson relations determine that this behavior can be modeled by a kinetic inductance named the Josephson inductance.[19]

Rewrite the Josephson relations as:

{\displaystyle I=I_{c}\sin \varphi ,\qquad {\frac {\partial \varphi }{\partial t}}={\frac {2\pi }{\Phi _{0}}}V.}

Now, apply the chain rule to calculate the time derivative of the current:

{\displaystyle {\frac {dI}{dt}}=I_{c}\cos \varphi \,{\frac {\partial \varphi }{\partial t}}=I_{c}\cos \varphi \,{\frac {2\pi }{\Phi _{0}}}V.}

Rearrange the above result in the form of the current–voltage characteristic of an inductor:

{\displaystyle V={\frac {\Phi _{0}}{2\pi I_{c}\cos \varphi }}{\frac {dI}{dt}}=L(\varphi ){\frac {dI}{dt}}.}

This gives the expression for the kinetic inductance as a function of the Josephson phase:

{\displaystyle L(\varphi )={\frac {\Phi _{0}}{2\pi I_{c}\cos \varphi }}={\frac {L_{J}}{\cos \varphi }}.}

Here, {\displaystyle L_{J}=L(0)={\frac {\Phi _{0}}{2\pi I_{c}}}} is a characteristic parameter of the Josephson junction, named the Josephson inductance.

Note that although the kinetic behavior of the Josephson junction is similar to that of an inductor, there is no associated magnetic field. This behaviour is derived from the kinetic energy of the charge carriers, instead of the energy in a magnetic field.

Based on the similarity of the Josephson junction to a non-linear inductor, the energy stored in a Josephson junction when a supercurrent flows through it can be calculated.[20]

The supercurrent flowing through the junction is related to the Josephson phase by the current-phase relation (CPR):

{\displaystyle I=I_{c}\sin \varphi .}

The superconducting phase evolution equation is analogous to Faraday's law:

{\displaystyle V={\frac {d\Phi }{dt}},\qquad \Phi =\Phi _{0}{\frac {\varphi }{2\pi }}.}

Assume that at time t₁ the Josephson phase is φ₁; at a later time t₂ the Josephson phase has evolved to φ₂. The energy increase in the junction is equal to the work done on the junction:

{\displaystyle \Delta E=\int _{t_{1}}^{t_{2}}IV\,dt=\int _{\varphi _{1}}^{\varphi _{2}}I_{c}\sin \varphi \,{\frac {\Phi _{0}}{2\pi }}\,d\varphi ={\frac {\Phi _{0}I_{c}}{2\pi }}\left(\cos \varphi _{1}-\cos \varphi _{2}\right).}

This shows that the change of energy in the Josephson junction depends only on the initial and final state of the junction and not the path. Therefore, the energy stored in a Josephson junction is a state function, which can be defined as:

{\displaystyle E(\varphi )=-{\frac {\Phi _{0}I_{c}}{2\pi }}\cos \varphi =-E_{J}\cos \varphi .}

Here {\displaystyle E_{J}=|E(0)|={\frac {\Phi _{0}I_{c}}{2\pi }}} is a characteristic parameter of the Josephson junction, named the Josephson energy. It is related to the Josephson inductance by {\displaystyle E_{J}=L_{J}I_{c}^{2}}. An alternative but equivalent definition {\displaystyle E(\varphi )=E_{J}(1-\cos \varphi )} is also often used.

Again, note that a non-linear magnetic coil inductor accumulates potential energy in its magnetic field when a current passes through it; however, in the case of a Josephson junction, no magnetic field is created by a supercurrent; the stored energy comes from the kinetic energy of the charge carriers instead.

The Resistively and Capacitively Shunted Junction (RCSJ) model,[21][22] or simply the shunted junction model, includes the effect of the AC impedance of an actual Josephson junction on top of the two basic Josephson relations stated above. As per Thévenin's theorem,[23] the AC impedance of the junction can be represented by a capacitor and a shunt resistor, both in parallel[24] with the ideal Josephson junction. The complete expression for the current drive I_ext becomes:

{\displaystyle I_{\text{ext}}=C_{J}{\frac {dV}{dt}}+I_{c}\sin \varphi +{\frac {V}{R}},}

where the first term is the displacement current, with C_J the effective capacitance, and the third is the normal current, with R the effective resistance of the junction.

The Josephson penetration depth characterizes the typical length on which an externally applied magnetic field penetrates into the long Josephson junction.
It is usually denoted as λ_J and is given by the following expression (in SI):

{\displaystyle \lambda _{J}={\sqrt {\frac {\Phi _{0}}{2\pi \mu _{0}\,d'\,j_{c}}}},}

where Φ₀ is the magnetic flux quantum, j_c is the critical supercurrent density (A/m²), and d′ characterizes the inductance of the superconducting electrodes:[25]

{\displaystyle d'=d_{I}+\lambda _{1}\coth \left({\frac {d_{1}}{\lambda _{1}}}\right)+\lambda _{2}\coth \left({\frac {d_{2}}{\lambda _{2}}}\right),}

where d_I is the thickness of the Josephson barrier (usually an insulator), d₁ and d₂ are the thicknesses of the superconducting electrodes, and λ₁ and λ₂ are their London penetration depths. The Josephson penetration depth usually ranges from a few μm up to several mm, the latter when the critical current density is very low.[26]
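Plugging representative numbers into the expression above gives a feel for the scale; the parameter values in this Python sketch are assumed for illustration only.

```python
import math

Phi0 = 2.067833848e-15   # magnetic flux quantum, Wb
mu0 = 4e-7 * math.pi     # vacuum permeability, H/m
jc = 1e6                 # assumed critical current density, A/m^2 (100 A/cm^2)
d_eff = 200e-9           # assumed effective magnetic thickness d', m

lam_J = math.sqrt(Phi0 / (2 * math.pi * mu0 * d_eff * jc))
print(f"lambda_J = {lam_J*1e6:.0f} um")   # ~36 um for these values
```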
https://en.wikipedia.org/wiki/Josephson_effect
In physics, a fluxon is a quantum of electromagnetic flux. The term may have any of several related meanings.

In the context of superconductivity, in type II superconductors fluxons (also known as Abrikosov vortices) can form when the applied field lies between B_c1 and B_c2. The fluxon is a small whisker of normal phase surrounded by superconducting phase, and supercurrents circulate around the normal core. The magnetic field through such a whisker and its neighborhood, which has a size of the order of the London penetration depth λ_L (~100 nm), is quantized because of the phase properties of the magnetic vector potential in quantum electrodynamics; see magnetic flux quantum for details.

In the context of long superconductor–insulator–superconductor Josephson tunnel junctions, a fluxon (also known as a Josephson vortex) is made of circulating supercurrents and has no normal core in the tunneling barrier. Supercurrents circulate just around the mathematical center of a fluxon, which is situated within the (insulating) Josephson barrier. Again, the magnetic flux created by the circulating supercurrents is equal to a magnetic flux quantum Φ₀ (or less, if the superconducting electrodes of the Josephson junction are thinner than λ_L).

In the context of numerical MHD modeling, a fluxon is a discretized magnetic field line, representing a finite amount of magnetic flux in a localized bundle in the model. Fluxon models are explicitly designed to preserve the topology of the magnetic field, overcoming numerical resistivity effects in Eulerian models.
https://en.wikipedia.org/wiki/Fluxon
Shape waves are excitations propagating along Josephson vortices or fluxons. In the case of two-dimensional Josephson junctions (thick long Josephson junctions with an extra dimension) described by the 2D sine-Gordon equation, shape waves are distortions of a Josephson vortex line of an arbitrary profile. Shape waves have remarkable properties exhibiting Lorentz contraction and time dilation similar to those in special relativity. The position of the shape-wave excitation on a Josephson vortex acts like a "minute hand" showing the time in the rest frame associated with the vortex. Under some conditions, a moving vortex with the shape excitation can have less energy than the same vortex without it.
https://en.wikipedia.org/wiki/Shape_waves
A pitch detection algorithm (PDA) is an algorithm designed to estimate the pitch or fundamental frequency of a quasiperiodic or oscillating signal, usually a digital recording of speech or a musical note or tone. This can be done in the time domain, the frequency domain, or both.

PDAs are used in various contexts (e.g. phonetics, music information retrieval, speech coding, musical performance systems), and so there may be different demands placed upon the algorithm. There is as yet[when?] no single ideal PDA, so a variety of algorithms exist, most falling broadly into the classes given below.[1]

A PDA typically estimates the period of a quasiperiodic signal, then inverts that value to give the frequency.

One simple approach would be to measure the distance between zero crossing points of the signal (i.e. the zero-crossing rate). However, this does not work well with complicated waveforms which are composed of multiple sine waves with differing periods, or with noisy data. Nevertheless, there are cases in which zero-crossing can be a useful measure, e.g. in some speech applications where a single source is assumed.[citation needed] The algorithm's simplicity makes it "cheap" to implement.

More sophisticated approaches compare segments of the signal with other segments offset by a trial period to find a match. AMDF (average magnitude difference function), ASMDF (average squared mean difference function), and other similar autocorrelation algorithms work this way. These algorithms can give quite accurate results for highly periodic signals. However, they have false detection problems (often "octave errors"), can sometimes cope badly with noisy signals (depending on the implementation), and, in their basic implementations, do not deal well with polyphonic sounds (which involve multiple musical notes of different pitches).[citation needed]

Current[when?] time-domain pitch detector algorithms tend to build upon the basic methods mentioned above, with additional refinements to bring the performance more in line with a human assessment of pitch. For example, the YIN algorithm[2] and the MPM algorithm[3] are both based upon autocorrelation.

In the frequency domain, polyphonic detection is possible, usually utilizing the periodogram to convert the signal to an estimate of the frequency spectrum.[4] This requires more processing power as the desired accuracy increases, although the well-known efficiency of the FFT, a key part of the periodogram algorithm, makes it suitably efficient for many purposes.

Popular frequency-domain algorithms include: the harmonic product spectrum;[5][6] cepstral analysis;[7] maximum likelihood, which attempts to match the frequency-domain characteristics to pre-defined frequency maps (useful for detecting the pitch of fixed-tuning instruments); and the detection of peaks due to harmonic series.[8]

To improve on the pitch estimate derived from the discrete Fourier spectrum, techniques such as spectral reassignment (phase based) or Grandke interpolation (magnitude based) can be used to go beyond the precision provided by the FFT bins. Another phase-based approach is offered by Brown and Puckette.[9]

Spectral/temporal pitch detection algorithms, e.g. the YAAPT pitch tracking algorithm,[10][11] are based upon a combination of time-domain processing, using an autocorrelation function such as normalized cross correlation, and frequency-domain processing utilizing spectral information to identify the pitch. Then, among the candidates estimated from the two domains, a final pitch track can be computed using dynamic programming.
The advantage of these approaches is that the tracking error in one domain can be reduced by the process in the other domain. The fundamental frequency of speech can vary from 40 Hz for low-pitched voices to 600 Hz for high-pitched voices.[12] Autocorrelation methods need at least two pitch periods to detect pitch. This means that in order to detect a fundamental frequency of 40 Hz, at least 50 milliseconds (ms) of the speech signal must be analyzed. However, over a span of 50 ms, the signal may not necessarily keep the same fundamental frequency throughout the window, particularly for speech at higher fundamental frequencies.[12]
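A minimal time-domain estimator of the kind described above can be sketched in a few lines of Python; this is an illustrative autocorrelation-peak picker, not one of the cited algorithms, and it omits voicing decisions and octave-error handling.

```python
import numpy as np

def estimate_pitch(frame, fs, fmin=40.0, fmax=600.0):
    """Pick the autocorrelation peak in the admissible lag range."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(fs / fmax)             # shortest period considered
    lag_max = int(fs / fmin)             # longest period considered
    lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return fs / lag

# Synthetic 220 Hz tone with one harmonic and a little noise, 50 ms frame.
fs = 16000
t = np.arange(int(0.05 * fs)) / fs
x = (np.sin(2*np.pi*220*t) + 0.5*np.sin(2*np.pi*440*t)
     + 0.05*np.random.randn(t.size))
print(f"estimated f0: {estimate_pitch(x, fs):.0f} Hz")   # ~220 Hz
```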
https://en.wikipedia.org/wiki/Pitch_detection_algorithm
Instatistical signal processing, the goal ofspectral density estimation(SDE) or simplyspectral estimationis toestimatethespectral density(also known as thepower spectral density) of a signal from a sequence of time samples of the signal.[1]Intuitively speaking, the spectral density characterizes thefrequencycontent of the signal. One purpose of estimating the spectral density is to detect anyperiodicitiesin the data, by observing peaks at the frequencies corresponding to these periodicities. Some SDE techniques assume that a signal is composed of a limited (usually small) number of generating frequencies plus noise and seek to find the location and intensity of the generated frequencies. Others make no assumption on the number of components and seek to estimate the whole generating spectrum. Spectrum analysis, also referred to asfrequency domainanalysis or spectral density estimation, is the technical process of decomposing a complex signal into simpler parts. As described above, many physical processes are best described as a sum of many individual frequency components. Any process that quantifies the various amounts (e.g. amplitudes, powers, intensities) versus frequency (orphase) can be calledspectrum analysis. Spectrum analysis can be performed on the entire signal. Alternatively, a signal can be broken into short segments (sometimes calledframes), and spectrum analysis may be applied to these individual segments.Periodic functions(such assin⁡(t){\displaystyle \sin(t)}) are particularly well-suited for this sub-division. General mathematical techniques for analyzing non-periodic functions fall into the category ofFourier analysis. TheFourier transformof a function produces a frequency spectrum which contains all of the information about the original signal, but in a different form. This means that the original function can be completely reconstructed (synthesized) by aninverse Fourier transform. For perfect reconstruction, the spectrum analyzer must preserve both theamplitudeandphaseof each frequency component. These two pieces of information can be represented as a 2-dimensional vector, as acomplex number, or as magnitude (amplitude) and phase inpolar coordinates(i.e., as aphasor). A common technique in signal processing is to consider the squared amplitude, orpower; in this case the resulting plot is referred to as apower spectrum. Because of reversibility, the Fourier transform is called arepresentationof the function, in terms of frequency instead of time; thus, it is afrequency domainrepresentation. Linear operations that could be performed in the time domain have counterparts that can often be performed more easily in the frequency domain. Frequency analysis also simplifies the understanding and interpretation of the effects of various time-domain operations, both linear and non-linear. For instance, onlynon-linearortime-variantoperations can create new frequencies in the frequency spectrum. In practice, nearly all software and electronic devices that generate frequency spectra utilize adiscrete Fourier transform(DFT), which operates onsamplesof the signal, and which provides a mathematical approximation to the full integral solution. The DFT is almost invariably implemented by an efficient algorithm calledfast Fourier transform(FFT). The array of squared-magnitude components of a DFT is a type of power spectrum calledperiodogram, which is widely used for examining the frequency characteristics of noise-free functions such asfilter impulse responsesandwindow functions. 
But the periodogram does not provide processing gain when applied to noiselike signals or even sinusoids at low signal-to-noise ratios.[why?] In other words, the variance of its spectral estimate at a given frequency does not decrease as the number of samples used in the computation increases. This can be mitigated by averaging over time (Welch's method[2]) or over frequency (smoothing). Welch's method is widely used for spectral density estimation (SDE). However, periodogram-based techniques introduce small biases that are unacceptable in some applications, so other alternatives are presented in the next section.

Many other techniques for spectral estimation have been developed to mitigate the disadvantages of the basic periodogram. These techniques can generally be divided into non-parametric, parametric, and more recently semi-parametric (also called sparse) methods.[3] The non-parametric approaches explicitly estimate the covariance or the spectrum of the process without assuming that the process has any particular structure. Some of the most common estimators in use for basic applications (e.g. Welch's method) are non-parametric estimators closely related to the periodogram. By contrast, the parametric approaches assume that the underlying stationary stochastic process has a certain structure that can be described using a small number of parameters (for example, using an auto-regressive or moving-average model). In these approaches, the task is to estimate the parameters of the model that describes the stochastic process. When using the semi-parametric methods, the underlying process is modeled using a non-parametric framework, with the additional assumption that the number of non-zero components of the model is small (i.e., the model is sparse). Similar approaches may also be used for missing data recovery[4] as well as signal reconstruction.

In parametric spectral estimation, one assumes that the signal is modeled by a stationary process which has a spectral density function (SDF) S(f; a₁, …, a_p) that is a function of the frequency f and p parameters a₁, …, a_p.[8] The estimation problem then becomes one of estimating these parameters.

The most common form of parametric SDF estimate uses as a model an autoregressive model AR(p) of order p.[8]: 392 A signal sequence {Y_t} obeying a zero-mean AR(p) process satisfies the equation

{\displaystyle Y_{t}=\phi _{1}Y_{t-1}+\phi _{2}Y_{t-2}+\cdots +\phi _{p}Y_{t-p}+\epsilon _{t},}

where the ϕ₁, …, ϕ_p are fixed coefficients and ε_t is a white noise process with zero mean and innovation variance σ_p². The SDF for this process is

{\displaystyle S(f;\phi _{1},\ldots ,\phi _{p},\sigma _{p}^{2})={\frac {\sigma _{p}^{2}\,\Delta t}{\left|1-\sum _{k=1}^{p}\phi _{k}e^{-2\pi ifk\,\Delta t}\right|^{2}}},\qquad |f|<f_{N},}

with Δt the sampling time interval and f_N the Nyquist frequency.

There are a number of approaches to estimating the parameters ϕ₁, …, ϕ_p, σ_p² of the AR(p) process and thus the spectral density.[8]: 452-453 Alternative parametric methods include fitting to a moving-average model (MA) and to a full autoregressive moving-average model (ARMA).
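The variance behaviour described above is easy to demonstrate numerically; the following sketch (illustrative, using SciPy's periodogram and welch estimators) locates a 50 Hz tone buried in noise with both approaches.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 1000
t = np.arange(8192) / fs
x = np.sin(2*np.pi*50*t) + 2*rng.standard_normal(t.size)  # tone in noise

# Raw periodogram: fine frequency resolution, but the noise-floor estimate
# stays noisy no matter how long the record is.
f_p, P_p = signal.periodogram(x, fs)

# Welch's method: average periodograms of overlapping segments, trading
# resolution for a much lower-variance estimate.
f_w, P_w = signal.welch(x, fs, nperseg=1024)

print("peak (periodogram): %.2f Hz" % f_p[np.argmax(P_p)])
print("peak (Welch):       %.2f Hz" % f_w[np.argmax(P_w)])
```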
Frequency estimation is the process of estimating the frequency, amplitude, and phase-shift of a signal in the presence of noise, given assumptions about the number of the components.[10] This contrasts with the general methods above, which do not make prior assumptions about the components.

If one only wants to estimate the frequency of the single loudest pure-tone signal, one can use a pitch detection algorithm.

If the dominant frequency changes over time, then the problem becomes the estimation of the instantaneous frequency as defined in the time–frequency representation. Methods for instantaneous frequency estimation include those based on the Wigner–Ville distribution and higher order ambiguity functions.[11]

If one wants to know all the (possibly complex) frequency components of a received signal (including transmitted signal and noise), one uses a multiple-tone approach. A typical model for a signal x(n) consists of a sum of p complex exponentials in the presence of white noise w(n):

{\displaystyle x(n)=\sum _{i=1}^{p}A_{i}e^{jn\omega _{i}}+w(n).}

The power spectral density of x(n) is composed of p impulse functions in addition to the spectral density function due to noise.

The most common methods for frequency estimation involve identifying the noise subspace to extract these components. These methods are based on eigen decomposition of the autocorrelation matrix into a signal subspace and a noise subspace. After these subspaces are identified, a frequency estimation function is used to find the component frequencies from the noise subspace. The most popular methods of noise-subspace-based frequency estimation are Pisarenko's method, the multiple signal classification (MUSIC) method, the eigenvector method, and the minimum norm method.

Suppose x_n, from n = 0 to N − 1, is a time series (discrete time) with zero mean. Suppose that it is a sum of a finite number of periodic components (all frequencies are positive):

{\displaystyle x_{n}=\sum _{k}A_{k}\sin(2\pi \nu _{k}n+\phi _{k}).}

The variance of x_n is, for a zero-mean function as above, given by

{\displaystyle {\frac {1}{N}}\sum _{n=0}^{N-1}x_{n}^{2}.}

If these data were samples taken from an electrical signal, this would be its average power (power is energy per unit time, so it is analogous to variance if energy is analogous to the amplitude squared).

Now, for simplicity, suppose the signal extends infinitely in time, so we pass to the limit as N → ∞. If the average power is bounded, which is almost always the case in reality, then the following limit exists and is the variance of the data:

{\displaystyle \lim _{N\to \infty }{\frac {1}{N}}\sum _{n=0}^{N-1}x_{n}^{2}.}

Again, for simplicity, we will pass to continuous time, and assume that the signal extends infinitely in time in both directions. Then these two formulas become

{\displaystyle x(t)=\sum _{k}A_{k}\sin(2\pi \nu _{k}t+\phi _{k})}

and

{\displaystyle \lim _{T\to \infty }{\frac {1}{2T}}\int _{-T}^{T}x(t)^{2}\,dt.}

The root mean square of sin is 1/√2, so the variance of A_k sin(2πν_k t + φ_k) is A_k²/2. Hence, the contribution to the average power of x(t) coming from the component with frequency ν_k is A_k²/2. All these contributions add up to the average power of x(t).

Then the power as a function of frequency is A_k²/2, and its statistical cumulative distribution function S(ν) will be

{\displaystyle S(\nu )=\sum _{k:\nu _{k}<\nu }{\tfrac {1}{2}}A_{k}^{2}.}

S is a step function, monotonically non-decreasing.
Its jumps occur at the frequencies of the periodic components of x, and the value of each jump is the power or variance of that component.

The variance is the covariance of the data with itself. If we now consider the same data but with a lag of τ, we can take the covariance of x(t) with x(t + τ), and define this to be the autocorrelation function c of the signal (or data) x:

{\displaystyle c(\tau )=\lim _{T\to \infty }{\frac {1}{2T}}\int _{-T}^{T}x(t)\,x(t+\tau )\,dt.}

If it exists, it is an even function of τ. If the average power is bounded, then c exists everywhere, is finite, and is bounded by c(0), which is the average power or variance of the data.

It can be shown that c can be decomposed into periodic components with the same periods as x:

{\displaystyle c(\tau )=\sum _{k}{\tfrac {1}{2}}A_{k}^{2}\cos(2\pi \nu _{k}\tau ).}

This is in fact the spectral decomposition of c over the different frequencies, and is related to the distribution of power of x over the frequencies: the amplitude of a frequency component of c is its contribution to the average power of the signal.

The power spectrum of this example is not continuous, and therefore does not have a derivative, and therefore this signal does not have a power spectral density function. In general, the power spectrum will usually be the sum of two parts: a line spectrum such as in this example, which is not continuous and does not have a density function, and a residue, which is absolutely continuous and does have a density function.
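The decomposition of c into cosines can be checked numerically; the sketch below compares a time-average estimate of c(τ) against the closed form for a two-component signal (all parameter values are illustrative).

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 200.0, 1.0/fs)          # a long record approximates T -> infinity
A1, nu1, A2, nu2 = 1.0, 5.0, 0.5, 12.5
x = A1*np.sin(2*np.pi*nu1*t + 0.3) + A2*np.sin(2*np.pi*nu2*t + 1.1)

tau = 0.07                               # lag in seconds
k = int(tau * fs)
c_est = np.mean(x[:-k] * x[k:])          # time-average estimate of c(tau)
c_thy = 0.5*A1**2*np.cos(2*np.pi*nu1*tau) + 0.5*A2**2*np.cos(2*np.pi*nu2*tau)
print(f"estimate {c_est:.4f}  vs  theory {c_thy:.4f}")   # closely agree
```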
https://en.wikipedia.org/wiki/Spectral_density_estimation#Single_tone
In mathematics, Bhāskara I's sine approximation formula is a rational expression in one variable for the computation of the approximate values of the trigonometric sines discovered by Bhāskara I (c. 600 – c. 680), a seventh-century Indian mathematician.[1] This formula is given in his treatise titled Mahabhaskariya. It is not known how Bhāskara I arrived at his approximation formula. However, several historians of mathematics have put forward different hypotheses as to the method Bhāskara might have used to arrive at his formula. The formula is elegant and simple, and it enables the computation of reasonably accurate values of trigonometric sines without the use of geometry.[2]

The formula is given in verses 17–19, chapter VII, Mahabhaskariya of Bhāskara I. A translation of the verses is given below:[3]

(Now) I briefly state the rule (for finding the bhujaphala and the kotiphala, etc.) without making use of the Rsine-differences 225, etc. Subtract the degrees of a bhuja (or koti) from the degrees of a half circle (that is, 180 degrees). Then multiply the remainder by the degrees of the bhuja or koti and put down the result at two places. At one place subtract the result from 40500. By one-fourth of the remainder (thus obtained), divide the result at the other place as multiplied by the anthyaphala (that is, the epicyclic radius). Thus is obtained the entire bahuphala (or, kotiphala) for the sun, moon or the star-planets. So also are obtained the direct and inverse Rsines.

(The reference "Rsine-differences 225" is an allusion to Aryabhata's sine table.)

In modern mathematical notation, for an angle x in degrees, this formula gives[3]

{\displaystyle \sin x^{\circ }\approx {\frac {4x(180-x)}{40500-x(180-x)}}.}

Bhāskara I's sine approximation formula can be expressed using the radian measure of angles as follows:[1]

{\displaystyle \sin x\approx {\frac {16x(\pi -x)}{5\pi ^{2}-4x(\pi -x)}}.}

For a positive integer n this takes the following form:[4]

{\displaystyle \sin {\frac {\pi }{n}}\approx {\frac {16(n-1)}{5n^{2}-4n+4}}.}

The formula acquires an even simpler form when expressed in terms of the cosine rather than the sine. Using radian measure for angles from −π/2 to π/2 and putting x = π/2 + y, one gets

{\displaystyle \cos y\approx {\frac {\pi ^{2}-4y^{2}}{\pi ^{2}+y^{2}}}.}

To express the previous formula with the constant τ = 2π, one can use

{\displaystyle \cos y\approx {\frac {\tau ^{2}-16y^{2}}{\tau ^{2}+4y^{2}}}.}

Equivalent forms of Bhāskara I's formula have been given by almost all subsequent astronomers and mathematicians of India. For example, Brahmagupta's (598–668 CE) Brhma-Sphuta-Siddhanta (verses 23–24, chapter XIV)[3] gives the formula in an equivalent form, as does Bhāskara II (1114–1185 CE) in his Lilavati (Kshetra-vyavahara, Soka No. 48). The approximation can also be used to derive formulas for the inverse cosine and inverse sine; using absolute values and the sign function, each such pair of functions can be combined into a single expression.

The formula is applicable for values of x° in the range from 0° to 180°. The formula is remarkably accurate in this range. The graphs of sin x and the approximation formula are visually indistinguishable and are nearly identical. One of the accompanying figures gives the graph of the error function, namely the difference between sin x and the value given by the formula. It shows that the maximum absolute error in using the formula is around 0.0016. From a plot of the percentage value of the absolute error, it is clear that the maximum relative error is less than 1.8%. The approximation formula thus gives sufficiently accurate values of sines for most practical purposes. However, it was not sufficient for the more accurate computational requirements of astronomy.
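The stated error bounds are straightforward to verify numerically, as in this illustrative Python sketch:

```python
import math

def bhaskara_sin(x_deg):
    """Bhāskara I's approximation to sin(x) for x in degrees, 0 <= x <= 180."""
    p = x_deg * (180.0 - x_deg)
    return 4.0 * p / (40500.0 - p)

# Scan the valid range in steps of 0.1 degree for the worst absolute error.
errs = (abs(bhaskara_sin(x/10.0) - math.sin(math.radians(x/10.0)))
        for x in range(0, 1801))
print(f"max |error| = {max(errs):.4f}")   # about 0.0016, as stated above
```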
The search for more accurate formulas by Indian astronomers eventually led to the discovery of the power series expansions of sin x and cos x by Madhava of Sangamagrama (c. 1350 – c. 1425), the founder of the Kerala school of astronomy and mathematics.

Bhāskara had not indicated any method by which he arrived at his formula. Historians have speculated on various possibilities. No definitive answers have as yet been obtained. Beyond its historical importance of being a prime example of the mathematical achievements of ancient Indian astronomers, the formula is of significance from a modern perspective also. Mathematicians have attempted to derive the rule using modern concepts and tools. Around half a dozen methods have been suggested, each based on a separate set of premises.[2][3] Most of these derivations use only elementary concepts.

Let the circumference of a circle be measured in degrees and let the radius R of the circle be also measured in degrees. Choosing a fixed diameter AB and an arbitrary point P on the circle and dropping the perpendicular PM to AB, we can compute the area of the triangle APB in two ways. Equating the two expressions for the area, one gets (1/2) AB × PM = (1/2) AP × BP. This gives

{\displaystyle PM={\frac {AP\times BP}{AB}}.}

Letting x be the length of the arc AP, the length of the arc BP is 180 − x. These arcs are much bigger than the respective chords. Hence, replacing the chords by the corresponding arcs, one gets that sin x is approximately proportional to x(180 − x). One now seeks two constants α and β such that

{\displaystyle \sin x={\frac {x(180-x)}{\alpha +\beta \,x(180-x)}}.}

It is indeed not possible to obtain such constants. However, one may choose values for α and β so that the above expression is valid for two chosen values of the arc length x. Choosing 30° and 90° as these values and solving the resulting equations, one immediately gets Bhāskara I's sine approximation formula.[2][3]

Assuming that x is in radians, one may seek an approximation to sin x in the following form:

{\displaystyle \sin x\approx {\frac {ax+bx^{2}+cx^{3}}{p+qx+rx^{2}}}.}

The constants a, b, c, p, q and r (only five of them are independent) can be determined by assuming that the formula must be exactly valid when x = 0, π/6, π/2, π, and further assuming that it has to satisfy the property that sin(x) = sin(π − x).[2][3] This procedure produces the formula expressed using the radian measure of angles.

The part of the graph of sin x in the range from 0° to 180° "looks like" part of a parabola through the points (0, 0) and (180, 0). The general form of such a parabola is

{\displaystyle kx(180-x).}

The parabola that also passes through (90, 1) (which is the point corresponding to the value sin(90°) = 1) is

{\displaystyle {\frac {x(180-x)}{90\times 90}}.}

The parabola which also passes through (30, 1/2) (which is the point corresponding to the value sin(30°) = 1/2) is

{\displaystyle {\frac {x(180-x)}{2\times 30\times 150}}.}

These expressions suggest a varying denominator which takes the value 90 × 90 when x = 90 and the value 2 × 30 × 150 when x = 30. That this expression should also be symmetrical about the line x = 90 rules out the possibility of choosing a linear expression in x. Computations involving x(180 − x) might immediately suggest that the expression could be of the form

{\displaystyle \sin x\approx {\frac {x(180-x)}{8100a+b\,x(180-x)}}.}

A little experimentation (or by setting up and solving two linear equations in a and b) will yield the values a = 5/4, b = −1/4. These give Bhāskara I's sine approximation formula.[4]

Karel Stroethoff (2014) offers a similar, but simpler, argument for Bhāskara I's choice. He also provides an analogous approximation for the cosine and extends the technique to second- and third-order polynomials.[5]
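The "two linear equations" step of the last derivation can be carried out mechanically; a small sketch, assuming the denominator form given above:

```python
import numpy as np

# Require sin(x) = x(180-x) / (8100*a + b*x(180-x)) to be exact at
# x = 90 (sin = 1, so the denominator must equal 8100) and at
# x = 30 (sin = 1/2, so the denominator must equal 2*4500 = 9000).
M = np.array([[8100.0, 8100.0],    # x = 90: x(180-x) = 90*90 = 8100
              [8100.0, 4500.0]])   # x = 30: x(180-x) = 30*150 = 4500
rhs = np.array([8100.0, 9000.0])
a, b = np.linalg.solve(M, rhs)
print(a, b)                        # 1.25 -0.25, i.e. a = 5/4, b = -1/4
```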
https://en.wikipedia.org/wiki/Bh%C4%81skara_I%27s_sine_approximation_formula
For small angles, the trigonometric functions sine, cosine, and tangent can be calculated with reasonable accuracy by the following simple approximations:

{\displaystyle \sin \theta \approx \tan \theta \approx \theta ,\qquad \cos \theta \approx 1-{\frac {\theta ^{2}}{2}}\approx 1,}

provided the angle is measured in radians. Angles measured in degrees must first be converted to radians by multiplying them by π/180.

These approximations have a wide range of uses in branches of physics and engineering, including mechanics, electromagnetism, optics, cartography, astronomy, and computer science.[1][2] One reason for this is that they can greatly simplify differential equations that do not need to be answered with absolute precision.

There are a number of ways to demonstrate the validity of the small-angle approximations. The most direct method is to truncate the Maclaurin series for each of the trigonometric functions. Depending on the order of the approximation, cos θ is approximated as either 1 or as 1 − θ²/2.[3]

For a small angle, H and A are almost the same length, and therefore cos θ is nearly 1. The segment d (in red to the right) is the difference between the lengths of the hypotenuse, H, and the adjacent side, A, and has length {\displaystyle H-{\sqrt {H^{2}-O^{2}}}}, which for small angles is approximately equal to {\displaystyle O^{2}\!/2H\approx {\tfrac {1}{2}}\theta ^{2}H}. As a second-order approximation, {\displaystyle \cos {\theta }\approx 1-{\frac {\theta ^{2}}{2}}.}

The opposite leg, O, is approximately equal to the length of the blue arc, s. The arc s has length θA, and by definition sin θ = O/H and tan θ = O/A, and for a small angle, O ≈ s and H ≈ A, which leads to:

{\displaystyle \sin \theta ={\frac {O}{H}}\approx {\frac {O}{A}}=\tan \theta ={\frac {O}{A}}\approx {\frac {s}{A}}={\frac {A\theta }{A}}=\theta .}

Or, more concisely, {\displaystyle \sin \theta \approx \tan \theta \approx \theta .}

Using the squeeze theorem,[4] we can prove that {\displaystyle \lim _{\theta \to 0}{\frac {\sin(\theta )}{\theta }}=1,} which is a formal restatement of the approximation sin(θ) ≈ θ for small values of θ.

A more careful application of the squeeze theorem proves that {\displaystyle \lim _{\theta \to 0}{\frac {\tan(\theta )}{\theta }}=1,} from which we conclude that tan(θ) ≈ θ for small values of θ.

Finally, L'Hôpital's rule tells us that {\displaystyle \lim _{\theta \to 0}{\frac {\cos(\theta )-1}{\theta ^{2}}}=\lim _{\theta \to 0}{\frac {-\sin(\theta )}{2\theta }}=-{\frac {1}{2}},} which rearranges to cos(θ) ≈ 1 − θ²/2 for small values of θ. Alternatively, we can use the double angle formula cos 2A ≡ 1 − 2 sin² A. By letting θ = 2A, we get that cos θ = 1 − 2 sin²(θ/2) ≈ 1 − θ²/2.
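The quality of these approximations is easy to probe numerically; the sketch below tabulates relative errors at a few angles, anticipating the quadratic and quartic error scalings discussed below.

```python
import math

for theta in (0.1, 0.01, 0.001):
    sin_err = abs(theta - math.sin(theta)) / math.sin(theta)
    tan_err = abs(theta - math.tan(theta)) / math.tan(theta)
    cos1_err = abs(1.0 - math.cos(theta)) / math.cos(theta)
    cos2_err = abs((1.0 - theta**2/2) - math.cos(theta)) / math.cos(theta)
    print(f"theta={theta:6}: sin {sin_err:.1e}  tan {tan_err:.1e}  "
          f"cos~1 {cos1_err:.1e}  cos~1-t^2/2 {cos2_err:.1e}")
# Each factor-of-10 decrease in theta shrinks the first three errors by
# roughly 100x (quadratic) and the last by roughly 10000x (quartic).
```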
The Taylor series expansions of the trigonometric functions sine, cosine, and tangent near zero are:[5]

{\displaystyle {\begin{aligned}\sin \theta &=\theta -{\frac {1}{6}}\theta ^{3}+{\frac {1}{120}}\theta ^{5}-\cdots ,\\[6mu]\cos \theta &=1-{\frac {1}{2}}{\theta ^{2}}+{\frac {1}{24}}\theta ^{4}-\cdots ,\\[6mu]\tan \theta &=\theta +{\frac {1}{3}}\theta ^{3}+{\frac {2}{15}}\theta ^{5}+\cdots ,\end{aligned}}}

where θ is the angle in radians. For very small angles, higher powers of θ become extremely small; for instance if θ = 0.01, then θ³ = 0.000001, just one ten-thousandth of θ. Thus for many purposes it suffices to drop the cubic and higher terms and approximate the sine and tangent of a small angle using the radian measure of the angle, sin θ ≈ tan θ ≈ θ, and drop the quadratic term and approximate the cosine as cos θ ≈ 1. If additional precision is needed, the quadratic and cubic terms can also be included: sin θ ≈ θ − (1/6)θ³, cos θ ≈ 1 − (1/2)θ², and tan θ ≈ θ + (1/3)θ³.

One may also use dual numbers, defined as numbers in the form a + bε, with a, b ∈ ℝ and ε satisfying by definition ε² = 0 and ε ≠ 0. By using the Maclaurin series of cosine and sine, one can show that cos(θε) = 1 and sin(θε) = θε. Furthermore, it is not hard to prove that the Pythagorean identity holds:

{\displaystyle \sin ^{2}(\theta \varepsilon )+\cos ^{2}(\theta \varepsilon )=(\theta \varepsilon )^{2}+1^{2}=\theta ^{2}\varepsilon ^{2}+1=\theta ^{2}\cdot 0+1=1.}

Near zero, the relative error of the approximations cos θ ≈ 1, sin θ ≈ θ, and tan θ ≈ θ is quadratic in θ: for each order of magnitude smaller the angle is, the relative error of these approximations shrinks by two orders of magnitude. The approximation cos θ ≈ 1 − θ²/2 has relative error which is quartic in θ: for each order of magnitude smaller the angle is, the relative error shrinks by four orders of magnitude.

Figure 3 shows the relative errors of the small angle approximations. The angles at which the relative error exceeds 1% are approximately 0.14 radians (8.1°) for cos θ ≈ 1, 0.17 radians (9.9°) for tan θ ≈ θ, and 0.24 radians (14.0°) for sin θ ≈ θ.

Many slide rules – especially "trig" and higher models – include an "ST" (sines and tangents) or "SRT" (sines, radians, and tangents) scale on the front or back of the slide, for computing with sines and tangents of angles smaller than about 0.1 radian.[6] The right-hand end of the ST or SRT scale cannot be accurate to three decimal places for both arcsine(0.1) = 5.74 degrees and arctangent(0.1) = 5.71 degrees, so sines and tangents of angles near 5 degrees are given with somewhat worse than the usual expected "slide-rule accuracy".
Some slide rules, such as the K&E Deci-Lon in the photo, calibrate to be accurate for radian conversion, at 5.73 degrees (off by nearly 0.4% for the tangent and 0.2% for the sine for angles around 5 degrees). Others are calibrated to 5.725 degrees, to balance the sine and tangent errors at below 0.3%.

The angle addition and subtraction theorems reduce to the following when one of the angles is small (β ≈ 0):

{\displaystyle \cos(\alpha \pm \beta )\approx \cos \alpha \mp \beta \sin \alpha ,\qquad \sin(\alpha \pm \beta )\approx \sin \alpha \pm \beta \cos \alpha .}

In astronomy, the angular size or angle subtended by the image of a distant object is often only a few arcseconds (denoted by the symbol ″), so it is well suited to the small angle approximation.[7] The linear size (D) is related to the angular size (X) and the distance from the observer (d) by the simple formula:

{\displaystyle D={\frac {Xd}{206\,265''}},}

where X is measured in arcseconds. The quantity 206265″ is approximately equal to the number of arcseconds in a circle (1296000″) divided by 2π, or, the number of arcseconds in 1 radian. The exact formula is

{\displaystyle D=d\tan \left({\frac {X}{206\,265''}}\right),}

and the above approximation follows when tan X is replaced by X.

For example, the parsec is defined by the value of d when D = 1 AU, X = 1 arcsecond, but the definition used is the small-angle approximation (the first equation above).

The second-order cosine approximation is especially useful in calculating the potential energy of a pendulum, which can then be applied with a Lagrangian to find the indirect (energy) equation of motion. When calculating the period of a simple pendulum, the small-angle approximation for sine is used to allow the resulting differential equation to be solved easily by comparison with the differential equation describing simple harmonic motion.[8]

In optics, the small-angle approximations form the basis of the paraxial approximation. The sine and tangent small-angle approximations are used in relation to the double-slit experiment or a diffraction grating to develop simplified equations like the following, where y is the distance of a fringe from the center of maximum light intensity, m is the order of the fringe, D is the distance between the slits and the projection screen, and d is the distance between the slits:[9]

{\displaystyle y\approx {\frac {m\lambda D}{d}}.}

The small-angle approximation also appears in structural mechanics, especially in stability and bifurcation analyses (mainly of axially loaded columns ready to undergo buckling). This leads to significant simplifications, though at a cost in accuracy and insight into the true behavior.

The 1 in 60 rule used in air navigation has its basis in the small-angle approximation, plus the fact that one radian is approximately 60 degrees.

The formulas for addition and subtraction involving a small angle may be used for interpolating between trigonometric table values. Example: sin(0.755)

{\displaystyle {\begin{aligned}\sin(0.755)&=\sin(0.75+0.005)\\&\approx \sin(0.75)+(0.005)\cos(0.75)\\&\approx (0.6816)+(0.005)(0.7317)\\&\approx 0.6853,\end{aligned}}}

where the values for sin(0.75) and cos(0.75) are obtained from a trigonometric table. The result is accurate to the four digits given.
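The interpolation example can be replayed in a couple of lines of Python (an illustrative check against the exact value):

```python
import math

# sin(0.755) via linearization around 0.75, as in the worked example above.
approx = math.sin(0.75) + 0.005 * math.cos(0.75)
print(f"{approx:.4f} vs exact {math.sin(0.755):.4f}")   # 0.6853 vs 0.6853
```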
https://en.wikipedia.org/wiki/Small-angle_approximation
The differentiation of trigonometric functions is the mathematical process of finding the derivative of a trigonometric function, or its rate of change with respect to a variable. For example, the derivative of the sine function is written sin′(a) = cos(a), meaning that the rate of change of sin(x) at a particular angle x = a is given by the cosine of that angle.

All derivatives of circular trigonometric functions can be found from those of sin(x) and cos(x) by means of the quotient rule applied to functions such as tan(x) = sin(x)/cos(x). Knowing these derivatives, the derivatives of the inverse trigonometric functions are found using implicit differentiation.

The diagram at right shows a circle with centre O and radius r = 1. Let two radii OA and OB make an arc of θ radians. Since we are considering the limit as θ tends to zero, we may assume θ is a small positive number, say 0 < θ < π/2 in the first quadrant.

In the diagram, let R₁ be the triangle OAB, R₂ the circular sector OAB, and R₃ the triangle OAC.

The area of triangle OAB is:

{\displaystyle \mathrm {Area} (R_{1})={\tfrac {1}{2}}\sin \theta .}

The area of the circular sector OAB is:

{\displaystyle \mathrm {Area} (R_{2})={\tfrac {1}{2}}\theta .}

The area of the triangle OAC is given by:

{\displaystyle \mathrm {Area} (R_{3})={\tfrac {1}{2}}\tan \theta .}

Since each region is contained in the next, one has:

{\displaystyle {\tfrac {1}{2}}\sin \theta <{\tfrac {1}{2}}\theta <{\tfrac {1}{2}}\tan \theta .}

Moreover, since sin θ > 0 in the first quadrant, we may divide through by (1/2) sin θ, giving:

{\displaystyle 1<{\frac {\theta }{\sin \theta }}<{\frac {1}{\cos \theta }}\quad \Longrightarrow \quad \cos \theta <{\frac {\sin \theta }{\theta }}<1.}

In the last step we took the reciprocals of the three positive terms, reversing the inequalities.

We conclude that for 0 < θ < π/2, the quantity sin(θ)/θ is always less than 1 and always greater than cos(θ). Thus, as θ gets closer to 0, sin(θ)/θ is "squeezed" between a ceiling at height 1 and a floor at height cos θ, which rises towards 1; hence sin(θ)/θ must tend to 1 as θ tends to 0 from the positive side:

{\displaystyle \lim _{\theta \to 0^{+}}{\frac {\sin \theta }{\theta }}=1\,.}

For the case where θ is a small negative number, −π/2 < θ < 0, we use the fact that sine is an odd function:

{\displaystyle \lim _{\theta \to 0^{-}}{\frac {\sin \theta }{\theta }}=\lim _{\theta \to 0^{+}}{\frac {\sin(-\theta )}{-\theta }}=\lim _{\theta \to 0^{+}}{\frac {\sin \theta }{\theta }}=1\,.}

Next we consider the limit

{\displaystyle \lim _{\theta \to 0}{\frac {\cos \theta -1}{\theta }}.}

The last section enables us to calculate this new limit relatively easily. This is done by employing a simple trick. In this calculation, the sign of θ is unimportant. Using cos²θ − 1 = −sin²θ, the fact that the limit of a product is the product of limits, and the limit result from the previous section, we find that:

{\displaystyle \lim _{\theta \to 0}{\frac {\cos \theta -1}{\theta }}=\lim _{\theta \to 0}{\frac {\cos ^{2}\theta -1}{\theta (\cos \theta +1)}}=\left(\lim _{\theta \to 0}{\frac {\sin \theta }{\theta }}\right)\left(\lim _{\theta \to 0}{\frac {-\sin \theta }{\cos \theta +1}}\right)=1\cdot 0=0\,.}

Using the limit for the sine function, the fact that the tangent function is odd, and the fact that the limit of a product is the product of limits, we find:

{\displaystyle \lim _{\theta \to 0}{\frac {\tan \theta }{\theta }}=\lim _{\theta \to 0}{\frac {\sin \theta }{\theta }}\cdot {\frac {1}{\cos \theta }}=1\cdot 1=1\,.}

We calculate the derivative of the sine function from the limit definition:

{\displaystyle {\frac {d}{d\theta }}\sin \theta =\lim _{\delta \to 0}{\frac {\sin(\theta +\delta )-\sin \theta }{\delta }}.}

Using the angle addition formula sin(α + β) = sin α cos β + sin β cos α, we have:

{\displaystyle {\frac {d}{d\theta }}\sin \theta =\lim _{\delta \to 0}\left(\sin \theta \,{\frac {\cos \delta -1}{\delta }}+\cos \theta \,{\frac {\sin \delta }{\delta }}\right).}

Using the limits for the sine and cosine functions:

{\displaystyle {\frac {d}{d\theta }}\sin \theta =\sin \theta \cdot 0+\cos \theta \cdot 1=\cos \theta \,.}

We again calculate the derivative of the cosine function from the limit definition:

{\displaystyle {\frac {d}{d\theta }}\cos \theta =\lim _{\delta \to 0}{\frac {\cos(\theta +\delta )-\cos \theta }{\delta }}.}

Using the angle addition formula cos(α + β) = cos α cos β − sin α sin β, we have:

{\displaystyle {\frac {d}{d\theta }}\cos \theta =\lim _{\delta \to 0}\left(\cos \theta \,{\frac {\cos \delta -1}{\delta }}-\sin \theta \,{\frac {\sin \delta }{\delta }}\right).}

Using the limits for the sine and cosine functions:

{\displaystyle {\frac {d}{d\theta }}\cos \theta =\cos \theta \cdot 0-\sin \theta \cdot 1=-\sin \theta \,.}

To compute the derivative of the cosine function from the chain rule, first observe the following three facts: cos θ = sin(π/2 − θ); sin θ = cos(π/2 − θ); and (d/dθ) sin θ = cos θ. The first and the second are trigonometric identities, and the third is proven above. Using these three facts, we can write the following:

{\displaystyle {\frac {d}{d\theta }}\cos \theta ={\frac {d}{d\theta }}\sin \left({\frac {\pi }{2}}-\theta \right).}

We can differentiate this using the chain rule. Letting f(x) = sin x and g(θ) = π/2 − θ, we have:

{\displaystyle {\frac {d}{d\theta }}f{\bigl (}g(\theta ){\bigr )}=f'{\bigl (}g(\theta ){\bigr )}\cdot g'(\theta )=\cos \left({\frac {\pi }{2}}-\theta \right)\cdot (-1)=-\sin \theta \,.}

Therefore, we have proven that

{\displaystyle {\frac {d}{d\theta }}\cos \theta =-\sin \theta \,.}

To calculate the derivative of the tangent function tan θ, we use first principles.
To calculate the derivative of the tangent function tan θ, we use first principles. By definition:

{\displaystyle {\frac {d}{d\theta }}\tan \theta =\lim _{\delta \to 0}{\frac {\tan(\theta +\delta )-\tan \theta }{\delta }}}

Using the well-known angle formula tan(α + β) = (tan α + tan β)/(1 − tan α tan β), we have:

{\displaystyle {\frac {d}{d\theta }}\tan \theta =\lim _{\delta \to 0}{\frac {1}{\delta }}\left[{\frac {\tan \theta +\tan \delta }{1-\tan \theta \tan \delta }}-\tan \theta \right]=\lim _{\delta \to 0}{\frac {\tan \delta \,(1+\tan ^{2}\theta )}{\delta \,(1-\tan \theta \tan \delta )}}}

Using the fact that the limit of a product is the product of the limits:

{\displaystyle {\frac {d}{d\theta }}\tan \theta =\lim _{\delta \to 0}{\frac {\tan \delta }{\delta }}\times \lim _{\delta \to 0}{\frac {1+\tan ^{2}\theta }{1-\tan \theta \tan \delta }}}

Using the limit for the tangent function, and the fact that tan δ tends to 0 as δ tends to 0:

{\displaystyle {\frac {d}{d\theta }}\tan \theta =1\times {\frac {1+\tan ^{2}\theta }{1-0}}=1+\tan ^{2}\theta \,.}

We see immediately that:

{\displaystyle {\frac {d}{d\theta }}\tan \theta =1+{\frac {\sin ^{2}\theta }{\cos ^{2}\theta }}={\frac {\cos ^{2}\theta +\sin ^{2}\theta }{\cos ^{2}\theta }}={\frac {1}{\cos ^{2}\theta }}=\sec ^{2}\theta \,.}

One can also compute the derivative of the tangent function using the quotient rule:

{\displaystyle {\frac {d}{d\theta }}\tan \theta ={\frac {d}{d\theta }}{\frac {\sin \theta }{\cos \theta }}={\frac {(\cos \theta )(\cos \theta )-(\sin \theta )(-\sin \theta )}{\cos ^{2}\theta }}={\frac {\cos ^{2}\theta +\sin ^{2}\theta }{\cos ^{2}\theta }}}

The numerator can be simplified to 1 by the Pythagorean identity, giving us

{\displaystyle {\frac {1}{\cos ^{2}\theta }}=\sec ^{2}\theta }

Therefore,

{\displaystyle {\frac {d}{d\theta }}\tan \theta =\sec ^{2}\theta }

The following derivatives are found by setting a variable y equal to the inverse trigonometric function that we wish to take the derivative of. Using implicit differentiation and then solving for dy/dx, the derivative of the inverse function is found in terms of y. To convert dy/dx back into being in terms of x, we can draw a reference triangle on the unit circle, letting θ be y. Using the Pythagorean theorem and the definition of the regular trigonometric functions, we can finally express dy/dx in terms of x.

For the arcsine, we let y = arcsin x, where −π/2 ≤ y ≤ π/2. Then sin y = x. Taking the derivative with respect to x on both sides and solving for dy/dx:

{\displaystyle \cos y\,{\frac {dy}{dx}}=1\implies {\frac {dy}{dx}}={\frac {1}{\cos y}}}

Substituting cos y = √(1 − sin²y) in from above,

{\displaystyle {\frac {dy}{dx}}={\frac {1}{\sqrt {1-\sin ^{2}y}}}}

Substituting x = sin y in from above,

{\displaystyle {\frac {dy}{dx}}={\frac {1}{\sqrt {1-x^{2}}}}}

For the arccosine, we let y = arccos x, where 0 ≤ y ≤ π. Then cos y = x. Taking the derivative with respect to x on both sides and solving for dy/dx:

{\displaystyle -\sin y\,{\frac {dy}{dx}}=1\implies {\frac {dy}{dx}}=-{\frac {1}{\sin y}}}

Substituting sin y = √(1 − cos²y) in from above, we get

{\displaystyle {\frac {dy}{dx}}=-{\frac {1}{\sqrt {1-\cos ^{2}y}}}}

Substituting x = cos y in from above, we get

{\displaystyle {\frac {dy}{dx}}=-{\frac {1}{\sqrt {1-x^{2}}}}}

Alternatively, once the derivative of arcsin x is established, the derivative of arccos x follows immediately by differentiating the identity arcsin x + arccos x = π/2, so that (arccos x)′ = −(arcsin x)′.

For the arctangent, we let y = arctan x, where −π/2 < y < π/2. Then tan y = x. Taking the derivative with respect to x on both sides and solving for dy/dx:

Left side: {\displaystyle {\frac {d}{dx}}\tan y=\sec ^{2}y\,{\frac {dy}{dx}}=(1+\tan ^{2}y)\,{\frac {dy}{dx}}}

Right side: {\displaystyle {\frac {d}{dx}}x=1}

Therefore,

{\displaystyle {\frac {dy}{dx}}={\frac {1}{1+\tan ^{2}y}}}

Substituting x = tan y in from above, we get

{\displaystyle {\frac {dy}{dx}}={\frac {1}{1+x^{2}}}}

For the arccotangent, we let y = arccot x, where 0 < y < π. Then cot y = x. Taking the derivative with respect to x on both sides and solving for dy/dx:

Left side: {\displaystyle {\frac {d}{dx}}\cot y=-\csc ^{2}y\,{\frac {dy}{dx}}=-(1+\cot ^{2}y)\,{\frac {dy}{dx}}}

Right side: {\displaystyle {\frac {d}{dx}}x=1}

Therefore,

{\displaystyle {\frac {dy}{dx}}=-{\frac {1}{1+\cot ^{2}y}}}

Substituting x = cot y,

{\displaystyle {\frac {dy}{dx}}=-{\frac {1}{1+x^{2}}}}

Alternatively, as the derivative of arctan x is derived as shown above, then using the identity arctan x + arccot x = π/2 it follows immediately that

{\displaystyle {\begin{aligned}{\dfrac {d}{dx}}\operatorname {arccot} x&={\dfrac {d}{dx}}\left({\dfrac {\pi }{2}}-\arctan x\right)\\&=-{\dfrac {1}{1+x^{2}}}\end{aligned}}}

For the arcsecant, let y = arcsec x, so that sec y = x. Then

{\displaystyle \sec y\tan y\,{\frac {dy}{dx}}=1\implies {\frac {dy}{dx}}={\frac {1}{\sec y\tan y}}={\frac {1}{|x|{\sqrt {x^{2}-1}}}}}

(The absolute value in the expression is necessary, as the product of secant and tangent in the interval of y is always nonnegative, while the radical √(x² − 1) is always nonnegative by definition of the principal square root, so the remaining factor must also be nonnegative, which is achieved by using the absolute value of x.) Alternatively, the derivative of arcsecant may be derived from the derivative of arccosine using the chain rule.
Let y = arcsec x = arccos(1/x), where |x| ≥ 1 and 0 ≤ y ≤ π. Then, applying the chain rule to arccos(1/x):

{\displaystyle {\frac {dy}{dx}}=-{\frac {1}{\sqrt {1-{\frac {1}{x^{2}}}}}}\cdot \left(-{\frac {1}{x^{2}}}\right)={\frac {1}{|x|{\sqrt {x^{2}-1}}}}}

For the arccosecant, let y = arccsc x, so that csc y = x. Then

{\displaystyle -\csc y\cot y\,{\frac {dy}{dx}}=1\implies {\frac {dy}{dx}}=-{\frac {1}{\csc y\cot y}}=-{\frac {1}{|x|{\sqrt {x^{2}-1}}}}}

(The absolute value in the expression is necessary, as the product of cosecant and cotangent in the interval of y is always nonnegative, while the radical √(x² − 1) is always nonnegative by definition of the principal square root, so the remaining factor must also be nonnegative, which is achieved by using the absolute value of x.) Alternatively, the derivative of arccosecant may be derived from the derivative of arcsine using the chain rule. Let y = arccsc x = arcsin(1/x), where |x| ≥ 1 and −π/2 ≤ y ≤ π/2. Then, applying the chain rule to arcsin(1/x):

{\displaystyle {\frac {dy}{dx}}={\frac {1}{\sqrt {1-{\frac {1}{x^{2}}}}}}\cdot \left(-{\frac {1}{x^{2}}}\right)=-{\frac {1}{|x|{\sqrt {x^{2}-1}}}}}
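The resulting derivative formulas can be sanity-checked against central finite differences. A minimal Python sketch using only the standard library; the sample points and step size are arbitrary choices, and since math has no arcsecant, arccos(1/x) stands in for it:

import math

def num_deriv(f, x, h=1e-6):
    # central finite-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.5
print(num_deriv(math.asin, x), 1 / math.sqrt(1 - x*x))     # arcsine
print(num_deriv(math.acos, x), -1 / math.sqrt(1 - x*x))    # arccosine
print(num_deriv(math.atan, x), 1 / (1 + x*x))              # arctangent

x = 2.0   # |x| >= 1 for the arcsecant
print(num_deriv(lambda t: math.acos(1/t), x),
      1 / (abs(x) * math.sqrt(x*x - 1)))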
https://en.wikipedia.org/wiki/Differentiation_of_trigonometric_functions
Inmathematics, abivectoror2-vectoris a quantity inexterior algebraorgeometric algebrathat extends the idea ofscalarsandvectors. Considering a scalar as a degree-zero quantity and a vector as a degree-one quantity, a bivector is of degree two. Bivectors have applications in many areas of mathematics and physics. They are related tocomplex numbersintwo dimensionsand to bothpseudovectorsandvector quaternionsin three dimensions. They can be used to generaterotationsin a space of any number of dimensions, and are a useful tool for classifying such rotations. Geometrically, a simple bivector can be interpreted as characterizing adirected plane segment(ororiented plane segment), much asvectorscan be thought of as characterizingdirected line segments.[2]The bivectora∧bhas anattitude(ordirection) of theplanespanned byaandb, has an area that is a scalar multiple of any referenceplane segmentwith the same attitude (and in geometric algebra, it has amagnitudeequal to the area of theparallelogramwith edgesaandb), and has anorientationbeing the side ofaon whichblies within the plane spanned byaandb.[2][3]In layman terms, any surface defines the same bivector if it is parallel to the same plane (same attitude), has the same area, and same orientation (see figure). Bivectors are generated by theexterior producton vectors: given two vectorsaandb, their exterior producta∧bis a bivector, as is any sum of bivectors. Not all bivectors can be expressed as an exterior product without such summation. More precisely, a bivector that can be expressed as an exterior product is calledsimple; in up to three dimensions all bivectors are simple, but in higher dimensions this is not the case.[4]The exterior product of two vectors isalternating, soa∧ais the zero bivector, andb∧a=−a∧b{\displaystyle \mathbf {b} \wedge \mathbf {a} =-\mathbf {a} \wedge \mathbf {b} }, producing the opposite orientation. Concepts directly related to bivector are rank-2antisymmetric tensorandskew-symmetric matrix. The bivector was first defined in 1844 by German mathematicianHermann Grassmanninexterior algebraas the result of theexterior productof two vectors. Just the previous year, in Ireland,William Rowan Hamiltonhad discoveredquaternions. Hamilton coined bothvectorandbivector, the latter in hisLectures on Quaternions(1853) as he introducedbiquaternions, which havebivectorsfor their vector parts. It was not until English mathematicianWilliam Kingdon Cliffordin 1888 added the geometric product to Grassmann's algebra, incorporating the ideas of both Hamilton and Grassmann, and foundedClifford algebra, that the bivector of this article arose.Henry Forderused the termbivectorto develop exterior algebra in 1941.[5] In the 1890sJosiah Willard GibbsandOliver Heavisidedevelopedvector calculus, which included separatecross productanddot productsthat were derived from quaternion multiplication.[6][7][8]The success of vector calculus, and of the bookVector Analysisby Gibbs andWilson, had the effect that the insights of Hamilton and Clifford were overlooked for a long time, since much of 20th century mathematics and physics was formulated in vector terms. Gibbs used vectors to fill the role of bivectors in three dimensions, and usedbivectorin Hamilton's sense, a use that has sometimes been copied.[9][10][11]Today the bivector is largely studied as a topic ingeometric algebra, a Clifford algebra overrealorcomplexvector spaceswith aquadratic form. 
Its resurgence was led by David Hestenes who, along with others, applied geometric algebra to a range of new applications in physics.[12] For this article, the bivector will be considered only in real geometric algebras, which may be applied in most areas of physics. Also unless otherwise stated, all examples have a Euclidean metric and so a positive-definite quadratic form. The bivector arises from the definition of the geometric product over a vector space with an associated quadratic form sometimes called the metric. For vectors a, b and c, the geometric product satisfies the following properties: From associativity, a(ab) = a²b is a scalar times b. When b is not parallel to and hence not a scalar multiple of a, ab cannot be a scalar. But

{\displaystyle {\tfrac {1}{2}}(\mathbf {ab} +\mathbf {ba} )={\tfrac {1}{2}}\left((\mathbf {a} +\mathbf {b} )^{2}-\mathbf {a} ^{2}-\mathbf {b} ^{2}\right)}

is a sum of scalars and so a scalar. From the law of cosines on the triangle formed by the vectors its value is |a| |b| cos θ, where θ is the angle between the vectors. It is therefore identical to the scalar product between two vectors, and is written the same way,

{\displaystyle \mathbf {a} \cdot \mathbf {b} ={\tfrac {1}{2}}(\mathbf {ab} +\mathbf {ba} )}

It is symmetric, scalar-valued, and can be used to determine the angle between two vectors: in particular if a and b are orthogonal the product is zero. Just as the scalar product can be formulated as the symmetric part of the geometric product of another quantity, the exterior product (sometimes known as the "wedge" or "progressive" product) can be formulated as its antisymmetric part:

{\displaystyle \mathbf {a} \wedge \mathbf {b} ={\tfrac {1}{2}}(\mathbf {ab} -\mathbf {ba} )}

It is antisymmetric in a and b

{\displaystyle \mathbf {b} \wedge \mathbf {a} ={\tfrac {1}{2}}(\mathbf {ba} -\mathbf {ab} )=-\mathbf {a} \wedge \mathbf {b} }

and by addition:

{\displaystyle \mathbf {ab} =\mathbf {a} \cdot \mathbf {b} +\mathbf {a} \wedge \mathbf {b} }

That is, the geometric product is the sum of the symmetric scalar product and alternating exterior product. To examine the nature of a ∧ b, consider the formula

{\displaystyle (\mathbf {a} \wedge \mathbf {b} )^{2}=(\mathbf {a} \cdot \mathbf {b} )^{2}-\mathbf {a} ^{2}\mathbf {b} ^{2}}

which using the Pythagorean trigonometric identity gives the value of (a ∧ b)²:

{\displaystyle (\mathbf {a} \wedge \mathbf {b} )^{2}=-\mathbf {a} ^{2}\mathbf {b} ^{2}\sin ^{2}\theta }

With a negative square, it cannot be a scalar or vector quantity, so it is a new sort of object, a bivector. It has magnitude |a| |b| |sin θ|, where θ is the angle between the vectors, and so is zero for parallel vectors. To distinguish them from vectors, bivectors are written here with bold capitals, for example:

{\displaystyle \mathbf {A} =\mathbf {a} \wedge \mathbf {b} }

although other conventions are used, in particular as vectors and bivectors are both elements of the geometric algebra. The algebra generated by the geometric product (that is, all objects formed by taking repeated sums and geometric products of scalars and vectors) is the geometric algebra over the vector space. For an Euclidean vector space, this algebra is written Gn {\displaystyle {\mathcal {G}}_{n}} or Cln(R), where n is the dimension of the vector space Rn. Cln(R) is both a vector space and an algebra, generated by all the products between vectors in Rn, so it contains all vectors and bivectors. More precisely, as a vector space it contains the vectors and bivectors as linear subspaces, though not as subalgebras (since the geometric product of two vectors is not generally another vector). The space of all bivectors has dimension 1/2 n(n − 1) and is written ⋀2Rn,[13] and is the second exterior power of the original vector space. The subalgebra generated by the bivectors is the even subalgebra of the geometric algebra, written Cl[0]n(R). This algebra results from considering all repeated sums and geometric products of scalars and bivectors. It has dimension 2n−1, and contains ⋀2Rn as a linear subspace. In two and three dimensions the even subalgebra contains only scalars and bivectors, and each is of particular interest. In two dimensions, the even subalgebra is isomorphic to the complex numbers, C, while in three it is isomorphic to the quaternions, H. The even subalgebra contains the rotations in any dimension.
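The scalar/bivector split of the geometric product, and the negative square (a ∧ b)² = (a · b)² − a²b², can be verified componentwise in three dimensions. A minimal Python sketch; the sample vectors are arbitrary:

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def wedge3(a, b):
    # bivector components of a^b on the basis (e23, e31, e12)
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

a, b = (1.0, 2.0, 0.5), (-0.3, 1.0, 2.0)
B = wedge3(a, b)
lhs = -sum(x*x for x in B)                 # (a^b)^2, a negative scalar
rhs = dot(a, b)**2 - dot(a, a)*dot(b, b)   # (a.b)^2 - a^2 b^2
print(lhs, rhs)   # equal, and negative: the square of a (simple) bivector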
As noted in the previous section the magnitude of a simple bivector, that is one that is the exterior product of two vectorsaandb, is|a| |b| sinθ, whereθis the angle between the vectors. It is written|B|, whereBis the bivector. For general bivectors, the magnitude can be calculated by taking thenormof the bivector considered as a vector in the space⋀2Rn. If the magnitude is zero then all the bivector's components are zero, and the bivector is the zero bivector which as an element of the geometric algebra equals the scalar zero. A unit bivector is one with unit magnitude. Such a bivector can be derived from any non-zero bivector by dividing the bivector by its magnitude, that is Of particular utility are the unit bivectors formed from the products of thestandard basisof the vector space. Ifeiandejare distinct basis vectors then the productei∧ejis a bivector. Aseiandejare orthogonal,ei∧ej=eiej, writteneij, and has unit magnitude as the vectors areunit vectors. The set of all bivectors produced from the basis in this way form a basis for⋀2Rn. For instance, in four dimensions the basis for⋀2R4is (e1e2,e1e3,e1e4,e2e3,e2e4,e3e4) or (e12,e13,e14,e23,e24,e34).[14] The exterior product of two vectors is a bivector, but not all bivectors are exterior products of two vectors. For example, in four dimensions the bivector cannot be written as the exterior product of two vectors. A bivector that can be written as the exterior product of two vectors is simple. In two and three dimensions all bivectors are simple, but not in four or more dimensions; in four dimensions every bivector is the sum of at most two exterior products. A bivector has a real square if and only if it is simple, and only simple bivectors can be represented geometrically by a directed plane area.[4] The geometric product of two bivectors,AandB, is The quantityA·Bis the scalar-valued scalar product, whileA∧Bis the grade 4 exterior product that arises in four or more dimensions. The quantityA×Bis the bivector-valuedcommutatorproduct, given by The space of bivectors⋀2Rnis aLie algebraoverR, with the commutator product as the Lie bracket. The full geometric product of bivectors generates the even subalgebra. Of particular interest is the product of a bivector with itself. As the commutator product is antisymmetric the product simplifies to If the bivector issimplethe last term is zero and the product is the scalar-valuedA·A, which can be used as a check for simplicity. In particular the exterior product of bivectors only exists in four or more dimensions, so all bivectors in two and three dimensions are simple.[4] Bivectors are isomorphic toskew-symmetric matricesin any number of dimensions. For example, the general bivectorB23e23+B31e31+B12e12in three dimensions maps to the matrix This multiplied by vectors on both sides gives the same vector as the product of a vector and bivector minus the exterior product; an example is theangular velocity tensor. Skew symmetric matrices generateorthogonal matriceswithdeterminant1through the exponential map. In particular, applying the exponential map to a bivector that is associated with a rotation yields arotation matrix. The rotation matrixMRgiven by the skew-symmetric matrix above is The rotation described byMRis the same as that described by the rotorRgiven by and the matrixMRcan be also calculated directly from rotorR. In three dimensions, this is given by Bivectors are related to theeigenvaluesof a rotation matrix. 
Given a rotation matrixMthe eigenvalues can be calculated by solving thecharacteristic equationfor that matrix0 = det(M−λI). By thefundamental theorem of algebrathis has three roots (only one of which is real as there is only one eigenvector, i.e., the axis of rotation). The other roots must be a complex conjugate pair. They have unit magnitude so purely imaginary logarithms, equal to the magnitude of the bivector associated with the rotation, which is also the angle of rotation. The eigenvectors associated with the complex eigenvalues are in the plane of the bivector, so the exterior product of two non-parallel eigenvectors results in the bivector (or a multiple thereof). When working with coordinates in geometric algebra it is usual to write thebasis vectorsas (e1,e2, ...), a convention that will be used here. Avectorin real two-dimensional spaceR2can be writtena=a1e1+a2e2, wherea1anda2are real numbers,e1ande2areorthonormalbasis vectors. The geometric product of two such vectors is This can be split into the symmetric, scalar-valued, scalar product and an antisymmetric, bivector-valued exterior product: All bivectors in two dimensions are of this form, that is multiples of the bivectore1e2, writtene12to emphasise it is a bivector rather than a vector. The magnitude ofe12is1, with so it is called theunit bivector. The term unit bivector can be used in other dimensions but it is only uniquely defined (up to a sign) in two dimensions and all bivectors are multiples ofe12. As the highest grade element of the algebrae12is also thepseudoscalarwhich is given the symboli. With the properties of negative square and unit magnitude, the unit bivector can be identified with theimaginary unitfromcomplex numbers. The bivectors and scalars together form the even subalgebra of the geometric algebra, which isisomorphicto the complex numbersC. The even subalgebra has basis(1,e12), the whole algebra has basis(1,e1,e2,e12). The complex numbers are usually identified with thecoordinate axesand two-dimensional vectors, which would mean associating them with the vector elements of the geometric algebra. There is no contradiction in this, as to get from a general vector to a complex number an axis needs to be identified as the real axis,e1say. This multiplies by all vectors to generate the elements of even subalgebra. All the properties of complex numbers can be derived from bivectors, but two are of particular interest. First as with complex numbers products of bivectors and so the even subalgebra arecommutative. This is only true in two dimensions, so properties of the bivector in two dimensions that depend on commutativity do not usually generalise to higher dimensions. Second a general bivector can be written whereθis a real number. Putting this into theTaylor seriesfor theexponential mapand using the propertye122= −1results in a bivector version ofEuler's formula, which when multiplied by any vector rotates it through an angleθabout the origin: The product of a vector with a bivector in two dimensions isanticommutative, so the following products all generate the same rotation Of these the last product is the one that generalises into higher dimensions. 
The quantity needed is called arotorand is given the symbolR, so in two dimensions a rotor that rotates through angleθcan be written and the rotation it generates is[16] Inthree dimensionsthe geometric product of two vectors is This can be split into the symmetric, scalar-valued, scalar product and the antisymmetric, bivector-valued, exterior product: In three dimensions all bivectors are simple and so the result of an exterior product. The unit bivectorse23,e31ande12form a basis for the space of bivectors⋀2R3, which is itself a three-dimensional linear space. So if a general bivector is: they can be added like vectors while when multiplied they produce the following which can be split into symmetric scalar and antisymmetric bivector parts as follows The exterior product of two bivectors in three dimensions is zero. A bivectorBcan be written as the product of its magnitude and a unit bivector, so writingβfor|B|and using the Taylor series for the exponential map it can be shown that This is another version of Euler's formula, but with a general bivector in three dimensions. Unlike in two dimensions bivectors are not commutative so properties that depend on commutativity do not apply in three dimensions. For example, in generalexp(A+B) ≠ exp(A) exp(B)in three (or more) dimensions. The full geometric algebra in three dimensions,Cl3(R), has basis (1,e1,e2,e3,e23,e31,e12,e123). The elemente123is a trivector and thepseudoscalarfor the geometry. Bivectors in three dimensions are sometimes identified withpseudovectors[17]to which they are related, asdiscussed below. Bivectors are not closed under the geometric product, but the even subalgebra is. In three dimensions it consists of all scalar and bivector elements of the geometric algebra, so a general element can be written for examplea+A, whereais the scalar part andAis the bivector part. It is writtenCl[0]3and has basis(1,e23,e31,e12). The product of two general elements of the even subalgebra is The even subalgebra, that is the algebra consisting of scalars and bivectors, isisomorphicto thequaternions,H. This can be seen by comparing the basis to the quaternion basis, or from the above product which is identical to the quaternion product, except for a change of sign which relates to the negative products in the bivector scalar productA·B. Other quaternion properties can be similarly related to or derived from geometric algebra. This suggests that the usual split of a quaternion into scalar and vector parts would be better represented as a split into scalar and bivector parts; if this is done the quaternion product is merely the geometric product. It also relates quaternions in three dimensions to complex numbers in two, as each is isomorphic to the even subalgebra for the dimension, a relationship that generalises to higher dimensions. The rotation vector, from theaxis–angle representationof rotations, is a compact way of representing rotations in three dimensions. In its most compact form, it consists of a vector, the product of aunit vectorωthat is theaxis of rotationwith the (signed)angleof rotationθ, so that the magnitude of the overall rotation vectorθωequals the (unsigned) rotation angle. The quaternion associated with the rotation is In geometric algebra the rotation is represented by a bivector. This can be seen in its relation to quaternions. LetΩbe a unit bivector in the plane of rotation, and letθbe theangle of rotation. Then the rotation bivector isΩθ. The quaternion closely corresponds to the exponential of half of the bivectorΩθ. 
That is, the components of the quaternion correspond to the scalar and bivector parts of the following expression:exp⁡12Ωθ=cos⁡12θ+Ωsin⁡12θ{\displaystyle \exp {{\tfrac {1}{2}}{\boldsymbol {\Omega }}\theta }=\cos {{\tfrac {1}{2}}\theta }+{\boldsymbol {\Omega }}\sin {{\tfrac {1}{2}}\theta }} The exponential can be defined in terms of itspower series, and easily evaluated using the fact thatΩsquared is−1. So rotations can be represented by bivectors. Just as quaternions are elements of the geometric algebra, they are related by the exponential map in that algebra. The bivectorΩθgenerates a rotation through the exponential map. The even elements generated rotate a general vector in three dimensions in the same way as quaternions:v′=exp⁡(−12Ωθ)vexp⁡(12Ωθ).{\displaystyle \mathbf {v} '=\exp(-{\tfrac {1}{2}}{\boldsymbol {\Omega }}\theta )\,\mathbf {v} \exp({\tfrac {1}{2}}{\boldsymbol {\Omega }}\theta ).} As in two dimensions, the quantityexp(−⁠1/2⁠Ωθ)is called arotorand writtenR. The quantityexp(⁠1/2⁠Ωθ)is thenR−1, and they generate rotations asv′=RvR−1.{\displaystyle \mathbf {v} '=R\mathbf {v} R^{-1}.} This is identical to two dimensions, except here rotors are four-dimensional objects isomorphic to the quaternions. This can be generalised to all dimensions, with rotors, elements of the even subalgebra with unit magnitude, being generated by the exponential map from bivectors. They form adouble coverover the rotation group, so the rotorsRand−Rrepresent the same rotation. The rotation vector is an example of anaxial vector. Axial vectors, or pseudovectors, are vectors with the special feature that their coordinates undergo a sign change relative to the usual vectors (also called "polar vectors") under inversion through the origin, reflection in a plane, or other orientation-reversing linear transformation.[18]Examples include quantities liketorque,angular momentumand vectormagnetic fields. Quantities that would use axial vectors invector algebraare properly represented by bivectors in geometric algebra.[19]More precisely, if an underlying orientation is chosen, the axial vectors are naturally identified with the usual vectors; theHodge dualthen gives the isomorphism between axial vectors and bivectors, so each axial vector is associated with a bivector and vice versa; that is where⁠⋆{\displaystyle {\star }}⁠is the Hodge star. Note that if the underlying orientation is reversed by inversion through the origin, both the identification of the axial vectors with the usual vectors and the Hodge dual change sign, but the bivectors don't budge. Alternately, using theunit pseudoscalarinCl3(R),i=e1e2e3gives This is easier to use as the product is just the geometric product. But it is antisymmetric because (as in two dimensions) the unit pseudoscalarisquares to−1, so a negative is needed in one of the products. This relationship extends to operations like the vector-valuedcross productand bivector-valued exterior product, as when written asdeterminantsthey are calculated in the same way: so are related by the Hodge dual: Bivectors have a number of advantages over axial vectors. They better disambiguate axial and polar vectors, that is the quantities represented by them, so it is clearer which operations are allowed and what their results are. For example, the inner product of a polar vector and an axial vector resulting from the cross product in thetriple productshould result in apseudoscalar, a result which is more obvious if the calculation is framed as the exterior product of a vector and bivector. 
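The rotor sandwich v′ = R v R⁻¹ can be tried out with plain quaternion arithmetic, using the even-subalgebra/quaternion correspondence described above. A minimal sketch; the (w, x, y, z) component layout and the half-angle sign are conventions chosen for this example, not fixed by the text:

import math

def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z)
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

theta = math.pi / 2
# rotor for a rotation by theta in the xy-plane (about the z-axis)
R = (math.cos(theta/2), 0.0, 0.0, math.sin(theta/2))
Rinv = (R[0], -R[1], -R[2], -R[3])   # conjugate inverts a unit quaternion
v = (0.0, 1.0, 0.0, 0.0)             # the vector e1 as a pure quaternion
print(qmul(qmul(R, v), Rinv))        # ~ (0, 0, 1, 0): e1 rotated to e2

Note the half angle in the rotor: the double cover means R and −R produce the same rotation, exactly as stated above.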
Bivectors also generalise to other dimensions; in particular they can be used to describe quantities like torque and angular momentum in two as well as three dimensions. Also, they closely match geometric intuition in a number of ways, as seen in the next section.[20] As suggested by their name and that of the algebra, bivectors have a natural geometric interpretation. This applies in any number of dimensions but is best illustrated in three, where parallels can be drawn with more familiar objects. In two dimensions the geometric interpretation is trivial, as the space is two-dimensional so has only one plane, and all bivectors are associated with it, differing only by a scalar factor. All bivectors can be interpreted as planes, or more precisely as directed plane segments. In three dimensions, there are three properties of a bivector that can be interpreted geometrically: its attitude (the plane it lies in), its magnitude (an area), and its orientation (a sense of circulation within the plane). In three dimensions, every bivector can be generated by the exterior product of two vectors. If the bivector B = a ∧ b then the magnitude of B is

{\displaystyle |\mathbf {B} |=|\mathbf {a} |\,|\mathbf {b} |\sin \theta }

where θ is the angle between the vectors. This is the area of the parallelogram with edges a and b, as shown in the diagram. One interpretation is that the area is swept out by b as it moves along a. The exterior product is antisymmetric, so reversing the order of a and b to make a move along b results in a bivector with the opposite direction that is the negative of the first. The plane of bivector a ∧ b contains both a and b so they are both parallel to the plane. Bivectors and axial vectors are related as being Hodge dual. In a real vector space, the Hodge dual relates the blade that represents a subspace to its orthogonal complement, so if a bivector represents a plane then the axial vector associated with it is simply the plane's surface normal. The plane has two normal vectors, one on each side, giving the two possible orientations for the plane and bivector. In three dimensions, Hodge duality relates the cross product to the exterior product. It can also be used to represent physical quantities, like torque and angular momentum. In vector algebra they are usually represented by pseudovectors that are perpendicular to the plane of the force, linear momentum or displacement that they are calculated from. But if a bivector is used instead, the plane is the plane of the bivector, so this is a more natural way to represent the quantities and the way in which they act. Unlike the vector representation, it generalises to other dimensions. The geometric product of two bivectors has a geometric interpretation. For non-zero bivectors A and B the product can be split into symmetric and antisymmetric parts as follows:

{\displaystyle \mathbf {AB} =\mathbf {A} \cdot \mathbf {B} +\mathbf {A} \times \mathbf {B} }

Like vectors these have magnitudes |A · B| = |A| |B| cos θ and |A × B| = |A| |B| sin θ, where θ is the angle between the planes. In three dimensions it is the same as the angle between the normal vectors dual to the planes, and it generalises to some extent in higher dimensions. Bivectors can be added together as areas. Given two non-zero bivectors B and C in three dimensions it is always possible to find a vector that is contained in both, a say, so the bivectors can be written as exterior products involving a:

{\displaystyle \mathbf {B} =\mathbf {a} \wedge \mathbf {b} ,\qquad \mathbf {C} =\mathbf {a} \wedge \mathbf {c} }

This can be interpreted geometrically as seen in the diagram: the two areas sum to give a third, with the three areas forming faces of a prism with a, b, c and b + c as edges. This corresponds to the two ways of calculating the area using the distributivity of the exterior product:

{\displaystyle \mathbf {B} +\mathbf {C} =\mathbf {a} \wedge \mathbf {b} +\mathbf {a} \wedge \mathbf {c} =\mathbf {a} \wedge (\mathbf {b} +\mathbf {c} )}

This only works in three dimensions, as it is the only number of dimensions in which a vector parallel to both bivectors must exist.
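The "areas add" picture corresponds to distributivity of the exterior product, which is easy to check on components. A small Python sketch; the vectors are chosen so that B and C share the common factor a:

import math

def wedge3(a, b):
    # components of the bivector a^b on the basis (e23, e31, e12)
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

a = (1.0, 0.0, 0.0)
b = (1.0, 1.0, 0.0)
c = (0.0, 0.5, 1.0)

# distributivity: a^(b+c) = a^b + a^c, the prism picture above
lhs = wedge3(a, tuple(bi + ci for bi, ci in zip(b, c)))
rhs = tuple(p + q for p, q in zip(wedge3(a, b), wedge3(a, c)))
print(lhs, rhs)   # identical component tuples

# magnitude of a^b is the parallelogram area |a||b|sin(theta)
print(math.sqrt(sum(x*x for x in wedge3(a, b))))   # 1.0 for this a and b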
In a higher number of dimensions, bivectors generally are not associated with a single plane, or if they are (simple bivectors), two bivectors may have no vector in common, and so sum to a non-simple bivector. In four dimensions, the basis elements for the space⋀2R4of bivectors are (e12,e13,e14,e23,e24,e34), so a general bivector is of the form In four dimensions, the Hodge dual of a bivector is a bivector, and the space⋀2R4is dual to itself. Normal vectors are not unique, instead every plane is orthogonal to all the vectors in its Hodge dual space. This can be used to partition the bivectors into two 'halves', in the following way. We have three pairs of orthogonal bivectors:(e12,e34),(e13,e24)and(e14,e23). There are four distinct ways of picking one bivector from each of the first two pairs, and once these first two are picked their sum yields the third bivector from the other pair. For example,(e12,e13,e14)and(e23,e24,e34). In four dimensions bivectors are generated by the exterior product of vectors inR4, but with one important difference fromR3andR2. In four dimensions not all bivectors are simple. There are bivectors such ase12+e34that cannot be generated by the exterior product of two vectors. This also means they do not have a real, that is scalar, square. In this case The elemente1234is the pseudoscalar inCl4, distinct from the scalar, so the square is non-scalar. All bivectors in four dimensions can be generated using at most two exterior products and four vectors. The above bivector can be written as Similarly, every bivector can be written as the sum of two simple bivectors. It is useful to choose two orthogonal bivectors for this, and this is always possible to do. Moreover, for a generic bivector the choice of simple bivectors is unique, that is, there is only one way to decompose into orthogonal bivectors; the only exception is when the two orthogonal bivectors have equal magnitudes (as in the above example): in this case the decomposition is not unique.[4]The decomposition is always unique in the case of simple bivectors, with the added bonus that one of the orthogonal parts is zero. As in three dimensions bivectors in four dimension generate rotations through the exponential map, and all rotations can be generated this way. As in three dimensions ifBis a bivector then the rotorRisexp⁠1/2⁠Band rotations are generated in the same way: The rotations generated are more complex though. They can be categorised as follows: These are generated by bivectors in a straightforward way. Simple rotations are generated by simple bivectors, with the fixed plane the dual or orthogonal to the plane of the bivector. The rotation can be said to take place about that plane, in the plane of the bivector. All other bivectors generate double rotations, with the two angles of the rotation equalling the magnitudes of the two simple bivectors that the non-simple bivector is composed of. Isoclinic rotations arise when these magnitudes are equal, in which case the decomposition into two simple bivectors is not unique.[21] Bivectors in general do not commute, but one exception is orthogonal bivectors and exponents of them. So if the bivectorB=B1+B2, whereB1andB2are orthogonal simple bivectors, is used to generate a rotation it decomposes into two simple rotations that commute as follows: It is always possible to do this as all bivectors can be expressed as sums of orthogonal bivectors. Spacetimeis a mathematical model for our universe used in special relativity. 
It consists of three space dimensions and one time dimension combined into a single four-dimensional space. It is naturally described using geometric algebra and bivectors, with the Euclidean metric replaced by a Minkowski metric. That algebra is identical to that of Euclidean space, except the signature is changed, so

{\displaystyle \mathbf {e} _{1}^{2}=\mathbf {e} _{2}^{2}=\mathbf {e} _{3}^{2}=1,\qquad \mathbf {e} _{4}^{2}=-1}

(Note the order and indices above are not universal – here e4 is the time-like dimension). The geometric algebra is Cl3,1(R), and the subspace of bivectors is ⋀2R3,1. The simple bivectors are of two types. The simple bivectors e23, e31 and e12 have negative squares and span the bivectors of the three-dimensional subspace corresponding to Euclidean space, R3. These bivectors generate ordinary rotations in R3. The simple bivectors e14, e24 and e34 have positive squares and as planes span a space dimension and the time dimension. These also generate rotations through the exponential map, but instead of trigonometric functions, hyperbolic functions are needed, generating a rotor as follows:

{\displaystyle \exp \left({\tfrac {1}{2}}{\boldsymbol {\Omega }}\varphi \right)=\cosh {\tfrac {1}{2}}\varphi +{\boldsymbol {\Omega }}\sinh {\tfrac {1}{2}}\varphi }

where Ω is the bivector (e14, etc.), identified via the metric with an antisymmetric linear transformation of R3,1, and φ is the rapidity of the boost. These are Lorentz boosts, expressed in a particularly compact way, using the same kind of algebra as in R3 and R4. In general all spacetime rotations are generated from bivectors through the exponential map, that is, a general rotor generated by bivector A is of the form

{\displaystyle R=\exp {\tfrac {1}{2}}\mathbf {A} }

The set of all rotations in spacetime form the Lorentz group, and from them most of the consequences of special relativity can be deduced. More generally this shows how transformations in Euclidean space and spacetime can all be described using the same kind of algebra. (Note: in this section traditional 3-vectors are indicated by lines over the symbols and spacetime vectors and bivectors by bold symbols, with the vectors J and A exceptionally in uppercase.) Maxwell's equations are used in physics to describe the relationship between electric and magnetic fields. Normally given as four differential equations they have a particularly compact form when the fields are expressed as a spacetime bivector from ⋀2R3,1. If the electric and magnetic fields in R3 are Ē and B̄ then the electromagnetic bivector F combines them, where e4 is again the basis vector for the time-like dimension and c is the speed of light: the magnetic field enters through the product B̄e123, which yields the bivector that is Hodge dual to B̄ in three dimensions, as discussed above, while the electric field enters through Ēe4, which as a product of orthogonal vectors is also bivector-valued. As a whole it is the electromagnetic tensor expressed more compactly as a bivector, and is used as follows. First it is related to the 4-current J, a vector quantity combining the current density j̄ and the charge density ρ. They are related by a differential operator ∂, which combines the spatial operator ∇ with the time derivative. The operator ∇ is a differential operator in geometric algebra, acting on the space dimensions and given by ∇M = ∇·M + ∇∧M. When applied to vectors ∇·M is the divergence and ∇∧M is the curl, but with a bivector rather than vector result, that is dual in three dimensions to the curl. For a general quantity M they act as grade lowering and raising differential operators. In particular if M is a scalar then this operator is just the gradient, and it can be thought of as a geometric algebraic del operator. Together these can be used to give a particularly compact form for Maxwell's equations with sources:

{\displaystyle \partial \mathbf {F} =\mathbf {J} }

This equation, when decomposed according to geometric algebra, using geometric products which have both grade raising and grade lowering effects, is equivalent to Maxwell's four equations.
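The hyperbolic rotor acts on a time–space plane exactly like the familiar boost matrix, and the invariance of the interval is easy to check numerically. A minimal Python sketch; the rapidity value is arbitrary and units with c = 1 are assumed:

import math

def boost(t, x, phi):
    # hyperbolic "rotation" in a time-space plane: the rotor exp(phi/2 * e14)
    # acts on the coordinates (t, x) as the usual cosh/sinh boost matrix
    return (math.cosh(phi)*t + math.sinh(phi)*x,
            math.sinh(phi)*t + math.cosh(phi)*x)

t, x = 1.0, 0.3
t2, x2 = boost(t, x, 0.5)
print(t*t - x*x, t2*t2 - x2*x2)   # the interval t^2 - x^2 is preserved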
The electromagnetic bivector is also related to the electromagnetic four-potential, a spacetime vector A built from the magnetic vector potential Ā and the electric potential V, using the same differential operator ∂.[22] As has been suggested in earlier sections, much of geometric algebra generalises well into higher dimensions. The geometric algebra for the real space Rn is Cln(R), and the subspace of bivectors is ⋀2Rn. The number of simple bivectors needed to form a general bivector rises with the dimension, so for n odd it is (n − 1)/2, for n even it is n/2. So for four and five dimensions only two simple bivectors are needed but three are required for six and seven dimensions. For example, in six dimensions with standard basis (e1, e2, e3, e4, e5, e6) the bivector

{\displaystyle \mathbf {B} =\mathbf {e} _{12}+\mathbf {e} _{34}+\mathbf {e} _{56}}

is the sum of three simple bivectors but no less. As in four dimensions it is always possible to find orthogonal simple bivectors for this sum. As in three and four dimensions rotors are generated by the exponential map, so

{\displaystyle R=\exp {\tfrac {1}{2}}\mathbf {B} }

is the rotor generated by bivector B. Simple rotations, that take place in a plane of rotation around a fixed blade of dimension (n − 2), are generated by simple bivectors, while other bivectors generate more complex rotations which can be described in terms of the simple bivectors they are sums of, each related to a plane of rotation. All bivectors can be expressed as the sum of orthogonal and commutative simple bivectors, so rotations can always be decomposed into a set of commutative rotations about the planes associated with these bivectors. The group of the rotors in n dimensions is the spin group, Spin(n). One notable feature, related to the number of simple bivectors and so rotation planes, is that in odd dimensions every rotation has a fixed axis – it is misleading to call it an axis of rotation, as in higher dimensions rotations are taking place in multiple planes orthogonal to it. This is related to bivectors, as bivectors in odd dimensions decompose into the same number of bivectors as the even dimension below, so have the same number of planes, but one extra dimension. As each plane generates rotations in two dimensions, in odd dimensions there must be one dimension, that is an axis, that is not being rotated.[23] Bivectors are also related to the rotation matrix in n dimensions. As in three dimensions the characteristic equation of the matrix can be solved to find the eigenvalues. In odd dimensions this has one real root, with eigenvector the fixed axis, and in even dimensions it has no real roots, so either all or all but one of the roots are complex conjugate pairs. Each pair is associated with a simple component of the bivector associated with the rotation. In particular, the log of each pair gives the magnitude up to a sign, while eigenvectors generated from the roots are parallel to the plane of the bivector and so can be used to generate it. In general the eigenvalues and bivectors are unique, and the set of eigenvalues gives the full decomposition into simple bivectors; if roots are repeated then the decomposition of the bivector into simple bivectors is not unique. Geometric algebra can be applied to projective geometry in a straightforward way. The geometric algebra used is Cln(R), n ≥ 3, the algebra of the real vector space Rn. This is used to describe objects in the real projective space RPn−1. The non-zero vectors in Cln(R) or Rn are associated with points in the projective space, so vectors that differ only by a scale factor (so that their exterior product is zero) map to the same point.
Non-zero simple bivectors in⋀2Rnrepresent lines inRPn−1, with bivectors differing only by a (positive or negative) scale factor representing the same line. A description of the projective geometry can be constructed in the geometric algebra using basic operations. For example, given two distinct points inRPn−1represented by vectorsaandbthe line containing them is given bya∧b(orb∧a). Two lines intersect in a point ifA∧B= 0for their bivectorsAandB. This point is given by the vector The operation "∨" is the meet, which can be defined as above in terms of the join,J=A∧B[clarification needed]for non-zeroA∧B. Using these operations projective geometry can be formulated in terms of geometric algebra. For example, given a third (non-zero) bivectorCthe pointplies on the line given byCif and only if So the condition for the lines given byA,BandCto be collinear is which inCl3(R)andRP2simplifies to where the angle brackets denote the scalar part of the geometric product. In the same way all projective space operations can be written in terms of geometric algebra, with bivectors representing general lines in projective space, so the whole geometry can be developed using geometric algebra.[15] Asnoted abovea bivector can be written as a skew-symmetric matrix, which through the exponential map generates a rotation matrix that describes the same rotation as the rotor, also generated by the exponential map but applied to the vector. But it is also used with other bivectors such as theangular velocity tensorand theelectromagnetic tensor, respectively a 3×3 and 4×4 skew-symmetric matrix or tensor. Real bivectors in⋀2Rnare isomorphic ton×nskew-symmetric matrices, or alternately to antisymmetrictensorsof degree 2 onRn. While bivectors are isomorphic to vectors (via the dual) in three dimensions they can be represented by skew-symmetric matrices in any dimension. This is useful for relating bivectors to problems described by matrices, so they can be re-cast in terms of bivectors, given a geometric interpretation, then often solved more easily or related geometrically to other bivector problems.[24] More generally, every real geometric algebra isisomorphic to a matrix algebra. These contain bivectors as a subspace, though often in a way which is not especially useful. These matrices are mainly of interest as a way of classifying Clifford algebras.[25]
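One way to see the bivector–matrix correspondence concretely is to build a 3×3 skew-symmetric matrix from bivector components and exponentiate it into a rotation matrix. A minimal Python sketch using Rodrigues' formula for the exponential; the sign placement in skew() is one common convention and other sources differ, so treat it as an illustrative assumption:

import math

def skew(B23, B31, B12):
    # 3x3 skew-symmetric matrix built from the components of
    # the bivector B23 e23 + B31 e31 + B12 e12 (one sign convention)
    return [[0.0, -B12, B31],
            [B12, 0.0, -B23],
            [-B31, B23, 0.0]]

def expm_skew(K, theta):
    # Rodrigues' formula: exp(theta K) = I + sin(theta) K + (1 - cos(theta)) K^2,
    # valid when K is built from a unit bivector (so K^3 = -K)
    def matmul(A, B):
        return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]
    K2 = matmul(K, K)
    I = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
    s, c = math.sin(theta), 1 - math.cos(theta)
    return [[I[i][j] + s*K[i][j] + c*K2[i][j] for j in range(3)] for i in range(3)]

# unit bivector e12 -> 90 degree rotation in the xy-plane
R = expm_skew(skew(0.0, 0.0, 1.0), math.pi / 2)
print([[round(v, 6) for v in row] for row in R])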
https://en.wikipedia.org/wiki/Bivector
In geometry, a plane of rotation is an abstract object used to describe or visualize rotations in space. The main use for planes of rotation is in describing more complex rotations in four-dimensional space and higher dimensions, where they can be used to break down the rotations into simpler parts. This can be done using geometric algebra, with the planes of rotation associated with simple bivectors in the algebra.[1] Planes of rotation are not used much in two and three dimensions, as in two dimensions there is only one plane (so, identifying the plane of rotation is trivial and rarely done), while in three dimensions the axis of rotation serves the same purpose and is the more established approach. Mathematically such planes can be described in a number of ways. They can be described in terms of planes and angles of rotation. They can be associated with bivectors from geometric algebra. They are related to the eigenvalues and eigenvectors of a rotation matrix. And in particular dimensions they are related to other algebraic and geometric properties, which can then be generalised to other dimensions. For this article, all planes are planes through the origin, that is they contain the zero vector. Such a plane in n-dimensional space is a two-dimensional linear subspace of the space. It is completely specified by any two non-zero and non-parallel vectors that lie in the plane, that is by any two vectors a and b, such that

{\displaystyle \mathbf {a} \wedge \mathbf {b} \neq 0}

where ∧ is the exterior product from exterior algebra or geometric algebra (in three dimensions the cross product can be used). More precisely, the quantity a ∧ b is the bivector associated with the plane specified by a and b, and has magnitude |a| |b| sin φ, where φ is the angle between the vectors; hence the requirement that the vectors be nonzero and nonparallel.[2] If the bivector a ∧ b is written B, then the condition that a point x lies on the plane associated with B is simply[3]

{\displaystyle \mathbf {x} \wedge \mathbf {B} =0}

This is true in all dimensions, and can be taken as the definition of the plane. In particular, from the properties of the exterior product it is satisfied by both a and b, and so by any vector of the form

{\displaystyle \mathbf {c} =\lambda \mathbf {a} +\mu \mathbf {b} }

with λ and μ real numbers. As λ and μ range over all real numbers, c ranges over the whole plane, so this can be taken as another definition of the plane. A plane of rotation for a particular rotation is a plane that is mapped to itself by the rotation. The plane is not fixed, but all vectors in the plane are mapped to other vectors in the same plane by the rotation. This transformation of the plane to itself is always a rotation about the origin, through an angle which is the angle of rotation for the plane. Every rotation except for the identity rotation (with matrix the identity matrix) has at least one plane of rotation, and up to ⌊n/2⌋ planes of rotation, where n is the dimension of the space. The maximum number of planes up to eight dimensions is shown in this table:

Dimension of space:          2  3  4  5  6  7  8
Maximum planes of rotation:  1  1  2  2  3  3  4

When a rotation has multiple planes of rotation they are always orthogonal to each other, with only the origin in common. This is a stronger condition than to say the planes are at right angles; it instead means that the planes have no nonzero vectors in common, and that every vector in one plane is orthogonal to every vector in the other plane. This can only happen in four or more dimensions. In two dimensions there is only one plane, while in three dimensions all planes have at least one nonzero vector in common, along their line of intersection.[4]
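The membership test x ∧ B = 0 can be checked componentwise in three dimensions, where x ∧ a ∧ b is just the determinant of the three vectors. A small Python sketch with arbitrary sample vectors:

def det3(u, v, w):
    # x ^ a ^ b in 3D is the trivector det([x a b]) e123
    return (u[0]*(v[1]*w[2] - v[2]*w[1])
          - u[1]*(v[0]*w[2] - v[2]*w[0])
          + u[2]*(v[0]*w[1] - v[1]*w[0]))

a, b = (1.0, 0.0, 2.0), (0.0, 1.0, -1.0)
c = tuple(2*ai + 3*bi for ai, bi in zip(a, b))   # c = 2a + 3b lies in the plane
print(det3(c, a, b))                 # 0: c ^ a ^ b vanishes, c is in the plane
print(det3((1.0, 0.0, 0.0), a, b))   # nonzero for a vector off the plane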
In more than three dimensions planes of rotation are not always unique. For example, the negative of the identity matrix in four dimensions (the central inversion) describes a rotation in four dimensions in which every plane through the origin is a plane of rotation through an angle π, so any pair of orthogonal planes generates the rotation. But for a general rotation it is at least theoretically possible to identify a unique set of orthogonal planes, in each of which points are rotated through an angle, so the set of planes and angles fully characterise the rotation.[5] In two-dimensional space there is only one plane of rotation, the plane of the space itself. In a Cartesian coordinate system it is the Cartesian plane, in complex numbers it is the complex plane. Any rotation therefore is of the whole plane, i.e. of the space, keeping only the origin fixed. It is specified completely by the signed angle of rotation, in the range, for example, −π to π. So if the angle is θ the rotation in the complex plane is given by Euler's formula:

{\displaystyle e^{i\theta }=\cos \theta +i\sin \theta }

while the rotation in a Cartesian plane is given by the 2 × 2 rotation matrix:[6]

{\displaystyle {\begin{pmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \end{pmatrix}}}

In three-dimensional space there are an infinite number of planes of rotation, only one of which is involved in any given rotation. That is, for a general rotation there is precisely one plane which is associated with it or which the rotation takes place in. The only exception is the trivial rotation, corresponding to the identity matrix, in which no rotation takes place. In any rotation in three dimensions there is always a fixed axis, the axis of rotation. The rotation can be described by giving this axis, with the angle through which the rotation turns about it; this is the axis-angle representation of a rotation. The plane of rotation is the plane orthogonal to this axis, so the axis is a surface normal of the plane. The rotation then rotates this plane through the same angle as it rotates around the axis, that is everything in the plane rotates by the same angle about the origin. One example is shown in the diagram, where the rotation takes place about the z-axis. The plane of rotation is the xy-plane, so everything in that plane is kept in the plane by the rotation. This could be described by a matrix like the following, with the rotation being through an angle θ (about the axis or in the plane):

{\displaystyle {\begin{pmatrix}\cos \theta &-\sin \theta &0\\\sin \theta &\cos \theta &0\\0&0&1\end{pmatrix}}}

Another example is the Earth's rotation. The axis of rotation is the line joining the North Pole and South Pole and the plane of rotation is the plane through the equator between the Northern and Southern Hemispheres. Other examples include mechanical devices like a gyroscope or flywheel which store rotational energy in mass usually along the plane of rotation. In any three dimensional rotation the plane of rotation is uniquely defined. Together with the angle of rotation it fully describes the rotation. Or in a continuously rotating object the rotational properties such as the rate of rotation can be described in terms of the plane of rotation. It is perpendicular to, and so is defined by and defines, an axis of rotation, so any description of a rotation in terms of a plane of rotation can be described in terms of an axis of rotation, and vice versa. But unlike the axis of rotation the plane generalises into other, in particular higher, dimensions.[7] A general rotation in four-dimensional space has only one fixed point, the origin. Therefore an axis of rotation cannot be used in four dimensions. But planes of rotation can be used, and each non-trivial rotation in four dimensions has one or two planes of rotation. A rotation with only one plane of rotation is a simple rotation.
In a simple rotation there is a fixed plane, and rotation can be said to take place about this plane, so points as they rotate do not change their distance from this plane. The plane of rotation is orthogonal to this plane, and the rotation can be said to take place in this plane. For example the following matrix fixes the xy-plane: points in that plane and only in that plane are unchanged:

{\displaystyle {\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&\cos \theta &-\sin \theta \\0&0&\sin \theta &\cos \theta \end{pmatrix}}}

The plane of rotation is the zw-plane; points in this plane are rotated through an angle θ. A general point rotates only in the zw-plane, that is it rotates around the xy-plane by changing only its z and w coordinates. In two and three dimensions all rotations are simple, in that they have only one plane of rotation. Only in four and more dimensions are there rotations that are not simple rotations. In particular in four dimensions there are also double and isoclinic rotations. In a double rotation there are two planes of rotation, no fixed planes, and the only fixed point is the origin. The rotation can be said to take place in both planes of rotation, as points in them are rotated within the planes. These planes are orthogonal, that is they have no vectors in common, so every vector in one plane is at right angles to every vector in the other plane. The two rotation planes span four-dimensional space, so every point in the space can be specified by two points, one on each of the planes. A double rotation has two angles of rotation, one for each plane of rotation. The rotation is specified by giving the two planes and two non-zero angles, α and β (if either angle is zero the rotation is simple). Points in the first plane rotate through α, while points in the second plane rotate through β. All other points rotate through an angle between α and β, so in a sense they together determine the amount of rotation. For a general double rotation the planes of rotation and angles are unique, and given a general rotation they can be calculated. For example a rotation of α in the xy-plane and β in the zw-plane is given by the matrix

{\displaystyle {\begin{pmatrix}\cos \alpha &-\sin \alpha &0&0\\\sin \alpha &\cos \alpha &0&0\\0&0&\cos \beta &-\sin \beta \\0&0&\sin \beta &\cos \beta \end{pmatrix}}}

A special case of the double rotation is when the angles are equal, that is if α = β ≠ 0. This is called an isoclinic rotation, and it differs from a general double rotation in a number of ways. For example in an isoclinic rotation, all non-zero points rotate through the same angle, α. Most importantly the planes of rotation are not uniquely identified. There are instead an infinite number of pairs of orthogonal planes that can be treated as planes of rotation. For example any point can be taken, and the plane it rotates in together with the plane orthogonal to it can be used as two planes of rotation.[8] As already noted the maximum number of planes of rotation in n dimensions is ⌊n/2⌋, so the complexity quickly increases with more than four dimensions and categorising rotations as above becomes too complex to be practical, but some observations can be made. Simple rotations can be identified in all dimensions, as rotations with just one plane of rotation. A simple rotation in n dimensions takes place about (that is at a fixed distance from) an (n − 2)-dimensional subspace orthogonal to the plane of rotation. A general rotation is not simple, and has the maximum number of planes of rotation as given above. In the general case the angles of rotations in these planes are distinct and the planes are uniquely defined. If any of the angles are the same then the planes are not unique, as in four dimensions with an isoclinic rotation. In even dimensions (n = 2, 4, 6, ...)
there are up to n/2 planes of rotation, which span the space, so a general rotation rotates all points except the origin, which is the only fixed point. In odd dimensions (n = 3, 5, 7, ...) there are (n − 1)/2 planes and angles of rotation, the same as the even dimension one lower. These do not span the space, but leave a line which does not rotate – like the axis of rotation in three dimensions, except rotations do not take place about this line but in multiple planes orthogonal to it.[1] The examples given above were chosen to be clear and simple examples of rotations, with planes generally parallel to the coordinate axes in three and four dimensions. But this is not generally the case: planes are not usually parallel to the axes, and the matrices cannot simply be written down. In all dimensions the rotations are fully described by the planes of rotation and their associated angles, so it is useful to be able to determine them, or at least find ways to describe them mathematically. Every simple rotation can be generated by two reflections. Reflections can be specified in n dimensions by giving an (n − 1)-dimensional subspace to reflect in, so a two-dimensional reflection is in a line, a three-dimensional reflection is in a plane, and so on. But this becomes increasingly difficult to apply in higher dimensions, so it is better to use vectors instead, as follows. A reflection in n dimensions is specified by a vector perpendicular to the (n − 1)-dimensional subspace. To generate simple rotations only reflections that fix the origin are needed, so the vector does not have a position, just direction. It also does not matter which way it is facing: it can be replaced with its negative without changing the result. Similarly unit vectors can be used to simplify the calculations. So the reflection in an (n − 1)-dimensional space is given by the unit vector perpendicular to it, m, thus:

{\displaystyle \mathbf {x} '=-\mathbf {mxm} }

where the product is the geometric product from geometric algebra. If x′ is reflected in another, distinct, (n − 1)-dimensional space, described by a unit vector n perpendicular to it, the result is

{\displaystyle \mathbf {x} ''=-\mathbf {nx} '\mathbf {n} =\mathbf {nmxmn} }

This is a simple rotation in n dimensions, through twice the angle between the subspaces, which is also the angle between the vectors m and n. It can be checked using geometric algebra that this is a rotation, and that it rotates all vectors as expected. The quantity mn is a rotor, and nm is its inverse as

{\displaystyle (\mathbf {mn} )(\mathbf {nm} )=\mathbf {m} \mathbf {n} ^{2}\mathbf {m} =\mathbf {m} ^{2}=1}

So the rotation can be written

{\displaystyle \mathbf {x} ''=R^{-1}\mathbf {x} R}

where R = mn is the rotor. The plane of rotation is the plane containing m and n, which must be distinct otherwise the reflections are the same and no rotation takes place. As either vector can be replaced by its negative the angle between them can always be acute, or at most π/2. The rotation is through twice the angle between the vectors, up to π or a half-turn. The sense of the rotation is to rotate from m towards n: the geometric product is not commutative, so the product nm is the inverse rotation, with sense from n to m. Conversely all simple rotations can be generated this way, with two reflections, by two unit vectors in the plane of rotation separated by half the desired angle of rotation. These can be composed to produce more general rotations, using up to n reflections if the dimension n is even, n − 2 if n is odd, by choosing pairs of reflections given by two vectors in each plane of rotation.[9][10]
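The two-reflection construction is easy to verify numerically: reflecting in the hyperplanes perpendicular to two unit vectors m and n rotates by twice the angle between them, in their plane. A minimal Python sketch, with m and n chosen 45 degrees apart so the composition is a 90 degree rotation:

import math

def reflect(v, m):
    # reflect v in the hyperplane perpendicular to the unit vector m:
    # v' = v - 2 (v.m) m, equivalent to the geometric-algebra form -m v m
    d = sum(a*b for a, b in zip(v, m))
    return tuple(a - 2*d*b for a, b in zip(v, m))

m = (1.0, 0.0, 0.0)
n = (math.cos(math.pi/4), math.sin(math.pi/4), 0.0)

v = (1.0, 0.0, 0.5)
w = reflect(reflect(v, m), n)   # compose the two reflections
print(w)   # (0, 1, 0.5): v rotated 90 degrees in the plane of m and n,
           # with the component orthogonal to that plane untouched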
Bivectors are quantities from geometric algebra, Clifford algebra and the exterior algebra, which generalise the idea of vectors into two dimensions. As vectors are to lines, so are bivectors to planes. So every plane (in any dimension) can be associated with a bivector, and every simple bivector is associated with a plane. This makes them a good fit for describing planes of rotation. Every rotation plane in a rotation has a simple bivector associated with it. This is parallel to the plane and has magnitude equal to the angle of rotation in the plane. These bivectors are summed to produce a single, generally non-simple, bivector for the whole rotation. This can generate a rotor through the exponential map, which can be used to rotate an object. Bivectors are related to rotors through the exponential map (which applied to bivectors generates rotors and rotations using De Moivre's formula). In particular, given any bivector B the rotor associated with it is

{\displaystyle R=\exp {\tfrac {1}{2}}\mathbf {B} }

This is a simple rotation if the bivector is simple, a more general rotation otherwise. When squared, it gives a rotor that rotates through twice the angle. If B is simple then this is the same rotation as is generated by two reflections, as the product mn gives a rotation through twice the angle between the vectors. These can be equated, from which it follows that the bivector associated with the plane of rotation containing m and n that rotates m to n lies in the plane of m and n, with magnitude twice the angle between them. This is a simple bivector, associated with the simple rotation described. More general rotations in four or more dimensions are associated with sums of simple bivectors, one for each plane of rotation, calculated as above. Examples include the two rotations in four dimensions given above. The simple rotation in the zw-plane by an angle θ has bivector e34θ, a simple bivector. The double rotation by α and β in the xy-plane and zw-planes has bivector e12α + e34β, the sum of two simple bivectors e12α and e34β which are parallel to the two planes of rotation and have magnitudes equal to the angles of rotation. Given a rotor, the bivector associated with it can be recovered by taking the logarithm of the rotor, which can then be split into simple bivectors to determine the planes of rotation, although in practice for all but the simplest of cases this may be impractical. But given the simple bivectors geometric algebra is a useful tool for studying planes of rotation using algebra like the above.[1][11] The planes of rotation for a particular rotation can also be determined using the eigenvalues of its matrix. Given a general rotation matrix in n dimensions its characteristic equation has either one (in odd dimensions) or zero (in even dimensions) real roots. The other roots are in complex conjugate pairs, exactly ⌊n/2⌋ such pairs. These correspond to the planes of rotation, the eigenplanes of the matrix, which can be calculated using algebraic techniques. In addition the arguments of the complex roots are the magnitudes of the bivectors associated with the planes of rotation. The form of the characteristic equation is related to the planes, making it possible to relate its algebraic properties, like repeated roots, to the bivectors, where repeated bivector magnitudes have particular geometric interpretations.[1][12]
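The eigenvalue route can be tried directly with numpy: build a double rotation from two angles, then recover those angles as the arguments of the complex eigenvalues. A small sketch with arbitrarily chosen angles:

import numpy as np

theta, phi = 0.3, 1.1
# block-diagonal double rotation in 4D: angle theta in one plane, phi in the other
R = np.zeros((4, 4))
R[:2, :2] = [[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]
R[2:, 2:] = [[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]]

vals = np.linalg.eigvals(R)   # e^{+-i theta} and e^{+-i phi}
angles = sorted(set(round(abs(np.angle(v)), 6) for v in vals))
print(angles)   # [0.3, 1.1]: one angle per plane of rotation

The corresponding eigenvectors span the eigenplanes, which is the matrix-algebra counterpart of splitting the rotation's bivector into simple parts.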
https://en.wikipedia.org/wiki/Plane_of_rotation
Computer visiontasks include methods foracquiring,processing,analyzing, and understandingdigital images, and extraction ofhigh-dimensionaldata from the real world in order to produce numerical or symbolic information, e.g. in the form of decisions.[1][2][3][4]"Understanding" in this context signifies the transformation of visual images (the input to theretina) into descriptions of the world that make sense to thought processes and can elicit appropriate action. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory. Thescientific disciplineof computer vision is concerned with the theory behind artificial systems that extract information from images. Image data can take many forms, such as video sequences, views from multiple cameras, multi-dimensional data from a3D scanner, 3D point clouds fromLiDaRsensors, or medical scanning devices. The technological discipline of computer vision seeks to apply its theories and models to the construction of computer vision systems. Subdisciplines of computer vision includescene reconstruction,object detection,event detection,activity recognition,video tracking,object recognition,3D pose estimation, learning, indexing,motion estimation,visual servoing,3D scene modeling, andimage restoration. Computer vision is aninterdisciplinary fieldthat deals with how computers can be made to gain high-level understanding fromdigital imagesorvideos. From the perspective ofengineering, it seeks to automate tasks that thehuman visual systemcan do.[5][6][7]"Computer vision is concerned with the automatic extraction, analysis, and understanding of useful information from a single image or a sequence of images. It involves the development of a theoretical and algorithmic basis to achieve automatic visual understanding."[8]As ascientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from amedical scanner.[9]As a technological discipline, computer vision seeks to apply its theories and models for the construction of computer vision systems.Machine visionrefers to a systems engineering discipline, especially in the context of factory automation. In more recent times, the terms computer vision and machine vision have converged to a greater degree.[10]: 13 In the late 1960s, computer vision began at universities that were pioneeringartificial intelligence. It was meant to mimic thehuman visual systemas a stepping stone to endowing robots with intelligent behavior.[11]In 1966, it was believed that this could be achieved through an undergraduate summer project,[12]by attaching a camera to a computer and having it "describe what it saw".[13][14] What distinguished computer vision from the prevalent field ofdigital image processingat that time was a desire to extractthree-dimensionalstructure from images with the goal of achieving full scene understanding. 
Studies in the 1970s formed the early foundations for many of the computer visionalgorithmsthat exist today, includingextraction of edgesfrom images, labeling of lines, non-polyhedral andpolyhedral modeling, representation of objects as interconnections of smaller structures,optical flow, andmotion estimation.[11] The next decade saw studies based on more rigorous mathematical analysis and quantitative aspects of computer vision. These include the concept ofscale-space, the inference of shape from various cues such asshading, texture and focus, andcontour models known as snakes. Researchers also realized that many of these mathematical concepts could be treated within the same optimization framework asregularizationandMarkov random fields.[15]By the 1990s, some of the previous research topics became more active than others. Research inprojective3-D reconstructionsled to better understanding ofcamera calibration. With the advent of optimization methods for camera calibration, it was realized that a lot of the ideas were already explored inbundle adjustmenttheory from the field ofphotogrammetry. This led to methods for sparse3-D reconstructions of scenes from multiple images. Progress was made on the dense stereocorrespondence problemand further multi-view stereo techniques. At the same time,variations of graph cutwere used to solveimage segmentation. This decade also marked the first time statistical learning techniques were used in practice to recognize faces in images (seeEigenface). Toward the end of the 1990s, a significant change came about with the increased interaction between the fields ofcomputer graphicsand computer vision. This includedimage-based rendering,image morphing, view interpolation,panoramic image stitchingand earlylight-field rendering.[11] Recent work has seen the resurgence offeature-based methods used in conjunction with machine learning techniques and complex optimization frameworks.[16][17]The advancement ofDeep Learningtechniques has brought further life to the field of computer vision. The accuracy of deep learning algorithms on several benchmark computer vision data sets for tasks ranging from classification,[18]segmentation and optical flow has surpassed prior methods.[19][20] Solid-state physicsis another field that is closely related to computer vision. Most computer vision systems rely onimage sensors, which detectelectromagnetic radiation, which is typically in the form of eithervisible,infraredorultraviolet light. The sensors are designed usingquantum physics. The process by which light interacts with surfaces is explained using physics. Physics explains the behavior ofopticswhich are a core part of most imaging systems. Sophisticatedimage sensorseven requirequantum mechanicsto provide a complete understanding of the image formation process.[11]Also, various measurement problems in physics can be addressed using computer vision, for example, motion in fluids. Neurobiologyhas greatly influenced the development of computer vision algorithms. Over the last century, there has been an extensive study of eyes, neurons, and brain structures devoted to the processing of visual stimuli in both humans and various animals. This has led to a coarse yet convoluted description of how natural vision systems operate in order to solve certain vision-related tasks. These results have led to a sub-field within computer vision where artificial systems are designed to mimic the processing and behavior of biological systems at different levels of complexity. 
Also, some of the learning-based methods developed within computer vision (e.g.neural netanddeep learningbased image and feature analysis and classification) have their background in neurobiology. TheNeocognitron, a neural network developed in the 1970s byKunihiko Fukushima, is an early example of computer vision taking direct inspiration from neurobiology, specifically theprimary visual cortex. Some strands of computer vision research are closely related to the study ofbiological vision—indeed, just as many strands ofAIresearch are closely tied with research into human intelligence and the use of stored knowledge to interpret, integrate, and utilize visual information. The field of biological vision studies and models the physiological processes behind visual perception in humans and other animals. Computer vision, on the other hand, develops and describes the algorithms implemented in software and hardware behind artificial vision systems. An interdisciplinary exchange between biological and computer vision has proven fruitful for both fields.[22] Yet another field related to computer vision issignal processing. Many methods for processing one-variable signals, typically temporal signals, can be extended in a natural way to the processing of two-variable signals or multi-variable signals in computer vision. However, because of the specific nature of images, there are many methods developed within computer vision that have no counterpart in the processing of one-variable signals. Together with the multi-dimensionality of the signal, this defines a subfield in signal processing as a part of computer vision. Robot navigationsometimes deals with autonomouspath planningor deliberation for robotic systems tonavigate through an environment.[23]A detailed understanding of these environments is required to navigate through them. Information about the environment could be provided by a computer vision system, acting as a vision sensor and providing high-level information about the environment and the robot. Besides the above-mentioned views on computer vision, many of the related research topics can also be studied from a purely mathematical point of view. For example, many methods in computer vision are based onstatistics,optimizationorgeometry. Finally, a significant part of the field is devoted to the implementation aspect of computer vision; how existing methods can be realized in various combinations of software and hardware, or how these methods can be modified in order to gain processing speed without losing too much performance. Computer vision is also used in fashion eCommerce, inventory management, patent search, furniture, and the beauty industry.[24] The fields most closely related to computer vision areimage processing,image analysisandmachine vision. There is a significant overlap in the range of techniques and applications that these cover. This implies that the basic techniques that are used and developed in these fields are similar, something which can be interpreted as meaning there is only one field with different names. On the other hand, it appears to be necessary for research groups, scientific journals, conferences, and companies to present or market themselves as belonging specifically to one of these fields and, hence, various characterizations which distinguish each of the fields from the others have been presented.
In image processing, the input and output are both images, whereas in computer vision, the input is an image or video, and the output could be an enhanced image, an analysis of the image's content, or even a system's behavior based on that analysis. Computer graphicsproduces image data from 3D models, and computer vision often produces 3D models from image data.[25]There is also a trend towards a combination of the two disciplines,e.g., as explored inaugmented reality. The following characterizations appear relevant but should not be taken as universally accepted: Photogrammetryalso overlaps with computer vision, e.g.,stereophotogrammetryvs.computer stereo vision. Applications range from tasks such as industrialmachine visionsystems which, say, inspect bottles speeding by on a production line, to research into artificial intelligence and computers or robots that can comprehend the world around them. The computer vision and machine vision fields have significant overlap. Computer vision covers the core technology of automated image analysis which is used in many fields. Machine vision usually refers to a process of combining automated image analysis with other methods and technologies to provide automated inspection and robot guidance in industrial applications. In many computer-vision applications, computers are pre-programmed to solve a particular task, but methods based on learning are now becoming increasingly common. Examples of applications of computer vision include systems for: For 2024, the leading areas of computer vision were industry (market size US$5.22 billion),[34]medicine (market size US$2.6 billion),[35]and military (market size US$996.2 million).[36] One of the most prominent application fields ismedical computer vision, or medical image processing, characterized by the extraction of information from image data todiagnose a patient.[37]An example of this is the detection oftumours,arteriosclerosisor other malign changes, and a variety of dental pathologies; measurements of organ dimensions, blood flow, etc. are another example. It also supports medical research by providing new information:e.g., about the structure of the brain or the quality of medical treatments. Applications of computer vision in the medical area also include enhancement of images interpreted by humans—ultrasonic imagesorX-ray images, for example—to reduce the influence of noise. A second application area in computer vision is in industry, sometimes calledmachine vision, where information is extracted for the purpose of supporting a production process. One example is quality control, where parts or final products are automatically inspected in order to find defects. One of the most prevalent fields for such inspection is theWaferindustry, in which every single wafer is measured and inspected for inaccuracies or defects so that an unusablecomputer chipdoes not come to market. Another example is the measurement of the position and orientation of parts to be picked up by a robot arm. Machine vision is also heavily used in agricultural processes to remove undesirable foodstuff from bulk material, a process calledoptical sorting.[38] The obvious examples are the detection of enemy soldiers or vehicles andmissile guidance. More advanced systems for missile guidance send the missile to an area rather than a specific target, and target selection is made when the missile reaches the area based on locally acquired image data.
Modern military concepts, such as "battlefield awareness", imply that various sensors, including image sensors, provide a rich set of information about a combat scene that can be used to support strategic decisions. In this case, automatic processing of the data is used to reduce complexity and to fuse information from multiple sensors to increase reliability. One of the newer application areas is autonomous vehicles, which includesubmersibles, land-based vehicles (small robots with wheels, cars, or trucks), aerial vehicles, and unmanned aerial vehicles (UAV). The level of autonomy ranges from fully autonomous (unmanned) vehicles to vehicles where computer-vision-based systems support a driver or a pilot in various situations. Fully autonomous vehicles typically use computer vision for navigation, e.g., for knowing where they are or mapping their environment (SLAM), and for detecting obstacles. It can also be used for detecting certain task-specific events,e.g., a UAV looking for forest fires. Examples of supporting systems are obstacle warning systems in cars, cameras and LiDAR sensors in vehicles, and systems for autonomous landing of aircraft. Several car manufacturers have demonstrated systems forautonomous driving of cars. There are ample examples of military autonomous vehicles ranging from advanced missiles to UAVs for recon missions or missile guidance. Space exploration is already being carried out with autonomous vehicles using computer vision,e.g.,NASA'sCuriosityandCNSA'sYutu-2rover. Materials such as rubber and silicone are being used to create sensors that allow for applications such as detecting microundulations and calibrating robotic hands. Rubber can be used to create a mold that can be placed over a finger; inside this mold are multiple strain gauges. The finger mold and sensors could then be placed on top of a small sheet of rubber containing an array of rubber pins. A user can then wear the finger mold and trace a surface. A computer can then read the data from the strain gauges and measure whether one or more of the pins is being pushed upward. If a pin is being pushed upward then the computer can recognize this as an imperfection in the surface. This sort of technology is useful for obtaining accurate data on imperfections on very large surfaces.[39]Another variation of this finger mold sensor is a sensor that contains a camera suspended in silicone. The silicone forms a dome around the outside of the camera, and embedded in the silicone are equally spaced point markers. These cameras can then be placed on devices such as robotic hands in order to allow the computer to receive highly accurate tactile data.[40] Other application areas include: Each of the application areas described above employs a range of computer vision tasks; more or less well-defined measurement problems or processing problems, which can be solved using a variety of methods. Some examples of typical computer vision tasks are presented below. Computer vision tasks include methods foracquiring,processing,analyzingand understanding digital images, and extraction ofhigh-dimensionaldata from the real world in order to produce numerical or symbolic information,e.g., in the form of decisions.[1][2][3][4]Understanding in this context means the transformation of visual images (the input of the retina) into descriptions of the world that can interface with other thought processes and elicit appropriate action.
This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory.[45] The classical problem in computer vision, image processing, andmachine visionis that of determining whether or not the image data contains some specific object, feature, or activity. Different varieties of recognition problem are described in the literature.[46] Currently, the best algorithms for such tasks are based onconvolutional neural networks. An illustration of their capabilities is given by theImageNet Large Scale Visual Recognition Challenge; this is a benchmark in object classification and detection, with millions of images and 1000 object classes used in the competition.[47]Performance of convolutional neural networks on the ImageNet tests is now close to that of humans.[47]The best algorithms still struggle with objects that are small or thin, such as a small ant on the stem of a flower or a person holding a quill in their hand. They also have trouble with images that have been distorted with filters (an increasingly common phenomenon with modern digital cameras). By contrast, those kinds of images rarely trouble humans. Humans, however, tend to have trouble with other issues. For example, they are not good at classifying objects into fine-grained classes, such as the particular breed of dog or species of bird, whereas convolutional neural networks handle this with ease.[citation needed] Several specialized tasks based on recognition exist, such as: Several tasks relate to motion estimation, where an image sequence is processed to produce an estimate of the velocity either at each point in the image or in the 3D scene, or even of the camera that produces the images. Examples of such tasks are: Given one or (typically) more images of a scene, or a video, scene reconstruction aims atcomputing a 3D modelof the scene. In the simplest case, the model can be a set of 3D points. More sophisticated methods produce a complete 3D surface model. The advent of 3D imaging not requiring motion or scanning, and related processing algorithms is enabling rapid advances in this field. Grid-based 3D sensing can be used to acquire 3D images from multiple angles. Algorithms are now available to stitch multiple 3D images together into point clouds and 3D models.[25] Image restoration comes into the picture when the original image is degraded or damaged due to external factors such as incorrect lens positioning, transmission interference, low lighting or motion blur, which are collectively referred to as noise. When the images are degraded or damaged, the information to be extracted from them also gets damaged. Therefore, we need to recover or restore the image as it was intended to be. The aim of image restoration is the removal of noise (sensor noise, motion blur, etc.) from images. The simplest possible approach to noise removal is to apply various types of filters, such as low-pass filters or median filters. More sophisticated methods assume a model of how the local image structures look to distinguish them from noise. By first analyzing the image data in terms of the local image structures, such as lines or edges, and then controlling the filtering based on local information from the analysis step, a better level of noise removal is usually obtained compared to the simpler approaches. An example in this field isinpainting.
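As a minimal illustration of the filtering approach to restoration just described (a sketch, not from the article; the 3×3 window size is an arbitrary choice), a median filter replaces each pixel by the median of its neighbourhood, which suppresses impulse ("salt-and-pepper") noise while preserving edges better than simple averaging:

```python
import numpy as np

def median_filter(image, size=3):
    """Denoise a 2D grayscale image by replacing each pixel with the
    median of its size-by-size neighbourhood (borders handled by
    reflecting the image at the edge)."""
    pad = size // 2
    padded = np.pad(image, pad, mode="reflect")
    out = np.empty_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

# Toy example: a constant image corrupted by one 'salt' pixel.
img = np.full((5, 5), 10.0)
img[2, 2] = 255.0                # impulse noise
print(median_filter(img)[2, 2])  # 10.0 -- the outlier is removed
```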
The organization of a computer vision system is highly application-dependent. Some systems are stand-alone applications that solve a specific measurement or detection problem, while others constitute a sub-system of a larger design which, for example, also contains sub-systems for control of mechanical actuators, planning, information databases, man-machine interfaces, etc. The specific implementation of a computer vision system also depends on whether its functionality is pre-specified or if some part of it can be learned or modified during operation. Many functions are unique to the application. There are, however, typical functions that are found in many computer vision systems. Image-understanding systems (IUS) include three levels of abstraction as follows: low level includes image primitives such as edges, texture elements, or regions; intermediate level includes boundaries, surfaces and volumes; and high level includes objects, scenes, or events. Many of these requirements are entirely topics for further research. The representational requirements in the designing of IUS for these levels are: representation of prototypical concepts, concept organization, spatial knowledge, temporal knowledge, scaling, and description by comparison and differentiation. While inference refers to the process of deriving new, not explicitly represented facts from currently known facts, control refers to the process that selects which of the many inference, search, and matching techniques should be applied at a particular stage of processing. Inference and control requirements for IUS are: search and hypothesis activation, matching and hypothesis testing, generation and use of expectations, change and focus of attention, certainty and strength of belief, inference and goal satisfaction.[54] There are many kinds of computer vision systems; however, all of them contain these basic elements: a power source, at least one image acquisition device (camera, CCD, etc.), a processor, and control and communication cables or some kind of wireless interconnection mechanism. In addition, a practical vision system contains software, as well as a display in order to monitor the system. Vision systems for indoor spaces, such as most industrial ones, contain an illumination system and may be placed in a controlled environment. Furthermore, a completed system includes many accessories, such as camera supports, cables, and connectors. Most computer vision systems use visible-light cameras passively viewing a scene at frame rates of at most 60 frames per second (usually far slower). A few computer vision systems use image-acquisition hardware with active illumination or something other than visible light or both, such asstructured-light 3D scanners,thermographic cameras,hyperspectral imagers,radar imaging,lidarscanners,magnetic resonance images,side-scan sonar,synthetic aperture sonar, etc. Such hardware captures "images" that are then often processed using the same computer vision algorithms used to process visible-light images. While traditional broadcast and consumer video systems operate at a rate of 30 frames per second, advances indigital signal processingandconsumer graphics hardwarehave made high-speed image acquisition, processing, and display possible for real-time systems on the order of hundreds to thousands of frames per second. For applications in robotics, fast, real-time video systems are critically important and often can simplify the processing needed for certain algorithms.
When combined with a high-speed projector, fast image acquisition allows 3D measurement and feature tracking to be realized.[55] Egocentric visionsystems are composed of a wearable camera that automatically takes pictures from a first-person perspective. As of 2016,vision processing unitsare emerging as a new class of processors to complement CPUs andgraphics processing units(GPUs) in this role.[56]
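To make the stand-alone organization described above tangible, here is a deliberately simplified sketch (entirely illustrative; the stage names, the synthetic frame, and the threshold are invented for this example) of the typical chain of image acquisition, pre-processing, and a task-specific measurement step:

```python
import numpy as np

def acquire_frame():
    """Stand-in for an image-acquisition device: returns a synthetic
    grayscale frame containing one bright blob plus sensor noise."""
    frame = np.zeros((64, 64))
    frame[20:30, 40:50] = 200.0
    return frame + np.random.normal(0, 5, frame.shape)

def preprocess(frame):
    """Pre-processing: clip to the valid range and normalise to [0, 1]."""
    return np.clip(frame, 0, 255) / 255.0

def detect_blob(frame, threshold=0.5):
    """Task-specific step: report the centroid of above-threshold pixels."""
    ys, xs = np.nonzero(frame > threshold)
    if len(xs) == 0:
        return None
    return xs.mean(), ys.mean()

frame = preprocess(acquire_frame())
print(detect_blob(frame))  # approximately (44.5, 24.5)
```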
https://en.wikipedia.org/wiki/Computer_vision
Inimage processing,computer visionand related fields, animage momentis a certain particularweighted average(moment) of the image pixels' intensities, or a function of such moments, usually chosen to have some attractive property or interpretation. Image moments are useful to describe objects aftersegmentation.Simple properties of the imagewhich are foundviaimage moments include area (or total intensity), itscentroid, andinformation about its orientation. For a 2D continuous functionf(x,y) themoment(sometimes called "raw moment") of order (p+q) is defined asMpq=∫−∞∞∫−∞∞xpyqf(x,y)dxdy{\displaystyle M_{pq}=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }x^{p}y^{q}f(x,y)\,dx\,dy}forp,q= 0,1,2,... Adapting this to a scalar (grayscale) image with pixel intensitiesI(x,y), raw image momentsMijare calculated byMij=∑x∑yxiyjI(x,y).{\displaystyle M_{ij}=\sum _{x}\sum _{y}x^{i}y^{j}I(x,y).}In some cases, this may be calculated by considering the image as aprobability density function,i.e., by dividing the above byM00{\displaystyle M_{00}}. A uniqueness theorem[1]states that iff(x,y) is piecewise continuous and has nonzero values only in a finite part of thexyplane, moments of all orders exist, and the moment sequence (Mpq) is uniquely determined byf(x,y).[2]Conversely, (Mpq) uniquely determinesf(x,y). In practice, the image is summarized with functions of a few lower order moments. Simple image properties derivedviaraw moments include: Central momentsare defined asμpq=∫−∞∞∫−∞∞(x−x¯)p(y−y¯)qf(x,y)dxdy{\displaystyle \mu _{pq}=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }(x-{\bar {x}})^{p}(y-{\bar {y}})^{q}f(x,y)\,dx\,dy}wherex¯=M10M00{\displaystyle {\bar {x}}={\frac {M_{10}}{M_{00}}}}andy¯=M01M00{\displaystyle {\bar {y}}={\frac {M_{01}}{M_{00}}}}are the components of thecentroid. Ifƒ(x,y) is a digital image, then the previous equation becomesμpq=∑x∑y(x−x¯)p(y−y¯)qf(x,y).{\displaystyle \mu _{pq}=\sum _{x}\sum _{y}(x-{\bar {x}})^{p}(y-{\bar {y}})^{q}f(x,y).}The central moments of order up to 3 are: μ00=M00,μ01=0,μ10=0,μ11=M11−x¯M01=M11−y¯M10,μ20=M20−x¯M10,μ02=M02−y¯M01,μ21=M21−2x¯M11−y¯M20+2x¯2M01,μ12=M12−2y¯M11−x¯M02+2y¯2M10,μ30=M30−3x¯M20+2x¯2M10,μ03=M03−3y¯M02+2y¯2M01.{\displaystyle {\begin{aligned}\mu _{00}&=M_{00},&\mu _{01}&=0,\\\mu _{10}&=0,&\mu _{11}&=M_{11}-{\bar {x}}M_{01}=M_{11}-{\bar {y}}M_{10},\\\mu _{20}&=M_{20}-{\bar {x}}M_{10},&\mu _{02}&=M_{02}-{\bar {y}}M_{01},\\\mu _{21}&=M_{21}-2{\bar {x}}M_{11}-{\bar {y}}M_{20}+2{\bar {x}}^{2}M_{01},&\mu _{12}&=M_{12}-2{\bar {y}}M_{11}-{\bar {x}}M_{02}+2{\bar {y}}^{2}M_{10},\\\mu _{30}&=M_{30}-3{\bar {x}}M_{20}+2{\bar {x}}^{2}M_{10},&\mu _{03}&=M_{03}-3{\bar {y}}M_{02}+2{\bar {y}}^{2}M_{01}.\end{aligned}}} It can be shown that: Central moments aretranslationally invariant. Information about image orientation can be derived by first using the second order central moments to construct acovariance matrix. μ20′=μ20/μ00=M20/M00−x¯2μ02′=μ02/μ00=M02/M00−y¯2μ11′=μ11/μ00=M11/M00−x¯y¯{\displaystyle {\begin{aligned}\mu '_{20}&=\mu _{20}/\mu _{00}=M_{20}/M_{00}-{\bar {x}}^{2}\\\mu '_{02}&=\mu _{02}/\mu _{00}=M_{02}/M_{00}-{\bar {y}}^{2}\\\mu '_{11}&=\mu _{11}/\mu _{00}=M_{11}/M_{00}-{\bar {x}}{\bar {y}}\end{aligned}}} Thecovariance matrixof the imageI(x,y){\displaystyle I(x,y)}is nowcov⁡[I(x,y)]=[μ20′μ11′μ11′μ02′].{\displaystyle \operatorname {cov} [I(x,y)]={\begin{bmatrix}\mu '_{20}&\mu '_{11}\\\mu '_{11}&\mu '_{02}\end{bmatrix}}.}Theeigenvectorsof this matrix correspond to the major and minor axes of the image intensity, so theorientationcan thus be extracted from the angle of the eigenvector associated with the largest eigenvalue towards the axis closest to this eigenvector. It can be shown that this angle Θ is given by the following formula:Θ=12arctan⁡(2μ11′μ20′−μ02′).{\displaystyle \Theta ={\frac {1}{2}}\arctan \left({\frac {2\mu '_{11}}{\mu '_{20}-\mu '_{02}}}\right).}The above formula holds as long as:μ20′−μ02′≠0.{\displaystyle \mu '_{20}-\mu '_{02}\neq 0.}Theeigenvaluesof the covariance matrix can easily be shown to beλi=μ20′+μ02′2±4μ11′2+(μ20′−μ02′)22,{\displaystyle \lambda _{i}={\frac {\mu '_{20}+\mu '_{02}}{2}}\pm {\frac {\sqrt {4{\mu '_{11}}^{2}+\left(\mu '_{20}-\mu '_{02}\right)^{2}}}{2}},}and are proportional to the squared length of the eigenvector axes. The relative difference in magnitude of the eigenvalues is thus an indication of the eccentricity of the image, or how elongated it is.
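Before turning to eccentricity and invariants, here is a small NumPy sketch of the definitions above (illustrative, not from the article): raw moments by direct summation, then central moments about the centroid, and the orientation angle from the second-order central moments.

```python
import numpy as np

def raw_moment(I, p, q):
    """Raw image moment M_pq = sum_x sum_y x^p y^q I(x, y)."""
    y, x = np.mgrid[:I.shape[0], :I.shape[1]]
    return np.sum((x ** p) * (y ** q) * I)

def central_moment(I, p, q):
    """Central moment mu_pq about the centroid (translation invariant)."""
    m00 = raw_moment(I, 0, 0)
    xbar = raw_moment(I, 1, 0) / m00
    ybar = raw_moment(I, 0, 1) / m00
    y, x = np.mgrid[:I.shape[0], :I.shape[1]]
    return np.sum(((x - xbar) ** p) * ((y - ybar) ** q) * I)

def orientation(I):
    """Orientation angle Theta from the second-order central moments."""
    m00 = raw_moment(I, 0, 0)
    mu20 = central_moment(I, 2, 0) / m00
    mu02 = central_moment(I, 0, 2) / m00
    mu11 = central_moment(I, 1, 1) / m00
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

# Elongated diagonal blob: orientation should be about 45 degrees.
I = np.zeros((50, 50))
for k in range(10, 40):
    I[k, k] = 1.0
print(np.degrees(orientation(I)))  # ~45.0
```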
Theeccentricityis1−λ2λ1.{\displaystyle {\sqrt {1-{\frac {\lambda _{2}}{\lambda _{1}}}}}.}Moments are well-known for their application in image analysis, since they can be used to deriveinvariantswith respect to specific transformation classes. The terminvariant momentsis often abused in this context. However, whilemoment invariantsare invariants that are formed from moments, the only moments that are invariants themselves are the central moments.[citation needed] Note that the invariants detailed below are exactly invariant only in the continuous domain. In a discrete domain, neither scaling nor rotation are well defined: a discrete image transformed in such a way is generally an approximation, and the transformation is not reversible. These invariants therefore are only approximately invariant when describing a shape in a discrete image. The central momentsμi jof any order are, by construction, invariant with respect totranslations. Invariantsηi jwith respect to bothtranslationandscalecan be constructed from central moments by dividing through a properly scaled zero-th central moment:ηij=μijμ00(1+i+j2){\displaystyle \eta _{ij}={\frac {\mu _{ij}}{\mu _{00}^{\left(1+{\frac {i+j}{2}}\right)}}}}wherei+j≥ 2. Note that translational invariance directly follows by only using central moments. As shown in the work of Hu,[3][4]invariants with respect totranslation,scale, androtationcan be constructed: I1=η20+η02{\displaystyle I_{1}=\eta _{20}+\eta _{02}} I2=(η20−η02)2+4η112{\displaystyle I_{2}=(\eta _{20}-\eta _{02})^{2}+4\eta _{11}^{2}} I3=(η30−3η12)2+(3η21−η03)2{\displaystyle I_{3}=(\eta _{30}-3\eta _{12})^{2}+(3\eta _{21}-\eta _{03})^{2}} I4=(η30+η12)2+(η21+η03)2{\displaystyle I_{4}=(\eta _{30}+\eta _{12})^{2}+(\eta _{21}+\eta _{03})^{2}} I5=(η30−3η12)(η30+η12)[(η30+η12)2−3(η21+η03)2]+(3η21−η03)(η21+η03)[3(η30+η12)2−(η21+η03)2]{\displaystyle I_{5}=(\eta _{30}-3\eta _{12})(\eta _{30}+\eta _{12})[(\eta _{30}+\eta _{12})^{2}-3(\eta _{21}+\eta _{03})^{2}]+(3\eta _{21}-\eta _{03})(\eta _{21}+\eta _{03})[3(\eta _{30}+\eta _{12})^{2}-(\eta _{21}+\eta _{03})^{2}]} I6=(η20−η02)[(η30+η12)2−(η21+η03)2]+4η11(η30+η12)(η21+η03){\displaystyle I_{6}=(\eta _{20}-\eta _{02})[(\eta _{30}+\eta _{12})^{2}-(\eta _{21}+\eta _{03})^{2}]+4\eta _{11}(\eta _{30}+\eta _{12})(\eta _{21}+\eta _{03})} I7=(3η21−η03)(η30+η12)[(η30+η12)2−3(η21+η03)2]−(η30−3η12)(η21+η03)[3(η30+η12)2−(η21+η03)2].{\displaystyle I_{7}=(3\eta _{21}-\eta _{03})(\eta _{30}+\eta _{12})[(\eta _{30}+\eta _{12})^{2}-3(\eta _{21}+\eta _{03})^{2}]-(\eta _{30}-3\eta _{12})(\eta _{21}+\eta _{03})[3(\eta _{30}+\eta _{12})^{2}-(\eta _{21}+\eta _{03})^{2}].} These are well-known asHu moment invariants. The first one,I1, is analogous to themoment of inertiaaround the image's centroid, where the pixels' intensities are analogous to physical density. The first six,I1...I6, are reflection symmetric, i.e. they are unchanged if the image is changed to a mirror image. The last one,I7, is reflection antisymmetric (changes sign under reflection), which enables it to distinguish mirror images of otherwise identical images. A general theory on deriving complete and independent sets of rotation moment invariants was proposed by J. Flusser.[5]He showed that the traditional set of Hu moment invariants is neither independent nor complete.I3is not very useful as it is dependent on the others (I3=(I52+I72)/I43{\displaystyle I_{3}=(I_{5}^{2}+I_{7}^{2})/I_{4}^{3}}). In Hu's original set there is a missing third-order independent moment invariant: LikeI7,I8is also reflection antisymmetric. Later, J. Flusser and T. Suk[6]specialized the theory for the case of N-rotationally symmetric shapes. Zhang et al.
applied Hu moment invariants to solve the Pathological Brain Detection (PBD) problem.[7]Doerr and Florence used information about the object orientation related to the second-order central moments to effectively extract translation- and rotation-invariant object cross-sections from micro-X-ray tomography image data.[8] D. A. Hoeltzel and Wei-Hua Chieng used Hu moment invariants to group the coupler curves generated by a dimensionally-parameterized four-bar mechanism, which yielded 15 distinct coupler curve groups (patterns) from a total of 356 generated coupler curves.[9]
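A compact NumPy sketch of the normalized moments η and the first two Hu invariants (illustrative, not from the article; the numerical check of rotation invariance uses an exact 90-degree rotation, where no resampling error occurs):

```python
import numpy as np

def hu12(I):
    """First two Hu moment invariants of a grayscale image I
    (invariant under translation, scaling, and rotation)."""
    y, x = np.mgrid[:I.shape[0], :I.shape[1]]
    m00 = I.sum()
    xbar, ybar = (x * I).sum() / m00, (y * I).sum() / m00
    xc, yc = x - xbar, y - ybar

    def eta(p, q):
        # normalized central moment: mu_pq / m00^(1 + (p+q)/2)
        mu = ((xc ** p) * (yc ** q) * I).sum()
        return mu / m00 ** (1 + (p + q) / 2)

    I1 = eta(2, 0) + eta(0, 2)
    I2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return I1, I2

# A bar and the same bar rotated 90 degrees give equal invariants.
img = np.zeros((40, 40))
img[18:22, 5:35] = 1.0
print(hu12(img))
print(hu12(np.rot90(img)))
```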
https://en.wikipedia.org/wiki/Image_moment
Inatomic physicsandchemistry, anatomic electron transition(also called an atomic transition, quantum jump, or quantum leap) is anelectronchanging from oneenergy levelto another within anatom[1]orartificial atom.[2]The time scale of a quantum jump has not been measured experimentally. However, theFranck–Condon principlebinds the upper limit of this parameter to the order ofattoseconds.[3] Electrons canrelaxinto states of lower energy by emittingelectromagnetic radiationin the form of a photon. Electrons can also absorb passing photons, whichexcitesthe electron into a state of higher energy. The larger the energy separation between the electron's initial and final state, the shorter the photons'wavelength.[4] Danish physicistNiels Bohrfirst theorized that electrons can perform quantum jumps in 1913.[5]Soon after,James FranckandGustav Ludwig Hertzproved experimentallythat atoms have quantized energy states.[6] The observability of quantum jumps was predicted byHans Dehmeltin 1975, and they were first observed usingtrapped ionsofbariumatUniversity of HamburgandmercuryatNISTin 1986.[4] An atom interacts with an oscillating electric field of amplitude|E0|{\displaystyle |{\textbf {E}}_{0}|}, angular frequencyω{\displaystyle \omega }, and polarization vectore^rad{\displaystyle {\hat {\textbf {e}}}_{\mathrm {rad} }}.[7]Note that the actual phase is(ωt−k⋅r){\displaystyle (\omega t-{\textbf {k}}\cdot {\textbf {r}})}. However, in many cases, the variation ofk⋅r{\displaystyle {\textbf {k}}\cdot {\textbf {r}}}is small over the atom (or equivalently, the radiation wavelength is much greater than the size of an atom) and this term can be ignored. This is called the dipole approximation. The atom can also interact with the oscillating magnetic field produced by the radiation, although much more weakly. The Hamiltonian for this interaction, analogous to the energy of a classical dipole in an electric field, isHI=er⋅E(t){\displaystyle H_{I}=e{\textbf {r}}\cdot {\textbf {E}}(t)}. The stimulated transition rate can be calculated usingtime-dependent perturbation theory; however, the result can be summarized usingFermi's golden rule:Rate∝|eE0|2×|⟨2|r⋅e^rad|1⟩|2{\displaystyle Rate\propto |eE_{0}|^{2}\times |\langle 2|{\textbf {r}}\cdot {\hat {\textbf {e}}}_{\mathrm {rad} }|1\rangle |^{2}}The dipole matrix element can be decomposed into the product of the radial integral and the angular integral. The angular integral is zero unless theselection rulesfor the atomic transition are satisfied. In 2019, it was demonstrated in an experiment with a superconductingartificial atomconsisting of two strongly-hybridizedtransmon qubitsplaced inside a readout resonator cavity at 15 mK, that the evolution of some jumps is continuous, coherent, deterministic, and reversible.[8]On the other hand, other quantum jumps are inherently unpredictable.[9]
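As a numerical illustration of the relation between energy separation and photon wavelength (a sketch, not from the article; the example values are the textbook hydrogen Lyman-alpha case), the emitted wavelength is λ = hc/ΔE:

```python
# Photon wavelength from the energy gap of a transition: lambda = h*c / dE.
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def wavelength_nm(delta_e_ev):
    """Wavelength (nm) of a photon for a transition of energy delta_e_ev (eV)."""
    return H * C / (delta_e_ev * EV) * 1e9

# Hydrogen n=2 -> n=1 (Lyman-alpha): dE = 13.6 eV * (1 - 1/4) = 10.2 eV.
# A larger energy separation would give a proportionally shorter wavelength.
print(wavelength_nm(10.2))  # ~121.6 nm, in the ultraviolet
```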
https://en.wikipedia.org/wiki/Atomic_electron_transition
In quantummechanicsandcomputing, theBloch sphereis a geometrical representation of thepure statespace of atwo-level quantum mechanical system(qubit), named after the physicistFelix Bloch.[1] Mathematically, each quantum mechanical system is associated with aseparablecomplexHilbert spaceH{\displaystyle H}. A pure state of a quantum system is represented by a non-zero vectorψ{\displaystyle \psi }inH{\displaystyle H}. As the vectorsψ{\displaystyle \psi }andλψ{\displaystyle \lambda \psi }(withλ∈C∗{\displaystyle \lambda \in \mathbb {C} ^{*}}) represent the same state, the level of the quantum system corresponds to the dimension of the Hilbert space and pure states can be represented asequivalence classes, orrays, in aprojective Hilbert spaceP(Hn)=CPn−1{\displaystyle \mathbf {P} (H_{n})=\mathbb {C} \mathbf {P} ^{n-1}}.[2]For a two-dimensional Hilbert space, the space of all such states is thecomplex projective lineCP1.{\displaystyle \mathbb {C} \mathbf {P} ^{1}.}This is the Bloch sphere, which can be mapped to theRiemann sphere. The Bloch sphere is a unit2-sphere, withantipodal pointscorresponding to a pair of mutually orthogonal state vectors. The north and south poles of the Bloch sphere are typically chosen to correspond to the standard basis vectors|0⟩{\displaystyle |0\rangle }and|1⟩{\displaystyle |1\rangle }, respectively, which in turn might correspond e.g. to thespin-up andspin-down states of an electron. This choice is arbitrary, however. The points on the surface of the sphere correspond to thepure statesof the system, whereas the interior points correspond to themixed states.[3][4]The Bloch sphere may be generalized to ann-level quantum system, but then the visualization is less useful. The naturalmetricon the Bloch sphere is theFubini–Study metric. The mapping from the unit 3-sphere in the two-dimensional state spaceC2{\displaystyle \mathbb {C} ^{2}}to the Bloch sphere is theHopf fibration, with eachrayofspinorsmapping to one point on the Bloch sphere. Given an orthonormal basis, anypure state|ψ⟩{\displaystyle |\psi \rangle }of a two-level quantum system can be written as a superposition of the basis vectors|0⟩{\displaystyle |0\rangle }and|1⟩{\displaystyle |1\rangle }, where the coefficient of (or contribution from) each of the two basis vectors is acomplex number. This means that the state is described by four real numbers. However, only the relative phase between the coefficients of the two basis vectors has any physical meaning (the phase of the quantum system is not directlymeasurable), so that there is redundancy in this description. We can take the coefficient of|0⟩{\displaystyle |0\rangle }to be real and non-negative. This allows the state to be described by only three real numbers, giving rise to the three dimensions of the Bloch sphere. We also know from quantum mechanics that the total probability of the system has to be one:⟨ψ|ψ⟩=1.{\displaystyle \langle \psi |\psi \rangle =1.}Given this constraint, we can write|ψ⟩{\displaystyle |\psi \rangle }using the following representation:|ψ⟩=cos⁡(θ/2)|0⟩+eiϕsin⁡(θ/2)|1⟩,{\displaystyle |\psi \rangle =\cos \left(\theta /2\right)|0\rangle +e^{i\phi }\sin \left(\theta /2\right)|1\rangle ,}where0≤θ≤π{\displaystyle 0\leq \theta \leq \pi }and0≤ϕ<2π{\displaystyle 0\leq \phi <2\pi }. The representation is always unique, because, even though the value ofϕ{\displaystyle \phi }is not unique when|ψ⟩{\displaystyle |\psi \rangle }is one of the states (seeBra-ket notation)|0⟩{\displaystyle |0\rangle }or|1⟩{\displaystyle |1\rangle }, the point represented byθ{\displaystyle \theta }andϕ{\displaystyle \phi }is unique.
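A small numerical sketch of this parametrization (illustrative, not from the article; it anticipates the spherical-coordinate reading of θ and ϕ discussed next): build the state from (θ, ϕ) and check that antipodal points of the sphere correspond to orthogonal states.

```python
import numpy as np

def qubit_state(theta, phi):
    """Pure qubit state cos(theta/2)|0> + exp(i*phi) sin(theta/2)|1>."""
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

def bloch_point(theta, phi):
    """Cartesian point on the unit sphere: colatitude theta, longitude phi."""
    return (np.sin(theta) * np.cos(phi),
            np.sin(theta) * np.sin(phi),
            np.cos(theta))

theta, phi = 0.7, 1.9
a = qubit_state(theta, phi)
b = qubit_state(np.pi - theta, phi + np.pi)  # the antipodal point
print(abs(np.vdot(a, b)))  # ~0: antipodal points are orthogonal states
```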
The parametersθ{\displaystyle \theta \,}andϕ{\displaystyle \phi \,}, re-interpreted inspherical coordinatesas respectively thecolatitudewith respect to thez-axis and thelongitudewith respect to thex-axis, specify a point on the unit sphere inR3{\displaystyle \mathbb {R} ^{3}}. Formixed states, one considers thedensity operator. Any two-dimensional density operatorρcan be expanded using the identityIand theHermitian,tracelessPauli matricesσ→{\displaystyle {\vec {\sigma }}}asρ=12(I+a→⋅σ→),{\displaystyle \rho ={\frac {1}{2}}\left(I+{\vec {a}}\cdot {\vec {\sigma }}\right),}wherea→∈R3{\displaystyle {\vec {a}}\in \mathbb {R} ^{3}}is called theBloch vector. It is this vector that indicates the point within the sphere that corresponds to a given mixed state. Specifically, as a basic feature of thePauli vector, the eigenvalues ofρare12(1±|a→|){\displaystyle {\frac {1}{2}}\left(1\pm |{\vec {a}}|\right)}. Density operators must be positive-semidefinite, so it follows that|a→|≤1{\displaystyle \left|{\vec {a}}\right|\leq 1}. For pure states, one then has|a→|=1{\displaystyle \left|{\vec {a}}\right|=1}, in accordance with the above.[5] As a consequence, the surface of the Bloch sphere represents all the pure states of a two-dimensional quantum system, whereas the interior corresponds to all the mixed states. The Bloch vectora→=(u,v,w){\displaystyle {\vec {a}}=(u,v,w)}can be represented in the following basis, with reference to the density operatorρ{\displaystyle \rho }:[6]u=ρ01+ρ10,v=i(ρ01−ρ10),w=ρ00−ρ11,{\displaystyle u=\rho _{01}+\rho _{10},\quad v=i(\rho _{01}-\rho _{10}),\quad w=\rho _{00}-\rho _{11},}whereρij=⟨i|ρ|j⟩{\displaystyle \rho _{ij}=\langle i|\rho |j\rangle }are the matrix elements ofρ{\displaystyle \rho }in the standard basis. This basis is often used inlasertheory, wherew{\displaystyle w}is known as thepopulation inversion.[7]In this basis, the valuesu,v,w{\displaystyle u,v,w}are the expectations of the threePauli matricesX,Y,Z{\displaystyle X,Y,Z}, allowing one to identify the three coordinates with the x, y, and z axes. Consider ann-level quantum mechanical system. This system is described by ann-dimensionalHilbert spaceHn. The pure state space is by definition the set of rays ofHn. Theorem. LetU(n)be theLie groupof unitary matrices of sizen. Then the pure state space ofHncan be identified with the compact coset spaceU(n)/(U(n−1)×U(1)).{\displaystyle U(n)/(U(n-1)\times U(1)).}To prove this fact, note that there is anaturalgroup actionof U(n) on the set of states ofHn. This action is continuous andtransitiveon the pure states. For any state|ψ⟩{\displaystyle |\psi \rangle }, theisotropy groupof|ψ⟩{\displaystyle |\psi \rangle }, (defined as the set of elementsg{\displaystyle g}of U(n) such thatg|ψ⟩=|ψ⟩{\displaystyle g|\psi \rangle =|\psi \rangle }) is isomorphic to the product groupU(n−1)×U(1).{\displaystyle U(n-1)\times U(1).}In linear algebra terms, this can be justified as follows. Anyg{\displaystyle g}of U(n) that leaves|ψ⟩{\displaystyle |\psi \rangle }invariant must have|ψ⟩{\displaystyle |\psi \rangle }as aneigenvector. Since the corresponding eigenvalue must be a complex number of modulus 1, this gives the U(1) factor of the isotropy group. The other part of the isotropy group is parametrized by the unitary matrices on the orthogonal complement of|ψ⟩{\displaystyle |\psi \rangle }, which is isomorphic to U(n− 1). From this the assertion of the theorem follows from basic facts about transitive group actions of compact groups. The important fact to note above is that theunitary group acts transitivelyon pure states. Now the (real)dimensionof U(n) isn2. This is easy to see since the exponential map is a local homeomorphism from the space of self-adjoint complex matrices to U(n). The space of self-adjoint complex matrices has real dimensionn2. Corollary. The real dimension of the pure state space ofHnis 2n− 2. In fact,n2−((n−1)2+1)=2n−2.{\displaystyle n^{2}-\left((n-1)^{2}+1\right)=2n-2.}Let us apply this to consider the real dimension of anmqubit quantum register. The corresponding Hilbert space has dimension 2m. Corollary.
The real dimension of the pure state space of anm-qubitquantum registeris 2m+1− 2. Mathematically the Bloch sphere for a two-spinor state can be mapped to aRiemann sphereCP1{\displaystyle \mathbb {C} \mathbf {P} ^{1}}, i.e., theprojective Hilbert spaceP(H2){\displaystyle \mathbf {P} (H_{2})}with the 2-dimensional complex Hilbert spaceH2{\displaystyle H_{2}}arepresentation spaceofSO(3).[8]Given a pure state whereα{\displaystyle \alpha }andβ{\displaystyle \beta }are complex numbers which are normalized so that and such that⟨↓|↑⟩=0{\displaystyle \langle \downarrow |\uparrow \rangle =0}and⟨↓|↓⟩=⟨↑|↑⟩=1{\displaystyle \langle \downarrow |\downarrow \rangle =\langle \uparrow |\uparrow \rangle =1}, i.e., such that|↑⟩{\displaystyle \left|\uparrow \right\rangle }and|↓⟩{\displaystyle \left|\downarrow \right\rangle }form a basis and have diametrically opposite representations on the Bloch sphere, then let be their ratio. If the Bloch sphere is thought of as being embedded inR3{\displaystyle \mathbb {R} ^{3}}with its center at the origin and with radius one, then the planez= 0 (which intersects the Bloch sphere at a great circle; the sphere's equator, as it were) can be thought of as anArgand diagram. Plot pointuin this plane — so that inR3{\displaystyle \mathbb {R} ^{3}}it has coordinates(ux,uy,0){\displaystyle (u_{x},u_{y},0)}. Draw a straight line throughuand through the point on the sphere that represents|↓⟩{\displaystyle \left|\downarrow \right\rangle }. (Let (0,0,1) represent|↑⟩{\displaystyle \left|\uparrow \right\rangle }and (0,0,−1) represent|↓⟩{\displaystyle \left|\downarrow \right\rangle }.) This line intersects the sphere at another point besides|↓⟩{\displaystyle \left|\downarrow \right\rangle }. (The only exception is whenu=∞{\displaystyle u=\infty }, i.e., whenα=0{\displaystyle \alpha =0}andβ≠0{\displaystyle \beta \neq 0}.) Call this pointP. Pointuon the planez= 0 is thestereographic projectionof pointPon the Bloch sphere. The vector with tail at the origin and tip atPis the direction in 3-D space corresponding to the spinor|↗⟩{\displaystyle \left|\nearrow \right\rangle }. The coordinates ofPare Formulations of quantum mechanics in terms of pure states are adequate for isolated systems; in general quantum mechanical systems need to be described in terms ofdensity operators. The Bloch sphere parametrizes not only pure states but mixed states for 2-level systems. The density operator describing the mixed-state of a 2-level quantum system (qubit) corresponds to a pointinsidethe Bloch sphere with the following coordinates: wherepi{\displaystyle p_{i}}is the probability of the individual states within the ensemble andxi,yi,zi{\displaystyle x_{i},y_{i},z_{i}}are the coordinates of the individual states (on thesurfaceof Bloch sphere). The set of all points on and inside the Bloch sphere is known as theBloch ball. For states of higher dimensions there is difficulty in extending this to mixed states. The topological description is complicated by the fact that the unitary group does not act transitively on density operators. The orbits moreover are extremely diverse as follows from the following observation: Theorem. SupposeAis a density operator on annlevel quantum mechanical system whose distinct eigenvalues are μ1, ..., μkwith multiplicitiesn1, ...,nk. 
Then the group of unitary operatorsVsuch thatV A V* =Ais isomorphic (as a Lie group) toU(n1)×⋯×U(nk).{\displaystyle U(n_{1})\times \cdots \times U(n_{k}).}In particular the orbit ofAis isomorphic toU(n)/(U(n1)×⋯×U(nk)).{\displaystyle U(n)/\left(U(n_{1})\times \cdots \times U(n_{k})\right).}It is possible to generalize the construction of the Bloch ball to dimensions larger than 2, but the geometry of such a "Bloch body" is more complicated than that of a ball.[9] A useful advantage of the Bloch sphere representation is that the evolution of the qubit state is describable by rotations of the Bloch sphere. The most concise explanation for why this is the case is that theLie algebraof the group of special unitary matricesSU(2){\displaystyle SU(2)}is isomorphic to the Lie algebra of the group of three-dimensional rotationsSO(3){\displaystyle SO(3)}.[10] The rotations of the Bloch sphere about the Cartesian axes in the Bloch basis are given by[11] Ifn^=(nx,ny,nz){\displaystyle {\hat {n}}=(n_{x},n_{y},n_{z})}is a real unit vector in three dimensions, the rotation of the Bloch sphere about this axis is given by:Rn^(θ)=exp⁡(−iθn^⋅σ→/2).{\displaystyle R_{\hat {n}}(\theta )=\exp \left(-i\theta \,{\hat {n}}\cdot {\vec {\sigma }}/2\right).}An interesting thing to note is that this expression is identical under relabelling to the extended Euler formula forquaternions. Ballentine[12]presents an intuitive derivation for the infinitesimal unitary transformation. This is important for understanding why the rotations of Bloch spheres are exponentials of linear combinations of Pauli matrices. Hence a brief treatment on this is given here. A more complete description in a quantum mechanical context can be foundhere. Consider a family of unitary operatorsU{\displaystyle U}representing a rotation about some axis. Since the rotation has one degree of freedom, the operator acts on a field of scalarsS{\displaystyle S}such that: where0,s1,s2∈S.{\displaystyle 0,s_{1},s_{2}\in S.}We define the infinitesimal unitary as the Taylor expansion truncated at second order. By the unitary condition: Hence For this equality to hold true (assumingO(s2){\displaystyle O\left(s^{2}\right)}is negligible) we require This results in a solution of the form:U(s)=e−iKs,{\displaystyle U(s)=e^{-iKs},}whereK{\displaystyle K}is any Hermitian transformation, and is called the generator of the unitary family. Hence Since the Pauli matrices(σx,σy,σz){\displaystyle (\sigma _{x},\sigma _{y},\sigma _{z})}are unitary Hermitian matrices and have eigenvectors corresponding to the Bloch basis,(x^,y^,z^){\displaystyle ({\hat {x}},{\hat {y}},{\hat {z}})}, we can naturally see how a rotation of the Bloch sphere about an arbitrary axisn^{\displaystyle {\hat {n}}}is described byRn^(θ)=exp⁡(−iθn^⋅σ→/2),{\displaystyle R_{\hat {n}}(\theta )=\exp \left(-i\theta \,{\hat {n}}\cdot {\vec {\sigma }}/2\right),}with the rotation generator given byK=n^⋅σ→/2.{\displaystyle K={\hat {n}}\cdot {\vec {\sigma }}/2.}
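A short NumPy check of this correspondence (a sketch under the convention R = exp(−iθ n̂·σ/2) used above; not from the article): the Bloch vector is read off as the Pauli expectations (u, v, w) = (Tr(ρX), Tr(ρY), Tr(ρZ)), and conjugating ρ by a rotation about the z-axis rotates that vector by the same angle about z.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rotor(axis, theta):
    """exp(-i*theta*(axis . sigma)/2), computed in closed form using
    (n . sigma)^2 = I for a unit axis n."""
    n = np.asarray(axis, dtype=float)
    ns = n[0] * X + n[1] * Y + n[2] * Z
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * ns

def bloch(rho):
    """Bloch vector (u, v, w) of a 2x2 density matrix rho."""
    return np.real([np.trace(rho @ P) for P in (X, Y, Z)])

rho = np.array([[0.5, 0.5], [0.5, 0.5]])  # the state |+>, Bloch vector (1,0,0)
print(bloch(rho))                         # ~(1, 0, 0)
U = rotor([0, 0, 1], np.pi / 2)           # quarter turn about the z-axis
print(bloch(U @ rho @ U.conj().T))        # ~(0, 1, 0)
print(bloch(np.eye(2) / 2))               # ~(0, 0, 0): the maximally mixed state
```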
https://en.wikipedia.org/wiki/Bloch_sphere
Inphysics, in the area ofquantum information theory, aGreenberger–Horne–Zeilinger(GHZ)stateis anentangledquantum statethat involves at least three subsystems (particle states,qubits, orqudits). Named for the three authors who first described it, the GHZ state yields measurement outcomes that directly contradict the predictions of every classical localhidden-variable theory. The state has applications inquantum computing. The four-particle version was first studied byDaniel Greenberger,Michael HorneandAnton Zeilingerin 1989.[1]The following yearAbner Shimonyjoined in and they published a three-particle version[2]based on suggestions byN. David Mermin.[3][4]Experimental measurements on such states contradict intuitive notions of locality and causality. GHZ states for large numbers of qubits are theorized to give enhanced performance for metrology compared to other qubit superposition states.[5] The GHZ state is anentangledquantum statefor 3qubitsand it can be written|GHZ⟩=|000⟩+|111⟩2.{\displaystyle |\mathrm {GHZ} \rangle ={\frac {|000\rangle +|111\rangle }{\sqrt {2}}}.}where the0or1values of the qubit correspond to any two physical states. For example the two states may correspond to spin-down and spin-up along some physical axis. In physics applications the state may be written|GHZ⟩=|1,1,1⟩+|−1,−1,−1⟩2.{\displaystyle |\mathrm {GHZ} \rangle ={\frac {|1,1,1\rangle +|-1,-1,-1\rangle }{\sqrt {2}}}.}where the numbering of the states represents spin eigenvalues.[3] Another example[6]of a GHZ state is threephotonsin anentangledstate, with the photons being in asuperpositionof being all horizontallypolarized(HHH) or all vertically polarized (VVV), with respect to somecoordinate system. The GHZ state can be written inbra–ket notationas|GHZ⟩=|HHH⟩+|VVV⟩2.{\displaystyle |\mathrm {GHZ} \rangle ={\frac {|\mathrm {HHH} \rangle +|\mathrm {VVV} \rangle }{\sqrt {2}}}.}Prior to any measurements being made, the polarizations of the photons are indeterminate. If a measurement is made on one of the photons using a two-channelpolarizeraligned with the axes of the coordinate system, each orientation will be observed, with 50% probability. However, the results of measuring all three photons are correlated: all three polarizations are observed along the same axis. The generalized GHZ state is an entangled quantum state ofM> 2subsystems. If each system has dimensiond{\displaystyle d}, i.e., the localHilbert spaceis isomorphic toCd{\displaystyle \mathbb {C} ^{d}}, then the total Hilbert space of anM{\displaystyle M}-partite system isHtot=(Cd)⊗M{\displaystyle {\mathcal {H}}_{\rm {tot}}=(\mathbb {C} ^{d})^{\otimes M}}. This GHZ state is also called anM{\displaystyle M}-partite qudit GHZ state. Its formula as a tensor product is, in the case of each of the subsystems being two-dimensional, that is for a collection ofMqubits:|GHZ⟩=|0⟩⊗M+|1⟩⊗M2.{\displaystyle |\mathrm {GHZ} \rangle ={\frac {|0\rangle ^{\otimes M}+|1\rangle ^{\otimes M}}{\sqrt {2}}}.}In the language ofquantum computation, the polarization state of each photon is aqubit, the basis of which can be chosen to be{|H⟩,|V⟩}{\displaystyle \{|\mathrm {H} \rangle ,|\mathrm {V} \rangle \}}. With appropriately chosenphase factorsfor|H⟩{\displaystyle |\mathrm {H} \rangle }and|V⟩{\displaystyle |\mathrm {V} \rangle }, both types of measurements used in the experiment becomePauli measurements, with the two possible results represented as +1 and −1 respectively:[citation needed] A combination of those measurements on each of the three qubits can be regarded as a destructivemulti-qubit Paulimeasurement, the result of which is the product of each single-qubit Pauli measurement.
For example, the combination "circular polarizer on photons 1 and 2, 45° linear polarizer on photon 3" corresponds to aY1Y2X3{\displaystyle Y_{1}Y_{2}X_{3}}measurement, and the four possible result combinations (RL+, LR+, RR−, LL−) are exactly the ones corresponding to an overall result of −1. The quantum mechanical predictions of the GHZ experiment can then be summarized asX1Y2Y3=Y1X2Y3=Y1Y2X3=−1andX1X2X3=+1,{\displaystyle X_{1}Y_{2}Y_{3}=Y_{1}X_{2}Y_{3}=Y_{1}Y_{2}X_{3}=-1\quad {\text{and}}\quad X_{1}X_{2}X_{3}=+1,}which is consistent in quantum mechanics because all these multi-qubit Paulis commute with each other, and due to theanticommutativitybetweenX{\displaystyle X}andY{\displaystyle Y}. These results lead to a contradiction in any local hidden variable theory, where each measurement must have definite (classical) valuesxi,yi=±1{\displaystyle x_{i},y_{i}=\pm 1}determined by hidden variables, becausex1x2x3=(x1y2y3)(y1x2y3)(y1y2x3){\displaystyle x_{1}x_{2}x_{3}=(x_{1}y_{2}y_{3})(y_{1}x_{2}y_{3})(y_{1}y_{2}x_{3})}must equal +1, not −1.[3] The results of actual experiments agree with the predictions of quantum mechanics, not those of local realism.[7] There is no standard measure of multi-partite entanglement because different, not mutually convertible, types of multi-partite entanglement exist. Nonetheless, many measures define the GHZ state to be amaximally entangled state.[citation needed] Another important property of the GHZ state is that taking thepartial traceover one of the three systems yieldsTr3⁡[|GHZ⟩⟨GHZ|]=|00⟩⟨00|+|11⟩⟨11|2,{\displaystyle \operatorname {Tr} _{3}\left[|\mathrm {GHZ} \rangle \langle \mathrm {GHZ} |\right]={\frac {|00\rangle \langle 00|+|11\rangle \langle 11|}{2}},}which is an unentangledmixed state. It has certain two-particle (qubit) correlations, but these areof a classical nature. On the other hand, if we were to measure one of the subsystems in such a way that the measurement distinguishes between the states 0 and 1, we will leave behind either|00⟩{\displaystyle |00\rangle }or|11⟩{\displaystyle |11\rangle }, which are unentangled pure states. This is unlike theW state, which leaves bipartite entanglements even when we measure one of its subsystems.[citation needed] A pure state|ψ⟩{\displaystyle |\psi \rangle }ofN{\displaystyle N}parties is calledbiseparable, if one can find a partition of the parties in two nonempty disjoint subsetsA{\displaystyle A}andB{\displaystyle B}withA∪B={1,…,N}{\displaystyle A\cup B=\{1,\dots ,N\}}such that|ψ⟩=|ϕ⟩A⊗|γ⟩B{\displaystyle |\psi \rangle =|\phi \rangle _{A}\otimes |\gamma \rangle _{B}}, i.e.|ψ⟩{\displaystyle |\psi \rangle }is aproduct statewith respect to the partitionA|B{\displaystyle A|B}. The GHZ state is non-biseparable and is the representative of one of the two non-biseparable classes of 3-qubit states which cannot be transformed (not even probabilistically) into each other bylocal quantum operations, the other being theW state,|W⟩=(|001⟩+|010⟩+|100⟩)/3{\displaystyle |\mathrm {W} \rangle =(|001\rangle +|010\rangle +|100\rangle )/{\sqrt {3}}}.[8]: 903Thus|GHZ⟩{\displaystyle |\mathrm {GHZ} \rangle }and|W⟩{\displaystyle |\mathrm {W} \rangle }represent two very different kinds of entanglement for three or more particles.[9]The W state is, in a certain sense "less entangled" than the GHZ state; however, that entanglement is, in a sense, more robust against single-particle measurements, in that, for anN-qubit W state, an entangled (N− 1)-qubit state remains after a single-particle measurement. By contrast, certain measurements on the GHZ state collapse it into a mixture or a pure state. Experiments on the GHZ state lead to striking non-classical correlations (1989). Particles prepared in this state lead to a version ofBell's theorem, which shows the internal inconsistency of the notion of elements-of-reality introduced in the famousEinstein–Podolsky–Rosenarticle.
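A small NumPy check of these predictions (an illustrative sketch, not from the article; it uses the sign convention of the 3-qubit state written above): the GHZ state is a +1 eigenstate of X⊗X⊗X, while the three operators with two Y's each give −1, which is exactly the combination no assignment of classical values ±1 can reproduce.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

def kron(*ops):
    """Tensor product of a sequence of single-qubit operators."""
    out = np.array([[1]], dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

# |GHZ> = (|000> + |111>)/sqrt(2)
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

for ops, label in [((X, X, X), "XXX"), ((X, Y, Y), "XYY"),
                   ((Y, X, Y), "YXY"), ((Y, Y, X), "YYX")]:
    val = np.real(ghz.conj() @ kron(*ops) @ ghz)
    print(label, round(val))  # XXX -> 1, the others -> -1
```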
The first laboratory observation of GHZ correlations was by the group ofAnton Zeilinger(1998), who was awarded a share of the 2022 Nobel Prize in Physics for this work.[10]Many more accurate observations followed. The correlations can be utilized in somequantum informationtasks. These include multipartnerquantum cryptography(1998) andcommunication complexitytasks (1997, 2004). Although a measurement of the third particle of the GHZ state that distinguishes the two states results in an unentangled pair, a measurement along an orthogonal direction can leave behind a maximally entangledBell state. This is illustrated below. The 3-qubit GHZ state can be written as|GHZ⟩=12(|Φ+⟩|+⟩+|Φ−⟩|−⟩),{\displaystyle |\mathrm {GHZ} \rangle ={\frac {1}{\sqrt {2}}}\left(|\Phi ^{+}\rangle |+\rangle +|\Phi ^{-}\rangle |-\rangle \right),}where the third particle is written as a superposition in theXbasis (as opposed to theZbasis) as|0⟩=(|+⟩+|−⟩)/2{\displaystyle |0\rangle =(|+\rangle +|-\rangle )/{\sqrt {2}}}and|1⟩=(|+⟩−|−⟩)/2{\displaystyle |1\rangle =(|+\rangle -|-\rangle )/{\sqrt {2}}}. A measurement of the GHZ state along theXbasis for the third particle then yields either|Φ+⟩=(|00⟩+|11⟩)/2{\displaystyle |\Phi ^{+}\rangle =(|00\rangle +|11\rangle )/{\sqrt {2}}}, if|+⟩{\displaystyle |+\rangle }was measured, or|Φ−⟩=(|00⟩−|11⟩)/2{\displaystyle |\Phi ^{-}\rangle =(|00\rangle -|11\rangle )/{\sqrt {2}}}, if|−⟩{\displaystyle |-\rangle }was measured. In the latter case, the phase can be rotated by applying aZquantum gateto give|Φ+⟩{\displaystyle |\Phi ^{+}\rangle }, while in the former case, no additional transformations are applied. In either case, the result of the operations is a maximally entangled Bell state. This example illustrates that measurement of the GHZ state is more subtle than it first appears: a measurement along an orthogonal direction, followed by a quantum transform that depends on the measurement outcome, can leave behind amaximally entangled state. GHZ states are used in several protocols in quantum communication and cryptography, for example, in secret sharing[11]or in thequantum Byzantine agreement.
https://en.wikipedia.org/wiki/Greenberger%E2%80%93Horne%E2%80%93Zeilinger_state
Theground stateof aquantum-mechanicalsystem is itsstationary stateof lowestenergy; the energy of the ground state is known as thezero-point energyof the system. Anexcited stateis any state with energy greater than the ground state. Inquantum field theory, the ground state is usually called thevacuum stateor thevacuum. If more than one ground state exists, they are said to bedegenerate. Many systems have degenerate ground states. Degeneracy occurs whenever there exists aunitary operatorthat acts non-trivially on a ground state andcommuteswith theHamiltonianof the system. According to thethird law of thermodynamics, a system atabsolute zerotemperatureexists in its ground state; thus, itsentropyis determined by the degeneracy of the ground state. Many systems, such as a perfectcrystal lattice, have a unique ground state and therefore have zero entropy at absolute zero. It is also possible for the highest excited state to haveabsolute zerotemperature for systems that exhibitnegative temperature. In onedimension, the ground state of theSchrödinger equationcan beprovento have nonodes.[1] Consider theaverage energyof a state with a node atx= 0; i.e.,ψ(0) = 0. The average energy in this state would be ⟨ψ|H|ψ⟩=∫dx(−ℏ22mψ∗d2ψdx2+V(x)|ψ(x)|2),{\displaystyle \langle \psi |H|\psi \rangle =\int dx\,\left(-{\frac {\hbar ^{2}}{2m}}\psi ^{*}{\frac {d^{2}\psi }{dx^{2}}}+V(x)|\psi (x)|^{2}\right),} whereV(x)is the potential. Withintegration by parts: ∫abψ∗d2ψdx2dx=[ψ∗dψdx]ab−∫abdψ∗dxdψdxdx=[ψ∗dψdx]ab−∫ab|dψdx|2dx{\displaystyle \int _{a}^{b}\psi ^{*}{\frac {d^{2}\psi }{dx^{2}}}dx=\left[\psi ^{*}{\frac {d\psi }{dx}}\right]_{a}^{b}-\int _{a}^{b}{\frac {d\psi ^{*}}{dx}}{\frac {d\psi }{dx}}dx=\left[\psi ^{*}{\frac {d\psi }{dx}}\right]_{a}^{b}-\int _{a}^{b}\left|{\frac {d\psi }{dx}}\right|^{2}dx} Hence in case that[ψ∗dψdx]−∞∞=limb→∞ψ∗(b)dψdx(b)−lima→−∞ψ∗(a)dψdx(a){\displaystyle \left[\psi ^{*}{\frac {d\psi }{dx}}\right]_{-\infty }^{\infty }=\lim _{b\to \infty }\psi ^{*}(b){\frac {d\psi }{dx}}(b)-\lim _{a\to -\infty }\psi ^{*}(a){\frac {d\psi }{dx}}(a)}is equal tozero, one gets:−ℏ22m∫−∞∞ψ∗d2ψdx2dx=ℏ22m∫−∞∞|dψdx|2dx{\displaystyle -{\frac {\hbar ^{2}}{2m}}\int _{-\infty }^{\infty }\psi ^{*}{\frac {d^{2}\psi }{dx^{2}}}dx={\frac {\hbar ^{2}}{2m}}\int _{-\infty }^{\infty }\left|{\frac {d\psi }{dx}}\right|^{2}dx} Now, consider a smallintervalaroundx=0{\displaystyle x=0}; i.e.,x∈[−ε,ε]{\displaystyle x\in [-\varepsilon ,\varepsilon ]}. Take a new (deformed)wave functionψ'(x)to be defined asψ′(x)=ψ(x){\displaystyle \psi '(x)=\psi (x)}, forx<−ε{\displaystyle x<-\varepsilon }; andψ′(x)=−ψ(x){\displaystyle \psi '(x)=-\psi (x)}, forx>ε{\displaystyle x>\varepsilon }; andconstantforx∈[−ε,ε]{\displaystyle x\in [-\varepsilon ,\varepsilon ]}. Ifε{\displaystyle \varepsilon }is small enough, this is always possible to do, so thatψ'(x)is continuous. Assumingψ(x)≈−cx{\displaystyle \psi (x)\approx -cx}aroundx=0{\displaystyle x=0}, one may writeψ′(x)=N{|ψ(x)|,|x|>ε,cε,|x|≤ε,{\displaystyle \psi '(x)=N{\begin{cases}|\psi (x)|,&|x|>\varepsilon ,\\c\varepsilon ,&|x|\leq \varepsilon ,\end{cases}}}whereN=11+43|c|2ε3{\displaystyle N={\frac {1}{\sqrt {1+{\frac {4}{3}}|c|^{2}\varepsilon ^{3}}}}}is the norm. Note that the kinetic-energy densities holdℏ22m|dψ′dx|2<ℏ22m|dψdx|2{\textstyle {\frac {\hbar ^{2}}{2m}}\left|{\frac {d\psi '}{dx}}\right|^{2}<{\frac {\hbar ^{2}}{2m}}\left|{\frac {d\psi }{dx}}\right|^{2}}everywhere because of the normalization. 
More significantly, the averagekinetic energyis lowered byO(ε){\displaystyle O(\varepsilon )}by the deformation toψ'. Now, consider thepotential energy. For definiteness, let us chooseV(x)≥0{\displaystyle V(x)\geq 0}. Then it is clear that, outside the intervalx∈[−ε,ε]{\displaystyle x\in [-\varepsilon ,\varepsilon ]}, the potential energy density is smaller for theψ'because|ψ′|<|ψ|{\displaystyle |\psi '|<|\psi |}there. On the other hand, in the intervalx∈[−ε,ε]{\displaystyle x\in [-\varepsilon ,\varepsilon ]}we haveVavgε′=∫−εεdxV(x)|ψ′|2=ε2|c|21+43|c|2ε3∫−εεdxV(x)≃2ε3|c|2V(0)+⋯,{\displaystyle {V_{\text{avg}}^{\varepsilon }}'=\int _{-\varepsilon }^{\varepsilon }dx\,V(x)|\psi '|^{2}={\frac {\varepsilon ^{2}|c|^{2}}{1+{\frac {4}{3}}|c|^{2}\varepsilon ^{3}}}\int _{-\varepsilon }^{\varepsilon }dx\,V(x)\simeq 2\varepsilon ^{3}|c|^{2}V(0)+\cdots ,}which holds to orderε3{\displaystyle \varepsilon ^{3}}. However, the contribution to the potential energy from this region for the stateψwith a node isVavgε=∫−εεdxV(x)|ψ|2=|c|2∫−εεdxx2V(x)≃23ε3|c|2V(0)+⋯,{\displaystyle V_{\text{avg}}^{\varepsilon }=\int _{-\varepsilon }^{\varepsilon }dx\,V(x)|\psi |^{2}=|c|^{2}\int _{-\varepsilon }^{\varepsilon }dx\,x^{2}V(x)\simeq {\frac {2}{3}}\varepsilon ^{3}|c|^{2}V(0)+\cdots ,}lower, but still of the same lower orderO(ε3){\displaystyle O(\varepsilon ^{3})}as for the deformed stateψ', and subdominant to the lowering of the average kinetic energy. Therefore, the potential energy is unchanged up to orderε2{\displaystyle \varepsilon ^{2}}, if we deform the stateψ{\displaystyle \psi }with a node into a stateψ'without a node, and the change can be ignored. We can therefore remove all nodes and reduce the energy byO(ε){\displaystyle O(\varepsilon )}, which implies thatψcannot be the ground state. Thus the ground-state wave function cannot have a node. This completes the proof. (The average energy may then be further lowered by eliminating undulations, to the variational absolute minimum.) As the ground state has no nodes it isspatiallynon-degenerate, i.e. there are no twostationary quantum stateswith theenergy eigenvalueof the ground state (let's name itEg{\displaystyle E_{g}}) and the samespin statethat would differ only in their position-spacewave functions.[1] The reasoning goes bycontradiction: if the ground state were degenerate, then there would be two orthonormal[2]stationary states|ψ1⟩{\displaystyle \left|\psi _{1}\right\rangle }and|ψ2⟩{\displaystyle \left|\psi _{2}\right\rangle }— later on represented by their complex-valued position-space wave functionsψ1(x,t)=ψ1(x,0)⋅e−iEgt/ℏ{\displaystyle \psi _{1}(x,t)=\psi _{1}(x,0)\cdot e^{-iE_{g}t/\hbar }}andψ2(x,t)=ψ2(x,0)⋅e−iEgt/ℏ{\displaystyle \psi _{2}(x,t)=\psi _{2}(x,0)\cdot e^{-iE_{g}t/\hbar }}— and anysuperposition|ψ3⟩:=c1|ψ1⟩+c2|ψ2⟩{\displaystyle \left|\psi _{3}\right\rangle :=c_{1}\left|\psi _{1}\right\rangle +c_{2}\left|\psi _{2}\right\rangle }with the complex numbersc1,c2{\displaystyle c_{1},c_{2}}fulfilling the condition|c1|2+|c2|2=1{\displaystyle |c_{1}|^{2}+|c_{2}|^{2}=1}would also be such a state, i.e. would have the same energy eigenvalueEg{\displaystyle E_{g}}and the same spin state.
Now letx0{\displaystyle x_{0}}be an arbitrary point (where both wave functions are defined) and set:c1=ψ2(x0,0)a{\displaystyle c_{1}={\frac {\psi _{2}(x_{0},0)}{a}}}andc2=−ψ1(x0,0)a{\displaystyle c_{2}={\frac {-\psi _{1}(x_{0},0)}{a}}}witha=|ψ1(x0,0)|2+|ψ2(x0,0)|2>0{\displaystyle a={\sqrt {|\psi _{1}(x_{0},0)|^{2}+|\psi _{2}(x_{0},0)|^{2}}}>0}(which is positive by the premise that the wave functions have no nodes). Therefore, the position-space wave function of|ψ3⟩{\displaystyle \left|\psi _{3}\right\rangle }isψ3(x,t)=c1ψ1(x,t)+c2ψ2(x,t)=1a(ψ2(x0,0)⋅ψ1(x,0)−ψ1(x0,0)⋅ψ2(x,0))⋅e−iEgt/ℏ.{\displaystyle \psi _{3}(x,t)=c_{1}\psi _{1}(x,t)+c_{2}\psi _{2}(x,t)={\frac {1}{a}}\left(\psi _{2}(x_{0},0)\cdot \psi _{1}(x,0)-\psi _{1}(x_{0},0)\cdot \psi _{2}(x,0)\right)\cdot e^{-iE_{g}t/\hbar }.} Henceψ3(x0,t)=1a(ψ2(x0,0)⋅ψ1(x0,0)−ψ1(x0,0)⋅ψ2(x0,0))⋅e−iEgt/ℏ=0{\displaystyle \psi _{3}(x_{0},t)={\frac {1}{a}}\left(\psi _{2}(x_{0},0)\cdot \psi _{1}(x_{0},0)-\psi _{1}(x_{0},0)\cdot \psi _{2}(x_{0},0)\right)\cdot e^{-iE_{g}t/\hbar }=0}for allt{\displaystyle t}. But⟨ψ3|ψ3⟩=|c1|2+|c2|2=1{\displaystyle \left\langle \psi _{3}|\psi _{3}\right\rangle =|c_{1}|^{2}+|c_{2}|^{2}=1}, soψ3is a normalized ground-state wave function that has a node atx0{\displaystyle x_{0}}, in contradiction to the premise that the ground-state wave function cannot have a node. Note that the ground state could be degenerate because of differentspin stateslike|↑⟩{\displaystyle \left|\uparrow \right\rangle }and|↓⟩{\displaystyle \left|\downarrow \right\rangle }while having the same position-space wave function: Any superposition of these states would create a mixed spin state but leave the spatial part (as a common factor of both) unaltered.
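The no-node property just proved can be checked numerically. The following Python sketch (an added illustration, with a double-well potential chosen as an arbitrary assumption) diagonalizes a finite-difference Hamiltonian in units ħ = m = 1 and counts sign changes: the ground state has none, while the first excited state has exactly one.

```python
import numpy as np

# Finite-difference Hamiltonian H = -(1/2) d^2/dx^2 + V(x) on a grid,
# for an illustrative double-well potential V(x) = x^4 - 2x^2.
n, L = 1001, 10.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
V = x**4 - 2 * x**2

H = (np.diag(1.0 / dx**2 + V)
     + np.diag(-0.5 / dx**2 * np.ones(n - 1), 1)
     + np.diag(-0.5 / dx**2 * np.ones(n - 1), -1))
energies, states = np.linalg.eigh(H)

def count_nodes(psi):
    # Ignore the exponentially small tail, where numerical noise dominates.
    core = psi[np.abs(psi) > 1e-6 * np.abs(psi).max()]
    return int(np.sum(np.sign(core[:-1]) != np.sign(core[1:])))

print(count_nodes(states[:, 0]))  # 0: the ground state has no node
print(count_nodes(states[:, 1]))  # 1: the first excited state has one
```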
https://en.wikipedia.org/wiki/Ground_state
Quantum mechanicsis the study ofmatterand its interactions withenergyon thescaleofatomicandsubatomic particles. By contrast,classical physicsexplains matter and energy only on a scale familiar to human experience, including the behavior of astronomical bodies such as the Moon. Classical physics is still used in much of modern science and technology. However, towards the end of the 19th century, scientists discovered phenomena in both the large (macro) and the small (micro) worlds that classical physics could not explain.[1]The desire to resolve inconsistencies between observed phenomena and classical theory led to a revolution in physics, a shift in the originalscientific paradigm:[2]the development ofquantum mechanics. Many aspects of quantum mechanics are counterintuitive[3]and can seemparadoxicalbecause they describe behavior quite different from that seen at larger scales. In the words of quantum physicistRichard Feynman, quantum mechanics deals with "nature as She is—absurd".[4]Features of quantum mechanics often defy simple explanations in everyday language. One example of this is theuncertainty principle: precise measurements of position cannot be combined with precise measurements of velocity. Another example isentanglement: a measurement made on one particle (such as anelectronthat is measured to havespin'up') will correlate with a measurement on a second particle (an electron will be found to have spin 'down') if the two particles have a shared history. This will apply even if it is impossible for the result of the first measurement to have been transmitted to the second particle before the second measurement takes place. Quantum mechanics helps people understandchemistry, because it explains how atoms interact with each other and formmolecules. Many remarkable phenomena can be explained using quantum mechanics, likesuperfluidity. For example, if liquidheliumcooled to a temperature nearabsolute zerois placed in a container, it spontaneously flows up and over the rim of its container; this is an effect which cannot be explained by classical physics. James C. Maxwell'sunification of the equationsgoverning electricity, magnetism, and light in the late 19th century led to experiments on the interaction of light and matter. Some of these experiments had aspects which could not be explained until quantum mechanics emerged in the early part of the 20th century.[5] The seeds of the quantum revolution appear in the discovery byJ.J. Thomsonin 1897 thatcathode rayswere not continuous but "corpuscles" (electrons). Electrons had been named just six years earlier as part of the emerging theory ofatoms. In 1900,Max Planck, unconvinced by theatomic theory, discovered that he needed discrete entities like atoms or electrons to explainblack-body radiation.[6] Very hot – red hot or white hot – objects look similar when heated to the same temperature. This look results from a common curve of light intensity at different frequencies (colors), which is called black-body radiation. White hot objects have intensity across many colors in the visible range. The frequencies just below the visible range areinfrared light, which is also felt as heat. Continuous wave theories of light and matter cannot explain the black-body radiation curve. Planck spread the heat energy among individual "oscillators" of an undefined character but with discrete energy capacity; this model explained black-body radiation. At the time, electrons, atoms, and discrete oscillators were all exotic ideas to explain exotic phenomena.
But in 1905Albert Einsteinproposed that light was also corpuscular, consisting of "energy quanta", in contradiction to the established science of light as a continuous wave, stretching back a hundred years toThomas Young's work ondiffraction. Einstein's revolutionary proposal started by reanalyzing Planck's black-body theory, arriving at the same conclusions by using the new "energy quanta". Einstein then showed how energy quanta connected to Thomson's electron. In 1902,Philipp Lenarddirected light from an arc lamp onto freshly cleaned metal plates housed in an evacuated glass tube. He measured the electric current coming off the metal plate, at higher and lower intensities of light and for different metals. Lenard showed that the amount of current – the number of electrons – depended on the intensity of the light, but that the velocity of these electrons did not depend on intensity. This is thephotoelectric effect. The continuous wave theories of the time predicted that more light intensity would accelerate the same amount of current to higher velocity, contrary to this experiment. Einstein's energy quanta explained the increase in current: one electron is ejected for each quantum: more quanta mean more electrons.[6]: 23 Einstein then predicted that the electron velocity would increase in direct proportion to the light frequency above a fixed value that depended upon the metal. Here the idea is that energy in energy-quanta depends upon the light frequency; the energy transferred to the electron comes in proportion to the light frequency. The type of metal gives abarrier, the fixed value, that the electrons must climb over to exit their atoms, to be emitted from the metal surface and be measured. Ten years elapsed before Millikan's definitive experiment[7]verified Einstein's prediction. During that time many scientists rejected the revolutionary idea of quanta.[8]But Planck's and Einstein's concept was in the air and soon began to affect other physics and quantum theories. Experiments with light and matter in the late 1800s uncovered a reproducible but puzzling regularity. When light was shone through purified gases, certain frequencies (colors) did not pass. These dark absorption 'lines' followed a distinctive pattern: the gaps between the lines decreased steadily. By 1889, theRydberg formulapredicted the lines for hydrogen gas using only a constant number and the integers to index the lines.[5]: v1:376The origin of this regularity was unknown. Solving this mystery would eventually become the first major step toward quantum mechanics. Throughout the 19th century evidence grew for theatomicnature of matter. With Thomson's discovery of the electron in 1897, scientists began the search for a model of the interior of the atom. Thomson proposednegative electrons swimming in a pool of positive charge. Between 1908 and 1911,Rutherfordshowed that the positive part was only 1/3000th of the diameter of the atom.[6]: 26 Models of "planetary" electrons orbiting a nuclear "Sun" were proposed, but could not explain why the electron does not simply fall into the positive charge. In 1913Niels BohrandErnest Rutherfordconnected the new atom models to the mystery of the Rydberg formula: the orbital radii of the electrons were constrained and the resulting energy differences matched the energy differences in the absorption lines.
This meant that absorption and emission of light from atoms were energy quantized: only specific energies that matched the difference in orbital energy would be emitted or absorbed.[6]: 31 Trading one mystery – the regular pattern of the Rydberg formula – for another mystery – constraints on electron orbits – might not seem like a big advance, but the new atom model summarized many other experimental findings. The quantization of the photoelectric effect and now the quantization of the electron orbits set the stage for the final revolution. Throughout both the early and the modern eras of quantum mechanics, the concept that classical mechanics must be valid macroscopically constrained possible quantum models. This concept was formalized by Bohr in 1923 as thecorrespondence principle. It requires quantum theory to converge to classical limits.[9]: 29A related concept isEhrenfest's theorem, which shows that the average values obtained from quantum mechanics (e.g. position and momentum) obey classical laws.[10] In 1922Otto SternandWalther Gerlachdemonstratedthat the magnetic properties of silver atoms defy classical explanation, the work contributing to Stern’s 1943Nobel Prize in Physics. They fired a beam of silver atoms through a magnetic field. According to classical physics, the atoms should have emerged in a spray, with a continuous range of directions. Instead, the beam separated into two, and only two, diverging streams of atoms.[11]Unlike the other quantum effects known at the time, this striking result involves the state of a single atom.[5]: v2:130In 1927, T.E. Phipps and J.B. Taylor obtained a similar, but less pronounced effect usinghydrogenatoms in theirground state, thereby eliminating any doubts that may have been caused by the use ofsilveratoms.[12] In 1924, Wolfgang Pauli called it "two-valuedness not describable classically" and associated it with electrons in the outermost shell.[13]The experiments led to the formulation of its theory, described in 1925 as arising from the spin of the electron bySamuel GoudsmitandGeorge Uhlenbeck, under the advice ofPaul Ehrenfest.[14] In 1924Louis de Broglieproposed[15]that electrons in an atom are constrained not in "orbits" but as standing waves. In detail his solution did not work, but his hypothesis – that the electron "corpuscle" moves in the atom as a wave – spurredErwin Schrödingerto develop awave equationfor electrons; when applied to hydrogen the Rydberg formula was accurately reproduced.[6]: 65 Max Born's 1924 paper"Zur Quantenmechanik"was the first use of the words "quantum mechanics" in print.[16][17]His later work included developing quantum collision models; in a footnote to a 1926 paper he proposed theBorn ruleconnecting theoretical models to experiment.[18] In 1927 at Bell Labs,Clinton DavissonandLester Germerfiredslow-movingelectronsat acrystallinenickeltarget which showed a diffraction pattern,[19][20][21][22]indicating the wave nature of the electron; the theory was fully explained byHans Bethe.[23]A similar experiment byGeorge Paget Thomsonand Alexander Reid, firing electrons at thin celluloid foils and later metal films and observing rings, independently demonstrated thematter wavenature of electrons.[24] In 1928Paul Diracpublished hisrelativistic wave equationsimultaneously incorporatingrelativity, predictinganti-matter, and providing a complete theory for the Stern–Gerlach result.[6]: 131These successes launched a new fundamental understanding of our world at small scale:quantum mechanics.
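The two quantization rules described above lend themselves to a direct numerical check. Below is a minimal Python sketch (an illustration added here, not from the article) evaluating Einstein's photoelectric relation E = hf − W and the Rydberg formula for the visible (Balmer) lines of hydrogen; the work function of about 4.3 eV is a rough, assumed value for zinc, used only for illustration.

```python
# Illustrative constants; the work function is an assumed figure for zinc.
H_PLANCK = 6.626e-34        # Planck constant, J*s
EV = 1.602e-19              # joules per electronvolt
RYDBERG = 1.0973731568e7    # Rydberg constant, 1/m

def photoelectron_energy_eV(frequency_hz, work_function_eV=4.3):
    """Max kinetic energy of an ejected electron, or None below threshold."""
    e_kin = H_PLANCK * frequency_hz - work_function_eV * EV
    return e_kin / EV if e_kin > 0 else None

def hydrogen_line_nm(n1, n2):
    """Wavelength (nm) of the hydrogen line for the transition n2 -> n1."""
    return 1e9 / (RYDBERG * (1.0 / n1**2 - 1.0 / n2**2))

print(photoelectron_energy_eV(5e14))   # None: visible light, below threshold
print(photoelectron_energy_eV(2e15))   # ~3.97 eV: ultraviolet ejects electrons

for n2 in range(3, 7):                 # Balmer series (n1 = 2)
    print(n2, round(hydrogen_line_nm(2, n2), 1))  # 656.1, 486.0, 433.9, 410.1
```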
Planck and Einstein started the revolution with quanta that broke down the continuous models of matter and light. Twenty years later "corpuscles" like electrons came to be modeled as continuous waves. This result came to be called wave-particle duality, one of the iconic ideas that, along with the uncertainty principle, set quantum mechanics apart from older models of physics. In 1923Comptondemonstrated that the Planck-Einstein energy quanta from light also had momentum; three years later the "energy quanta" got a new name, "photon".[25]Despite its role in almost all stages of the quantum revolution, no explicit model for light quanta existed until 1927 whenPaul Diracbegan work on a quantum theory of radiation[26]that becamequantum electrodynamics. Over the following decades this work evolved intoquantum field theory, the basis for modernquantum opticsandparticle physics. The concept of wave–particle duality says that neither the classical concept of "particle" nor of "wave" can fully describe the behavior of quantum-scale objects, either photons or matter. Wave–particle duality is an example of theprinciple of complementarityin quantum physics.[27][28][29][30][31]An elegant example of wave-particle duality is the double-slit experiment. In the double-slit experiment, as originally performed byThomas Youngin 1803,[32]and thenAugustin Fresnela decade later,[32]a beam of light is directed through two narrow, closely spaced slits, producing aninterference patternof light and dark bands on a screen. The same behavior can be demonstrated in water waves: the double-slit experiment was seen as a demonstration of the wave nature of light. Variations of the double-slit experiment have been performed using electrons, atoms, and even large molecules,[33][34]and the same type of interference pattern is seen. Thus it has been demonstrated that allmatterpossesses wave characteristics. If the source intensity is turned down, the same interference pattern will slowly build up, one "count" or particle (e.g. photon or electron) at a time. The quantum system acts as a wave when passing through the double slits, but as a particle when it is detected. This is a typical feature of quantum complementarity: a quantum system acts as a wave in an experiment to measure its wave-like properties, and like a particle in an experiment to measure its particle-like properties. The point on the detector screen where any individual particle shows up is the result of a random process. However, the distribution pattern of many individual particles mimics the diffraction pattern produced by waves. Suppose it is desired to measure the position and speed of an object—for example, a car going through a radar speed trap. It can be assumed that the car has a definite position and speed at a particular moment in time. How accurately these values can be measured depends on the quality of the measuring equipment. If the precision of the measuring equipment is improved, it provides a result closer to the true value. It might be assumed that the speed of the car and its position could be operationally defined and measured simultaneously, as precisely as might be desired.
In 1927, Heisenberg proved that this last assumption is not correct.[36]Quantum mechanics shows that certain pairs of physical properties, for example, position and speed, cannot be simultaneously measured, nor defined in operational terms, to arbitrary precision: the more precisely one property is measured, or defined in operational terms, the less precisely can the other be thus treated. This statement is known as theuncertainty principle. The uncertainty principle is not only a statement about the accuracy of our measuring equipment but, more deeply, is about the conceptual nature of the measured quantities—the assumption that the car had simultaneously defined position and speed does not work in quantum mechanics. On a scale of cars and people, these uncertainties are negligible, but when dealing with atoms and electrons they become critical.[37] Heisenberg gave, as an illustration, the measurement of the position andmomentumof an electron using a photon of light. In measuring the electron's position, the higher the frequency of the photon, the more accurate is the measurement of the position of the impact of the photon with the electron, but the greater is the disturbance of the electron. This is because from the impact with the photon, the electron absorbs a random amount of energy, rendering the measurement obtained of its momentum increasingly uncertain, for one is necessarily measuring its post-impact disturbed momentum from the collision products and not its original momentum (the momentum that was to be measured simultaneously with the position). With a photon of lower frequency, the disturbance (and hence uncertainty) in the momentum is less, but so is the accuracy of the measurement of the position of the impact.[38] At the heart of the uncertainty principle is the fact that for any mathematical analysis in the position and velocity domains, achieving a sharper (more precise) curve in the position domain can only be done at the expense of a more gradual (less precise) curve in the speed domain, and vice versa. More sharpness in the position domain requires contributions from more frequencies in the speed domain to create the narrower curve, and vice versa. It is a fundamental tradeoff inherent in any such related orcomplementarymeasurements, but is only really noticeable at the smallest (Planck) scale, near the size ofelementary particles. The uncertainty principle shows mathematically that the product of the uncertainty in the position andmomentumof a particle (momentum is velocity multiplied by mass) could never be less than a certain value, and that this value is related to thePlanck constant. Wave function collapsemeans that a measurement has forced or converted a quantum (probabilistic or potential) state into a definite measured value. This phenomenon is only seen in quantum mechanics rather than classical mechanics. For example, before a photon actually "shows up" on a detection screen it can be described only with a set of probabilities for where it might show up. When it does appear, for instance in theCCDof an electronic camera, the time and space where it interacted with the device are known within very tight limits. However, the photon has disappeared in the process of being captured (measured), and its quantumwave functionhas disappeared with it. In its place, some macroscopic physical change in the detection screen has appeared, e.g., an exposed spot in a sheet of photographic film, or a change in electric potential in some cell of a CCD.
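The sharpness tradeoff between the position and frequency domains described above is a general property of Fourier transforms and can be seen numerically. The following Python sketch (an added illustration, in arbitrary units) computes the spread of a Gaussian wave packet and of its Fourier transform: narrowing one broadens the other, and the product of spreads stays at its minimum value of 1/2.

```python
import numpy as np

def spreads(sigma_x, n=2**14, L=200.0):
    """Return the RMS widths of a Gaussian packet in x and of its transform in k."""
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    psi = np.exp(-x**2 / (4 * sigma_x**2))        # Gaussian wave packet
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2))      # discrete normalization
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)    # wavenumber grid
    phi = np.fft.fft(psi)
    phi /= np.sqrt(np.sum(np.abs(phi) ** 2))
    dx = np.sqrt(np.sum(x**2 * np.abs(psi) ** 2))  # spread in position
    dk = np.sqrt(np.sum(k**2 * np.abs(phi) ** 2))  # spread in wavenumber
    return dx, dk

for s in (0.5, 1.0, 2.0):
    dx, dk = spreads(s)
    print(round(dx, 3), round(dk, 3), round(dx * dk, 3))  # product stays 0.5
```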
Because of the uncertainty principle, statements about both the position and momentum of particles can assign only aprobabilitythat the position or momentum has some numerical value. Therefore, it is necessary to formulate clearly the difference between the state of something indeterminate, such as an electron in a probability cloud, and the state of something having a definite value. When an object can definitely be "pinned-down" in some respect, it is said to possess aneigenstate. In the Stern–Gerlach experiment discussedabove, the quantum model predicts two possible values of spin for the atom compared to the magnetic axis. These two eigenstates are named arbitrarily 'up' and 'down'. The quantum model predicts these states will be measured with equal probability, but no intermediate values will be seen. This is what the Stern–Gerlach experiment shows. The eigenstates of spin about the vertical axis are not simultaneously eigenstates of spin about the horizontal axis, so this atom has an equal probability of being found to have either value of spin about the horizontal axis. As described in the sectionabove, measuring the spin about the horizontal axis can allow an atom that was measured as spin up to later be measured as spin down: measuring its spin about the horizontal axis collapses its wave function into one of the eigenstates of this measurement, which means it is no longer in an eigenstate of spin about the vertical axis, so can take either value. In 1924,Wolfgang Pauliproposed a new quantum degree of freedom (orquantum number), with two possible values, to resolve inconsistencies between observed molecular spectra and the predictions of quantum mechanics. In particular, thespectrum of atomic hydrogenhad adoublet, or pair of lines differing by a small amount, where only one line was expected. Pauli formulated hisexclusion principle, stating, "There cannot exist an atom in such a quantum state that two electrons within [it] have the same set of quantum numbers."[39] A year later,UhlenbeckandGoudsmitidentified Pauli's new degree of freedom with the property calledspinwhose effects were observed in theStern–Gerlach experiment. In 1928, Paul Dirac extended thePauli equation, which described spinning electrons, to account forspecial relativity. The result was a theory that dealt properly with events, such as the speed at which an electron orbits the nucleus, occurring at a substantial fraction of thespeed of light. By using the simplestelectromagnetic interaction, Dirac was able to predict the value of the magnetic moment associated with the electron's spin and found the experimentally observed value, which was too large to be that of a spinning charged sphere governed byclassical physics. He was able to solve for thespectral lines of the hydrogen atomand to reproduce from physical first principlesSommerfeld's successful formula for thefine structureof the hydrogen spectrum. Dirac's equations sometimes yielded a negative value for energy, for which he proposed a novel solution: he posited the existence of anantielectronand a dynamical vacuum. This led to the many-particlequantum field theory. In quantum physics, a group ofparticlescan interact or be created together in such a way that thequantum stateof each particle of the group cannot be described independently of the state of the others, including when the particles are separated by a large distance. This is known asquantum entanglement.
An early landmark in the study of entanglement was theEinstein–Podolsky–Rosen (EPR) paradox, athought experimentproposed by Albert Einstein,Boris PodolskyandNathan Rosenwhich argues that the description of physical reality provided byquantum mechanicsis incomplete.[40]In a 1935 paper titled "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?", they argued for the existence of "elements of reality" that were not part of quantum theory, and speculated that it should be possible to construct a theory containing thesehidden variables. The thought experiment involves a pair of particles prepared in what would later become known as an entangled state. Einstein, Podolsky, and Rosen pointed out that, in this state, if the position of the first particle were measured, the result of measuring the position of the second particle could be predicted. If instead the momentum of the first particle were measured, then the result of measuring the momentum of the second particle could be predicted. They argued that no action taken on the first particle could instantaneously affect the other, since this would involve information being transmitted faster than light, which is forbidden by thetheory of relativity. They invoked a principle, later known as the "EPR criterion of reality", positing that: "If, without in any way disturbing a system, we can predict with certainty (i.e., withprobabilityequal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity." From this, they inferred that the second particle must have a definite value of both position and momentum prior to either quantity being measured. But quantum mechanics considers these two observablesincompatibleand thus does not associate simultaneous values for both to any system. Einstein, Podolsky, and Rosen therefore concluded that quantum theory does not provide a complete description of reality.[41]In the same year,Erwin Schrödingerused the word "entanglement" and declared: "I would not call thatonebut ratherthecharacteristic trait of quantum mechanics."[42] The Irish physicistJohn Stewart Bellcarried the analysis of quantum entanglement much further. He deduced that if measurements are performed independently on the two separated particles of an entangled pair, then the assumption that the outcomes depend upon hidden variables within each half implies a mathematical constraint on how the outcomes on the two measurements are correlated. This constraint would later be named theBell inequality. Bell then showed that quantum physics predicts correlations that violate this inequality. Consequently, the only way that hidden variables could explain the predictions of quantum physics is if they are "nonlocal", which is to say that somehow the two particles are able to interact instantaneously no matter how widely they ever become separated.[43][44]Performing experiments like those that Bell suggested, physicists have found that nature obeys quantum mechanics and violates Bell inequalities. In other words, the results of these experiments are incompatible with any local hidden variable theory.[45][46] The idea of quantum field theory began in the late 1920s with British physicist Paul Dirac, when he attempted toquantizethe energy of theelectromagnetic field; just as in quantum mechanics the energy of an electron in the hydrogen atom was quantized. Quantization is a procedure for constructing a quantum theory starting from a classical theory.
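Bell's constraint and its quantum violation, discussed above, can be made concrete with a few lines of arithmetic. The Python sketch below (an added illustration, not from the article) uses the standard singlet-state correlation E(a, b) = −cos(a − b) and the conventional CHSH angle choices, for which the quantum value reaches 2√2 in magnitude, beyond the local-hidden-variable bound of 2.

```python
import numpy as np

def correlation(a, b):
    # Quantum correlation of spin measurements at angles a, b on a singlet
    # pair; -cos(a - b) is the standard textbook result.
    return -np.cos(a - b)

# Angle choices that maximize the CHSH combination for the singlet state.
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = (correlation(a, b) - correlation(a, b2)
     + correlation(a2, b) + correlation(a2, b2))
print(S)  # about -2.828; |S| = 2*sqrt(2) > 2, violating the CHSH bound
```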
Merriam-Websterdefines afieldin physics as "a region or space in which a given effect (such asmagnetism) exists".[47]Other effects that manifest themselves as fields aregravitationandstatic electricity.[48]In 2008, physicist Richard Hammond wrote: Sometimes we distinguish between quantum mechanics (QM) and quantum field theory (QFT). QM refers to a system in which the number of particles is fixed, and the fields (such as the electromagnetic field) are continuous classical entities. QFT ... goes a step further and allows for the creation and annihilation of particles ... He added, however, thatquantum mechanicsis often used to refer to "the entire notion of quantum view".[49]: 108 In 1931, Dirac proposed the existence of particles that later became known asantimatter.[50]Dirac shared theNobel Prize in Physicsfor 1933 with Schrödinger "for the discovery of new productive forms of atomic theory".[51] Quantum electrodynamics (QED) is the name of the quantum theory of theelectromagnetic force. Understanding QED begins with understandingelectromagnetism. Electromagnetism can be called "electrodynamics" because it is a dynamic interaction between electrical andmagnetic forces. Electromagnetism begins with theelectric charge. Electric charges are the sources of, and create,electric fields. An electric field is a field that exerts a force on any particles that carry electric charges, at any point in space. This includes the electron, proton, and evenquarks, among others. As a force is exerted, electric charges move, a current flows, and a magnetic field is produced. The changing magnetic field, in turn, causeselectric current(often moving electrons). The physical description of interactingcharged particles, electrical currents, electrical fields, and magnetic fields is called electromagnetism. In 1928 Paul Dirac produced a relativistic quantum theory of electromagnetism. This was the progenitor to modern quantum electrodynamics, in that it had essential ingredients of the modern theory. However, the problem of unsolvable infinities developed in thisrelativistic quantum theory. Years later,renormalizationlargely solved this problem. Initially viewed as a provisional, suspect procedure by some of its originators, renormalization eventually was embraced as an important and self-consistent tool in QED and other fields of physics. Also, in the late 1940sFeynman diagramsprovided a way to make predictions with QED by finding a probability amplitude for each possible way that an interaction could occur. The diagrams showed in particular that the electromagnetic force is the exchange of photons between interacting particles.[52] TheLamb shiftis an example of a quantum electrodynamics prediction that has been experimentally verified. It is an effect whereby the quantum nature of the electromagnetic field makes the energy levels in an atom or ion deviate slightly from what they would otherwise be. As a result, spectral lines may shift or split. Similarly, within a freely propagating electromagnetic wave, the current can also be just an abstractdisplacement current, instead of involving charge carriers. In QED, its full description makes essential use of short-livedvirtual particles. There, QED again validates an earlier, rather mysterious concept. TheStandard Modelof particle physics is the quantum field theory that describes three of the four knownfundamental forces(electromagnetic,weakandstrong interactions– excludinggravity) in theuniverseand classifies all knownelementary particles.
It was developed in stages throughout the latter half of the 20th century, through the work of many scientists worldwide, with the current formulation being finalized in the mid-1970s uponexperimental confirmationof the existence ofquarks. Since then, the discoveries of thetop quark(1995), thetau neutrino(2000), and theHiggs boson(2012) have added further credence to the Standard Model. In addition, the Standard Model has predicted various properties ofweak neutral currentsand theW and Z bosonswith great accuracy. Although the Standard Model is believed to be theoretically self-consistent and has demonstrated success in providingexperimental predictions, it leaves somephysical phenomena unexplainedand so falls short of being acomplete theory of fundamental interactions. For example, it does not fully explainbaryon asymmetry, incorporate the fulltheory of gravitationas described bygeneral relativity, or account for theuniverse's accelerating expansionas possibly described bydark energy. The model does not contain any viabledark matterparticle that possesses all of the required properties deduced from observationalcosmology. It also does not incorporateneutrino oscillationsand their non-zero masses. Accordingly, it is used as a basis for building more exotic models that incorporatehypothetical particles,extra dimensions, and elaborate symmetries (such assupersymmetry) to explain experimental results at variance with the Standard Model, such as the existence of dark matter and neutrino oscillations. The physical measurements, equations, and predictions pertinent to quantum mechanics are all consistent and hold a very high level of confirmation. However, the question of what these abstract models say about the underlying nature of the real world has received competing answers. These interpretations are widely varying and sometimes somewhat abstract. For instance, theCopenhagen interpretationstates that before a measurement, statements about a particle's properties are completely meaningless, while themany-worlds interpretationdescribes the existence of amultiversemade up of every possible universe.[53] Light behaves in some aspects like particles and in other aspects like waves. Matter—the "stuff" of the universe consisting of particles such aselectronsandatoms—exhibitswavelike behaviortoo. Some light sources, such asneon lights, give off only certain specific frequencies of light, a small set of distinct pure colors determined by neon's atomic structure. Quantum mechanics shows that light, along with all other forms ofelectromagnetic radiation, comes in discrete units, calledphotons, and predicts itsspectralenergies (corresponding to pure colors), and theintensitiesof its light beams. A single photon is aquantum, or smallest observable particle, of the electromagnetic field. A partial photon is never experimentally observed. More broadly, quantum mechanics shows that many properties of objects, such as position, speed, andangular momentum, that appeared continuous in the zoomed-out view of classical mechanics, turn out to be (in the very tiny, zoomed-in scale of quantum mechanics)quantized. Such properties ofelementary particlesare required to take on one of a set of small, discrete allowable values, and since the gap between these values is also small, the discontinuities are only apparent at very tiny (atomic) scales. The relationship between the frequency of electromagnetic radiation and the energy of each photon is whyultravioletlight can causesunburn, but visible orinfraredlight cannot.
A photon of ultraviolet light delivers a high amount ofenergy—enough to contribute to cellular damage such as occurs in a sunburn. A photon of infrared light delivers less energy—only enough to warm one's skin. So, an infrared lamp can warm a large surface, perhaps large enough to keep people comfortable in a cold room, but it cannot give anyone a sunburn.[54] Applications of quantum mechanics include thelaser, thetransistor, theelectron microscope, andmagnetic resonance imaging. A special class of quantum mechanical applications is related tomacroscopic quantum phenomenasuch as superfluid helium and superconductors. The study of semiconductors led to the invention of thediodeand thetransistor, which are indispensable for modernelectronics. In even a simplelight switch,quantum tunnelingis absolutely vital, as otherwise the electrons in theelectric currentcould not penetrate the potential barrier made up of a layer of oxide.Flash memorychips found inUSB drivesalso use quantum tunneling, to erase their memory cells.[55]
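The tunneling effect mentioned above can be quantified with the textbook rectangular-barrier model. Below is a hedged Python sketch (an added illustration; the 3 eV barrier height and the widths are assumed, round numbers): it shows how sharply the transmission probability falls as an oxide layer thickens.

```python
import numpy as np

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
ME = 9.1093837015e-31    # electron mass, kg
EV = 1.602176634e-19     # joules per electronvolt

def transmission(E_eV, V0_eV, width_nm):
    """Transmission coefficient through a rectangular barrier, for E < V0."""
    E, V0, L = E_eV * EV, V0_eV * EV, width_nm * 1e-9
    kappa = np.sqrt(2 * ME * (V0 - E)) / HBAR       # decay constant inside
    return 1.0 / (1.0 + (V0**2 * np.sinh(kappa * L) ** 2)
                  / (4 * E * (V0 - E)))

# A 1 eV electron against an assumed 3 eV barrier: thin layers let it through,
# thicker ones suppress the current exponentially.
for w in (0.5, 1.0, 2.0):   # barrier width in nm
    print(w, transmission(1.0, 3.0, w))
```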
https://en.wikipedia.org/wiki/Introduction_to_quantum_mechanics
Inphysics, theno-cloning theoremstates that it is impossible to create an independent and identical copy of an arbitrary unknownquantum state, a statement which has profound implications in the field ofquantum computingamong others. The theorem is an evolution of the 1970no-go theoremauthored by James Park,[1]in which he demonstrates that a non-disturbing measurement scheme which is both simple and perfect cannot exist (the same result would be independently derived in 1982 byWilliam WoottersandWojciech H. Zurek[2]as well as byDennis Dieks[3]). The aforementioned theorems do not preclude the state of one system becomingentangledwith the state of another as cloning specifically refers to the creation of aseparable statewith identical factors. For example, one might use thecontrolled NOT gateand theWalsh–Hadamard gateto entangle twoqubitswithout violating the no-cloning theorem as no well-defined state may be attributed to a subsystem of an entangled state. The no-cloning theorem (as generally understood) concerns onlypure stateswhereas the generalized statement regardingmixed statesis known as theno-broadcast theorem. The no-cloning theorem has a time-reverseddual, theno-deleting theorem. According toAsher Peres[4]andDavid Kaiser,[5]the publication of the 1982 proof of the no-cloning theorem byWoottersandZurek[2]and byDieks[3]was prompted by a proposal ofNick Herbert[6]for asuperluminal communicationdevice using quantum entanglement, andGiancarlo Ghirardi[7]had proven the theorem 18 months prior to the published proof by Wootters and Zurek in his referee report to said proposal (as evidenced by a letter from the editor[7]). However, Juan Ortigoso[8]pointed out in 2018 that a complete proof along with an interpretation in terms of the lack of simple nondisturbing measurements in quantum mechanics was already delivered by Park in 1970.[1] Suppose we have two quantum systemsAandBwith a common Hilbert spaceH=HA=HB{\displaystyle H=H_{A}=H_{B}}. Suppose we want to have a procedure to copy the state|ϕ⟩A{\displaystyle |\phi \rangle _{A}}of quantum systemA, over the state|e⟩B{\displaystyle |e\rangle _{B}}of quantum systemB,for any original state|ϕ⟩A{\displaystyle |\phi \rangle _{A}}(seebra–ket notation). That is, beginning with the state|ϕ⟩A⊗|e⟩B{\displaystyle |\phi \rangle _{A}\otimes |e\rangle _{B}}, we want to end up with the state|ϕ⟩A⊗|ϕ⟩B{\displaystyle |\phi \rangle _{A}\otimes |\phi \rangle _{B}}. To make a "copy" of the stateA, we combine it with systemBin some unknown initial, or blank, state|e⟩B{\displaystyle |e\rangle _{B}}independent of|ϕ⟩A{\displaystyle |\phi \rangle _{A}}, of which we have no prior knowledge. The state of the initial composite system is then described by the followingtensor product:|ϕ⟩A⊗|e⟩B.{\displaystyle |\phi \rangle _{A}\otimes |e\rangle _{B}.}(in the following we will omit the⊗{\displaystyle \otimes }symbol and keep it implicit). There are only two permissiblequantum operationswith which we may manipulate the composite system: we may perform an observation, which irreversibly collapses the system into some eigenstate of an observable and corrupts the information contained in the qubits, or we may control theHamiltonianof the combined system, and hence its unitary time evolution. The no-cloning theorem answers the following question in the negative: Is it possible to construct a unitary operatorU, acting onHA⊗HB=H⊗H{\displaystyle H_{A}\otimes H_{B}=H\otimes H}, under which the state the system B is in always evolves into the state the system A is in,regardlessof the state system A is in?
Theorem—There is no unitary operatorUonH⊗H{\displaystyle H\otimes H}such that for all normalised states|ϕ⟩A{\displaystyle |\phi \rangle _{A}}and|e⟩B{\displaystyle |e\rangle _{B}}inH{\displaystyle H}U(|ϕ⟩A|e⟩B)=eiα(ϕ,e)|ϕ⟩A|ϕ⟩B{\displaystyle U(|\phi \rangle _{A}|e\rangle _{B})=e^{i\alpha (\phi ,e)}|\phi \rangle _{A}|\phi \rangle _{B}}for some real numberα{\displaystyle \alpha }depending onϕ{\displaystyle \phi }ande{\displaystyle e}. The extra phase factor expresses the fact that a quantum-mechanical state defines a normalised vector in Hilbert space only up to a phase factor i.e. as an element ofprojectivised Hilbert space. To prove the theorem, we select an arbitrary pair of states|ϕ⟩A{\displaystyle |\phi \rangle _{A}}and|ψ⟩A{\displaystyle |\psi \rangle _{A}}in the Hilbert spaceH{\displaystyle H}. BecauseUis supposed to be unitary, we would have⟨ϕ|ψ⟩⟨e|e⟩≡⟨ϕ|A⟨e|B|ψ⟩A|e⟩B=⟨ϕ|A⟨e|BU†U|ψ⟩A|e⟩B=e−i(α(ϕ,e)−α(ψ,e))⟨ϕ|A⟨ϕ|B|ψ⟩A|ψ⟩B≡e−i(α(ϕ,e)−α(ψ,e))⟨ϕ|ψ⟩2.{\displaystyle \langle \phi |\psi \rangle \langle e|e\rangle \equiv \langle \phi |_{A}\langle e|_{B}|\psi \rangle _{A}|e\rangle _{B}=\langle \phi |_{A}\langle e|_{B}U^{\dagger }U|\psi \rangle _{A}|e\rangle _{B}=e^{-i(\alpha (\phi ,e)-\alpha (\psi ,e))}\langle \phi |_{A}\langle \phi |_{B}|\psi \rangle _{A}|\psi \rangle _{B}\equiv e^{-i(\alpha (\phi ,e)-\alpha (\psi ,e))}\langle \phi |\psi \rangle ^{2}.}Since the quantum state|e⟩{\displaystyle |e\rangle }is assumed to be normalized, we thus get|⟨ϕ|ψ⟩|2=|⟨ϕ|ψ⟩|.{\displaystyle |\langle \phi |\psi \rangle |^{2}=|\langle \phi |\psi \rangle |.} This implies that either|⟨ϕ|ψ⟩|=1{\displaystyle |\langle \phi |\psi \rangle |=1}or|⟨ϕ|ψ⟩|=0{\displaystyle |\langle \phi |\psi \rangle |=0}. Hence by theCauchy–Schwarz inequalityeither|ϕ⟩=eiβ|ψ⟩{\displaystyle |\phi \rangle =e^{i\beta }|\psi \rangle }or|ϕ⟩{\displaystyle |\phi \rangle }isorthogonalto|ψ⟩{\displaystyle |\psi \rangle }. However, this cannot be the case for twoarbitrarystates. Therefore, a single universalUcannot clone ageneralquantum state. This proves the no-cloning theorem. Take a qubit for example. It can be represented by twocomplex numbers, calledprobability amplitudes(normalised to 1), that is three real numbers (two polar angles and one radius). Copying three numbers on a classical computer using anycopy and pasteoperation is trivial (up to a finite precision) but the problem manifests if the qubit is unitarily transformed (e.g. by theHadamard quantum gate) to be polarised (whichunitary transformationis asurjective isometry). In such a case the qubit can be represented by just two real numbers (one polar angle and one radius equal to 1), while the value of the third can be arbitrary in such a representation. Yet arealisationof a qubit (polarisation-encoded photon, for example) is capable of storing the whole qubit information support within its "structure". Thus no single universal unitary evolutionUcan clone an arbitrary quantum state according to the no-cloning theorem. It would have to depend on the transformed qubit (initial) state and thus would not have beenuniversal. In the statement of the theorem, two assumptions were made: the state to be copied is apure stateand the proposed copier acts via unitary time evolution. These assumptions cause no loss of generality. If the state to be copied is amixed state, it can be"purified," i.e. treated as a pure state of a larger system. 
Alternatively, a different proof can be given that works directly with mixed states; in this case, the theorem is often known as the no-broadcast theorem.[9][10]Similarly, an arbitraryquantum operationcan be implemented by introducing anancillaand performing a suitable unitary evolution. Thus the no-cloning theorem holds in full generality. Even though it is impossible to make perfect copies of an unknown quantum state, it is possible to produce imperfect copies. This can be done by coupling a larger auxiliary system to the system that is to be cloned, and applying aunitary transformationto the combined system. If the unitary transformation is chosen correctly, several components of the combined system will evolve into approximate copies of the original system. In 1996, V. Buzek and M. Hillery showed that a universal cloning machine can make a clone of an unknown state with the surprisingly high fidelity of 5/6.[11] Imperfectquantum cloningcan be used as aneavesdropping attackonquantum cryptographyprotocols, among other uses in quantum information science.
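The failure mode identified by the theorem is easy to exhibit concretely. The Python sketch below (an added illustration, not from the article) uses the controlled-NOT gate as a candidate copier: it clones the two basis states perfectly, but on a superposition it produces an entangled state rather than the separable ideal clone, with fidelity only 1/2.

```python
import numpy as np

# CNOT "copies" basis states into a blank |0>, but cannot clone in general.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

def clone_fidelity(phi):
    """Apply CNOT to |phi>|0> and compare with the ideal clone |phi>|phi>."""
    out = CNOT @ np.kron(phi, ket0)
    ideal = np.kron(phi, phi)
    return abs(np.vdot(ideal, out)) ** 2

print(clone_fidelity(ket0))                    # 1.0: basis state cloned
print(clone_fidelity(ket1))                    # 1.0: basis state cloned
plus = (ket0 + ket1) / np.sqrt(2)
print(clone_fidelity(plus))                    # 0.5: superposition fails
```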
https://en.wikipedia.org/wiki/No-cloning_theorem
Inmathematics, particularlylinear algebra, anorthonormal basisfor aninner product spaceV{\displaystyle V}with finitedimensionis abasisforV{\displaystyle V}whose vectors areorthonormal, that is, they are allunit vectorsandorthogonalto each other.[1][2][3]For example, thestandard basisfor aEuclidean spaceRn{\displaystyle \mathbb {R} ^{n}}is an orthonormal basis, where the relevant inner product is thedot productof vectors. Theimageof the standard basis under arotationorreflection(or anyorthogonal transformation) is also orthonormal, and every orthonormal basis forRn{\displaystyle \mathbb {R} ^{n}}arises in this fashion. An orthonormal basis can be derived from anorthogonal basisvianormalization. The choice of anoriginand an orthonormal basis forms acoordinate frameknown as anorthonormal frame. For a general inner product spaceV,{\displaystyle V,}an orthonormal basis can be used to define normalizedorthogonal coordinatesonV.{\displaystyle V.}Under these coordinates, the inner product becomes a dot product of vectors. Thus the presence of an orthonormal basis reduces the study of afinite-dimensionalinner product space to the study ofRn{\displaystyle \mathbb {R} ^{n}}under the dot product. Every finite-dimensional inner product space has an orthonormal basis, which may be obtained from an arbitrary basis using theGram–Schmidt process. Infunctional analysis, the concept of an orthonormal basis can be generalized to arbitrary (infinite-dimensional)inner product spaces.[4]Given a pre-Hilbert spaceH,{\displaystyle H,}anorthonormal basisforH{\displaystyle H}is an orthonormal set of vectors with the property that every vector inH{\displaystyle H}can be written as aninfinite linear combinationof the vectors in the basis. In this case, the orthonormal basis is sometimes called aHilbert basisforH.{\displaystyle H.}Note that an orthonormal basis in this sense is not generally aHamel basis, since infinite linear combinations are required.[5]Specifically, thelinear spanof the basis must bedenseinH,{\displaystyle H,}although not necessarily the entire space. If we go on toHilbert spaces, a non-orthonormal set of vectors having the same linear span as an orthonormal basis may not be a basis at all. For instance, anysquare-integrable functionon the interval[−1,1]{\displaystyle [-1,1]}can be expressed (almost everywhere) as an infinite sum ofLegendre polynomials(an orthonormal basis), but not necessarily as an infinite sum of themonomialsxn.{\displaystyle x^{n}.} A different generalisation is to pseudo-inner product spaces, finite-dimensional vector spacesM{\displaystyle M}equipped with a non-degeneratesymmetric bilinear formknown as themetric tensor. For such a space, an orthonormal basis is a basis in which the metric takes the formdiag(+1,⋯,+1,−1,⋯,−1){\displaystyle {\text{diag}}(+1,\cdots ,+1,-1,\cdots ,-1)}withp{\displaystyle p}positive ones andq{\displaystyle q}negative ones. IfB{\displaystyle B}is an orthogonal basis ofH,{\displaystyle H,}then every elementx∈H{\displaystyle x\in H}may be written asx=∑b∈B⟨x,b⟩‖b‖2b.{\displaystyle x=\sum _{b\in B}{\frac {\langle x,b\rangle }{\lVert b\rVert ^{2}}}b.} WhenB{\displaystyle B}is orthonormal, this simplifies tox=∑b∈B⟨x,b⟩b{\displaystyle x=\sum _{b\in B}\langle x,b\rangle b}and the square of thenormofx{\displaystyle x}can be given by‖x‖2=∑b∈B|⟨x,b⟩|2.{\displaystyle \|x\|^{2}=\sum _{b\in B}|\langle x,b\rangle |^{2}.} Even ifB{\displaystyle B}isuncountable, only countably many terms in this sum will be non-zero, and the expression is therefore well-defined.
This sum is also called theFourier expansionofx,{\displaystyle x,}and the formula is usually known asParseval's identity. IfB{\displaystyle B}is an orthonormal basis ofH,{\displaystyle H,}thenH{\displaystyle H}isisomorphictoℓ2(B){\displaystyle \ell ^{2}(B)}in the following sense: there exists abijectivelinearmapΦ:H→ℓ2(B){\displaystyle \Phi :H\to \ell ^{2}(B)}such that⟨Φ(x),Φ(y)⟩=⟨x,y⟩∀x,y∈H.{\displaystyle \langle \Phi (x),\Phi (y)\rangle =\langle x,y\rangle \ \ \forall \ x,y\in H.} A setS{\displaystyle S}of mutually orthonormal vectors in a Hilbert spaceH{\displaystyle H}is called an orthonormal system. An orthonormal basis is an orthonormal system with the additional property that the linear span ofS{\displaystyle S}is dense inH{\displaystyle H}.[6]Alternatively, the setS{\displaystyle S}can be regarded as eithercompleteorincompletewith respect toH{\displaystyle H}. That is, we can take the smallest closed linear subspaceV⊆H{\displaystyle V\subseteq H}containingS.{\displaystyle S.}ThenS{\displaystyle S}will be an orthonormal basis ofV{\displaystyle V}, which may of course be smaller thanH{\displaystyle H}itself, being anincompleteorthonormal set, or beH,{\displaystyle H,}when it is acompleteorthonormal set. UsingZorn's lemmaand theGram–Schmidt process(or more simply well-ordering and transfinite recursion), one can show thateveryHilbert space admits an orthonormal basis;[7]furthermore, any two orthonormal bases of the same space have the samecardinality(this can be proven in a manner akin to that of the proof of the usualdimension theorem for vector spaces, with separate cases depending on whether the larger basis candidate is countable or not). A Hilbert space isseparableif and only if it admits acountableorthonormal basis. (One can prove this last statement without using theaxiom of choice. However, one would have to use theaxiom of countable choice.) For concreteness we discuss orthonormal bases for a real,n{\displaystyle n}-dimensional vector spaceV{\displaystyle V}with a positive definite symmetric bilinear formϕ=⟨⋅,⋅⟩{\displaystyle \phi =\langle \cdot ,\cdot \rangle }. One way to view an orthonormal basis with respect toϕ{\displaystyle \phi }is as a set of vectorsB={ei}{\displaystyle {\mathcal {B}}=\{e_{i}\}}, which allow us to writev=viei∀v∈V{\displaystyle v=v^{i}e_{i}\ \ \forall \ v\in V}, andvi∈R{\displaystyle v^{i}\in \mathbb {R} }or(vi)∈Rn{\displaystyle (v^{i})\in \mathbb {R} ^{n}}. With respect to this basis, the components ofϕ{\displaystyle \phi }are particularly simple:ϕ(ei,ej)=δij{\displaystyle \phi (e_{i},e_{j})=\delta _{ij}}(whereδij{\displaystyle \delta _{ij}}is theKronecker delta). We can now view the basis as a mapψB:V→Rn{\displaystyle \psi _{\mathcal {B}}:V\rightarrow \mathbb {R} ^{n}}which is an isomorphism of inner product spaces. To make this more explicit, we can write(ψB(v))i=ei(v)=ϕ(ei,v){\displaystyle (\psi _{\mathcal {B}}(v))^{i}=e^{i}(v)=\phi (e_{i},v)}whereei{\displaystyle e^{i}}is the dual basis element toei{\displaystyle e_{i}}. The inverse is the component map sending (v¹, …, vⁿ) ∈ ℝⁿ to ∑ᵢ vⁱeᵢ ∈ V. These definitions make it manifest that there is a bijection between the set of orthonormal bases forV{\displaystyle V}and the set of inner-product isomorphisms betweenRn{\displaystyle \mathbb {R} ^{n}}andV{\displaystyle V}. The space of isomorphisms admits actions of orthogonal groups at either theV{\displaystyle V}side or theRn{\displaystyle \mathbb {R} ^{n}}side. For concreteness we fix the isomorphisms to point in the directionRn→V{\displaystyle \mathbb {R} ^{n}\rightarrow V}, and consider the space of such maps,Iso(Rn→V){\displaystyle {\text{Iso}}(\mathbb {R} ^{n}\rightarrow V)}.
This space admits a left action by the group of isometries ofV{\displaystyle V}, that is,R∈GL(V){\displaystyle R\in {\text{GL}}(V)}such thatϕ(⋅,⋅)=ϕ(R⋅,R⋅){\displaystyle \phi (\cdot ,\cdot )=\phi (R\cdot ,R\cdot )}, with the action given by composition:R∗C=R∘C.{\displaystyle R*C=R\circ C.} This space also admits a right action by the group of isometries ofRn{\displaystyle \mathbb {R} ^{n}}, that is,Rij∈O(n)⊂Matn×n(R){\displaystyle R_{ij}\in {\text{O}}(n)\subset {\text{Mat}}_{n\times n}(\mathbb {R} )}, with the action again given by composition:C∗Rij=C∘Rij{\displaystyle C*R_{ij}=C\circ R_{ij}}. The set of orthonormal bases forRn{\displaystyle \mathbb {R} ^{n}}with the standard inner product is aprincipal homogeneous spaceor G-torsor for theorthogonal groupG=O(n),{\displaystyle G={\text{O}}(n),}and is called theStiefel manifoldVn(Rn){\displaystyle V_{n}(\mathbb {R} ^{n})}of orthonormaln{\displaystyle n}-frames.[8] In other words, the space of orthonormal bases is like the orthogonal group, but without a choice of base point: given the space of orthonormal bases, there is no natural choice of orthonormal basis, but once one is given one, there is a one-to-one correspondence between bases and the orthogonal group. Concretely, a linear map is determined by where it sends a given basis: just as an invertible map can take any basis to any other basis, an orthogonal map can take anyorthogonalbasis to any otherorthogonalbasis. The other Stiefel manifoldsVk(Rn){\displaystyle V_{k}(\mathbb {R} ^{n})}fork<n{\displaystyle k<n}ofincompleteorthonormal bases (orthonormalk{\displaystyle k}-frames) are still homogeneous spaces for the orthogonal group, but notprincipalhomogeneous spaces: anyk{\displaystyle k}-frame can be taken to any otherk{\displaystyle k}-frame by an orthogonal map, but this map is not uniquely determined.
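The Gram–Schmidt process and Parseval's identity, both mentioned in this section, are easy to verify concretely. Below is a minimal Python sketch (an added illustration, using ℝ⁴ with the dot product as the assumed inner product space): it orthonormalizes a random basis and checks that the squared coefficient sum equals the squared norm.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        w = v - sum(np.dot(b, v) * b for b in basis)  # subtract projections
        basis.append(w / np.linalg.norm(w))           # normalize
    return np.array(basis)

rng = np.random.default_rng(1)
raw = rng.normal(size=(4, 4))   # a random set of vectors, almost surely a basis
B = gram_schmidt(raw)

print(np.allclose(B @ B.T, np.eye(4)))   # True: rows are orthonormal

x = rng.normal(size=4)
coeffs = B @ x                           # <x, b> for each basis vector b
print(np.allclose(np.sum(coeffs**2), np.dot(x, x)))  # True: Parseval's identity
```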
https://en.wikipedia.org/wiki/Orthonormal_basis
ThePusey–Barrett–Rudolph(PBR)theorem[1]is ano-go theoreminquantum foundationsdue to Matthew Pusey, Jonathan Barrett, andTerry Rudolph(for whom the theorem is named) in 2012. It has particular significance for how one may interpret the nature of thequantum state. With respect to certain realisthidden variable theoriesthat attempt to explain the predictions ofquantum mechanics, the theorem rules that pure quantum states must be "ontic" in the sense that they correspond directly to states of reality, rather than "epistemic" in the sense that they represent probabilistic or incomplete states of knowledge about reality. The PBR theorem may also be compared with other no-go theorems likeBell's theoremand theBell–Kochen–Specker theorem, which, respectively, rule out the possibility of explaining the predictions of quantum mechanics withlocalhidden variable theories and noncontextual hidden variable theories. Similarly, the PBR theorem could be said to rule outpreparation independenthidden variable theories, in which quantum states that are prepared independently have independent hidden variable descriptions. This result was cited by theoretical physicistAntony Valentinias "the most important general theorem relating to the foundations of quantum mechanics sinceBell's theorem".[2] This theorem, which first appeared as anarXivpreprint[3]and was subsequently published inNature Physics,[1]concerns the interpretational status of pure quantum states. Under the classification of hidden variable models of Harrigan and Spekkens,[4]the interpretation of the quantum wavefunction|ψ⟩{\displaystyle |\psi \rangle }can be categorized as eitherψ-ontic, if "every complete physical state or ontic state in the theory is consistent with only one pure quantum state", orψ-epistemic, "if there exist ontic states that are consistent with more than one pure quantum state." The PBR theorem proves that either the quantum state|ψ⟩{\displaystyle |\psi \rangle }isψ-ontic, or else non-entangledquantum states violate the assumption of preparation independence, which would entailaction at a distance. The authors summarize their result as follows: "In conclusion, we have presented ano-go theorem, which—modulo assumptions—shows that models in which the quantum state is interpreted as mereinformationabout an objective physical state of a system cannot reproduce the predictions of quantum theory. The result is in the same spirit as Bell’s theorem, which states that no local theory can reproduce the predictions of quantum theory."
https://en.wikipedia.org/wiki/PBR_theorem
Thequantum harmonic oscillatoris thequantum-mechanicalanalog of theclassical harmonic oscillator. Because an arbitrary smoothpotentialcan usually be approximated as aharmonic potentialin the vicinity of a stableequilibrium point, it is one of the most important model systems in quantum mechanics. Furthermore, it is one of the few quantum-mechanical systems for which an exact,analytical solutionis known.[1][2][3] TheHamiltonianof the particle is:H^=p^22m+12kx^2=p^22m+12mω2x^2,{\displaystyle {\hat {H}}={\frac {{\hat {p}}^{2}}{2m}}+{\frac {1}{2}}k{\hat {x}}^{2}={\frac {{\hat {p}}^{2}}{2m}}+{\frac {1}{2}}m\omega ^{2}{\hat {x}}^{2}\,,}wheremis the particle's mass,kis the force constant,ω=k/m{\textstyle \omega ={\sqrt {k/m}}}is theangular frequencyof the oscillator,x^{\displaystyle {\hat {x}}}is theposition operator(given byxin the coordinate basis), andp^{\displaystyle {\hat {p}}}is themomentum operator(given byp^=−iℏ∂/∂x{\displaystyle {\hat {p}}=-i\hbar \,\partial /\partial x}in the coordinate basis). The first term in the Hamiltonian represents the kinetic energy of the particle, and the second term represents its potential energy, as inHooke's law.[4] The time-independentSchrödinger equation(TISE) is,H^|ψ⟩=E|ψ⟩,{\displaystyle {\hat {H}}\left|\psi \right\rangle =E\left|\psi \right\rangle ~,}whereE{\displaystyle E}denotes a real number (which needs to be determined) that will specify a time-independentenergy level, oreigenvalue, and the solution|ψ⟩{\displaystyle |\psi \rangle }denotes that level's energyeigenstate.[5] One may then solve the differential equation representing this eigenvalue problem in the coordinate basis, for thewave function⟨x|ψ⟩=ψ(x){\displaystyle \langle x|\psi \rangle =\psi (x)}, using aspectral method. It turns out that there is a family of solutions. In this basis, they amount toHermite functions,[6][7]ψn(x)=12nn!(mωπℏ)1/4e−mωx22ℏHn(mωℏx),n=0,1,2,….{\displaystyle \psi _{n}(x)={\frac {1}{\sqrt {2^{n}\,n!}}}\left({\frac {m\omega }{\pi \hbar }}\right)^{1/4}e^{-{\frac {m\omega x^{2}}{2\hbar }}}H_{n}\left({\sqrt {\frac {m\omega }{\hbar }}}x\right),\qquad n=0,1,2,\ldots .} The functionsHnare the physicists'Hermite polynomials,Hn(z)=(−1)nez2dndzn(e−z2).{\displaystyle H_{n}(z)=(-1)^{n}~e^{z^{2}}{\frac {d^{n}}{dz^{n}}}\left(e^{-z^{2}}\right).} The corresponding energy levels are[8]En=ℏω(n+12).{\displaystyle E_{n}=\hbar \omega {\bigl (}n+{\tfrac {1}{2}}{\bigr )}.}The expectation values of position and momentum, together with the variance of each variable, can be derived from the wavefunction to understand the behavior of the energy eigenkets. They are shown to be⟨x^⟩=0{\textstyle \langle {\hat {x}}\rangle =0}and⟨p^⟩=0{\textstyle \langle {\hat {p}}\rangle =0}owing to the symmetry of the problem, whereas: ⟨x^2⟩=(2n+1)ℏ2mω=σx2{\displaystyle \langle {\hat {x}}^{2}\rangle =(2n+1){\frac {\hbar }{2m\omega }}=\sigma _{x}^{2}} ⟨p^2⟩=(2n+1)mℏω2=σp2{\displaystyle \langle {\hat {p}}^{2}\rangle =(2n+1){\frac {m\hbar \omega }{2}}=\sigma _{p}^{2}} The variances in both position and momentum are observed to increase for higher energy levels. The lowest energy level hasσxσp=ℏ2{\textstyle \sigma _{x}\sigma _{p}={\frac {\hbar }{2}}}, the minimum value allowed by the uncertainty relation, and corresponds to a Gaussian wavefunction.[9] This energy spectrum is noteworthy for three reasons. First, the energies are quantized, meaning that only discrete energy values (integer-plus-half multiples ofħω) are possible; this is a general feature of quantum-mechanical systems when a particle is confined.
Second, these discrete energy levels are equally spaced, unlike in the Bohr model of the atom, or the particle in a box. Third, the lowest achievable energy (the energy of the n = 0 state, called the ground state) is not equal to the minimum of the potential well, but ħω/2 above it; this is called zero-point energy. Because of the zero-point energy, the position and momentum of the oscillator in the ground state are not fixed (as they would be in a classical oscillator), but have a small range of variance, in accordance with the Heisenberg uncertainty principle. The ground state probability density is concentrated at the origin, which means the particle spends most of its time at the bottom of the potential well, as one would expect for a state with little energy. As the energy increases, the probability density peaks at the classical "turning points", where the state's energy coincides with the potential energy. (See the discussion below of the highly excited states.) This is consistent with the classical harmonic oscillator, in which the particle spends more of its time (and is therefore more likely to be found) near the turning points, where it is moving the slowest. The correspondence principle is thus satisfied. Moreover, special nondispersive wave packets, with minimum uncertainty, called coherent states oscillate very much like classical objects; they are not eigenstates of the Hamiltonian.

The "ladder operator" method, developed by Paul Dirac, allows extraction of the energy eigenvalues without directly solving the differential equation.[10] It is generalizable to more complicated problems, notably in quantum field theory. Following this approach, we define the operator a and its adjoint a†,a=mω2ℏ(x^+imωp^)a†=mω2ℏ(x^−imωp^){\displaystyle {\begin{aligned}a&={\sqrt {m\omega \over 2\hbar }}\left({\hat {x}}+{i \over m\omega }{\hat {p}}\right)\\a^{\dagger }&={\sqrt {m\omega \over 2\hbar }}\left({\hat {x}}-{i \over m\omega }{\hat {p}}\right)\end{aligned}}}Note that, classically, these operators are exactly the generators of normalized rotation in the phase space of x{\displaystyle x} and mdxdt{\displaystyle m{\frac {dx}{dt}}}; i.e., they describe the forwards and backwards evolution in time of a classical harmonic oscillator.

These operators lead to the following representation of x^{\displaystyle {\hat {x}}} and p^{\displaystyle {\hat {p}}},x^=ℏ2mω(a†+a)p^=iℏmω2(a†−a).{\displaystyle {\begin{aligned}{\hat {x}}&={\sqrt {\frac {\hbar }{2m\omega }}}(a^{\dagger }+a)\\{\hat {p}}&=i{\sqrt {\frac {\hbar m\omega }{2}}}(a^{\dagger }-a)~.\end{aligned}}} The operator a is not Hermitian, since it and its adjoint a† are not equal. The energy eigenstates |n⟩, when operated on by these ladder operators, givea†|n⟩=n+1|n+1⟩a|n⟩=n|n−1⟩.{\displaystyle {\begin{aligned}a^{\dagger }|n\rangle &={\sqrt {n+1}}|n+1\rangle \\a|n\rangle &={\sqrt {n}}|n-1\rangle .\end{aligned}}} From the relations above, we can also define a number operator N, which has the following property:N=a†aN|n⟩=n|n⟩.{\displaystyle {\begin{aligned}N&=a^{\dagger }a\\N\left|n\right\rangle &=n\left|n\right\rangle .\end{aligned}}} The following commutators can be easily obtained by substituting the canonical commutation relation,[a,a†]=1,[N,a†]=a†,[N,a]=−a,{\displaystyle [a,a^{\dagger }]=1,\qquad [N,a^{\dagger }]=a^{\dagger },\qquad [N,a]=-a,} and the Hamilton operator can be expressed asH^=ℏω(N+12),{\displaystyle {\hat {H}}=\hbar \omega \left(N+{\frac {1}{2}}\right),} so the eigenstates of N are also the eigenstates of energy.
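The ladder-operator algebra can be made concrete with truncated matrix representations; a small NumPy sketch (illustrative only, with the truncation artifact confined to the highest Fock level):

```python
import numpy as np

D = 8                                     # truncated Fock-space dimension
a = np.diag(np.sqrt(np.arange(1, D)), 1)  # annihilation: a|n> = sqrt(n)|n-1>
ad = a.conj().T                           # creation operator a†

N = ad @ a                                # number operator
H = N + 0.5 * np.eye(D)                   # Hamiltonian in units of hbar*omega

print(np.diag(H))                                   # 0.5, 1.5, 2.5, ...
comm = a @ ad - ad @ a
print(np.allclose(comm[:-1, :-1], np.eye(D - 1)))   # [a, a†] = 1 below the cutoff
print(np.allclose(N @ ad - ad @ N, ad))             # [N, a†] = a†
```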
To see this, we apply H^{\displaystyle {\hat {H}}} to a number state |n⟩{\displaystyle |n\rangle }: H^|n⟩=ℏω(N^+12)|n⟩.{\displaystyle {\hat {H}}|n\rangle =\hbar \omega \left({\hat {N}}+{\frac {1}{2}}\right)|n\rangle .} Using the property of the number operator N^{\displaystyle {\hat {N}}}: N^|n⟩=n|n⟩,{\displaystyle {\hat {N}}|n\rangle =n|n\rangle ,} we get: H^|n⟩=ℏω(n+12)|n⟩.{\displaystyle {\hat {H}}|n\rangle =\hbar \omega \left(n+{\frac {1}{2}}\right)|n\rangle .} Thus, since |n⟩{\displaystyle |n\rangle } solves the TISE for the Hamiltonian operator H^{\displaystyle {\hat {H}}}, it is also one of its eigenstates, with the corresponding eigenvalue: En=ℏω(n+12).{\displaystyle E_{n}=\hbar \omega \left(n+{\frac {1}{2}}\right).} QED.

The commutation property yieldsNa†|n⟩=(a†N+[N,a†])|n⟩=(a†N+a†)|n⟩=(n+1)a†|n⟩,{\displaystyle {\begin{aligned}Na^{\dagger }|n\rangle &=\left(a^{\dagger }N+[N,a^{\dagger }]\right)|n\rangle \\&=\left(a^{\dagger }N+a^{\dagger }\right)|n\rangle \\&=(n+1)a^{\dagger }|n\rangle ,\end{aligned}}} and similarly,Na|n⟩=(n−1)a|n⟩.{\displaystyle Na|n\rangle =(n-1)a|n\rangle .} This means that a acts on |n⟩ to produce, up to a multiplicative constant, |n–1⟩, and a† acts on |n⟩ to produce |n+1⟩. For this reason, a is called an annihilation operator ("lowering operator"), and a† a creation operator ("raising operator"). The two operators together are called ladder operators.

Given any energy eigenstate, we can act on it with the lowering operator, a, to produce another eigenstate with ħω less energy. By repeated application of the lowering operator, it seems that we can produce energy eigenstates down to E = −∞. However, sincen=⟨n|N|n⟩=⟨n|a†a|n⟩=(a|n⟩)†a|n⟩⩾0,{\displaystyle n=\langle n|N|n\rangle =\langle n|a^{\dagger }a|n\rangle ={\Bigl (}a|n\rangle {\Bigr )}^{\dagger }a|n\rangle \geqslant 0,} the smallest eigenvalue of the number operator is 0, anda|0⟩=0.{\displaystyle a\left|0\right\rangle =0.} In this case, subsequent applications of the lowering operator will just produce zero, instead of additional energy eigenstates. Furthermore, we have shown above thatH^|0⟩=ℏω2|0⟩{\displaystyle {\hat {H}}\left|0\right\rangle ={\frac {\hbar \omega }{2}}\left|0\right\rangle } Finally, by acting on |0⟩ with the raising operator and multiplying by suitable normalization factors, we can produce an infinite set of energy eigenstates{|0⟩,|1⟩,|2⟩,…,|n⟩,…},{\displaystyle \left\{\left|0\right\rangle ,\left|1\right\rangle ,\left|2\right\rangle ,\ldots ,\left|n\right\rangle ,\ldots \right\},} such thatH^|n⟩=ℏω(n+12)|n⟩,{\displaystyle {\hat {H}}\left|n\right\rangle =\hbar \omega \left(n+{\frac {1}{2}}\right)\left|n\right\rangle ,}which matches the energy spectrum given in the preceding section.
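Continuing the matrix sketch above, the raising and lowering actions and the termination of the lowering chain can be verified directly on Fock basis vectors (again an illustration, not part of the article):

```python
import numpy as np

D = 6
a = np.diag(np.sqrt(np.arange(1, D)), 1)
ad = a.conj().T
ket = lambda k: np.eye(D)[:, k]                 # Fock basis vector |k>

print(np.allclose(a @ ket(3), np.sqrt(3) * ket(2)))    # a|3> = sqrt(3)|2>
print(np.allclose(ad @ ket(3), np.sqrt(4) * ket(4)))   # a†|3> = 2|4>
print(np.allclose(a @ ket(0), 0))                      # the chain stops: a|0> = 0
```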
Arbitrary eigenstates can be expressed in terms of |0⟩,[11]|n⟩=(a†)nn!|0⟩.{\displaystyle |n\rangle ={\frac {(a^{\dagger })^{n}}{\sqrt {n!}}}|0\rangle .} This follows from the normalization computation:⟨n|aa†|n⟩=⟨n|([a,a†]+a†a)|n⟩=⟨n|(N+1)|n⟩=n+1⇒a†|n⟩=n+1|n+1⟩⇒|n⟩=1na†|n−1⟩=1n(n−1)(a†)2|n−2⟩=⋯=1n!(a†)n|0⟩.{\displaystyle {\begin{aligned}\langle n|aa^{\dagger }|n\rangle &=\langle n|\left([a,a^{\dagger }]+a^{\dagger }a\right)\left|n\right\rangle =\langle n|\left(N+1\right)|n\rangle =n+1\\[1ex]\Rightarrow a^{\dagger }|n\rangle &={\sqrt {n+1}}|n+1\rangle \\[1ex]\Rightarrow |n\rangle &={\frac {1}{\sqrt {n}}}a^{\dagger }\left|n-1\right\rangle ={\frac {1}{\sqrt {n(n-1)}}}\left(a^{\dagger }\right)^{2}\left|n-2\right\rangle =\cdots ={\frac {1}{\sqrt {n!}}}\left(a^{\dagger }\right)^{n}\left|0\right\rangle .\end{aligned}}}

The preceding analysis is algebraic, using only the commutation relations between the raising and lowering operators. Once the algebraic analysis is complete, one should turn to analytical questions. First, one should find the ground state, that is, the solution of the equation aψ0=0{\displaystyle a\psi _{0}=0}. In the position representation, this is the first-order differential equation(x+ℏmωddx)ψ0=0,{\displaystyle \left(x+{\frac {\hbar }{m\omega }}{\frac {d}{dx}}\right)\psi _{0}=0,}whose solution is easily found to be the Gaussian[nb 1]ψ0(x)=Ce−mωx22ℏ.{\displaystyle \psi _{0}(x)=Ce^{-{\frac {m\omega x^{2}}{2\hbar }}}.}Conceptually, it is important that there is only one solution of this equation; if there were, say, two linearly independent ground states, we would get two independent chains of eigenvectors for the harmonic oscillator. Once the ground state is computed, one can show inductively that the excited states are Hermite polynomials times the Gaussian ground state, using the explicit form of the raising operator in the position representation. One can also prove that, as expected from the uniqueness of the ground state, the Hermite-function energy eigenstates ψn{\displaystyle \psi _{n}} constructed by the ladder method form a complete orthonormal set of functions.[12]

Explicitly connecting with the previous section, the ground state |0⟩ in the position representation is determined by a|0⟩=0{\displaystyle a|0\rangle =0},⟨x∣a∣0⟩=0⇒(x+ℏmωddx)⟨x∣0⟩=0⇒{\displaystyle \left\langle x\mid a\mid 0\right\rangle =0\qquad \Rightarrow \left(x+{\frac {\hbar }{m\omega }}{\frac {d}{dx}}\right)\left\langle x\mid 0\right\rangle =0\qquad \Rightarrow }⟨x∣0⟩=(mωπℏ)14exp⁡(−mω2ℏx2)=ψ0,{\displaystyle \left\langle x\mid 0\right\rangle =\left({\frac {m\omega }{\pi \hbar }}\right)^{\frac {1}{4}}\exp \left(-{\frac {m\omega }{2\hbar }}x^{2}\right)=\psi _{0}~,}hence⟨x∣a†∣0⟩=ψ1(x),{\displaystyle \langle x\mid a^{\dagger }\mid 0\rangle =\psi _{1}(x)~,}so thatψ1(x,t)=⟨x∣e−3iωt/2a†∣0⟩{\displaystyle \psi _{1}(x,t)=\langle x\mid e^{-3i\omega t/2}a^{\dagger }\mid 0\rangle }, and so on.

The quantum harmonic oscillator possesses natural scales for length and energy, which can be used to simplify the problem. These can be found by nondimensionalization.
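The first-order ground-state equation can also be handed to a computer algebra system as a check; a short SymPy sketch (illustrative, with symbols named after the constants above):

```python
import sympy as sp

x, m, w, hbar, C = sp.symbols('x m omega hbar C', positive=True)
psi0 = C * sp.exp(-m * w * x**2 / (2 * hbar))

# residual of (x + (hbar/(m omega)) d/dx) psi_0, which should vanish identically
print(sp.simplify(x * psi0 + hbar / (m * w) * sp.diff(psi0, x)))   # -> 0

# dsolve recovers the same one-parameter Gaussian family, confirming uniqueness
f = sp.Function('f')
print(sp.dsolve(x * f(x) + hbar / (m * w) * f(x).diff(x), f(x)))
```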
The result is that, if energy is measured in units of ħω and distance in units of √ħ/(mω), then the Hamiltonian simplifies toH=−12d2dx2+12x2,{\displaystyle H=-{\frac {1}{2}}{d^{2} \over dx^{2}}+{\frac {1}{2}}x^{2},}while the energy eigenfunctions and eigenvalues simplify to Hermite functions and integers offset by a half,ψn(x)=⟨x∣n⟩=12nn!π−1/4exp⁡(−x2/2)Hn(x),{\displaystyle \psi _{n}(x)=\left\langle x\mid n\right\rangle ={1 \over {\sqrt {2^{n}n!}}}~\pi ^{-1/4}\exp(-x^{2}/2)~H_{n}(x),}En=n+12,{\displaystyle E_{n}=n+{\tfrac {1}{2}}~,}where Hn(x) are the Hermite polynomials. To avoid confusion, these "natural units" will mostly not be adopted in this article. However, they frequently come in handy when performing calculations, by reducing clutter. For example, the fundamental solution (propagator) of H−i∂t, the time-dependent Schrödinger operator for this oscillator, simply boils down to the Mehler kernel,[13][14]⟨x∣exp⁡(−itH)∣y⟩≡K(x,y;t)=12πisin⁡texp⁡(i2sin⁡t((x2+y2)cos⁡t−2xy)),{\displaystyle \langle x\mid \exp(-itH)\mid y\rangle \equiv K(x,y;t)={\frac {1}{\sqrt {2\pi i\sin t}}}\exp \left({\frac {i}{2\sin t}}\left((x^{2}+y^{2})\cos t-2xy\right)\right)~,}where K(x,y;0) = δ(x−y). The most general solution for a given initial configuration ψ(x,0) then is simplyψ(x,t)=∫dyK(x,y;t)ψ(y,0).{\displaystyle \psi (x,t)=\int dy~K(x,y;t)\psi (y,0)\,.}

The coherent states (also known as Glauber states) of the harmonic oscillator are special nondispersive wave packets, with minimum uncertainty σxσp = ℏ⁄2, whose observables' expectation values evolve like a classical system. They are eigenvectors of the annihilation operator, not the Hamiltonian, and form an overcomplete basis which consequently lacks orthogonality.[15] The coherent states are indexed by α∈C{\displaystyle \alpha \in \mathbb {C} } and expressed in the |n⟩ basis as |α⟩=∑n=0∞|n⟩⟨n|α⟩=e−12|α|2∑n=0∞αnn!|n⟩=e−12|α|2eαa†e−α∗a|0⟩.{\displaystyle |\alpha \rangle =\sum _{n=0}^{\infty }|n\rangle \langle n|\alpha \rangle =e^{-{\frac {1}{2}}|\alpha |^{2}}\sum _{n=0}^{\infty }{\frac {\alpha ^{n}}{\sqrt {n!}}}|n\rangle =e^{-{\frac {1}{2}}|\alpha |^{2}}e^{\alpha a^{\dagger }}e^{-{\alpha ^{*}a}}|0\rangle .} Since coherent states are not energy eigenstates, their time evolution is not a simple shift in wavefunction phase. The time-evolved states are, however, also coherent states but with phase-shifting parameter α instead:α(t)=α(0)e−iωt=α0e−iωt{\displaystyle \alpha (t)=\alpha (0)e^{-i\omega t}=\alpha _{0}e^{-i\omega t}}.|α(t)⟩=∑n=0∞e−i(n+12)ωt|n⟩⟨n|α⟩=e−iωt2e−12|α|2∑n=0∞(αe−iωt)nn!|n⟩=e−iωt2|αe−iωt⟩{\displaystyle |\alpha (t)\rangle =\sum _{n=0}^{\infty }e^{-i\left(n+{\frac {1}{2}}\right)\omega t}|n\rangle \langle n|\alpha \rangle =e^{\frac {-i\omega t}{2}}e^{-{\frac {1}{2}}|\alpha |^{2}}\sum _{n=0}^{\infty }{\frac {(\alpha e^{-i\omega t})^{n}}{\sqrt {n!}}}|n\rangle =e^{-{\frac {i\omega t}{2}}}|\alpha e^{-i\omega t}\rangle } Because a|0⟩=0{\displaystyle a\left|0\right\rangle =0} and via the Kermack-McCrae identity, the last form is equivalent to a unitary displacement operator acting on the ground state:|α⟩=eαa^†−α∗a^|0⟩=D(α)|0⟩{\displaystyle |\alpha \rangle =e^{\alpha {\hat {a}}^{\dagger }-\alpha ^{*}{\hat {a}}}|0\rangle =D(\alpha )|0\rangle }.
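In these natural units the Mehler kernel is straightforward to evaluate numerically; the sketch below (an illustration under the stated units, not part of the article) propagates a displaced Gaussian and checks that its probability density glides like a classical particle without spreading:

```python
import numpy as np

x0, t = 2.0, np.pi / 4                        # initial displacement and evolution time
y = np.linspace(-12, 12, 6001)
psi0 = np.pi ** -0.25 * np.exp(-(y - x0) ** 2 / 2)   # coherent state centered at x0

x = np.linspace(-6, 6, 13)
phase = (x[:, None] ** 2 + y[None, :] ** 2) * np.cos(t) - 2 * x[:, None] * y[None, :]
K = np.exp(1j * phase / (2 * np.sin(t))) / np.sqrt(2j * np.pi * np.sin(t))
psit = np.trapz(K * psi0[None, :], y, axis=1)        # psi(x,t) = int K psi(y,0) dy

expected = np.exp(-(x - x0 * np.cos(t)) ** 2) / np.sqrt(np.pi)
print(np.abs(np.abs(psit) ** 2 - expected).max())    # ~0: the packet just oscillates
```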
Calculating the expectation values: ⟨x^⟩α(t)=2ℏmω|α0|cos⁡(ωt−ϕ){\displaystyle \langle {\hat {x}}\rangle _{\alpha (t)}={\sqrt {\frac {2\hbar }{m\omega }}}|\alpha _{0}|\cos {(\omega t-\phi )}} ⟨p^⟩α(t)=−2mℏω|α0|sin⁡(ωt−ϕ){\displaystyle \langle {\hat {p}}\rangle _{\alpha (t)}=-{\sqrt {2m\hbar \omega }}|\alpha _{0}|\sin {(\omega t-\phi )}} where ϕ{\displaystyle \phi } is the phase contributed by complex α. These equations confirm the oscillating behavior of the particle. Calculating the uncertainties in the same way gives: σx(t)=ℏ2mω{\displaystyle \sigma _{x}(t)={\sqrt {\frac {\hbar }{2m\omega }}}} σp(t)=mℏω2{\displaystyle \sigma _{p}(t)={\sqrt {\frac {m\hbar \omega }{2}}}} which gives σx(t)σp(t)=ℏ2{\textstyle \sigma _{x}(t)\sigma _{p}(t)={\frac {\hbar }{2}}}. Since the only wavefunction with the lowest position-momentum uncertainty, ℏ2{\textstyle {\frac {\hbar }{2}}}, is a Gaussian wavefunction, and since the coherent-state wavefunction attains this minimum, the coherent state is described by the general Gaussian wavefunction in quantum mechanics:ψα(x′)=(mωπℏ)14eiℏ⟨p^⟩α(x′−⟨x^⟩α2)−mω2ℏ(x′−⟨x^⟩α)2.{\displaystyle \psi _{\alpha }(x')=\left({\frac {m\omega }{\pi \hbar }}\right)^{\frac {1}{4}}e^{{\frac {i}{\hbar }}\langle {\hat {p}}\rangle _{\alpha }(x'-{\frac {\langle {\hat {x}}\rangle _{\alpha }}{2}})-{\frac {m\omega }{2\hbar }}(x'-\langle {\hat {x}}\rangle _{\alpha })^{2}}.}Substituting the time-dependent expectation values gives the required time-varying wavefunction.

The probability of each energy eigenstate can be calculated to find the energy distribution of the wavefunction: P(En)=|⟨n|α⟩|2=e−|α|2|α|2nn!{\displaystyle P(E_{n})=|\langle n|\alpha \rangle |^{2}={\frac {e^{-|\alpha |^{2}}|\alpha |^{2n}}{n!}}} which corresponds to a Poisson distribution.

When n is large, the eigenstates are localized into the classically allowed region, that is, the region in which a classical particle with energy En can move. The eigenstates are peaked near the turning points: the points at the ends of the classically allowed region where the classical particle changes direction. This phenomenon can be verified through asymptotics of the Hermite polynomials, and also through the WKB approximation. The frequency of oscillation at x is proportional to the momentum p(x) of a classical particle of energy En and position x. Furthermore, the square of the amplitude (determining the probability density) is inversely proportional to p(x), reflecting the length of time the classical particle spends near x. The system behavior in a small neighborhood of the turning point does not have a simple classical explanation, but can be modeled using an Airy function. Using properties of the Airy function, one may estimate the probability of finding the particle outside the classically allowed region, to be approximately2n1/332/3Γ2(13)=1n1/3⋅7.46408092658...{\displaystyle {\frac {2}{n^{1/3}3^{2/3}\Gamma ^{2}({\tfrac {1}{3}})}}={\frac {1}{n^{1/3}\cdot 7.46408092658...}}}This is also given, asymptotically, by the integral12π∫0∞e(2n+1)(x−12sinh⁡(2x))dx.{\displaystyle {\frac {1}{2\pi }}\int _{0}^{\infty }e^{(2n+1)\left(x-{\tfrac {1}{2}}\sinh(2x)\right)}dx~.}

In the phase space formulation of quantum mechanics, eigenstates of the quantum harmonic oscillator in several different representations of the quasiprobability distribution can be written in closed form. The most widely used of these is for the Wigner quasiprobability distribution.
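The Poissonian energy distribution is easy to tabulate; a minimal sketch (illustrative, with an arbitrarily chosen α):

```python
import numpy as np
from math import factorial

alpha = 1.5
n = np.arange(16)
P = np.exp(-abs(alpha) ** 2) * abs(alpha) ** (2 * n) / np.array([factorial(k) for k in n])

print(P.sum())        # ~1: the distribution is normalized
print((n * P).sum())  # mean excitation ~ |alpha|^2 = 2.25, as for a Poisson distribution
```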
The Wigner quasiprobability distribution for the energy eigenstate |n⟩ is, in the natural units described above,Fn(x,p)=(−1)nπℏLn(2(x2+p2))e−(x2+p2),{\displaystyle F_{n}(x,p)={\frac {(-1)^{n}}{\pi \hbar }}L_{n}\left(2(x^{2}+p^{2})\right)e^{-(x^{2}+p^{2})}\,,}where Ln are the Laguerre polynomials. This example illustrates how the Hermite and Laguerre polynomials are linked through the Wigner map.

Meanwhile, the Husimi Q function of the harmonic oscillator eigenstates has an even simpler form. If we work in the natural units described above, we haveQn(x,p)=(x2+p2)nn!e−(x2+p2)π{\displaystyle Q_{n}(x,p)={\frac {(x^{2}+p^{2})^{n}}{n!}}{\frac {e^{-(x^{2}+p^{2})}}{\pi }}}This claim can be verified using the Segal–Bargmann transform. Specifically, since the raising operator in the Segal–Bargmann representation is simply multiplication by z=x+ip{\displaystyle z=x+ip} and the ground state is the constant function 1, the normalized harmonic oscillator states in this representation are simply zn/n!{\displaystyle z^{n}/{\sqrt {n!}}}. At this point, we can appeal to the formula for the Husimi Q function in terms of the Segal–Bargmann transform.

The one-dimensional harmonic oscillator is readily generalizable to N dimensions, where N = 1, 2, 3, .... In one dimension, the position of the particle was specified by a single coordinate, x. In N dimensions, this is replaced by N position coordinates, which we label x1, ..., xN. Corresponding to each position coordinate is a momentum; we label these p1, ..., pN. The canonical commutation relations between these operators are[xi,pj]=iℏδi,j[xi,xj]=0[pi,pj]=0{\displaystyle {\begin{aligned}{[}x_{i},p_{j}{]}&=i\hbar \delta _{i,j}\\{[}x_{i},x_{j}{]}&=0\\{[}p_{i},p_{j}{]}&=0\end{aligned}}} The Hamiltonian for this system isH=∑i=1N(pi22m+12mω2xi2).{\displaystyle H=\sum _{i=1}^{N}\left({p_{i}^{2} \over 2m}+{1 \over 2}m\omega ^{2}x_{i}^{2}\right).} As the form of this Hamiltonian makes clear, the N-dimensional harmonic oscillator is exactly analogous to N independent one-dimensional harmonic oscillators with the same mass and spring constant. In this case, the quantities x1, ..., xN would refer to the positions of each of the N particles. This is a convenient property of the r2 potential, which allows the potential energy to be separated into terms depending on one coordinate each. This observation makes the solution straightforward. For a particular set of quantum numbers {n}≡{n1,n2,…,nN}{\displaystyle \{n\}\equiv \{n_{1},n_{2},\dots ,n_{N}\}} the energy eigenfunctions for the N-dimensional oscillator are expressed in terms of the 1-dimensional eigenfunctions as: ⟨x|ψ{n}⟩=∏i=1N⟨xi∣ψni⟩{\displaystyle \langle \mathbf {x} |\psi _{\{n\}}\rangle =\prod _{i=1}^{N}\langle x_{i}\mid \psi _{n_{i}}\rangle } In the ladder operator method, we define N sets of ladder operators, ai=mω2ℏ(xi+imωpi),ai†=mω2ℏ(xi−imωpi).{\displaystyle {\begin{aligned}a_{i}&={\sqrt {m\omega \over 2\hbar }}\left(x_{i}+{i \over m\omega }p_{i}\right),\\a_{i}^{\dagger }&={\sqrt {m\omega \over 2\hbar }}\left(x_{i}-{i \over m\omega }p_{i}\right).\end{aligned}}} By an analogous procedure to the one-dimensional case, we can then show that each of the ai and a†i operators lower and raise the energy by ℏω respectively.
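As a check on the Wigner formula, one can integrate F_n over phase space numerically (a sketch in the natural units above, using SciPy's Laguerre polynomials):

```python
import numpy as np
from scipy.special import eval_laguerre

x = p = np.linspace(-7, 7, 701)
X, P = np.meshgrid(x, p)
r2 = X ** 2 + P ** 2

for n in (0, 1, 4):
    F = (-1) ** n / np.pi * eval_laguerre(n, 2 * r2) * np.exp(-r2)   # hbar = 1
    total = np.trapz(np.trapz(F, x, axis=1), p)
    print(n, round(total, 4))   # ~1.0, even though F takes negative values for n > 0
```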
The Hamiltonian isH=ℏω∑i=1N(ai†ai+12).{\displaystyle H=\hbar \omega \,\sum _{i=1}^{N}\left(a_{i}^{\dagger }\,a_{i}+{\frac {1}{2}}\right).}This Hamiltonian is invariant under the dynamic symmetry group U(N) (the unitary group in N dimensions), defined byUai†U†=∑j=1Naj†Ujifor allU∈U(N),{\displaystyle U\,a_{i}^{\dagger }\,U^{\dagger }=\sum _{j=1}^{N}a_{j}^{\dagger }\,U_{ji}\quad {\text{for all}}\quad U\in U(N),}where Uji{\displaystyle U_{ji}} is an element in the defining matrix representation of U(N). The energy levels of the system areE=ℏω[(n1+⋯+nN)+N2].{\displaystyle E=\hbar \omega \left[(n_{1}+\cdots +n_{N})+{N \over 2}\right].}ni=0,1,2,…(the energy level in dimensioni).{\displaystyle n_{i}=0,1,2,\dots \quad ({\text{the energy level in dimension }}i).} As in the one-dimensional case, the energy is quantized. The ground state energy is N times the one-dimensional ground energy, as we would expect using the analogy to N independent one-dimensional oscillators. There is one further difference: in the one-dimensional case, each energy level corresponds to a unique quantum state. In N dimensions, except for the ground state, the energy levels are degenerate, meaning there are several states with the same energy.

The degeneracy can be calculated relatively easily. As an example, consider the 3-dimensional case: Define n = n1 + n2 + n3. All states with the same n will have the same energy. For a given n, we choose a particular n1. Then n2 + n3 = n − n1. There are n − n1 + 1 possible pairs {n2, n3}. n2 can take on the values 0 to n − n1, and for each n2 the value of n3 is fixed. The degree of degeneracy therefore is:gn=∑n1=0nn−n1+1=(n+1)(n+2)2{\displaystyle g_{n}=\sum _{n_{1}=0}^{n}n-n_{1}+1={\frac {(n+1)(n+2)}{2}}}For general N and n [gn being the dimension of the symmetric irreducible n-th power representation of the unitary group U(N)]:gn=(N+n−1n){\displaystyle g_{n}={\binom {N+n-1}{n}}}The special case N = 3, given above, follows directly from this general equation. This is, however, only true for distinguishable particles, or one particle in N dimensions (as dimensions are distinguishable). For the case of N bosons in a one-dimensional harmonic trap, the degeneracy scales as the number of ways to partition an integer n using integers less than or equal to N: gn=p(N,n){\displaystyle g_{n}=p(N,n)} This arises due to the constraint of putting N quanta into a state ket where ∑k=0∞knk=n{\textstyle \sum _{k=0}^{\infty }kn_{k}=n} and ∑k=0∞nk=N{\textstyle \sum _{k=0}^{\infty }n_{k}=N}, which are the same constraints as in integer partition.

The Schrödinger equation for a particle in a spherically-symmetric three-dimensional harmonic oscillator can be solved explicitly by separation of variables. This procedure is analogous to the separation performed in the hydrogen-like atom problem, but with a different spherically symmetric potentialV(r)=12μω2r2,{\displaystyle V(r)={1 \over 2}\mu \omega ^{2}r^{2},}where μ is the mass of the particle. Because m will be used below for the magnetic quantum number, mass is indicated by μ, instead of m, as earlier in this article.
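The counting argument can be cross-checked by brute force; a short sketch comparing the enumeration with the binomial formula:

```python
from math import comb
from itertools import product

def degeneracy_brute(N, n):
    """Count tuples (n_1, ..., n_N) of non-negative integers summing to n."""
    return sum(1 for t in product(range(n + 1), repeat=N) if sum(t) == n)

for N, n in [(2, 3), (3, 4), (4, 5)]:
    print(N, n, degeneracy_brute(N, n), comb(N + n - 1, n))   # the two counts agree
```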
The solution to the equation is:[16]ψklm(r,θ,ϕ)=Nklrle−νr2Lk(l+12)(2νr2)Ylm(θ,ϕ){\displaystyle \psi _{klm}(r,\theta ,\phi )=N_{kl}r^{l}e^{-\nu r^{2}}L_{k}^{\left(l+{1 \over 2}\right)}(2\nu r^{2})Y_{lm}(\theta ,\phi )}where Nkl{\displaystyle N_{kl}} is a normalization constant; Lk(l+1/2){\displaystyle L_{k}^{\left(l+{1 \over 2}\right)}} are generalized Laguerre polynomials, the order k of the polynomial being a non-negative integer; Ylm(θ,ϕ){\displaystyle Y_{lm}(\theta ,\phi )} is a spherical harmonic; and ν≡μω/(2ℏ){\displaystyle \nu \equiv \mu \omega /(2\hbar )}. The energy eigenvalue isE=ℏω(2k+l+32).{\displaystyle E=\hbar \omega \left(2k+l+{\frac {3}{2}}\right).}The energy is usually described by the single quantum numbern≡2k+l.{\displaystyle n\equiv 2k+l\,.} Because k is a non-negative integer, for every even n we have ℓ = 0, 2, ..., n − 2, n and for every odd n we have ℓ = 1, 3, ..., n − 2, n. The magnetic quantum number m is an integer satisfying −ℓ ≤ m ≤ ℓ, so for every n and ℓ there are 2ℓ + 1 different quantum states, labeled by m. Thus, the degeneracy at level n is∑l=…,n−2,n(2l+1)=(n+1)(n+2)2,{\displaystyle \sum _{l=\ldots ,n-2,n}(2l+1)={(n+1)(n+2) \over 2}\,,}where the sum starts from 0 or 1, according to whether n is even or odd. This result is in accordance with the dimension formula above, and amounts to the dimensionality of a symmetric representation of SU(3),[17] the relevant degeneracy group.

The formalism of the harmonic oscillator can be extended to a one-dimensional lattice of many particles. Consider a one-dimensional quantum mechanical harmonic chain of N identical atoms. This is the simplest quantum mechanical model of a lattice, and we will see how phonons arise from it. The formalism that we will develop for this model is readily generalizable to two and three dimensions. As in the previous section, we denote the positions of the masses by x1, x2, ..., as measured from their equilibrium positions (i.e. xi = 0 if particle i is at its equilibrium position). In two or more dimensions, the xi are vector quantities. The Hamiltonian for this system is H=∑i=1Npi22m+12mω2∑{ij}(nn)(xi−xj)2,{\displaystyle \mathbf {H} =\sum _{i=1}^{N}{p_{i}^{2} \over 2m}+{1 \over 2}m\omega ^{2}\sum _{\{ij\}(nn)}(x_{i}-x_{j})^{2}\,,}where m is the (assumed uniform) mass of each atom, and xi and pi are the position and momentum operators for the ith atom, and the sum is made over the nearest neighbors (nn). However, it is customary to rewrite the Hamiltonian in terms of the normal modes of the wavevector rather than in terms of the particle coordinates so that one can work in the more convenient Fourier space.

We introduce, then, a set of N "normal coordinates" Qk, defined as the discrete Fourier transforms of the xs, and N "conjugate momenta" Π defined as the Fourier transforms of the ps,Qk=1N∑leikalxl{\displaystyle Q_{k}={1 \over {\sqrt {N}}}\sum _{l}e^{ikal}x_{l}}Πk=1N∑le−ikalpl.{\displaystyle \Pi _{k}={1 \over {\sqrt {N}}}\sum _{l}e^{-ikal}p_{l}\,.} The quantity kn will turn out to be the wave number of the phonon, i.e. 2π divided by the wavelength. It takes on quantized values, because the number of atoms is finite.
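The (k, ℓ, m) bookkeeping for the isotropic 3D oscillator can be enumerated directly (an illustrative sketch):

```python
def kl_pairs(n):
    """(k, l) pairs with 2k + l = n; each contributes 2l + 1 values of m."""
    return [(k, n - 2 * k) for k in range(n // 2 + 1)]

for n in range(5):
    g = sum(2 * l + 1 for _, l in kl_pairs(n))
    print(n, kl_pairs(n), g, (n + 1) * (n + 2) // 2)   # degeneracy matches the formula
```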
This preserves the desired commutation relations in either real space or wave vector space:[xl,pm]=iℏδl,m[Qk,Πk′]=1N∑l,meikale−ik′am[xl,pm]=iℏN∑meiam(k−k′)=iℏδk,k′[Qk,Qk′]=[Πk,Πk′]=0.{\displaystyle {\begin{aligned}\left[x_{l},p_{m}\right]&=i\hbar \delta _{l,m}\\\left[Q_{k},\Pi _{k'}\right]&={1 \over N}\sum _{l,m}e^{ikal}e^{-ik'am}[x_{l},p_{m}]\\&={i\hbar \over N}\sum _{m}e^{iam(k-k')}=i\hbar \delta _{k,k'}\\\left[Q_{k},Q_{k'}\right]&=\left[\Pi _{k},\Pi _{k'}\right]=0~.\end{aligned}}} From the general result∑lxlxl+m=1N∑kk′QkQk′∑leial(k+k′)eiamk′=∑kQkQ−keiamk∑lpl2=∑kΠkΠ−k,{\displaystyle {\begin{aligned}\sum _{l}x_{l}x_{l+m}&={1 \over N}\sum _{kk'}Q_{k}Q_{k'}\sum _{l}e^{ial\left(k+k'\right)}e^{iamk'}=\sum _{k}Q_{k}Q_{-k}e^{iamk}\\\sum _{l}{p_{l}}^{2}&=\sum _{k}\Pi _{k}\Pi _{-k}~,\end{aligned}}}it is easy to show, through elementary trigonometry, that the potential energy term is12mω2∑j(xj−xj+1)2=12mω2∑kQkQ−k(2−eika−e−ika)=12m∑kωk2QkQ−k,{\displaystyle {1 \over 2}m\omega ^{2}\sum _{j}(x_{j}-x_{j+1})^{2}={1 \over 2}m\omega ^{2}\sum _{k}Q_{k}Q_{-k}(2-e^{ika}-e^{-ika})={1 \over 2}m\sum _{k}{\omega _{k}}^{2}Q_{k}Q_{-k}~,}whereωk=2ω2(1−cos⁡(ka)).{\displaystyle \omega _{k}={\sqrt {2\omega ^{2}(1-\cos(ka))}}~.} The Hamiltonian may be written in wave vector space asH=12m∑k(ΠkΠ−k+m2ωk2QkQ−k).{\displaystyle \mathbf {H} ={1 \over {2m}}\sum _{k}\left({\Pi _{k}\Pi _{-k}}+m^{2}\omega _{k}^{2}Q_{k}Q_{-k}\right)~.} Note that the couplings between the position variables have been transformed away; if the Qs and Πs were hermitian (which they are not), the transformed Hamiltonian would describe N uncoupled harmonic oscillators.

The form of the quantization depends on the choice of boundary conditions; for simplicity, we impose periodic boundary conditions, defining the (N + 1)-th atom as equivalent to the first atom. Physically, this corresponds to joining the chain at its ends. The resulting quantization is k=kn=2nπNaforn=0,±1,±2,…,±N2.{\displaystyle k=k_{n}={2n\pi \over Na}\quad {\hbox{for}}\ n=0,\pm 1,\pm 2,\ldots ,\pm {N \over 2}.} The upper bound to n comes from the minimum wavelength, which is twice the lattice spacing a, as discussed above. The harmonic oscillator eigenvalues or energy levels for the mode ωk areEn=(12+n)ℏωkforn=0,1,2,3,…{\displaystyle E_{n}=\left({1 \over 2}+n\right)\hbar \omega _{k}\quad {\hbox{for}}\quad n=0,1,2,3,\ldots } If we ignore the zero-point energy then the levels are evenly spaced at0,ℏω,2ℏω,3ℏω,⋯{\displaystyle 0,\ \hbar \omega ,\ 2\hbar \omega ,\ 3\hbar \omega ,\ \cdots } So an exact amount of energy, ħω, must be supplied to the harmonic oscillator lattice to push it to the next energy level. In analogy to the photon case when the electromagnetic field is quantised, the quantum of vibrational energy is called a phonon. All quantum systems show wave-like and particle-like properties. The particle-like properties of the phonon are best understood using the methods of second quantization and operator techniques described elsewhere.[18]

In the continuum limit, a → 0, N → ∞, while Na is held fixed. The canonical coordinates Qk devolve to the decoupled momentum modes of a scalar field, ϕk{\displaystyle \phi _{k}}, whilst the location index i (not the displacement dynamical variable) becomes the spatial argument x of the scalar field, ϕ(x,t){\displaystyle \phi (x,t)}.
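The dispersion relation and the quantized wave numbers are easy to tabulate; a small sketch for a chain of N atoms (illustrative parameter choices):

```python
import numpy as np

N, a, omega = 8, 1.0, 1.0                       # atoms, lattice spacing, natural frequency
ns = np.arange(-N // 2, N // 2 + 1)
k = 2 * np.pi * ns / (N * a)                    # allowed wave numbers k_n
omega_k = np.sqrt(2 * omega ** 2 * (1 - np.cos(k * a)))

for n_, k_, w_ in zip(ns, k, omega_k):
    print(f"n={n_:+d}  k={k_:+.3f}  omega_k={w_:.3f}")
# omega_k ~ omega*a*|k| for small k (acoustic branch); maximum 2*omega at the zone edge
```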
https://en.wikipedia.org/wiki/Quantum_harmonic_oscillator
In quantum computing and specifically the quantum circuit model of computation, a quantum logic gate (or simply quantum gate) is a basic quantum circuit operating on a small number of qubits. Quantum logic gates are the building blocks of quantum circuits, just as classical logic gates are for conventional digital circuits. Unlike many classical logic gates, quantum logic gates are reversible. It is possible to perform classical computing using only reversible gates. For example, the reversible Toffoli gate can implement all Boolean functions, often at the cost of having to use ancilla bits. The Toffoli gate has a direct quantum equivalent, showing that quantum circuits can perform all operations performed by classical circuits.

Quantum gates are unitary operators, and are described as unitary matrices relative to some orthonormal basis. Usually the computational basis is used, which, unless stated otherwise, just means that for a d-level quantum system (such as a qubit, a quantum register, or qutrits and qudits)[1]: 22–23 the orthonormal basis vectors are labeled |0⟩,|1⟩,…,|d−1⟩{\displaystyle |0\rangle ,|1\rangle ,\dots ,|d-1\rangle }, or are written in binary notation. The current notation for quantum gates was developed by many of the founders of quantum information science including Adriano Barenco, Charles Bennett, Richard Cleve, David P. DiVincenzo, Norman Margolus, Peter Shor, Tycho Sleator, John A. Smolin, and Harald Weinfurter,[2] building on notation introduced by Richard Feynman in 1986.[3]

Quantum logic gates are represented by unitary matrices. A gate that acts on n{\displaystyle n} qubits (a register) is represented by a 2n×2n{\displaystyle 2^{n}\times 2^{n}} unitary matrix, and the set of all such gates with the group operation of matrix multiplication[a] is the unitary group U(2n).[2] The quantum states that the gates act upon are unit vectors in 2n{\displaystyle 2^{n}} complex dimensions, with the complex Euclidean norm (the 2-norm).[4]: 66[5]: 56, 65 The basis vectors (sometimes called eigenstates) are the possible outcomes if the state of the qubits is measured, and a quantum state is a linear combination of these outcomes. The most common quantum gates operate on vector spaces of one or two qubits, just like the common classical logic gates operate on one or two bits.

Even though the quantum logic gates belong to continuous symmetry groups, real hardware is inexact and thus limited in precision. The application of gates typically introduces errors, and the quantum states' fidelities decrease over time. If error correction is used, the usable gates are further restricted to a finite set.[4]: ch. 10[1]: ch. 14 Later in this article, this is ignored as the focus is on the properties of ideal quantum gates.

Quantum states are typically represented by "kets", from a notation known as bra–ket. The vector representation of a single qubit is|ψ⟩=v0|0⟩+v1|1⟩=[v0v1].{\displaystyle |\psi \rangle =v_{0}|0\rangle +v_{1}|1\rangle ={\begin{bmatrix}v_{0}\\v_{1}\end{bmatrix}}.} Here, v0{\displaystyle v_{0}} and v1{\displaystyle v_{1}} are the complex probability amplitudes of the qubit. These values determine the probability of measuring a 0 or a 1, when measuring the state of the qubit. See measurement below for details. The value zero is represented by the ket|0⟩=[10]{\displaystyle |0\rangle ={\begin{bmatrix}1\\0\end{bmatrix}}}, and the value one is represented by the ket|1⟩=[01]{\displaystyle |1\rangle ={\begin{bmatrix}0\\1\end{bmatrix}}}. The tensor product (or Kronecker product) is used to combine quantum states. The combined state for a qubit register is the tensor product of the constituent qubits. The tensor product is denoted by the symbol ⊗{\displaystyle \otimes }.
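In code, kets are just complex column vectors and registers are Kronecker products; a minimal NumPy sketch of these conventions (illustrative, not part of the article):

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)     # |0>
ket1 = np.array([0, 1], dtype=complex)     # |1>

psi = (ket0 + 1j * ket1) / np.sqrt(2)      # a single-qubit state
print(np.isclose(np.linalg.norm(psi), 1))  # unit 2-norm, as required

print(np.kron(ket0, ket1).real)            # |01> = (0, 1, 0, 0): a two-qubit register
```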
The vector representation of two qubits is:[6]|ψ⟩=v00|00⟩+v01|01⟩+v10|10⟩+v11|11⟩=[v00v01v10v11].{\displaystyle |\psi \rangle =v_{00}|00\rangle +v_{01}|01\rangle +v_{10}|10\rangle +v_{11}|11\rangle ={\begin{bmatrix}v_{00}\\v_{01}\\v_{10}\\v_{11}\end{bmatrix}}.} The action of the gate on a specific quantum state is found by multiplying the vector |ψ1⟩{\displaystyle |\psi _{1}\rangle }, which represents the state, by the matrix U{\displaystyle U} representing the gate. The result is a new quantum state |ψ2⟩{\displaystyle |\psi _{2}\rangle }:|ψ2⟩=U|ψ1⟩.{\displaystyle |\psi _{2}\rangle =U|\psi _{1}\rangle .}

The Schrödinger equation describes how quantum systems that are not observed evolve over time, and isiℏddt|Ψ⟩=H^|Ψ⟩.{\displaystyle i\hbar {\frac {d}{dt}}|\Psi \rangle ={\hat {H}}|\Psi \rangle .}When the system is in a stable environment, so that it has a constant Hamiltonian, the solution to this equation isU(t)=e−iH^t/ℏ.{\displaystyle U(t)=e^{-i{\hat {H}}t/\hbar }.}[1]: 24–25 If the time t{\displaystyle t} is always the same it may be omitted for simplicity, and the way quantum states evolve can be described asU|ψ1⟩=|ψ2⟩,{\displaystyle U|\psi _{1}\rangle =|\psi _{2}\rangle ,}just as in the above section. That is, a quantum gate is how a quantum system that is not observed evolves over some specific time, or equivalently, a gate is the unitary time evolution operator U{\displaystyle U} acting on a quantum state for a specific duration.

There exists an uncountably infinite number of gates. Some of them have been named by various authors,[1][2][4][5][7][8][9] and below follow some of those most often used in the literature.

The identity gate is the identity matrix, usually written as I, and is defined for a single qubit asI=[1001],{\displaystyle I={\begin{bmatrix}1&0\\0&1\end{bmatrix}},}where I is basis independent and does not modify the quantum state. The identity gate is most useful when describing mathematically the result of various gate operations or when discussing multi-qubit circuits.

The Pauli gates (X,Y,Z){\displaystyle (X,Y,Z)} are the three Pauli matrices (σx,σy,σz){\displaystyle (\sigma _{x},\sigma _{y},\sigma _{z})} and act on a single qubit. The Pauli X, Y and Z equate, respectively, to a rotation around the x, y and z axes of the Bloch sphere by π{\displaystyle \pi } radians.[b] The Pauli-X gate is the quantum equivalent of the NOT gate for classical computers with respect to the standard basis |0⟩{\displaystyle |0\rangle }, |1⟩{\displaystyle |1\rangle }, which distinguishes the z axis on the Bloch sphere. It is sometimes called a bit-flip as it maps |0⟩{\displaystyle |0\rangle } to |1⟩{\displaystyle |1\rangle } and |1⟩{\displaystyle |1\rangle } to |0⟩{\displaystyle |0\rangle }. Similarly, the Pauli-Y maps |0⟩{\displaystyle |0\rangle } to i|1⟩{\displaystyle i|1\rangle } and |1⟩{\displaystyle |1\rangle } to −i|0⟩{\displaystyle -i|0\rangle }. Pauli Z leaves the basis state |0⟩{\displaystyle |0\rangle } unchanged and maps |1⟩{\displaystyle |1\rangle } to −|1⟩{\displaystyle -|1\rangle }. Due to this nature, Pauli Z is sometimes called phase-flip. These matrices are usually represented asX=σx=[0110],Y=σy=[0−ii0],Z=σz=[100−1].{\displaystyle X=\sigma _{x}={\begin{bmatrix}0&1\\1&0\end{bmatrix}},\quad Y=\sigma _{y}={\begin{bmatrix}0&-i\\i&0\end{bmatrix}},\quad Z=\sigma _{z}={\begin{bmatrix}1&0\\0&-1\end{bmatrix}}.} The Pauli matrices are involutory, meaning that the square of a Pauli matrix is the identity matrix. The Pauli matrices also anti-commute, for exampleZX=iY=−XZ.{\displaystyle ZX=iY=-XZ.} The matrix exponential of a Pauli matrix σj{\displaystyle \sigma _{j}} is a rotation operator, often written ase−iσjθ/2.{\displaystyle e^{-i\sigma _{j}\theta /2}.}

Controlled gates act on 2 or more qubits, where one or more qubits act as a control for some operation.[2] For example, the controlled NOT gate (or CNOT or CX) acts on 2 qubits, and performs the NOT operation on the second qubit only when the first qubit is |1⟩{\displaystyle |1\rangle }, and otherwise leaves it unchanged.
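The stated Pauli properties can be verified mechanically (an illustrative sketch using SciPy's matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

for P in (X, Y, Z):
    assert np.allclose(P @ P, I)           # involutory: P^2 = I
assert np.allclose(Z @ X, 1j * Y)          # ZX = iY
assert np.allclose(Z @ X, -(X @ Z))        # anti-commutation

Rx = expm(-1j * X * (np.pi / 3) / 2)       # rotation operator e^{-i sigma_x theta/2}
print(np.allclose(Rx @ Rx.conj().T, I))    # rotations are unitary
```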
With respect to the basis |00⟩{\displaystyle |00\rangle }, |01⟩{\displaystyle |01\rangle }, |10⟩{\displaystyle |10\rangle }, |11⟩{\displaystyle |11\rangle }, it is represented by the Hermitian unitary matrix:CNOT=[1000010000010010].{\displaystyle \mathrm {CNOT} ={\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&0&1\\0&0&1&0\end{bmatrix}}.} The CNOT (or controlled Pauli-X) gate can be described as the gate that maps the basis states |a,b⟩↦|a,a⊕b⟩{\displaystyle |a,b\rangle \mapsto |a,a\oplus b\rangle }, where ⊕{\displaystyle \oplus } is XOR. The CNOT can be expressed in the Pauli basis as:CNOT=eiπ4(I−Z1)(I−X2).{\displaystyle \mathrm {CNOT} =e^{i{\frac {\pi }{4}}(I-Z_{1})(I-X_{2})}.} Being a Hermitian unitary operator, CNOT has the property that eiθU=(cos⁡θ)I+(isin⁡θ)U{\displaystyle e^{i\theta U}=(\cos \theta )I+(i\sin \theta )U} and U=eiπ2(I−U)=e−iπ2(I−U){\displaystyle U=e^{i{\frac {\pi }{2}}(I-U)}=e^{-i{\frac {\pi }{2}}(I-U)}}, and is involutory.

More generally if U is a gate that operates on a single qubit with matrix representationU=[u00u01u10u11],{\displaystyle U={\begin{bmatrix}u_{00}&u_{01}\\u_{10}&u_{11}\end{bmatrix}},}then the controlled-U gate is a gate that operates on two qubits in such a way that the first qubit serves as a control. It maps the basis states as follows:|00⟩↦|00⟩,|01⟩↦|01⟩,|10⟩↦|1⟩⊗U|0⟩,|11⟩↦|1⟩⊗U|1⟩.{\displaystyle |00\rangle \mapsto |00\rangle ,\quad |01\rangle \mapsto |01\rangle ,\quad |10\rangle \mapsto |1\rangle \otimes U|0\rangle ,\quad |11\rangle \mapsto |1\rangle \otimes U|1\rangle .} The matrix representing the controlled U isCU=[1000010000u00u0100u10u11].{\displaystyle CU={\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&u_{00}&u_{01}\\0&0&u_{10}&u_{11}\end{bmatrix}}.} When U is one of the Pauli operators, X, Y, Z, the respective terms "controlled-X", "controlled-Y", or "controlled-Z" are sometimes used.[4]: 177–185 Sometimes this is shortened to just CX, CY and CZ. In general, any single qubit unitary gate can be expressed as U=eiH{\displaystyle U=e^{iH}}, where H is a Hermitian matrix, and then the controlled U isCU=ei12(I−Z1)H2.{\displaystyle CU=e^{i{\frac {1}{2}}(I-Z_{1})H_{2}}.} Control can be extended to gates with an arbitrary number of qubits[2] and functions in programming languages.[10] Functions can be conditioned on superposition states.[11][12]

Gates can also be controlled by classical logic. A quantum computer is controlled by a classical computer, and behaves like a coprocessor that receives instructions from the classical computer about what gates to execute on which qubits.[13]: 42–43[14] Classical control is simply the inclusion, or omission, of gates in the instruction sequence for the quantum computer.[4]: 26–28[1]: 87–88

The phase shift is a family of single-qubit gates that map the basis states |0⟩↦|0⟩{\displaystyle |0\rangle \mapsto |0\rangle } and |1⟩↦eiφ|1⟩{\displaystyle |1\rangle \mapsto e^{i\varphi }|1\rangle }. The probability of measuring a |0⟩{\displaystyle |0\rangle } or |1⟩{\displaystyle |1\rangle } is unchanged after applying this gate, however it modifies the phase of the quantum state. This is equivalent to tracing a horizontal circle (a line of constant latitude), or a rotation about the z-axis on the Bloch sphere by φ{\displaystyle \varphi } radians. The phase shift gate is represented by the matrix:P(φ)=[100eiφ],{\displaystyle P(\varphi )={\begin{bmatrix}1&0\\0&e^{i\varphi }\end{bmatrix}},}where φ{\displaystyle \varphi } is the phase shift with the period 2π. Some common examples are the T gate where φ=π4{\textstyle \varphi ={\frac {\pi }{4}}} (historically known as the π/8{\displaystyle \pi /8} gate), the phase gate (also known as the S gate, written as S, though S is sometimes used for SWAP gates) where φ=π2{\textstyle \varphi ={\frac {\pi }{2}}} and the Pauli-Z gate where φ=π.{\displaystyle \varphi =\pi .} The phase shift gates are related to each other as follows:Z=P(π),S=P(π2)=Z,T=P(π4)=S.{\displaystyle Z=P(\pi ),\quad S=P\left({\tfrac {\pi }{2}}\right)={\sqrt {Z}},\quad T=P\left({\tfrac {\pi }{4}}\right)={\sqrt {S}}.} Note that the phase gate P(φ){\displaystyle P(\varphi )} is not Hermitian (except for φ=nπ,n∈Z{\displaystyle \varphi =n\pi ,n\in \mathbb {Z} }). These gates are different from their Hermitian conjugates: P†(φ)=P(−φ){\displaystyle P^{\dagger }(\varphi )=P(-\varphi )}. The two adjoint (or conjugate transpose) gates S†{\displaystyle S^{\dagger }} and T†{\displaystyle T^{\dagger }} are sometimes included in instruction sets.[15][16]

The Hadamard or Walsh-Hadamard gate, named after Jacques Hadamard (French: [adamaʁ]) and Joseph L.
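A controlled-U is just a block-diagonal embedding of U, and the phase-gate relations can be checked numerically (an illustrative sketch):

```python
import numpy as np

def controlled(U):
    """Controlled-U in the basis |00>, |01>, |10>, |11> (first qubit controls)."""
    CU = np.eye(4, dtype=complex)
    CU[2:, 2:] = U
    return CU

X = np.array([[0, 1], [1, 0]], dtype=complex)
print(controlled(X).real.astype(int))      # reproduces the CNOT matrix above

P = lambda phi: np.diag([1, np.exp(1j * phi)])
T, S, Z = P(np.pi / 4), P(np.pi / 2), P(np.pi)
print(np.allclose(T @ T, S), np.allclose(S @ S, Z))   # T^2 = S and S^2 = Z
```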
Walsh, acts on a single qubit. It maps the basis states |0⟩↦|0⟩+|1⟩2{\textstyle |0\rangle \mapsto {\frac {|0\rangle +|1\rangle }{\sqrt {2}}}} and |1⟩↦|0⟩−|1⟩2{\textstyle |1\rangle \mapsto {\frac {|0\rangle -|1\rangle }{\sqrt {2}}}} (it creates an equal superposition state if given a computational basis state). The two states (|0⟩+|1⟩)/2{\displaystyle (|0\rangle +|1\rangle )/{\sqrt {2}}} and (|0⟩−|1⟩)/2{\displaystyle (|0\rangle -|1\rangle )/{\sqrt {2}}} are sometimes written |+⟩{\displaystyle |+\rangle } and |−⟩{\displaystyle |-\rangle } respectively. The Hadamard gate performs a rotation of π{\displaystyle \pi } about the axis (x^+z^)/2{\displaystyle ({\hat {x}}+{\hat {z}})/{\sqrt {2}}} at the Bloch sphere, and is therefore involutory. It is represented by the Hadamard matrix:H=12[111−1].{\displaystyle H={\frac {1}{\sqrt {2}}}{\begin{bmatrix}1&1\\1&-1\end{bmatrix}}.} If the Hermitian (so H†=H−1=H{\displaystyle H^{\dagger }=H^{-1}=H}) Hadamard gate is used to perform a change of basis, it flips x^{\displaystyle {\hat {x}}} and z^{\displaystyle {\hat {z}}}. For example, HZH=X{\displaystyle HZH=X} and HXH=Z=S.{\displaystyle H{\sqrt {X}}\;H={\sqrt {Z}}=S.}

The swap gate swaps two qubits. With respect to the basis |00⟩{\displaystyle |00\rangle }, |01⟩{\displaystyle |01\rangle }, |10⟩{\displaystyle |10\rangle }, |11⟩{\displaystyle |11\rangle }, it is represented by the matrixSWAP=[1000001001000001].{\displaystyle \mathrm {SWAP} ={\begin{bmatrix}1&0&0&0\\0&0&1&0\\0&1&0&0\\0&0&0&1\end{bmatrix}}.} The swap gate can be decomposed into summation form:SWAP=I⊗I+X⊗X+Y⊗Y+Z⊗Z2.{\displaystyle \mathrm {SWAP} ={\frac {I\otimes I+X\otimes X+Y\otimes Y+Z\otimes Z}{2}}.}

The Toffoli gate, named after Tommaso Toffoli and also called the CCNOT gate or Deutsch gate D(π/2){\displaystyle D(\pi /2)}, is a 3-bit gate that is universal for classical computation but not for quantum computation. The quantum Toffoli gate is the same gate, defined for 3 qubits. If we limit ourselves to only accepting input qubits that are |0⟩{\displaystyle |0\rangle } and |1⟩{\displaystyle |1\rangle }, then if the first two bits are in the state |1⟩{\displaystyle |1\rangle } it applies a Pauli-X (or NOT) on the third bit, else it does nothing. It is an example of a CC-U (controlled-controlled Unitary) gate. Since it is the quantum analog of a classical gate, it is completely specified by its truth table. The Toffoli gate is universal when combined with the single qubit Hadamard gate.[17] [1000000001000000001000000001000000001000000001000000000100000010]{\displaystyle {\begin{bmatrix}1&0&0&0&0&0&0&0\\0&1&0&0&0&0&0&0\\0&0&1&0&0&0&0&0\\0&0&0&1&0&0&0&0\\0&0&0&0&1&0&0&0\\0&0&0&0&0&1&0&0\\0&0&0&0&0&0&0&1\\0&0&0&0&0&0&1&0\\\end{bmatrix}}} The Toffoli gate is related to the classical AND (∧{\displaystyle \land }) and XOR (⊕{\displaystyle \oplus }) operations as it performs the mapping |a,b,c⟩↦|a,b,c⊕(a∧b)⟩{\displaystyle |a,b,c\rangle \mapsto |a,b,c\oplus (a\land b)\rangle } on states in the computational basis. The Toffoli gate can be expressed using Pauli matrices asToffoli=eiπ8(I−Z1)(I−Z2)(I−X3).{\displaystyle \mathrm {Toffoli} =e^{i{\frac {\pi }{8}}(I-Z_{1})(I-Z_{2})(I-X_{3})}.}

A set of universal quantum gates is any set of gates to which any operation possible on a quantum computer can be reduced, that is, any other unitary operation can be expressed as a finite sequence of gates from the set. Technically, this is impossible with anything less than an uncountable set of gates since the number of possible quantum gates is uncountable, whereas the number of finite sequences from a finite set is countable. To solve this problem, we only require that any quantum operation can be approximated by a sequence of gates from this finite set. Moreover, for unitaries on a constant number of qubits, the Solovay–Kitaev theorem guarantees that this can be done efficiently.
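Both the basis-change identity and the SWAP decomposition are quick to confirm (an illustrative sketch):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

print(np.allclose(H @ Z @ H, X))    # HZH = X: conjugation by H swaps the x and z axes
SWAP = (np.kron(I, I) + np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)) / 2
print(SWAP.real.astype(int))        # the permutation matrix exchanging |01> and |10>
```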
Checking if a set of quantum gates is universal can be done using group theory methods[18] and/or relation to (approximate) unitary t-designs.[19] One widely used universal quantum gate set is the Clifford gates together with the T gate, for example {CNOT, H, S, T}.

A single-gate set of universal quantum gates can also be formulated using the parametrized three-qubit Deutsch gate D(θ){\displaystyle D(\theta )},[21] named after physicist David Deutsch. It is a general case of CC-U, or controlled-controlled-unitary gate, and is defined as|a,b,c⟩↦{icos⁡(θ)|a,b,c⟩+sin⁡(θ)|a,b,1−c⟩for a=b=1,|a,b,c⟩otherwise.{\displaystyle |a,b,c\rangle \mapsto {\begin{cases}i\cos(\theta )|a,b,c\rangle +\sin(\theta )|a,b,1-c\rangle &{\text{for }}a=b=1,\\|a,b,c\rangle &{\text{otherwise.}}\end{cases}}} Unfortunately, a working Deutsch gate has remained out of reach, as no practical protocol for implementing one is known. There are some proposals to realize a Deutsch gate with dipole–dipole interaction in neutral atoms.[22] A universal logic gate for reversible classical computing, the Toffoli gate, is reducible to the Deutsch gate D(π/2){\displaystyle D(\pi /2)}, thus showing that all reversible classical logic operations can be performed on a universal quantum computer.

There also exist single two-qubit gates sufficient for universality. In 1996, Adriano Barenco showed that the Deutsch gate can be decomposed using only a single two-qubit gate (Barenco gate), but it is hard to realize experimentally.[1]: 93 This feature is exclusive to quantum circuits, as there is no classical two-bit gate that is both reversible and universal.[1]: 93 Universal two-qubit gates could be implemented to improve classical reversible circuits in fast low-power microprocessors.[1]: 93

Assume that we have two gates A and B that both act on n{\displaystyle n} qubits. When B is put after A in a series circuit, the effect of the two gates can be described as a single gate C:C=B⋅A,{\displaystyle C=B\cdot A,}where ⋅{\displaystyle \cdot } is matrix multiplication. The resulting gate C will have the same dimensions as A and B. The order in which the gates would appear in a circuit diagram is reversed when multiplying them together.[4]: 17–18,22–23,62–64[5]: 147–169 For example, putting the Pauli X gate after the Pauli Y gate, both of which act on a single qubit, can be described as a single combined gate C:C=X⋅Y=[i00−i]=iZ.{\displaystyle C=X\cdot Y={\begin{bmatrix}i&0\\0&-i\end{bmatrix}}=iZ.} The product symbol (⋅{\displaystyle \cdot }) is often omitted.

All real exponents of unitary matrices are also unitary matrices, and all quantum gates are unitary matrices. Positive integer exponents are equivalent to sequences of serially wired gates (e.g. X3=X⋅X⋅X{\displaystyle X^{3}=X\cdot X\cdot X}), and real exponents are a generalization of the series circuit. For example, Xπ{\displaystyle X^{\pi }} and X=X1/2{\displaystyle {\sqrt {X}}=X^{1/2}} are both valid quantum gates. U0=I{\displaystyle U^{0}=I} for any unitary matrix U{\displaystyle U}. The identity matrix (I{\displaystyle I}) behaves like a NOP[23][24] and can be represented as a bare wire in quantum circuits, or not shown at all. All gates are unitary matrices, so that U†U=UU†=I{\displaystyle U^{\dagger }U=UU^{\dagger }=I} and U†=U−1{\displaystyle U^{\dagger }=U^{-1}}, where †{\displaystyle \dagger } is the conjugate transpose. This means that negative exponents of gates are unitary inverses of their positively exponentiated counterparts: U−n=(Un)†{\displaystyle U^{-n}=(U^{n})^{\dagger }}. For example, some negative exponents of the phase shift gates are T−1=T†{\displaystyle T^{-1}=T^{\dagger }} and T−2=(T2)†=S†{\displaystyle T^{-2}=(T^{2})^{\dagger }=S^{\dagger }}. Note that for a Hermitian matrix H†=H,{\displaystyle H^{\dagger }=H,} and because of unitarity, HH†=I,{\displaystyle HH^{\dagger }=I,} so H2=I{\displaystyle H^{2}=I} for all Hermitian gates. They are involutory. Examples of Hermitian gates are the Pauli gates, Hadamard, CNOT, SWAP and Toffoli.
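The reversed multiplication order for serial circuits is worth seeing once in code (an illustrative sketch of the X-after-Y example):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

C = X @ Y                                    # Y applied first, then X
print(np.allclose(C, 1j * Z))                # X.Y = iZ, as above

psi = np.array([1, 0], dtype=complex)        # |0>
print(np.allclose(C @ psi, X @ (Y @ psi)))   # same action as applying Y, then X
```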
Each Hermitian unitary matrix H{\displaystyle H} has the property that eiθH=(cos⁡θ)I+(isin⁡θ)H{\displaystyle e^{i\theta H}=(\cos \theta )I+(i\sin \theta )H} where H=eiπ2(I−H)=e−iπ2(I−H).{\displaystyle H=e^{i{\frac {\pi }{2}}(I-H)}=e^{-i{\frac {\pi }{2}}(I-H)}.} The exponent of a gate is a multiple of the duration of time that the time evolution operator is applied to a quantum state. E.g. in a spin qubit quantum computer the SWAP{\displaystyle {\sqrt {\mathrm {SWAP} }}} gate could be realized via the exchange interaction on the spin of two electrons for half the duration of a full exchange interaction.[25]

The tensor product (or Kronecker product) of two quantum gates is the gate that is equal to the two gates in parallel.[4]: 71–75[5]: 148 If we combine, for example, the Pauli-Y gate with the Pauli-X gate in parallel, then this can be written as:C=Y⊗X=[000−i00−i00i00i000].{\displaystyle C=Y\otimes X={\begin{bmatrix}0&0&0&-i\\0&0&-i&0\\0&i&0&0\\i&0&0&0\end{bmatrix}}.} Both the Pauli-X and the Pauli-Y gate act on a single qubit. The resulting gate C{\displaystyle C} acts on two qubits. Sometimes the tensor product symbol is omitted, and indexes are used for the operators instead.[25]

The gate H2=H⊗H{\displaystyle H_{2}=H\otimes H} is the Hadamard gate (H{\displaystyle H}) applied in parallel on 2 qubits. It can be written as:H2=H⊗H=12[11111−11−111−1−11−1−11].{\displaystyle H_{2}=H\otimes H={\frac {1}{2}}{\begin{bmatrix}1&1&1&1\\1&-1&1&-1\\1&1&-1&-1\\1&-1&-1&1\end{bmatrix}}.} This "two-qubit parallel Hadamard gate" will, when applied to, for example, the two-qubit zero-vector (|00⟩{\displaystyle |00\rangle }), create a quantum state that has equal probability of being observed in any of its four possible outcomes; |00⟩{\displaystyle |00\rangle }, |01⟩{\displaystyle |01\rangle }, |10⟩{\displaystyle |10\rangle }, and |11⟩{\displaystyle |11\rangle }. We can write this operation as:H2|00⟩=12(|00⟩+|01⟩+|10⟩+|11⟩).{\displaystyle H_{2}|00\rangle ={\frac {1}{2}}(|00\rangle +|01\rangle +|10\rangle +|11\rangle ).} Here the amplitude for each measurable state is 1⁄2. The probability to observe any state is the square of the absolute value of the measurable state's amplitude, which in the above example means that there is a one-in-four chance of observing any one of the individual four cases. See measurement for details.

H2{\displaystyle H_{2}} performs the Hadamard transform on two qubits. Similarly the gateH⊗H⊗⋯⊗H⏟ntimes=⨂i=0n−1H=H⊗n=Hn{\displaystyle \underbrace {H\otimes H\otimes \dots \otimes H} _{n{\text{ times}}}=\bigotimes _{i=0}^{n-1}H=H^{\otimes n}=H_{n}}performs a Hadamard transform on a register of n{\displaystyle n} qubits. When applied to a register of n{\displaystyle n} qubits all initialized to |0⟩{\displaystyle |0\rangle }, the Hadamard transform puts the quantum register into a superposition with equal probability of being measured in any of its 2n{\displaystyle 2^{n}} possible states:H⊗n|0⟩⊗n=12n∑x=02n−1|x⟩.{\displaystyle H^{\otimes n}|0\rangle ^{\otimes n}={\frac {1}{\sqrt {2^{n}}}}\sum _{x=0}^{2^{n}-1}|x\rangle .} This state is a uniform superposition and it is generated as the first step in some search algorithms, for example in amplitude amplification and phase estimation. Measuring this state results in a random number between |0⟩{\displaystyle |0\rangle } and |2n−1⟩{\displaystyle |2^{n}-1\rangle }.[e] How random the number is depends on the fidelity of the logic gates. If not measured, it is a quantum state with equal probability amplitude 12n{\displaystyle {\frac {1}{\sqrt {2^{n}}}}} for each of its possible states. The Hadamard transform acts on a register |ψ⟩{\displaystyle |\psi \rangle } with n{\displaystyle n} qubits such that |ψ⟩=⨂i=0n−1|ψi⟩{\textstyle |\psi \rangle =\bigotimes _{i=0}^{n-1}|\psi _{i}\rangle } as follows:H⊗n|ψ⟩=⨂i=0n−1H|ψi⟩.{\displaystyle H^{\otimes n}|\psi \rangle =\bigotimes _{i=0}^{n-1}H|\psi _{i}\rangle .}

If two or more qubits are viewed as a single quantum state, this combined state is equal to the tensor product of the constituent qubits. Any state that can be written as a tensor product of its constituent subsystems is called a separable state.
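Parallel gates are Kronecker products, and the uniform superposition produced by parallel Hadamards can be checked directly (an illustrative sketch):

```python
import numpy as np
from functools import reduce

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

n = 3
Hn = reduce(np.kron, [H] * n)            # H applied in parallel on n qubits
zero = np.zeros(2 ** n); zero[0] = 1.0   # |00...0>

psi = Hn @ zero
print(np.allclose(psi, 1 / np.sqrt(2 ** n)))   # uniform amplitudes 1/sqrt(2^n)
print((np.abs(psi) ** 2).round(4))             # each outcome has probability 1/2^n
```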
On the other hand, an entangled state is any state that cannot be tensor-factorized, or in other words: an entangled state cannot be written as a tensor product of its constituent qubits' states. Special care must be taken when applying gates to constituent qubits that make up entangled states. If we have a set of N qubits that are entangled and wish to apply a quantum gate on M < N qubits in the set, we will have to extend the gate to take N qubits. This application can be done by combining the gate with an identity matrix such that their tensor product becomes a gate that acts on N qubits. The identity matrix (I{\displaystyle I}) is a representation of the gate that maps every state to itself (i.e., does nothing at all). In a circuit diagram the identity gate or matrix will often appear as just a bare wire.

For example, the Hadamard gate (H{\displaystyle H}) acts on a single qubit, but if we feed it the first of the two qubits that constitute the entangled Bell state |00⟩+|11⟩2{\displaystyle {\frac {|00\rangle +|11\rangle }{\sqrt {2}}}}, we cannot write that operation easily. We need to extend the Hadamard gate H{\displaystyle H} with the identity gate I{\displaystyle I} so that we can act on quantum states that span two qubits:K=H⊗I=12[1010010110−10010−1].{\displaystyle K=H\otimes I={\frac {1}{\sqrt {2}}}{\begin{bmatrix}1&0&1&0\\0&1&0&1\\1&0&-1&0\\0&1&0&-1\end{bmatrix}}.} The gate K{\displaystyle K} can now be applied to any two-qubit state, entangled or otherwise. The gate K{\displaystyle K} will leave the second qubit untouched and apply the Hadamard transform to the first qubit. If applied to the Bell state in our example, we may write that as:K|00⟩+|11⟩2=12(|00⟩+|01⟩+|10⟩−|11⟩).{\displaystyle K{\frac {|00\rangle +|11\rangle }{\sqrt {2}}}={\frac {1}{2}}(|00\rangle +|01\rangle +|10\rangle -|11\rangle ).}

The time complexity for multiplying two n×n{\displaystyle n\times n} matrices is at least Ω(n2log⁡n){\displaystyle \Omega (n^{2}\log n)},[26] if using a classical machine. Because the size of a gate that operates on q{\displaystyle q} qubits is 2q×2q{\displaystyle 2^{q}\times 2^{q}}, the time for simulating a step in a quantum circuit (by means of multiplying the gates) that operates on generic entangled states is Ω(2q2log⁡(2q)){\displaystyle \Omega ({2^{q}}^{2}\log({2^{q}}))}. For this reason it is believed to be intractable to simulate large entangled quantum systems using classical computers. Subsets of the gates, such as the Clifford gates, or the trivial case of circuits that only implement classical Boolean functions (e.g. combinations of X, CNOT, Toffoli), can however be efficiently simulated on classical computers. The state vector of a quantum register with n{\displaystyle n} qubits consists of 2n{\displaystyle 2^{n}} complex entries. Storing the probability amplitudes as a list of floating point values is not tractable for large n{\displaystyle n}.
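The extension of H to two qubits and its action on the Bell state are two lines of NumPy (an illustrative sketch):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
K = np.kron(H, np.eye(2))                    # Hadamard on qubit one, identity on qubit two

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
print(K @ bell)                              # [0.5, 0.5, 0.5, -0.5] = (|00>+|01>+|10>-|11>)/2
```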
If a function F{\displaystyle F} is a product of m{\displaystyle m} gates, F=A1⋅A2⋅⋯⋅Am{\displaystyle F=A_{1}\cdot A_{2}\cdot \dots \cdot A_{m}}, the unitary inverse of the function, F†{\displaystyle F^{\dagger }}, can be constructed: because (UV)†=V†U†{\displaystyle (UV)^{\dagger }=V^{\dagger }U^{\dagger }} we have, after repeated application,F†=(A1⋅A2⋅⋯⋅Am)†=Am†⋅⋯⋅A2†⋅A1†.{\displaystyle F^{\dagger }=(A_{1}\cdot A_{2}\cdot \dots \cdot A_{m})^{\dagger }=A_{m}^{\dagger }\cdot \dots \cdot A_{2}^{\dagger }\cdot A_{1}^{\dagger }.} Similarly if the function G{\displaystyle G} consists of two gates A{\displaystyle A} and B{\displaystyle B} in parallel, then G=A⊗B{\displaystyle G=A\otimes B} and G†=(A⊗B)†=A†⊗B†{\displaystyle G^{\dagger }=(A\otimes B)^{\dagger }=A^{\dagger }\otimes B^{\dagger }}. Gates that are their own unitary inverses are called Hermitian or self-adjoint operators. Some elementary gates such as the Hadamard (H) and the Pauli gates (I, X, Y, Z) are Hermitian operators, while others like the phase shift (S, T, P, CPhase) gates generally are not. For example, an algorithm for addition can be used for subtraction, if it is being "run in reverse", as its unitary inverse. The inverse quantum Fourier transform is the unitary inverse of the quantum Fourier transform. Unitary inverses can also be used for uncomputation. Programming languages for quantum computers, such as Microsoft's Q#,[10] Bernhard Ömer's QCL,[13]: 61 and IBM's Qiskit,[27] contain function inversion as programming concepts.

Measurement (sometimes called observation) is irreversible and therefore not a quantum gate, because it assigns the observed quantum state to a single value. Measurement takes a quantum state and projects it to one of the basis vectors, with a likelihood equal to the square of the vector's length (in the 2-norm[4]: 66[5]: 56, 65) along that basis vector.[1]: 15–17[28][29][30] This is known as the Born rule and appears[e] as a stochastic non-reversible operation, as it probabilistically sets the quantum state equal to the basis vector that represents the measured state. At the instant of measurement, the state is said to "collapse" to the definite single value that was measured. Why and how, or even if[31][32] the quantum state collapses at measurement, is called the measurement problem. The probability of measuring a value with probability amplitude ϕ{\displaystyle \phi } is 1≥|ϕ|2≥0{\displaystyle 1\geq |\phi |^{2}\geq 0}, where |⋅|{\displaystyle |\cdot |} is the modulus. Measuring a single qubit, whose quantum state is represented by the vector a|0⟩+b|1⟩=[ab]{\displaystyle a|0\rangle +b|1\rangle ={\begin{bmatrix}a\\b\end{bmatrix}}}, will result in |0⟩{\displaystyle |0\rangle } with probability |a|2{\displaystyle |a|^{2}}, and in |1⟩{\displaystyle |1\rangle } with probability |b|2{\displaystyle |b|^{2}}. For example, measuring a qubit with the quantum state |0⟩−i|1⟩2=12[1−i]{\displaystyle {\frac {|0\rangle -i|1\rangle }{\sqrt {2}}}={\frac {1}{\sqrt {2}}}{\begin{bmatrix}1\\-i\end{bmatrix}}} will yield with equal probability either |0⟩{\displaystyle |0\rangle } or |1⟩{\displaystyle |1\rangle }.

A quantum state |Ψ⟩{\displaystyle |\Psi \rangle } that spans n qubits can be written as a vector in 2n{\displaystyle 2^{n}} complex dimensions: |Ψ⟩∈C2n{\displaystyle |\Psi \rangle \in \mathbb {C} ^{2^{n}}}. This is because the tensor product of n qubits is a vector in 2n{\displaystyle 2^{n}} dimensions. This way, a register of n qubits can be measured to 2n{\displaystyle 2^{n}} distinct states, similar to how a register of n classical bits can hold 2n{\displaystyle 2^{n}} distinct states. Unlike with the bits of classical computers, quantum states can have non-zero probability amplitudes in multiple measurable values simultaneously. This is called superposition.
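The reversal rule (AB)† = B†A† can be checked with any unitaries; a sketch using gates generated from random Hermitian matrices (an illustration, not a named construction):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_gate(d=2):
    """U = e^{iH} for a random Hermitian H, so U is unitary by construction."""
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    w, V = np.linalg.eigh((M + M.conj().T) / 2)
    return V @ np.diag(np.exp(1j * w)) @ V.conj().T

A, B = random_gate(), random_gate()
F = A @ B
F_dag = B.conj().T @ A.conj().T            # (AB)† = B† A†
print(np.allclose(F_dag @ F, np.eye(2)))   # F† really undoes F
```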
The sum of all probabilities for all outcomes must always be equal to 1.[f] Another way to say this is that the Pythagorean theorem generalized to C2n{\displaystyle \mathbb {C} ^{2^{n}}} implies that all quantum states |Ψ⟩{\displaystyle |\Psi \rangle } with n qubits must satisfy 1=∑x=02n−1|ax|2,{\textstyle 1=\sum _{x=0}^{2^{n}-1}|a_{x}|^{2},}[g] where ax{\displaystyle a_{x}} is the probability amplitude for measurable state |x⟩{\displaystyle |x\rangle }. A geometric interpretation of this is that the possible value-space of a quantum state |Ψ⟩{\displaystyle |\Psi \rangle } with n qubits is the surface of the unit sphere in C2n{\displaystyle \mathbb {C} ^{2^{n}}} and that the unitary transforms (i.e. quantum logic gates) applied to it are rotations on the sphere. The rotations that the gates perform form the symmetry group U(2n). Measurement is then a probabilistic projection of the points at the surface of this complex sphere onto the basis vectors that span the space (and label the outcomes).

In many cases the space is represented as a Hilbert space H{\displaystyle {\mathcal {H}}} rather than some specific 2n{\displaystyle 2^{n}}-dimensional complex space. The number of dimensions (defined by the basis vectors, and thus also the possible outcomes from measurement) is then often implied by the operands, for example as the required state space for solving a problem. In Grover's algorithm, Grover named this generic basis vector set "the database". The selection of basis vectors against which to measure a quantum state will influence the outcome of the measurement.[1]: 30–35[4]: 22, 84–85, 185–188[33] See change of basis and Von Neumann entropy for details. In this article, we always use the computational basis, which means that we have labeled the 2n{\displaystyle 2^{n}} basis vectors of an n-qubit register |0⟩,|1⟩,|2⟩,⋯,|2n−1⟩{\displaystyle |0\rangle ,|1\rangle ,|2\rangle ,\cdots ,|2^{n}-1\rangle }, or use the binary representation |010⟩=|0…002⟩,|110⟩=|0…012⟩,|210⟩=|0…102⟩,⋯,|2n−1⟩=|111…12⟩{\displaystyle |0_{10}\rangle =|0\dots 00_{2}\rangle ,|1_{10}\rangle =|0\dots 01_{2}\rangle ,|2_{10}\rangle =|0\dots 10_{2}\rangle ,\cdots ,|2^{n}-1\rangle =|111\dots 1_{2}\rangle }. In quantum mechanics, the basis vectors constitute an orthonormal basis. An example of usage of an alternative measurement basis is in the BB84 cipher.

If two quantum states (i.e. qubits, or registers) are entangled (meaning that their combined state cannot be expressed as a tensor product), measurement of one register affects or reveals the state of the other register by partially or entirely collapsing its state too. This effect can be used for computation, and is used in many algorithms. The Hadamard-CNOT combination acts on the zero-state as follows:CNOT⋅(H⊗I)|00⟩=|00⟩+|11⟩2.{\displaystyle \mathrm {CNOT} \cdot (H\otimes I)|00\rangle ={\frac {|00\rangle +|11\rangle }{\sqrt {2}}}.} This resulting state is the Bell state |00⟩+|11⟩2=12[1001]{\displaystyle {\frac {|00\rangle +|11\rangle }{\sqrt {2}}}={\frac {1}{\sqrt {2}}}{\begin{bmatrix}1\\0\\0\\1\end{bmatrix}}}. It cannot be described as a tensor product of two qubits. There is no solution for(x|0⟩+y|1⟩)⊗(w|0⟩+z|1⟩)=xw|00⟩+xz|01⟩+yw|10⟩+yz|11⟩=|00⟩+|11⟩2,{\displaystyle (x|0\rangle +y|1\rangle )\otimes (w|0\rangle +z|1\rangle )=xw|00\rangle +xz|01\rangle +yw|10\rangle +yz|11\rangle ={\frac {|00\rangle +|11\rangle }{\sqrt {2}}},}because for example w needs to be both non-zero and zero in the case of xw and yw. The quantum state spans the two qubits. This is called entanglement.
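Sampling measurement outcomes according to the Born rule is a one-liner once the amplitudes are known (an illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(42)
psi = np.array([1, -1j]) / np.sqrt(2)          # (|0> - i|1>)/sqrt(2)

probs = np.abs(psi) ** 2                       # Born rule: P(x) = |<x|psi>|^2
counts = np.bincount(rng.choice(2, size=10_000, p=probs))
print(probs, counts / 10_000)                  # both approximately [0.5, 0.5]
```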
Measuring one of the two qubits that make up this Bell state means that the other qubit logically must have the same value; both must be the same: Either it will be found in the state|00⟩{\displaystyle |00\rangle },or in the state|11⟩{\displaystyle |11\rangle }.If we measure one of the qubits to be for example|1⟩{\displaystyle |1\rangle },then the other qubit must also be|1⟩{\displaystyle |1\rangle },because their combined statebecame|11⟩{\displaystyle |11\rangle }.Measurement of one of the qubits collapses the entire quantum state, which spans the two qubits. TheGHZ stateis a similar entangled quantum state that spans three or more qubits.

This type of value-assignment occursinstantaneously over any distanceand this has as of 2018 been experimentally verified byQUESSfor distances of up to 1200 kilometers.[34][35][36]That the phenomenon appears to happen instantaneously, as opposed to the time it would take to traverse the distance separating the qubits at the speed of light, is called theEPR paradox, and it is an open question in physics how to resolve this. Originally it was solved by giving up the assumption oflocal realism, but otherinterpretationshave also emerged. For more information see theBell test experiments. Theno-communication theoremproves that this phenomenon cannot be used for faster-than-light communication ofclassical information.

Take aregisterA withnqubits all initialized to|0⟩{\displaystyle |0\rangle },and feed it through aparallel Hadamard gateH⊗n{\textstyle H^{\otimes n}}.Register A will then enter the state12n∑k=02n−1|k⟩{\textstyle {\frac {1}{\sqrt {2^{n}}}}\sum _{k=0}^{2^{n}-1}|k\rangle },which, when measured, has equal probability of being found in any of its2n{\displaystyle 2^{n}}possible states, from|0⟩{\displaystyle |0\rangle }to|2n−1⟩{\displaystyle |2^{n}-1\rangle }.Take a second register B, also withnqubits initialized to|0⟩{\displaystyle |0\rangle }and pairwiseCNOTits qubits with the qubits in register A, such that for eachpthe qubitsAp{\displaystyle A_{p}}andBp{\displaystyle B_{p}}form the state|ApBp⟩=|00⟩+|11⟩2{\displaystyle |A_{p}B_{p}\rangle ={\frac {|00\rangle +|11\rangle }{\sqrt {2}}}}.

If we now measure the qubits in register A, then register B will be found to contain the same value as A. If, however, we instead apply a quantum logic gateFon A and then measure, then|A⟩=F|B⟩⟺F†|A⟩=|B⟩{\displaystyle |A\rangle =F|B\rangle \iff F^{\dagger }|A\rangle =|B\rangle },whereF†{\displaystyle F^{\dagger }}is theunitary inverseofF. Because of howunitary inverses of gatesact,F†|A⟩=F−1(|A⟩)=|B⟩{\displaystyle F^{\dagger }|A\rangle =F^{-1}(|A\rangle )=|B\rangle }.For example, sayF(x)=x+3(mod2n){\displaystyle F(x)=x+3{\pmod {2^{n}}}}, then|B⟩=|A−3(mod2n)⟩{\displaystyle |B\rangle =|A-3{\pmod {2^{n}}}\rangle }. The equality will hold no matter in which order measurement is performed (on the registers A or B), assuming thatFhas run to completion. Measurement can even be randomly and concurrently interleaved qubit by qubit, since the measurement's assignment of one qubit will limit the possible value-space of the other entangled qubits.

Even though the equalities hold, the probabilities for measuring the possible outcomes may change as a result of applyingF, as may be the intent in a quantum search algorithm. This effect of value-sharing via entanglement is used inShor's algorithm,phase estimationand inquantum counting.
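A small state-vector sketch of the register pairing just described (the register size and seed are illustrative): after the parallel Hadamard on A and the pairwise CNOTs, the joint state is (1/√2^n)Σ_k|k⟩_A|k⟩_B, so a simulated joint measurement always returns equal values for A and B:

import numpy as np

n = 3                                       # qubits per register
dim = 2 ** n
rng = np.random.default_rng(seed=7)

joint = np.zeros(dim * dim)                 # amplitudes over |A>|B>
for k in range(dim):
    joint[k * dim + k] = 1 / np.sqrt(dim)   # (1/sqrt(2^n)) sum_k |k>_A |k>_B

for _ in range(5):                          # simulated joint measurements
    outcome = rng.choice(dim * dim, p=np.abs(joint) ** 2)
    a, b = divmod(outcome, dim)
    assert a == b                           # the registers always agree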
Using theFourier transformto amplify the probability amplitudes of the solution states for someproblemis a generic method known as "Fourier fishing".[37]

Functions and routines that only use gates can themselves be described as matrices, just like the smaller gates. The matrix that represents a quantum function acting onq{\displaystyle q}qubits has size2q×2q{\displaystyle 2^{q}\times 2^{q}}.For example, a function that acts on a "qubyte" (aregisterof 8 qubits) would be represented by a matrix with28×28=256×256{\displaystyle 2^{8}\times 2^{8}=256\times 256}elements.

Unitary transformations that are not in the set of gates natively available at the quantum computer (the primitive gates) can be synthesised, or approximated, by combining the available primitive gates in acircuit. One way to do this is to factor the matrix that encodes the unitary transformation into a product of tensor products (i.e.seriesandparallelcircuits) of the available primitive gates. ThegroupU(2q)is thesymmetry groupfor the gates that act onq{\displaystyle q}qubits.[2]Factorization is then theproblemof finding a path in U(2q) from thegenerating setof primitive gates. TheSolovay–Kitaev theoremshows that, given a sufficient set of primitive gates, there exists an efficient approximation of any gate. For the general case with a large number of qubits this direct approach to circuit synthesis isintractable.[38][39]This puts a limit on how large functions can be brute-force factorized into primitive quantum gates. Typically quantum programs are instead built using relatively small and simple quantum functions, similar to normal classical programming.

Because of the gates'unitarynature, all functions must bereversibleand always bebijectivemappings of input to output. There must always exist a functionF−1{\displaystyle F^{-1}}such thatF−1(F(|ψ⟩))=|ψ⟩{\displaystyle F^{-1}(F(|\psi \rangle ))=|\psi \rangle }.Functions that are not invertible can be made invertible by addingancilla qubitsto the input or the output, or both. After the function has run to completion, the ancilla qubits can then either beuncomputedor left untouched. Measuring or otherwise collapsing the quantum state of an ancilla qubit (e.g. by re-initializing the value of it, or by its spontaneousdecoherence) that has not been uncomputed may result in errors,[40][41]as its state may be entangled with the qubits that are still being used in computations.

Logically irreversible operations, for example addition modulo2n{\displaystyle 2^{n}}of twon{\displaystyle n}-qubit registersaandb,F(a,b)=a+b(mod2n){\displaystyle F(a,b)=a+b{\pmod {2^{n}}}},[h]can be made logically reversible by adding information to the output, so that the input can be computed from the output (i.e. there exists a functionF−1{\displaystyle F^{-1}}).In our example, this can be done by passing on one of the input registers to the output:F(|a⟩⊗|b⟩)=|a+b(mod2n)⟩⊗|a⟩{\displaystyle F(|a\rangle \otimes |b\rangle )=|a+b{\pmod {2^{n}}}\rangle \otimes |a\rangle }.The output can then be used to compute the input (i.e. given the outputa+b{\displaystyle a+b}anda{\displaystyle a},we can easily find the input;a{\displaystyle a}is given and(a+b)−a=b{\displaystyle (a+b)-a=b})and the function is made bijective.

AllBoolean algebraicexpressions can be encoded as unitary transforms (quantum logic gates), for example by using combinations of thePauli-X,CNOTandToffoligates. These gates arefunctionally completein the Boolean logic domain.
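The reversibility trick for addition can be illustrated entirely classically (a sketch, not from the article): the map (a, b) → (a + b mod 2^n, a) is a bijection, so the otherwise irreversible addition becomes invertible.

n = 4                          # register width in (qu)bits
M = 2 ** n

def F(a, b):                   # reversible addition: keep one input in the output
    return ((a + b) % M, a)

def F_inv(s, a):               # recover the inputs: b = (a + b) - a  (mod 2^n)
    return (a, (s - a) % M)

for a in range(M):
    for b in range(M):
        assert F_inv(*F(a, b)) == (a, b)   # bijective on all 2^n x 2^n inputs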
Many unitary transforms are available in the libraries ofQ#,QCL,Qiskit, and otherquantum programminglanguages, and descriptions of them also appear in the literature.[42][43]

For example,inc(|x⟩)=|x+1(mod2xlength)⟩{\displaystyle \mathrm {inc} (|x\rangle )=|x+1{\pmod {2^{x_{\text{length}}}}}\rangle }, wherexlength{\displaystyle x_{\text{length}}}is the number of qubits that constitute theregisterx{\displaystyle x},is implemented in QCL.[44][13][12] In QCL, decrement is done by "undoing" increment. The prefix!is used to instead run theunitary inverseof the function.!inc(x)is the inverse ofinc(x)and instead performs the operationinc†|x⟩=inc−1(|x⟩)=|x−1(mod2xlength)⟩{\displaystyle \mathrm {inc} ^{\dagger }|x\rangle =\mathrm {inc} ^{-1}(|x\rangle )=|x-1{\pmod {2^{x_{\text{length}}}}}\rangle }.Thecondkeyword means that the function can beconditional.[11]

In themodel of computationused in this article (thequantum circuitmodel), a classical computer generates the gate composition for the quantum computer, and the quantum computer behaves as acoprocessorthat receives instructions from the classical computer about which primitive gates to apply to which qubits.[13]: 36–43[14]Measurement of quantum registers results in binary values that the classical computer can use in its computations.Quantum algorithmsoften contain both a classical and a quantum part.

UnmeasuredI/O(sending qubits to remote computers without collapsing their quantum states) can be used to createnetworks of quantum computers.Entanglement swappingcan then be used to realizedistributed algorithmswith quantum computers that are not directly connected. Examples of distributed algorithms that only require the use of a handful of quantum logic gates aresuperdense coding, thequantum Byzantine agreementand theBB84cipherkey exchange protocol.
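For illustration (a NumPy sketch, not the QCL library code), inc can be written as a permutation matrix, which is unitary; its conjugate transpose is exactly the decrement that !inc(x) performs:

import numpy as np

x_length = 3
dim = 2 ** x_length

inc = np.zeros((dim, dim))
for x in range(dim):
    inc[(x + 1) % dim, x] = 1            # |x> -> |x + 1 mod 2^x_length>

dec = inc.conj().T                       # the unitary inverse inc^dagger
assert np.allclose(dec @ inc, np.eye(dim))

state = np.zeros(dim)
state[5] = 1                             # |5>
assert np.argmax(inc @ state) == 6       # inc|5>  = |6>
assert np.argmax(dec @ state) == 4       # !inc|5> = |4>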
https://en.wikipedia.org/wiki/Quantum_logic_gate
Astationary stateis aquantum statewith allobservablesindependent of time. It is aneigenvectorof theenergy operator(instead of aquantum superpositionof different energies). It is also calledenergy eigenvector,energy eigenstate,energy eigenfunction, orenergyeigenket. It is very similar to the concept ofatomic orbitalandmolecular orbitalin chemistry, with some slight differences explainedbelow.

A stationary state is calledstationarybecause the system remains in the same state as time elapses, in every observable way. For a single-particleHamiltonian, this means that the particle has a constantprobability distributionfor its position, its velocity, itsspin, etc.[1](This is true assuming the particle's environment is also static, i.e. the Hamiltonian is unchanging in time.) Thewavefunctionitself is not stationary: It continually changes its overall complexphase factor, so as to form astanding wave. The oscillation frequency of the standing wave, multiplied by thePlanck constant, is the energy of the state according to thePlanck–Einstein relation.

Stationary states arequantum statesthat are solutions to the time-independentSchrödinger equation:H^|Ψ⟩=EΨ|Ψ⟩,{\displaystyle {\hat {H}}|\Psi \rangle =E_{\Psi }|\Psi \rangle ,}whereH^{\displaystyle {\hat {H}}}is theHamiltonian operator,|Ψ⟩{\displaystyle |\Psi \rangle }is the quantum state in question, andEΨ{\displaystyle E_{\Psi }}is a real number, the energy of that state. This is aneigenvalue equation:H^{\displaystyle {\hat {H}}}is alinear operatoron a vector space,|Ψ⟩{\displaystyle |\Psi \rangle }is an eigenvector ofH^{\displaystyle {\hat {H}}}, andEΨ{\displaystyle E_{\Psi }}is its eigenvalue.

If a stationary state|Ψ⟩{\displaystyle |\Psi \rangle }is plugged into the time-dependent Schrödinger equation, the result is[2]iℏ∂∂t|Ψ⟩=EΨ|Ψ⟩.{\displaystyle i\hbar {\frac {\partial }{\partial t}}|\Psi \rangle =E_{\Psi }|\Psi \rangle .} Assuming thatH^{\displaystyle {\hat {H}}}is time-independent (unchanging in time), this equation holds for any timet. Therefore, this is adifferential equationdescribing how|Ψ⟩{\displaystyle |\Psi \rangle }varies in time. Its solution is|Ψ(t)⟩=e−iEΨt/ℏ|Ψ(0)⟩.{\displaystyle |\Psi (t)\rangle =e^{-iE_{\Psi }t/\hbar }|\Psi (0)\rangle .} Therefore, a stationary state is astanding wavethat oscillates with an overall complexphase factor, and its oscillationangular frequencyis equal to its energy divided byℏ{\displaystyle \hbar }.

As shown above, a stationary state is not mathematically constant:|Ψ(t)⟩=e−iEΨt/ℏ|Ψ(0)⟩.{\displaystyle |\Psi (t)\rangle =e^{-iE_{\Psi }t/\hbar }|\Psi (0)\rangle .} However, all observable properties of the state are in fact constant in time. For example, if|Ψ(t)⟩{\displaystyle |\Psi (t)\rangle }represents a simple one-dimensional single-particle wavefunctionΨ(x,t){\displaystyle \Psi (x,t)}, the probability that the particle is at locationxis|Ψ(x,t)|2=|e−iEΨt/ℏΨ(x,0)|2=|e−iEΨt/ℏ|2|Ψ(x,0)|2=|Ψ(x,0)|2,{\displaystyle |\Psi (x,t)|^{2}=\left|e^{-iE_{\Psi }t/\hbar }\Psi (x,0)\right|^{2}=\left|e^{-iE_{\Psi }t/\hbar }\right|^{2}\left|\Psi (x,0)\right|^{2}=\left|\Psi (x,0)\right|^{2},}which is independent of the timet. TheHeisenberg pictureis an alternativemathematical formulation of quantum mechanicswhere stationary states are truly mathematically constant in time.

As mentioned above, these equations assume that the Hamiltonian is time-independent. This means simply that stationary states are only stationary when the rest of the system is fixed and stationary as well. For example, a1s electronin ahydrogen atomis in a stationary state, but if the hydrogen atom reacts with another atom, then the electron will of course be disturbed. Spontaneous decay complicates the question of stationary states.
For example, according to simple (nonrelativistic)quantum mechanics, thehydrogen atomhas many stationary states:1s, 2s, 2p, and so on, are all stationary states. But in reality, only the ground state 1s is truly "stationary": An electron in a higher energy level willspontaneously emitone or morephotonsto decay into the ground state.[3]This seems to contradict the idea that stationary states should have unchanging properties. The explanation is that theHamiltonianused in nonrelativistic quantum mechanics is only an approximation to the Hamiltonian fromquantum field theory. The higher-energy electron states (2s, 2p, 3s, etc.) are stationary states according to the approximate Hamiltonian, butnotstationary according to the true Hamiltonian, because ofvacuum fluctuations. On the other hand, the 1s state is truly a stationary state, according to both the approximate and the true Hamiltonian.

An orbital is a stationary state (or approximation thereof) of a one-electron atom or molecule; more specifically, anatomic orbitalfor an electron in an atom, or amolecular orbitalfor an electron in a molecule.[4] For a molecule that contains only a single electron (e.g. atomichydrogenorH2+), an orbital is exactly the same as a total stationary state of the molecule. However, for a many-electron molecule, an orbital is completely different from a total stationary state, which is amany-particle staterequiring a more complicated description (such as aSlater determinant).[5]In particular, in a many-electron molecule, an orbital is not the total stationary state of the molecule, but rather the stationary state of a single electron within the molecule. This concept of an orbital is only meaningful under the approximation that, if the electron–electron instantaneous repulsion terms in the Hamiltonian are ignored as a simplifying assumption, the total eigenvector of a many-electron molecule can be decomposed into separate contributions from individual electron stationary states (orbitals), each of which is obtained under the one-electron approximation. (Luckily, chemists and physicists can often (but not always) use this "single-electron approximation".) In this sense, in a many-electron system, an orbital can be considered as the stationary state of an individual electron in the system. In chemistry, calculation of molecular orbitals typically also assumes theBorn–Oppenheimer approximation.
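Returning to the time dependence discussed above, a small NumPy sketch (with an illustrative two-level Hamiltonian and ħ = 1) confirms that a stationary state changes only by a global phase, leaving all probabilities constant:

import numpy as np

H = np.array([[1.0, 0.5],
              [0.5, 2.0]])                     # illustrative Hamiltonian, hbar = 1
E, V = np.linalg.eigh(H)
psi0 = V[:, 0]                                 # a stationary state: H psi0 = E[0] psi0

for t in (0.0, 0.7, 3.1):
    psi_t = np.exp(-1j * E[0] * t) * psi0      # |Psi(t)> = e^{-i E t} |Psi(0)>
    assert np.allclose(np.abs(psi_t) ** 2, np.abs(psi0) ** 2)   # observables unchanged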
https://en.wikipedia.org/wiki/Stationary_state
In variousinterpretationsofquantum mechanics,wave function collapse, also calledreduction of the state vector,[1]occurs when awave function—initially in asuperpositionof severaleigenstates—reduces to a single eigenstate due tointeractionwith the external world. This interaction is called anobservationand is the essence of ameasurement in quantum mechanics, which connects the wave function with classicalobservablessuch aspositionandmomentum. Collapse is one of the two processes by whichquantum systemsevolve in time; the other is the continuous evolution governed by theSchrödinger equation.[2]

In theCopenhagen interpretation, wave function collapse connects quantum to classical models, with a specialrole for the observer. By contrast,objective-collapseproposes an origin in physical processes. In themany-worlds interpretation, collapse does not exist; all wave function outcomes occur whilequantum decoherenceaccounts for the appearance of collapse. Historically,Werner Heisenbergwas the first to use the idea of wave function reduction to explain quantum measurement.[3][4]

In quantum mechanics each measurable physical quantity of a quantum system is called anobservablewhich, for example, could be the positionr{\displaystyle r}and the momentump{\displaystyle p}but also energyE{\displaystyle E},z{\displaystyle z}components of spin (sz{\displaystyle s_{z}}), and so on. The observable acts as alinear functionon the states of the system; its eigenvectors correspond to the quantum state (i.e.eigenstate) and theeigenvaluesto the possible values of the observable. The collection of eigenstate/eigenvalue pairs represents all possible values of the observable. Writingϕi{\displaystyle \phi _{i}}for an eigenstate andci{\displaystyle c_{i}}for the corresponding expansion coefficient, any arbitrary state of the quantum system can be expressed as a vector usingbra–ket notation:|ψ⟩=∑ici|ϕi⟩.{\displaystyle |\psi \rangle =\sum _{i}c_{i}|\phi _{i}\rangle .}The kets{|ϕi⟩}{\displaystyle \{|\phi _{i}\rangle \}}specify the different available quantum "alternatives", i.e., particular quantum states. Thewave functionis a specific representation of a quantum state. Wave functions can therefore always be expressed as eigenstates of an observable, though the converse is not necessarily true.

To account for the experimental result that repeated measurements of a quantum system give the same results, the theory postulates a "collapse" or "reduction of the state vector" upon observation,[5]:566abruptly converting an arbitrary state into a single component eigenstate of the observable:|ψ⟩=∑ici|ϕi⟩→|ϕi⟩{\displaystyle |\psi \rangle =\sum _{i}c_{i}|\phi _{i}\rangle \to |\phi _{i}\rangle }where the arrow represents a measurement of the observable corresponding to theϕ{\displaystyle \phi }basis.[6]For any single event, only one eigenvalue is measured, chosen randomly from among the possible values.

Thecomplexcoefficients{ci}{\displaystyle \{c_{i}\}}in the expansion of a quantum state in terms of eigenstates{|ϕi⟩}{\displaystyle \{|\phi _{i}\rangle \}},|ψ⟩=∑ici|ϕi⟩.{\displaystyle |\psi \rangle =\sum _{i}c_{i}|\phi _{i}\rangle .}can be written as a (complex) overlap of the corresponding eigenstate and the quantum state:ci=⟨ϕi|ψ⟩.{\displaystyle c_{i}=\langle \phi _{i}|\psi \rangle .}They are called theprobability amplitudes. Thesquare modulus|ci|2{\displaystyle |c_{i}|^{2}}is the probability that a measurement of the observable yields the eigenstate|ϕi⟩{\displaystyle |\phi _{i}\rangle }.
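A minimal simulation of this postulate (the coefficients and seed are illustrative, not from the article): the first measurement samples an eigenstate via the Born rule, and because the state is replaced by that eigenstate, an immediately repeated measurement returns the same result:

import numpy as np

rng = np.random.default_rng(seed=3)
c = np.array([0.6, 0.0, 0.8j])                  # coefficients c_i = <phi_i|psi>
assert np.isclose((np.abs(c) ** 2).sum(), 1.0)

def measure(coeffs):
    i = rng.choice(len(coeffs), p=np.abs(coeffs) ** 2)   # Born rule
    collapsed = np.zeros(len(coeffs), dtype=complex)
    collapsed[i] = 1.0                          # state reduced to eigenstate |phi_i>
    return i, collapsed

i1, psi = measure(c)
i2, _ = measure(psi)                            # repeat on the collapsed state
assert i1 == i2                                 # the same value on repetition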
The sum of the probability over all possible outcomes must be one:[7]∑i|ci|2=1.{\displaystyle \sum _{i}|c_{i}|^{2}=1.}

As examples, individual counts in adouble slit experimentwith electrons appear at random locations on the detector; after many counts are summed the distribution shows a wave interference pattern.[8]In aStern-Gerlach experimentwith silver atoms, each particle appears in one of two areas unpredictably, but after many events the two areas accumulate equal numbers of counts. This statistical aspect of quantum measurements differs fundamentally fromclassical mechanics. In quantum mechanics the only information we have about a system is its wave function, and measurements of its wave function can only give statistical information.[5]: 17

The two terms "reduction of the state vector" (or "state reduction" for short) and "wave function collapse" are used to describe the same concept. Aquantum stateis a mathematical description of a quantum system; aquantum state vectoruses Hilbert space vectors for the description.[9]: 159Reduction of the state vector replaces the full state vector with a single eigenstate of the observable. The term "wave function" is typically used for a different mathematical representation of the quantum state, one that uses spatial coordinates, also called the "position representation".[9]: 324When the wave function representation is used, the "reduction" is called "wave function collapse".

The Schrödinger equation describes quantum systems but does not describe their measurement. Solutions to the equation include all possible observable values for measurements, but measurements only result in one definite outcome. This difference is called themeasurement problemof quantum mechanics. To predict measurement outcomes from quantum solutions, the orthodox interpretation of quantum theory postulates wave function collapse and uses theBorn ruleto compute the probable outcomes.[10]Despite the widespread quantitative success of these postulates, scientists remain dissatisfied and have sought more detailed physical models. Rather than suspending the Schrödinger equation during the process of measurement, the measurement apparatus should be included and governed by the laws of quantum mechanics.[11]: 127

Quantum theory offers no dynamical description of the "collapse" of the wave function. Viewed as a statistical theory, no description is expected. As Fuchs and Peres put it, "collapse is something that happens in our description of the system, not to the system itself".[12]

Variousinterpretations of quantum mechanicsattempt to provide a physical model for collapse.[13]: 816Three treatments of collapse can be found among the common interpretations. The first group includes hidden-variable theories likede Broglie–Bohm theory; here random outcomes only result from unknown values of hidden variables. Results fromtestsofBell's theoremshow that these variables would need to be non-local. The second group models measurement as quantum entanglement between the quantum state and the measurement apparatus. This results in a simulation of classical statistics called quantum decoherence. This group includes themany-worlds interpretationandconsistent historiesmodels. The third group postulates an additional, but as yet undetected, physical basis for the randomness; this group includes for example theobjective-collapse interpretations.
While models in all groups have contributed to better understanding of quantum theory, no alternative explanation for individual events has emerged as more useful than collapse followed by statistical prediction with the Born rule.[13]: 819

The significance ascribed to the wave function varies from interpretation to interpretation and even within an interpretation (such as theCopenhagen interpretation). If the wave function merely encodes an observer's knowledge of the universe, then the wave function collapse corresponds to the receipt of new information. This is somewhat analogous to the situation in classical physics, except that the classical "wave function" does not necessarily obey a wave equation. If the wave function is physically real, in some sense and to some extent, then the collapse of the wave function is also seen as a real process, to the same extent.[citation needed]

Quantum decoherence explains why a system interacting with an environment transitions from being apure state, exhibiting superpositions, to amixed state, an incoherent combination of classical alternatives.[14]This transition is fundamentally reversible, as the combined state of system and environment is still pure, but for all practical purposes irreversible in the same sense as in thesecond law of thermodynamics: the environment is a very large and complex quantum system, and it is not feasible to reverse their interaction. Decoherence is thus very important for explaining theclassical limitof quantum mechanics, but cannot explain wave function collapse, as all classical alternatives are still present in the mixed state, and wave function collapse selects only one of them.[15][16][14]

The form of decoherence known asenvironment-induced superselectionproposes that when a quantum system interacts with the environment, the superpositionsapparentlyreduce to mixtures of classical alternatives. The combined wave function of the system and environment continues to obey the Schrödinger equation throughout thisapparentcollapse.[17]More importantly, this is not enough to explainactualwave function collapse, as decoherence does not reduce it to a single eigenstate.[15][14]

The concept of wavefunction collapse was introduced byWerner Heisenbergin his 1927 paper on theuncertainty principle, "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik", and incorporated into themathematical formulation of quantum mechanicsbyJohn von Neumann, in his 1932 treatiseMathematische Grundlagen der Quantenmechanik.[4]Heisenberg did not try to specify exactly what the collapse of the wavefunction meant. However, he emphasized that it should not be understood as a physical process.[18]Niels Bohr never mentions wave function collapse in his published work, but he repeatedly cautioned that we must give up a "pictorial representation". Despite the differences between Bohr and Heisenberg, their views are often grouped together as the "Copenhagen interpretation", of which wave function collapse is regarded as a key feature.[19]

John von Neumann's influential 1932 workMathematical Foundations of Quantum Mechanicstook a more formal approach, developing an "ideal" measurement scheme[20][21]:1270that postulated that there were two processes of wave function change: a discontinuous, probabilistic change brought about by measurement, and the continuous, deterministic time evolution governed by the Schrödinger equation. In 1957Hugh Everett IIIproposed a model of quantum mechanics that dropped von Neumann's first postulate.
Everett observed that the measurement apparatus was also a quantum system and its quantum interaction with the system under observation should determine the results. He proposed that the discontinuous change is instead a splitting of a wave function representing the universe.[21]: 1288While Everett's approach rekindled interest in foundational quantum mechanics, it left core issues unresolved. Two key issues relate to the origin of the observed classical results: what causes quantum systems to appear classical and to resolve with the observed probabilities of theBorn rule.[21]: 1290[20]: 5

Beginning in 1970H. Dieter Zehsought a detailed quantum decoherence model for the discontinuous change without postulating collapse. Further work byWojciech H. Zurekin 1980 led eventually to a large number of papers on many aspects of the concept.[22]Decoherence assumes that every quantum system interacts quantum mechanically with its environment and such interaction is not separable from the system, a concept called an "open system".[21]: 1273Decoherence has been shown to work very quickly and within a minimal environment, but as yet it has not succeeded in providing a detailed model replacing the collapse postulate of orthodox quantum mechanics.[21]: 1302

By explicitly dealing with the interaction of object and measuring instrument, von Neumann[2]described a quantum mechanical measurement scheme consistent with wave function collapse. However, he did not prove thenecessityof such a collapse. Von Neumann's projection postulate was conceived based on experimental evidence available during the 1930s, in particularCompton scattering. Later work refined the notion of measurements into the more easily discussedfirst kind, which gives the same value when immediately repeated, and thesecond kind, which gives different values when repeated.[23][24][25]
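The distinction drawn above (decoherence suppresses interference but selects no single outcome) can be seen in a small density-matrix sketch; the dephasing factor used here is illustrative:

import numpy as np

psi = np.array([1, 1]) / np.sqrt(2)              # pure superposition (|0> + |1>)/sqrt(2)
rho = np.outer(psi, psi.conj())                  # density matrix of the pure state

decay = np.exp(-5.0)                             # illustrative environment-induced damping
rho_mixed = rho * np.array([[1, decay],          # coherences (off-diagonals) shrink,
                            [decay, 1]])         # modelling dephasing by the environment

assert np.allclose(np.diag(rho_mixed), [0.5, 0.5])   # both alternatives remain present
print(np.trace(rho_mixed @ rho_mixed))           # purity drops below 1: a mixed state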
https://en.wikipedia.org/wiki/Wave_function_collapse
TheW stateis anentangledquantum stateof threequbitswhich in thebra-ket notationhas the following shape:|W⟩=(|001⟩+|010⟩+|100⟩)/3{\displaystyle |\mathrm {W} \rangle =(|001\rangle +|010\rangle +|100\rangle )/{\sqrt {3}}}and which is remarkable for representing a specific type ofmultipartite entanglementand for occurring in several applications inquantum information theory. Particles prepared in this state reproduce the properties ofBell's theorem, which states that no classical theory of local hidden variables can produce the predictions of quantum mechanics.[1]The state is named afterWolfgang Dür, who first reported the state together withGuifré Vidal, andIgnacio Ciracin 2000.[2]

The W state is the representative of one of the two non-biseparable[3]classes of three-qubit states, the other being theGreenberger–Horne–Zeilinger state,|GHZ⟩=(|000⟩+|111⟩)/2{\displaystyle |\mathrm {GHZ} \rangle =(|000\rangle +|111\rangle )/{\sqrt {2}}}. The|W⟩{\displaystyle |\mathrm {W} \rangle }and|GHZ⟩{\displaystyle |\mathrm {GHZ} \rangle }states represent two very different kinds of tripartite entanglement, as they cannot be transformed (not even probabilistically) into each other bylocal quantum operations.[2]

This difference is, for example, illustrated by the following interesting property of the W state: if one of the three qubits is lost, the state of the remaining 2-qubit system is still entangled. This robustness of W-type entanglement contrasts strongly with the GHZ state, which is fully separable after loss of one qubit. The states in the W class can be distinguished from all other 3-qubit states by means ofmultipartite entanglement measures. In particular, W states have non-zero entanglement across any bipartition,[4]while their 3-tangle vanishes; the 3-tangle is, by contrast, non-zero for GHZ-type states.[2]

The notion of W state has been generalized forn{\displaystyle n}qubits[2]and then refers to the quantum superposition with equal expansion coefficients of all possible pure states in which exactly one of the qubits is in an "excited state"|1⟩{\displaystyle |1\rangle }, while all other ones are in the "ground state"|0⟩{\displaystyle |0\rangle }:|W⟩=(|100…0⟩+|010…0⟩+⋯+|00…01⟩)/n{\displaystyle |\mathrm {W} \rangle =(|100\dots 0\rangle +|010\dots 0\rangle +\dots +|00\dots 01\rangle )/{\sqrt {n}}}

Both the robustness against particle loss and theLOCC-inequivalence with the (generalized) GHZ state also hold for then{\displaystyle n}-qubit W state. In systems in which a single qubit is stored in an ensemble of many two-level systems the logical "1" is often represented by the W state, while the logical "0" is represented by the state|00...0⟩{\displaystyle |00...0\rangle }. Here the W state's robustness against particle loss is a very beneficial property ensuring good storage properties of these ensemble-based quantum memories.[5]
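The robustness claim can be checked numerically. The sketch below (the helper names are illustrative; the PPT criterion, which is exact for two qubits, is a standard test and not part of the article) traces out one qubit of the three-qubit W and GHZ states and tests the remaining pair for entanglement:

import numpy as np

def ket(bits):                                   # basis state, e.g. ket('001')
    v = np.zeros(2 ** len(bits))
    v[int(bits, 2)] = 1.0
    return v

W = (ket('001') + ket('010') + ket('100')) / np.sqrt(3)
GHZ = (ket('000') + ket('111')) / np.sqrt(2)

def lose_one_qubit(psi):                         # trace out the last qubit
    M = psi.reshape(4, 2)
    return M @ M.conj().T                        # reduced 2-qubit density matrix

def min_ppt_eigenvalue(rho):                     # partial transpose on the 2nd qubit
    R = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.linalg.eigvalsh(R).min()           # a negative value signals entanglement

assert min_ppt_eigenvalue(lose_one_qubit(W)) < 0     # W pair: still entangled
assert min_ppt_eigenvalue(lose_one_qubit(GHZ)) >= 0  # GHZ remainder: separable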
https://en.wikipedia.org/wiki/W_state
In themathematicaldiscipline ofmatrix theory, aJordan matrix, named afterCamille Jordan, is ablock diagonal matrixover aringR(whoseidentitiesare thezero0 andone1), where each block along the diagonal, called a Jordan block, has the following form:[λ10⋯00λ1⋯0⋮⋮⋮⋱⋮000λ10000λ].{\displaystyle {\begin{bmatrix}\lambda &1&0&\cdots &0\\0&\lambda &1&\cdots &0\\\vdots &\vdots &\vdots &\ddots &\vdots \\0&0&0&\lambda &1\\0&0&0&0&\lambda \end{bmatrix}}.} EveryJordan blockis specified by its dimensionnand itseigenvalueλ∈R{\displaystyle \lambda \in R}, and is denoted asJλ,n. It is ann×n{\displaystyle n\times n}matrix of zeroes everywhere except for the diagonal, which is filled withλ{\displaystyle \lambda }and for thesuperdiagonal, which is composed of ones. Any block diagonal matrix whose blocks are Jordan blocks is called aJordan matrix. This(n1+ ⋯ +nr) × (n1+ ⋯ +nr)square matrix, consisting ofrdiagonal blocks, can be compactly indicated asJλ1,n1⊕⋯⊕Jλr,nr{\displaystyle J_{\lambda _{1},n_{1}}\oplus \cdots \oplus J_{\lambda _{r},n_{r}}}ordiag(Jλ1,n1,…,Jλr,nr){\displaystyle \mathrm {diag} \left(J_{\lambda _{1},n_{1}},\ldots ,J_{\lambda _{r},n_{r}}\right)}, where thei-th Jordan block isJλi,ni. For example, the matrixJ=[010000000000100000000000000000000i1000000000i0000000000i1000000000i000000000071000000000710000000007]{\displaystyle J=\left[{\begin{array}{ccc|cc|cc|ccc}0&1&0&0&0&0&0&0&0&0\\0&0&1&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0&0&0\\\hline 0&0&0&i&1&0&0&0&0&0\\0&0&0&0&i&0&0&0&0&0\\\hline 0&0&0&0&0&i&1&0&0&0\\0&0&0&0&0&0&i&0&0&0\\\hline 0&0&0&0&0&0&0&7&1&0\\0&0&0&0&0&0&0&0&7&1\\0&0&0&0&0&0&0&0&0&7\end{array}}\right]}is a10 × 10Jordan matrix with a3 × 3block witheigenvalue0, two2 × 2blocks with eigenvalue theimaginary uniti, and a3 × 3block with eigenvalue 7. Its Jordan-block structure is written as eitherJ0,3⊕Ji,2⊕Ji,2⊕J7,3{\displaystyle J_{0,3}\oplus J_{i,2}\oplus J_{i,2}\oplus J_{7,3}}ordiag(J0,3,Ji,2,Ji,2,J7,3). Anyn×nsquare matrixAwhose elements are in analgebraically closed fieldKissimilarto a Jordan matrixJ, also inMn(K){\displaystyle \mathbb {M} _{n}(K)}, which is unique up to a permutation of its diagonal blocks themselves.Jis called theJordan normal formofAand corresponds to a generalization of the diagonalization procedure.[1][2][3]Adiagonalizable matrixis similar, in fact, to a special case of Jordan matrix: the matrix whose blocks are all1 × 1.[4][5][6] More generally, given a Jordan matrixJ=Jλ1,m1⊕Jλ2,m2⊕⋯⊕JλN,mN{\displaystyle J=J_{\lambda _{1},m_{1}}\oplus J_{\lambda _{2},m_{2}}\oplus \cdots \oplus J_{\lambda _{N},m_{N}}}, that is, whosekth diagonal block,1≤k≤N{\displaystyle 1\leq k\leq N}, is the Jordan blockJλk,mkand whose diagonal elementsλk{\displaystyle \lambda _{k}}may not all be distinct, thegeometric multiplicityofλ∈K{\displaystyle \lambda \in K}for the matrixJ, indicated asgmulJ⁡λ{\displaystyle \operatorname {gmul} _{J}\lambda }, corresponds to the number of Jordan blocks whose eigenvalue isλ. Whereas theindexof an eigenvalueλ{\displaystyle \lambda }forJ, indicated asidxJ⁡λ{\displaystyle \operatorname {idx} _{J}\lambda }, is defined as the dimension of the largest Jordan block associated to that eigenvalue. The same goes for all the matricesAsimilar toJ, soidxA⁡λ{\displaystyle \operatorname {idx} _{A}\lambda }can be defined accordingly with respect to theJordan normal formofAfor any of its eigenvaluesλ∈spec⁡A{\displaystyle \lambda \in \operatorname {spec} A}. 
In this case one can check that the index ofλ{\displaystyle \lambda }forAis equal to its multiplicity as arootof theminimal polynomialofA(whereas, by definition, itsalgebraic multiplicityforA,mulA⁡λ{\displaystyle \operatorname {mul} _{A}\lambda }, is its multiplicity as a root of thecharacteristic polynomialofA; that is,det(A−xI)∈K[x]{\displaystyle \det(A-xI)\in K[x]}). An equivalent necessary and sufficient condition forAto be diagonalizable inKis that all of its eigenvalues have index equal to1; that is, its minimal polynomial has only simple roots. Note that knowing a matrix's spectrum with all of its algebraic/geometric multiplicities and indexes does not always allow for the computation of itsJordan normal form(this may be a sufficient condition only for spectrally simple, usually low-dimensional matrices). Indeed, determining the Jordan normal form is generally a computationally challenging task. From thevector spacepoint of view, the Jordan normal form is equivalent to finding an orthogonal decomposition (that is, viadirect sumsof eigenspaces represented by Jordan blocks) of the domain which the associatedgeneralized eigenvectorsmake a basis for. LetA∈Mn(C){\displaystyle A\in \mathbb {M} _{n}(\mathbb {C} )}(that is, an×ncomplex matrix) andC∈GLn(C){\displaystyle C\in \mathrm {GL} _{n}(\mathbb {C} )}be thechange of basismatrix to theJordan normal formofA; that is,A=C−1JC. Now letf(z)be aholomorphic functionon an open setΩ{\displaystyle \Omega }such thatspecA⊂Ω⊆C{\displaystyle \mathrm {spec} A\subset \Omega \subseteq \mathbb {C} }; that is, the spectrum of the matrix is contained inside thedomain of holomorphyoff. Letf(z)=∑h=0∞ah(z−z0)h{\displaystyle f(z)=\sum _{h=0}^{\infty }a_{h}(z-z_{0})^{h}}be thepower seriesexpansion offaroundz0∈Ω∖spec⁡A{\displaystyle z_{0}\in \Omega \setminus \operatorname {spec} A}, which will be hereinafter supposed to be0for simplicity's sake. The matrixf(A)is then defined via the followingformal power seriesf(A)=∑h=0∞ahAh{\displaystyle f(A)=\sum _{h=0}^{\infty }a_{h}A^{h}}and isabsolutely convergentwith respect to theEuclidean normofMn(C){\displaystyle \mathbb {M} _{n}(\mathbb {C} )}. To put it another way,f(A)converges absolutely for every square matrix whosespectral radiusis less than theradius of convergenceoffaround0and isuniformly convergenton any compact subsets ofMn(C){\displaystyle \mathbb {M} _{n}(\mathbb {C} )}satisfying this property in thematrix Lie grouptopology. TheJordan normal formallows the computation of functions of matrices without explicitly computing aninfinite series, which is one of the main achievements of Jordan matrices. Using the facts that thekth power (k∈N0{\displaystyle k\in \mathbb {N} _{0}}) of a diagonalblock matrixis the diagonal block matrix whose blocks are thekth powers of the respective blocks; that is,(A1⊕A2⊕A3⊕⋯)k=A1k⊕A2k⊕A3k⊕⋯{\displaystyle \left(A_{1}\oplus A_{2}\oplus A_{3}\oplus \cdots \right)^{k}=A_{1}^{k}\oplus A_{2}^{k}\oplus A_{3}^{k}\oplus \cdots },and thatAk=C−1JkC, the above matrix power series becomes f(A)=C−1f(J)C=C−1(⨁k=1Nf(Jλk,mk))C{\displaystyle f(A)=C^{-1}f(J)C=C^{-1}\left(\bigoplus _{k=1}^{N}f\left(J_{\lambda _{k},m_{k}}\right)\right)C} where the last series need not be computed explicitly via power series of every Jordan block. In fact, ifλ∈Ω{\displaystyle \lambda \in \Omega }, anyholomorphic functionof a Jordan blockf(Jλ,n)=f(λI+Z){\displaystyle f(J_{\lambda ,n})=f(\lambda I+Z)}has a finite power series aroundλI{\displaystyle \lambda I}becauseZn=0{\displaystyle Z^{n}=0}. 
Here,Z{\displaystyle Z}is the nilpotent part ofJ{\displaystyle J}andZk{\displaystyle Z^{k}}has all 0's except 1's along thekth{\displaystyle k^{\text{th}}}superdiagonal. Thus it is the following upper triangular matrix:f(Jλ,n)=∑k=0n−1f(k)(λ)Zkk!=[f(λ)f′(λ)f′′(λ)2⋯f(n−2)(λ)(n−2)!f(n−1)(λ)(n−1)!0f(λ)f′(λ)⋯f(n−3)(λ)(n−3)!f(n−2)(λ)(n−2)!00f(λ)⋯f(n−4)(λ)(n−4)!f(n−3)(λ)(n−3)!⋮⋮⋮⋱⋮⋮000⋯f(λ)f′(λ)000⋯0f(λ)].{\displaystyle f(J_{\lambda ,n})=\sum _{k=0}^{n-1}{\frac {f^{(k)}(\lambda )Z^{k}}{k!}}={\begin{bmatrix}f(\lambda )&f^{\prime }(\lambda )&{\frac {f^{\prime \prime }(\lambda )}{2}}&\cdots &{\frac {f^{(n-2)}(\lambda )}{(n-2)!}}&{\frac {f^{(n-1)}(\lambda )}{(n-1)!}}\\0&f(\lambda )&f^{\prime }(\lambda )&\cdots &{\frac {f^{(n-3)}(\lambda )}{(n-3)!}}&{\frac {f^{(n-2)}(\lambda )}{(n-2)!}}\\0&0&f(\lambda )&\cdots &{\frac {f^{(n-4)}(\lambda )}{(n-4)!}}&{\frac {f^{(n-3)}(\lambda )}{(n-3)!}}\\\vdots &\vdots &\vdots &\ddots &\vdots &\vdots \\0&0&0&\cdots &f(\lambda )&f^{\prime }(\lambda )\\0&0&0&\cdots &0&f(\lambda )\\\end{bmatrix}}.}

As a consequence of this, the computation of any function of a matrix is straightforward whenever its Jordan normal form and its change-of-basis matrix are known. For example, usingf(z)=1/z{\displaystyle f(z)=1/z}, the inverse ofJλ,n{\displaystyle J_{\lambda ,n}}is:Jλ,n−1=∑k=0n−1(−Z)kλk+1=[λ−1−λ−2λ−3⋯−(−λ)1−n−(−λ)−n0λ−1−λ−2⋯−(−λ)2−n−(−λ)1−n00λ−1⋯−(−λ)3−n−(−λ)2−n⋮⋮⋮⋱⋮⋮000⋯λ−1−λ−2000⋯0λ−1].{\displaystyle J_{\lambda ,n}^{-1}=\sum _{k=0}^{n-1}{\frac {(-Z)^{k}}{\lambda ^{k+1}}}={\begin{bmatrix}\lambda ^{-1}&-\lambda ^{-2}&\,\,\,\lambda ^{-3}&\cdots &-(-\lambda )^{1-n}&\,-(-\lambda )^{-n}\\0&\;\;\;\lambda ^{-1}&-\lambda ^{-2}&\cdots &-(-\lambda )^{2-n}&-(-\lambda )^{1-n}\\0&0&\,\,\,\lambda ^{-1}&\cdots &-(-\lambda )^{3-n}&-(-\lambda )^{2-n}\\\vdots &\vdots &\vdots &\ddots &\vdots &\vdots \\0&0&0&\cdots &\lambda ^{-1}&-\lambda ^{-2}\\0&0&0&\cdots &0&\lambda ^{-1}\\\end{bmatrix}}.}

Also,specf(A) =f(specA); that is, every eigenvalueλ∈specA{\displaystyle \lambda \in \mathrm {spec} A}corresponds to the eigenvaluef(λ)∈spec⁡f(A){\displaystyle f(\lambda )\in \operatorname {spec} f(A)}, but it has, in general, differentalgebraic multiplicity, geometric multiplicity and index. However, the algebraic multiplicity may be computed as follows:mulf(A)f(λ)=∑μ∈specA∩f−1(f(λ))mulAμ.{\displaystyle {\text{mul}}_{f(A)}f(\lambda )=\sum _{\mu \in {\text{spec}}A\cap f^{-1}(f(\lambda ))}~{\text{mul}}_{A}\mu .}

The functionf(T)of alinear transformationTbetween vector spaces can be defined in a similar way according to theholomorphic functional calculus, whereBanach spaceandRiemann surfacetheories play a fundamental role. In the case of finite-dimensional spaces, both theories perfectly match.

Now suppose a (complex)dynamical systemis simply defined by the equationz˙(t)=A(c)z(t),z(0)=z0∈Cn,{\displaystyle {\begin{aligned}{\dot {\mathbf {z} }}(t)&=A(\mathbf {c} )\mathbf {z} (t),\\\mathbf {z} (0)&=\mathbf {z} _{0}\in \mathbb {C} ^{n},\end{aligned}}} wherez:R+→R{\displaystyle \mathbf {z} :\mathbb {R} _{+}\to {\mathcal {R}}}is the (n-dimensional) curve parametrization of an orbit on theRiemann surfaceR{\displaystyle {\mathcal {R}}}of the dynamical system, whereasA(c)is ann×ncomplex matrix whose elements are complex functions of ad-dimensional parameterc∈Cd{\displaystyle \mathbf {c} \in \mathbb {C} ^{d}}.
Even ifA∈Mn(C0(Cd)){\displaystyle A\in \mathbb {M} _{n}\left(\mathrm {C} ^{0}\left(\mathbb {C} ^{d}\right)\right)}(that is,Acontinuously depends on the parameterc) theJordan normal formof the matrix is continuously deformedalmost everywhereonCd{\displaystyle \mathbb {C} ^{d}}but, in general,noteverywhere: there is some critical submanifold ofCd{\displaystyle \mathbb {C} ^{d}}on which the Jordan form abruptly changes its structure whenever the parameter crosses or simply "travels" around it (monodromy). Such changes mean that several Jordan blocks (either belonging to different eigenvalues or not) join to a unique Jordan block, or vice versa (that is, one Jordan block splits into two or more different ones). Many aspects ofbifurcation theoryfor both continuous and discrete dynamical systems can be interpreted with the analysis of functional Jordan matrices. From thetangent spacedynamics, this means that the orthogonal decomposition of the dynamical system'sphase spacechanges and, for example, different orbits gain periodicity, or lose it, or shift from a certain kind of periodicity to another (such asperiod-doubling, cf.logistic map). In a sentence, the qualitative behaviour of such a dynamical system may substantially change with theversal deformationof the Jordan normal form ofA(c).

The simplest example of adynamical systemis a system of linear, constant-coefficient, ordinary differential equations; that is, letA∈Mn(C){\displaystyle A\in \mathbb {M} _{n}(\mathbb {C} )}andz0∈Cn{\displaystyle \mathbf {z} _{0}\in \mathbb {C} ^{n}}:z˙(t)=Az(t),z(0)=z0,{\displaystyle {\begin{aligned}{\dot {\mathbf {z} }}(t)&=A\mathbf {z} (t),\\\mathbf {z} (0)&=\mathbf {z} _{0},\end{aligned}}}whose direct closed-form solution involves computation of thematrix exponential:z(t)=etAz0.{\displaystyle \mathbf {z} (t)=e^{tA}\mathbf {z} _{0}.}

Another way, provided the solution is restricted to the localLebesgue spaceofn-dimensional vector fieldsz∈Lloc1(R+)n{\displaystyle \mathbf {z} \in \mathrm {L} _{\mathrm {loc} }^{1}(\mathbb {R} _{+})^{n}}, is to use itsLaplace transformZ(s)=L[z](s){\displaystyle \mathbf {Z} (s)={\mathcal {L}}[\mathbf {z} ](s)}. In this caseZ(s)=(sI−A)−1z0.{\displaystyle \mathbf {Z} (s)=\left(sI-A\right)^{-1}\mathbf {z} _{0}.}

The matrix function(A−sI)−1is called theresolvent matrixof thedifferential operatorddt−A{\textstyle {\frac {\mathrm {d} }{\mathrm {d} t}}-A}. It ismeromorphicwith respect to the complex parameters∈C{\displaystyle s\in \mathbb {C} }since its matrix elements are rational functions whose denominators are all equal todet(A−sI). Its polar singularities are the eigenvalues ofA, whose order equals their index for it; that is,ord(A−sI)−1λ=idxAλ{\displaystyle \mathrm {ord} _{(A-sI)^{-1}}\lambda =\mathrm {idx} _{A}\lambda }.
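A short numerical sketch tying the pieces of this section together (the matrices, function, and tolerances are illustrative): build a Jordan block, check the finite superdiagonal series for f = exp against SciPy's expm, and use the matrix exponential to solve ż = Az in closed form:

import numpy as np
from math import factorial
from scipy.linalg import expm, block_diag
from scipy.integrate import solve_ivp

def jordan_block(lam, n):
    return lam * np.eye(n) + np.eye(n, k=1)      # lambda on the diagonal, 1's above

# f(J_{lambda,n}) = sum_k f^(k)(lambda) Z^k / k!, with f = exp so f^(k)(lambda) = e^lambda
lam, n = 2.0, 4
Z = np.eye(n, k=1)                               # nilpotent part, Z^n = 0
series = sum(np.exp(lam) / factorial(k) * np.linalg.matrix_power(Z, k)
             for k in range(n))
assert np.allclose(series, expm(jordan_block(lam, n)))

# closed-form solution z(t) = e^{tA} z_0 of zdot = Az, checked against an ODE solver
A = block_diag(jordan_block(-1.0, 2), jordan_block(-3.0, 1))
z0 = np.array([1.0, 0.5, -1.0])
t = 1.5
z_closed = expm(t * A) @ z0
sol = solve_ivp(lambda s, z: A @ z, (0, t), z0, rtol=1e-10, atol=1e-12)
assert np.allclose(z_closed, sol.y[:, -1], atol=1e-6)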
https://en.wikipedia.org/wiki/Jordan_matrix
Inmathematics, specificallylinear algebra, theJordan–Chevalley decomposition, named afterCamille JordanandClaude Chevalley, expresses alinear operatorin a unique way as the sum of two other linear operators which are simpler to understand. Specifically, one part ispotentially diagonalisableand the other isnilpotent. The two parts arepolynomialsin the operator, which makes them behave nicely in algebraic manipulations. The decomposition has a short description when theJordan normal formof the operator is given, but it exists under weaker hypotheses than are needed for the existence of a Jordan normal form. Hence the Jordan–Chevalley decomposition can be seen as a generalisation of the Jordan normal form, which is also reflected in several proofs of it. It is closely related to theWedderburn principal theoremaboutassociative algebras, which also leads to several analogues inLie algebras. Analogues of the Jordan–Chevalley decomposition also exist for elements oflinear algebraic groupsandLie groupsvia a multiplicative reformulation. The decomposition is an important tool in the study of all of these objects, and was developed for this purpose. In many texts, the potentially diagonalisable part is also characterised as thesemisimplepart.

A basic question in linear algebra is whether an operator on a finite-dimensionalvector spacecan bediagonalised. For example, this is closely related to theeigenvaluesof the operator. In several contexts, one may be dealing with many operators which are not diagonalisable. Even over an algebraically closed field, a diagonalisation may not exist. In this context, theJordan normal formachieves the best possible result akin to a diagonalisation. For linear operators over afieldwhich is notalgebraically closed, there may be no eigenvector at all. This latter point is not the main concern dealt with by the Jordan–Chevalley decomposition. To avoid this problem, insteadpotentially diagonalisable operatorsare considered, which are those that admit a diagonalisation over some field (or equivalently over thealgebraic closureof the field under consideration).

The operators which are "the furthest away" from being diagonalisable arenilpotent operators. An operator (or more generally an element of aring)x{\displaystyle x}is said to benilpotentwhen there is some positive integerm≥1{\displaystyle m\geq 1}such thatxm=0{\displaystyle x^{m}=0}. In several contexts inabstract algebra, it is the case that the presence of nilpotent elements of a ring makes them much more complicated to work with.[citation needed]To some extent, this is also the case for linear operators. The Jordan–Chevalley decomposition "separates out" the nilpotent part of an operator which causes it to be not potentially diagonalisable. So when it exists, the complications introduced by nilpotent operators and their interaction with other operators can be understood using the Jordan–Chevalley decomposition. Historically, the Jordan–Chevalley decomposition was motivated by the applications to the theory ofLie algebrasandlinear algebraic groups,[1]as described insections below.

LetK{\displaystyle K}be afield,V{\displaystyle V}a finite-dimensionalvector spaceoverK{\displaystyle K}, andT{\displaystyle T}a linear operator overV{\displaystyle V}(equivalently, amatrixwith entries fromK{\displaystyle K}). If theminimal polynomialofT{\displaystyle T}splits overK{\displaystyle K}(for example ifK{\displaystyle K}is algebraically closed), thenT{\displaystyle T}has aJordan normal formT=SJS−1{\displaystyle T=SJS^{-1}}.
IfD{\displaystyle D}is the diagonal ofJ{\displaystyle J}, letR=J−D{\displaystyle R=J-D}be the remaining part. ThenT=SDS−1+SRS−1{\displaystyle T=SDS^{-1}+SRS^{-1}}is a decomposition whereSDS−1{\displaystyle SDS^{-1}}is diagonalisable andSRS−1{\displaystyle SRS^{-1}}is nilpotent. This restatement of the normal form as an additive decomposition not only makes the numerical computation more stable[citation needed], but can be generalised to cases where the minimal polynomial ofT{\displaystyle T}does not split.

If the minimal polynomial ofT{\displaystyle T}splits intodistinctlinear factors, thenT{\displaystyle T}is diagonalisable. Therefore, if the minimal polynomial ofT{\displaystyle T}is at leastseparable, thenT{\displaystyle T}is potentially diagonalisable. The Jordan–Chevalley decomposition is concerned with the more general case where the minimal polynomial ofT{\displaystyle T}is a product of separable polynomials.

Letx:V→V{\displaystyle x:V\to V}be any linear operator on the finite-dimensional vector spaceV{\displaystyle V}over the fieldK{\displaystyle K}. A Jordan–Chevalley decomposition ofx{\displaystyle x}is an expression of it as a sumx=xs+xn,{\displaystyle x=x_{s}+x_{n},}wherexs{\displaystyle x_{s}}is potentially diagonalisable,xn{\displaystyle x_{n}}is nilpotent, andxsxn=xnxs{\displaystyle x_{s}x_{n}=x_{n}x_{s}}.

Jordan–Chevalley decomposition—Letx:V→V{\displaystyle x:V\to V}be any operator on the finite-dimensional vector spaceV{\displaystyle V}over the fieldK{\displaystyle K}. Thenx{\displaystyle x}admits a Jordan–Chevalley decompositionif and only ifthe minimal polynomial ofx{\displaystyle x}is a product of separable polynomials. Moreover, in this case, there is a unique Jordan–Chevalley decomposition, andxs{\displaystyle x_{s}}(and hence alsoxn{\displaystyle x_{n}}) can be written as a polynomial (with coefficients fromK{\displaystyle K}) inx{\displaystyle x}with zero constant coefficient.

Several proofs are discussed in (Couty, Esterle & Zarouf 2011). Two arguments are also described below. IfK{\displaystyle K}is aperfect field, then every polynomial is a product of separable polynomials (since every polynomial is a product of its irreducible factors, and these are separable over a perfect field). So in this case, the Jordan–Chevalley decomposition always exists. Moreover, over a perfect field, a polynomial is separable if and only if it is square-free. Therefore an operator is potentially diagonalisable if and only if its minimal polynomial is square-free. In general (over any field), the minimal polynomial of a linear operator is square-free if and only if the operator issemisimple.[2](In particular, the sum of two commuting semisimple operators is always semisimple over a perfect field. The same statement is not true over general fields.) The property of being semisimple is more relevant than being potentially diagonalisable in most contexts where the Jordan–Chevalley decomposition is applied, such as for Lie algebras.[citation needed]For these reasons, many texts restrict to the case of perfect fields.

Thatxs{\displaystyle x_{s}}andxn{\displaystyle x_{n}}are polynomials inx{\displaystyle x}implies in particular that they commute with any operator that commutes withx{\displaystyle x}. This observation underlies the uniqueness proof. Letx=xs+xn{\displaystyle x=x_{s}+x_{n}}be a Jordan–Chevalley decomposition in whichxs{\displaystyle x_{s}}and (hence also)xn{\displaystyle x_{n}}are polynomials inx{\displaystyle x}. Letx=xs′+xn′{\displaystyle x=x_{s}'+x_{n}'}be any Jordan–Chevalley decomposition.
Thenxs−xs′=xn′−xn{\displaystyle x_{s}-x_{s}'=x_{n}'-x_{n}}, andxs′,xn′{\displaystyle x_{s}',x_{n}'}both commute withx{\displaystyle x}, hence withxs,xn{\displaystyle x_{s},x_{n}}since these are polynomials inx{\displaystyle x}. The sum of commuting nilpotent operators is again nilpotent, and the sum of commuting potentially diagonalisable operators again potentially diagonalisable (because they aresimultaneously diagonalizableover thealgebraic closureofK{\displaystyle K}). Since the only operator which is both potentially diagonalisable and nilpotent is the zero operator it follows thatxs−xs′=0=xn−xn′{\displaystyle x_{s}-x_{s}'=0=x_{n}-x_{n}'}. To show that the condition thatx{\displaystyle x}have a minimal polynomial which is a product of separable polynomials is necessary, suppose thatx=xs+xn{\displaystyle x=x_{s}+x_{n}}is some Jordan–Chevalley decomposition. Lettingp{\displaystyle p}be the separable minimal polynomial ofxs{\displaystyle x_{s}}, one can check using thebinomial theoremthatp(xs+xn){\displaystyle p(x_{s}+x_{n})}can be written asxny{\displaystyle x_{n}y}wherey{\displaystyle y}is some polynomial inxs,xn{\displaystyle x_{s},x_{n}}. Moreover, for someℓ≥1{\displaystyle \ell \geq 1},xnℓ=0{\displaystyle x_{n}^{\ell }=0}. Thusp(x)ℓ=xnℓyℓ=0{\displaystyle p(x)^{\ell }=x_{n}^{\ell }y^{\ell }=0}and so the minimal polynomial ofx{\displaystyle x}must dividepℓ{\displaystyle p^{\ell }}. Aspℓ{\displaystyle p^{\ell }}is a product of separable polynomials (namely of copies ofp{\displaystyle p}), so is the minimal polynomial. If the ground field is notperfect, then a Jordan–Chevalley decomposition may not exist, as it is possible that the minimal polynomial is not a product of separable polynomials. The simplest such example is the following. Letp{\displaystyle p}be a prime number, letk{\displaystyle k}be an imperfect field of characteristicp,{\displaystyle p,}(e. g.k=Fp(t){\displaystyle k=\mathbb {F} _{p}(t)}) and choosea∈k{\displaystyle a\in k}that is not ap{\displaystyle p}th power. LetV=k[X]/(Xp−a)2,{\displaystyle V=k[X]/\left(X^{p}-a\right)^{2},}letx=X¯{\displaystyle x={\overline {X}}}be the image in the quotient and letT{\displaystyle T}be thek{\displaystyle k}-linear operator given by multiplication byx{\displaystyle x}inV{\displaystyle V}. Note that the minimal polynomial is precisely(Xp−a)2{\displaystyle \left(X^{p}-a\right)^{2}}, which is inseparable and a square. By the necessity of the condition for the Jordan–Chevalley decomposition (as shown in the last section), this operator does not have a Jordan–Chevalley decomposition. It can be instructive to see concretely why there is at least no decomposition into a square-free and a nilpotent part. Note thatT{\displaystyle T}has as its invariantk{\displaystyle k}-linear subspaces precisely the ideals ofV{\displaystyle V}viewed as a ring, which correspond to the ideals ofk[X]{\displaystyle k[X]}containing(Xp−a)2{\displaystyle \left(X^{p}-a\right)^{2}}. SinceXp−a{\displaystyle X^{p}-a}is irreducible ink[X],{\displaystyle k[X],}ideals ofV{\displaystyle V}are0,{\displaystyle 0,}V{\displaystyle V}andJ=(xp−a)V.{\displaystyle J=\left(x^{p}-a\right)V.}SupposeT=S+N{\displaystyle T=S+N}for commutingk{\displaystyle k}-linear operatorsS{\displaystyle S}andN{\displaystyle N}that are respectively semisimple (just overk{\displaystyle k}, which is weaker than semisimplicity over an algebraic closure ofk{\displaystyle k}and also weaker than being potentially diagonalisable) and nilpotent. 
SinceS{\displaystyle S}andN{\displaystyle N}commute, they each commute withT=S+N{\displaystyle T=S+N}and hence each actsk[x]{\displaystyle k[x]}-linearly onV{\displaystyle V}. ThereforeS{\displaystyle S}andN{\displaystyle N}are each given by multiplication by respective members ofV{\displaystyle V}s=S(1){\displaystyle s=S(1)}andn=N(1),{\displaystyle n=N(1),}withs+n=T(1)=x{\displaystyle s+n=T(1)=x}. SinceN{\displaystyle N}is nilpotent,n{\displaystyle n}is nilpotent inV,{\displaystyle V,}thereforen¯=0{\displaystyle {\overline {n}}=0}inV/J,{\displaystyle V/J,}forV/J{\displaystyle V/J}is a field. Hence,n∈J,{\displaystyle n\in J,}thereforen=(xp−a)h(x){\displaystyle n=\left(x^{p}-a\right)h(x)}for some polynomialh(X)∈k[X]{\displaystyle h(X)\in k[X]}. Also, we see thatn2=0{\displaystyle n^{2}=0}. Sincek{\displaystyle k}is of characteristicp,{\displaystyle p,}we havexp=sp+np=sp{\displaystyle x^{p}=s^{p}+n^{p}=s^{p}}. On the other hand, sincex¯=s¯{\displaystyle {\overline {x}}={\overline {s}}}inV/J,{\displaystyle V/J,}we haveh(s¯)=h(x¯),{\displaystyle h\left({\overline {s}}\right)=h\left({\overline {x}}\right),}thereforeh(s)−h(x)∈J{\displaystyle h(s)-h(x)\in J}inV.{\displaystyle V.}Since(xp−a)J=0,{\displaystyle \left(x^{p}-a\right)J=0,}we have(xp−a)h(x)=(xp−a)h(s).{\displaystyle \left(x^{p}-a\right)h(x)=\left(x^{p}-a\right)h(s).}Combining these results we getx=s+n=s+(sp−a)h(s).{\displaystyle x=s+n=s+\left(s^{p}-a\right)h(s).}This shows thats{\displaystyle s}generatesV{\displaystyle V}as ak{\displaystyle k}-algebra and thus theS{\displaystyle S}-stablek{\displaystyle k}-linear subspaces ofV{\displaystyle V}are ideals ofV,{\displaystyle V,}i.e. they are0,{\displaystyle 0,}J{\displaystyle J}andV.{\displaystyle V.}We see thatJ{\displaystyle J}is anS{\displaystyle S}-invariant subspace ofV{\displaystyle V}which has no complementS{\displaystyle S}-invariant subspace, contrary to the assumption thatS{\displaystyle S}is semisimple. Thus, there is no decomposition ofT{\displaystyle T}as a sum of commutingk{\displaystyle k}-linear operators that are respectively semisimple and nilpotent.

If the same construction is performed withXp−a{\displaystyle {X^{p}}-a}instead of the polynomial(Xp−a)2{\displaystyle \left(X^{p}-a\right)^{2}}, the resulting operatorT{\displaystyle T}still does not admit a Jordan–Chevalley decomposition by the main theorem. However,T{\displaystyle T}is semisimple. The trivial decompositionT=T+0{\displaystyle T=T+0}hence expressesT{\displaystyle T}as a sum of a semisimple and a nilpotent operator, both of which are polynomials inT{\displaystyle T}.

The following construction is similar toHensel's lemmain that it uses an algebraic analogue ofTaylor's theoremto find an element with a certain algebraic property via a variant ofNewton's method. In this form, it is taken from (Geck 2022). Letx{\displaystyle x}have minimal polynomialp{\displaystyle p}and assume this is a product of separable polynomials. This condition is equivalent to demanding that there is some separableq{\displaystyle q}such thatq∣p{\displaystyle q\mid p}andp∣qm{\displaystyle p\mid q^{m}}for somem≥1{\displaystyle m\geq 1}. By theBézout lemma, there are polynomialsu{\displaystyle u}andv{\displaystyle v}such thatuq+vq′=1{\displaystyle {uq+{vq'}}=1}. This can be used to define a recursionxn+1=xn−v(xn)q(xn){\displaystyle x_{n+1}=x_{n}-v(x_{n})q(x_{n})}, starting withx0=x{\displaystyle x_{0}=x}.
LettingX{\displaystyle {\mathfrak {X}}}be the algebra of operators which are polynomials inx{\displaystyle x}, it can be checked by induction that for alln{\displaystyle n}: first,xn∈X{\displaystyle x_{n}\in {\mathfrak {X}}}(that is,xn{\displaystyle x_{n}}is a polynomial inx{\displaystyle x}); second,q(xn)∈q(x)2nX{\displaystyle q(x_{n})\in q(x)^{2^{n}}{\mathfrak {X}}}; and third,xn−x∈q(x)X{\displaystyle x_{n}-x\in q(x){\mathfrak {X}}}. Thus, as soon as2n≥m{\displaystyle 2^{n}\geq m},q(xn)=0{\displaystyle q(x_{n})=0}by the second point sincep∣qm{\displaystyle p\mid q^{m}}andp(x)=0{\displaystyle p(x)=0}, so the minimal polynomial ofxn{\displaystyle x_{n}}will divideq{\displaystyle q}and hence be separable. Moreover,xn{\displaystyle x_{n}}will be a polynomial inx{\displaystyle x}by the first point andxn−x{\displaystyle x_{n}-x}will be nilpotent by the third point (in fact,(xn−x)m=0{\displaystyle (x_{n}-x)^{m}=0}). Therefore,x=xn+(x−xn){\displaystyle x=x_{n}+(x-x_{n})}is then the Jordan–Chevalley decomposition ofx{\displaystyle x}.Q.E.D.

This proof, besides being completely elementary, has the advantage that it isalgorithmic: By theCayley–Hamilton theorem,p{\displaystyle p}can be taken to be the characteristic polynomial ofx{\displaystyle x}, and in many contexts,q{\displaystyle q}can be determined fromp{\displaystyle p}.[3]Thenv{\displaystyle v}can be determined using theEuclidean algorithm. The iteration of applying the polynomialvq{\displaystyle vq}to the matrix then can be performed until eitherv(xn)q(xn)=0{\displaystyle v(x_{n})q(x_{n})=0}(because then all later values will be equal) or2n{\displaystyle 2^{n}}exceeds the dimension of the vector space on whichx{\displaystyle x}is defined (wheren{\displaystyle n}is the number of iteration steps performed, as above).

This proof, or variants of it, is commonly used to establish the Jordan–Chevalley decomposition. It has the advantage that it is very direct and describes quite precisely how close one can get to a Jordan–Chevalley decomposition: IfL{\displaystyle L}is thesplitting fieldof the minimal polynomial ofx{\displaystyle x}andG{\displaystyle G}is the group ofautomorphismsofL{\displaystyle L}that fix the base fieldK{\displaystyle K}, then the setF{\displaystyle F}of elements ofL{\displaystyle L}that are fixed by all elements ofG{\displaystyle G}is a field with inclusionsK⊆F⊆L{\displaystyle K\subseteq F\subseteq L}(seeGalois correspondence). Below it is argued thatx{\displaystyle x}admits a Jordan–Chevalley decomposition overF{\displaystyle F}, but not any smaller field.[citation needed]This argument does not useGalois theory. However, Galois theory is required to deduce from this the condition for the existence of the Jordan–Chevalley decomposition given above.

Above it was observed that ifx{\displaystyle x}has a Jordan normal form (i. e. if the minimal polynomial ofx{\displaystyle x}splits), then it has a Jordan–Chevalley decomposition. In this case, one can also see directly thatxn{\displaystyle x_{n}}(and hence alsoxs{\displaystyle x_{s}}) is a polynomial inx{\displaystyle x}. Indeed, it suffices to check this for the decomposition of the Jordan matrixJ=D+R{\displaystyle J=D+R}. This is a technical argument, but does not require any tricks beyond theChinese remainder theorem. In the Jordan normal form, we have writtenV=⨁i=1rVi{\displaystyle V=\bigoplus _{i=1}^{r}V_{i}}wherer{\displaystyle r}is the number of Jordan blocks andx|Vi{\displaystyle x|_{V_{i}}}is one Jordan block. Now letf(t)=det⁡(tI−x){\displaystyle f(t)=\operatorname {det} (tI-x)}be thecharacteristic polynomialofx{\displaystyle x}.
Because f splits, it can be written as f(t) = \prod_{i=1}^{r}(t − λ_i)^{d_i}, where r is the number of Jordan blocks, the λ_i are the distinct eigenvalues, and the d_i are the sizes of the Jordan blocks, so d_i = dim V_i. Now, the Chinese remainder theorem applied to the polynomial ring k[t] gives a polynomial p(t) satisfying the conditions p(t) ≡ λ_i mod (t − λ_i)^{d_i} for all i, and p(t) ≡ 0 mod t. (There is a redundancy in the conditions if some λ_i is zero, but that is not an issue; just remove it from the conditions.)

The condition p(t) ≡ λ_i mod (t − λ_i)^{d_i}, when spelled out, means that p(t) − λ_i = g_i(t)(t − λ_i)^{d_i} for some polynomial g_i(t). Since (x − λ_iI)^{d_i} is the zero map on V_i, p(x) and x_s agree on each V_i; i.e., p(x) = x_s. Also, then q(x) = x_n with q(t) = t − p(t). The condition p(t) ≡ 0 mod t ensures that p(t) and q(t) have no constant terms. This completes the proof of the theorem in case the minimal polynomial of x splits.

This fact can be used to deduce the Jordan–Chevalley decomposition in the general case. Let L be the splitting field of the minimal polynomial of x, so that x does admit a Jordan normal form over L. Then, by the argument just given, x has a Jordan–Chevalley decomposition x = c(x) + (x − c(x)), where c is a polynomial with coefficients from L, c(x) is diagonalisable (over L) and x − c(x) is nilpotent. Let σ be a field automorphism of L which fixes K. Then c(x) + (x − c(x)) = x = σ(x) = σ(c(x)) + σ(x − c(x)). Here σ(c(x)) = σ(c)(x) is a polynomial in x, and so is σ(x − c(x)) = x − σ(c)(x). Thus, σ(c(x)) and σ(x − c(x)) commute. Also, σ(c(x)) is potentially diagonalisable and σ(x − c(x)) is nilpotent. Thus, by the uniqueness of the Jordan–Chevalley decomposition (over L), σ(c(x)) = c(x) and σ(x − c(x)) = x − c(x). Therefore, by definition, x_s, x_n are endomorphisms (represented by matrices) over F. Finally, since {1, x, x², …} contains an L-basis of a space containing x_s and x_n, by the same argument we also see that c has coefficients in F. Q.E.D.

If the minimal polynomial of x is a product of separable polynomials, then the field extension L/K is Galois, meaning that F = K.
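The Chinese-remainder construction above can be carried out concretely by treating the congruences as Hermite interpolation conditions. A small hedged sympy sketch (variable names mine): it imposes p(λ_i) = λ_i and vanishing derivatives up to order d_i − 1, which is exactly p(t) ≡ λ_i mod (t − λ_i)^{d_i}; the condition p(t) ≡ 0 mod t is omitted since it only serves to remove the constant term.

```python
# Hedged sketch: solve the congruence conditions as interpolation equations.
import sympy as sp

t = sp.Symbol('t')
lams, ds = [2, 3], [2, 1]            # distinct eigenvalues, Jordan block sizes
deg = sum(ds)                        # p is determined modulo prod (t - lam_i)^d_i
cs = sp.symbols(f'c0:{deg}')
p = sum(c * t**k for k, c in enumerate(cs))
eqs = []
for lam, d in zip(lams, ds):
    eqs.append(sp.Eq(p.subs(t, lam), lam))                         # p(lam_i) = lam_i
    eqs += [sp.Eq(sp.diff(p, t, j).subs(t, lam), 0) for j in range(1, d)]
sol = sp.solve(eqs, cs, dict=True)[0]
p = sp.expand(p.subs(sol))           # here: t**2 - 4*t + 6

A = sp.Matrix([[2, 1, 0], [0, 2, 0], [0, 0, 3]])
S = sum((coef * A**int(k[0]) for k, coef in sp.Poly(p, t).terms()), sp.zeros(3, 3))
print(S)                             # diag(2, 2, 3): p(A) is the semisimple part
```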
The Jordan–Chevalley decomposition is very closely related to the Wedderburn principal theorem in the following formulation:[4]

Wedderburn principal theorem. Let A be a finite-dimensional associative algebra over the field K with Jacobson radical J. Then A/J is separable if and only if A has a separable semisimple subalgebra B such that A = B ⊕ J.

Usually, the term "separable" in this theorem refers to the general concept of a separable algebra, and the theorem might then be established as a corollary of a more general high-powered result.[5] However, if it is instead interpreted in the more basic sense that every element has a separable minimal polynomial, then this statement is essentially equivalent to the Jordan–Chevalley decomposition as described above. This gives a different way to view the decomposition, and for instance (Jacobson 1979) takes this route for establishing it.

To see how the Jordan–Chevalley decomposition follows from the Wedderburn principal theorem, let V be a finite-dimensional vector space over the field K, x: V → V an endomorphism with a minimal polynomial which is a product of separable polynomials, and A = K[x] ⊂ End(V) the subalgebra generated by x. Note that A is a commutative Artinian ring, so J is also the nilradical of A. Moreover, A/J is separable, because if a ∈ A, then for its minimal polynomial p there is a separable polynomial q such that q ∣ p and p ∣ q^m for some m ≥ 1. Therefore q(a) ∈ J, so the minimal polynomial of the image a + J ∈ A/J divides q, meaning that it must be separable as well (since a divisor of a separable polynomial is separable). There is then the vector-space decomposition A = B ⊕ J with B separable. In particular, the endomorphism x can be written as x = x_s + x_n, where x_s ∈ B and x_n ∈ J. Moreover, both elements are, like any element of A, polynomials in x.

Conversely, the Wedderburn principal theorem in the formulation above is a consequence of the Jordan–Chevalley decomposition. If A has a separable subalgebra B such that A = B ⊕ J, then A/J ≅ B is separable. Conversely, if A/J is separable, then any element of A is a sum of a separable and a nilpotent element. As shown above in the proof of uniqueness and necessity, this implies that the minimal polynomial will be a product of separable polynomials. Let x ∈ A be arbitrary, define the operator T_x: A → A, a ↦ ax, and note that this has the same minimal polynomial as x. So it admits a Jordan–Chevalley decomposition, where both operators are polynomials in T_x, hence of the form T_s, T_n for some s, n ∈ A which have separable and nilpotent minimal polynomials, respectively. Moreover, this decomposition is unique.
Thus, if B is the subalgebra of all separable elements (that this is a subalgebra can be seen by recalling that s is separable if and only if T_s is potentially diagonalisable), then A = B ⊕ J (because J is the ideal of nilpotent elements). The algebra B ≅ A/J is separable and semisimple by assumption.

Over perfect fields, this result simplifies. Indeed, A/J is then always separable in the sense of minimal polynomials: if a ∈ A, then the minimal polynomial p is a product of separable polynomials, so there is a separable polynomial q such that q ∣ p and p ∣ q^m for some m ≥ 1. Thus q(a) ∈ J. So in A/J, the minimal polynomial of a + J divides q and is hence separable. The crucial point in the theorem is then not that A/J is separable (because that condition is vacuous), but that it is semisimple, meaning its radical is trivial.

The same statement is true for Lie algebras, but only in characteristic zero. This is the content of Levi's theorem. (Note that the notions of semisimple in both results do indeed correspond, because in both cases this is equivalent to being the sum of simple subalgebras or having trivial radical, at least in the finite-dimensional case.)

The crucial point in the proof for the Wedderburn principal theorem above is that an element x ∈ A corresponds to a linear operator T_x: A → A with the same properties. In the theory of Lie algebras, this corresponds to the adjoint representation of a Lie algebra 𝔤. The corresponding operator has a Jordan–Chevalley decomposition ad(x) = ad(x)_s + ad(x)_n. Just as in the associative case, this corresponds to a decomposition of x, but polynomials are not available as a tool. One context in which this does make sense is the restricted case where 𝔤 is contained in the Lie algebra 𝔤𝔩(V) of the endomorphisms of a finite-dimensional vector space V over the perfect field K. Indeed, any semisimple Lie algebra can be realised in this way.[6]

If x = x_s + x_n is the Jordan decomposition, then ad(x) = ad(x_s) + ad(x_n) is the Jordan decomposition of the adjoint endomorphism ad(x) on the vector space 𝔤. Indeed, first, ad(x_s) and ad(x_n) commute since [ad(x_s), ad(x_n)] = ad([x_s, x_n]) = 0. Second, in general, for each endomorphism y ∈ 𝔤, we have: if y is semisimple, then ad(y) is semisimple, and if y is nilpotent, then ad(y) is nilpotent. Hence, by uniqueness, ad(x)_s = ad(x_s) and ad(x)_n = ad(x_n).

The adjoint representation is a very natural and general representation of any Lie algebra.
The argument above illustrates (and indeed proves) a general principle which generalises this: if π: 𝔤 → 𝔤𝔩(V) is any finite-dimensional representation of a semisimple finite-dimensional Lie algebra over a perfect field, then π preserves the Jordan decomposition in the following sense: if x = x_s + x_n, then π(x_s) = π(x)_s and π(x_n) = π(x)_n.[8][9]

The Jordan decomposition can be used to characterize nilpotency of an endomorphism. Let k be an algebraically closed field of characteristic zero, E = End_Q(k) the endomorphism ring of k over the rational numbers, and V a finite-dimensional vector space over k. Given an endomorphism x: V → V, let x = s + n be the Jordan decomposition. Then s is diagonalizable; i.e., V = ⊕ V_i, where each V_i is the eigenspace for the eigenvalue λ_i with multiplicity m_i. Then for any φ ∈ E let φ(s): V → V be the endomorphism such that φ(s): V_i → V_i is multiplication by φ(λ_i). Chevalley calls φ(s) the replica of s given by φ. (For example, if k = C, then the complex conjugate of an endomorphism is an example of a replica.) Now,

Nilpotency criterion.[10] x is nilpotent (i.e., s = 0) if and only if tr(xφ(s)) = 0 for every φ ∈ E. Also, if k = C, then it suffices that the condition hold for φ = complex conjugation.

Proof: First, since nφ(s) is nilpotent, tr(xφ(s)) = tr(sφ(s)) = \sum_i m_i λ_i φ(λ_i). If φ is the complex conjugation, this implies λ_i = 0 for every i. Otherwise, take φ to be a Q-linear functional φ: k → Q followed by Q ↪ k. Applying that functional to the above equation, one gets \sum_i m_i φ(λ_i)² = 0, and, since the φ(λ_i) are all real numbers, φ(λ_i) = 0 for every i. Varying the linear functionals then implies λ_i = 0 for every i. □

A typical application of the above criterion is the proof of Cartan's criterion for solvability of a Lie algebra. It says: if 𝔤 ⊂ 𝔤𝔩(V) is a Lie subalgebra over a field k of characteristic zero such that tr(xy) = 0 for each x ∈ 𝔤, y ∈ D𝔤 = [𝔤, 𝔤], then 𝔤 is solvable.

Proof:[11] Without loss of generality, assume k is algebraically closed. By Lie's theorem and Engel's theorem, it suffices to show that, for each x ∈ D𝔤, x is a nilpotent endomorphism of V. Write x = \sum_i [x_i, y_i]. Then, with s the semisimple part of the Jordan decomposition of x, we need to show that tr(xφ(s)) = \sum_i tr([x_i, y_i]φ(s)) = \sum_i tr(x_i[y_i, φ(s)]) is zero. Let 𝔤′ = 𝔤𝔩(V).
Note we have ad_{𝔤′}(x): 𝔤 → D𝔤 and, since ad_{𝔤′}(s) is the semisimple part of the Jordan decomposition of ad_{𝔤′}(x), it follows that ad_{𝔤′}(s) is a polynomial without constant term in ad_{𝔤′}(x); hence, ad_{𝔤′}(s): 𝔤 → D𝔤, and the same is true with φ(s) in place of s. That is, [φ(s), 𝔤] ⊂ D𝔤, which implies the claim given the assumption. □

In the formulation of Chevalley and Mostow, the additive decomposition states that an element X in a real semisimple Lie algebra 𝔤 with Iwasawa decomposition 𝔤 = 𝔨 ⊕ 𝔞 ⊕ 𝔫 can be written as the sum of three commuting elements of the Lie algebra X = S + D + N, with S, D and N conjugate to elements in 𝔨, 𝔞 and 𝔫 respectively. In general the terms in the Iwasawa decomposition do not commute.

If x is an invertible linear operator, it may be more convenient to use a multiplicative Jordan–Chevalley decomposition. This expresses x as a product x = x_s x_u, where x_s is potentially diagonalisable and x_u − 1 is nilpotent (one also says that x_u is unipotent). The multiplicative version of the decomposition follows from the additive one: since x = x_s + x_n = x_s(1 + x_s^{-1}x_n), x_s is invertible (because the sum of an invertible operator and a commuting nilpotent operator is invertible) and 1 + x_s^{-1}x_n is unipotent. (Conversely, by the same type of argument, one can deduce the additive version from the multiplicative one.)

The multiplicative version is closely related to decompositions encountered in a linear algebraic group. For this it is again useful to assume that the underlying field K is perfect, because then the Jordan–Chevalley decomposition exists for all matrices. Let G be a linear algebraic group over a perfect field. Then, essentially by definition, there is a closed embedding G ↪ GL_n. Now, to each element g ∈ G, by the multiplicative Jordan decomposition, there is a pair of a semisimple element g_s and a unipotent element g_u, a priori in GL_n, such that g = g_s g_u = g_u g_s. But, as it turns out,[12] the elements g_s, g_u can be shown to be in G (i.e., they satisfy the defining equations of G) and to be independent of the embedding into GL_n; i.e., the decomposition is intrinsic. When G is abelian, G is then the direct product of the closed subgroup of the semisimple elements in G and that of unipotent elements.[13]

The multiplicative decomposition states that if g is an element of the corresponding connected semisimple Lie group G with corresponding Iwasawa decomposition G = KAN, then g can be written as the product of three commuting elements g = sdu with s, d and u conjugate to elements of K, A and N respectively. In general the terms in the Iwasawa decomposition g = kan do not commute.
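To illustrate the multiplicative version described above, here is a small hedged sympy check (the matrices are my choice, not from the text) that x_u = 1 + x_s^{-1}x_n is unipotent and that x = x_s x_u recovers the additive decomposition:

```python
import sympy as sp

S = sp.Matrix([[2, 0, 0], [0, 2, 0], [0, 0, 3]])   # semisimple part x_s
N = sp.Matrix([[0, 1, 0], [0, 0, 0], [0, 0, 0]])   # nilpotent part x_n, SN = NS
x_u = sp.eye(3) + S.inv() * N                      # unipotent factor
assert S * x_u == S + N                            # x = x_s * x_u = x_s + x_n
assert (x_u - sp.eye(3))**2 == sp.zeros(3, 3)      # x_u - 1 is nilpotent
```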
https://en.wikipedia.org/wiki/Jordan%E2%80%93Chevalley_decomposition
In linear algebra, the modal matrix is used in the diagonalization process involving eigenvalues and eigenvectors.[1]

Specifically, the modal matrix M for the matrix A is the n×n matrix formed with the eigenvectors of A as columns in M. It is utilized in the similarity transformation D = M^{-1}AM, where D is an n×n diagonal matrix with the eigenvalues of A on the main diagonal of D and zeros elsewhere. The matrix D is called the spectral matrix for A. The eigenvalues must appear left to right, top to bottom in the same order as their corresponding eigenvectors are arranged left to right in M.[2]

The source works a concrete example, exhibiting a matrix with its eigenvalues and corresponding eigenvectors, a diagonal matrix D similar to A, and one possible choice for an invertible matrix M such that D = M^{-1}AM. Note that since eigenvectors themselves are not unique, and since the columns of both M and D may be interchanged, it follows that neither M nor D is unique.[4]

Let A be an n×n matrix. A generalized modal matrix M for A is an n×n matrix whose columns, considered as vectors, form a canonical basis for A and appear in M according to the following rules: all Jordan chains consisting of one vector (that is, one vector in length) appear in the first columns of M; all vectors of one chain appear together in adjacent columns of M; and each chain appears in M in order of increasing rank (that is, the generalized eigenvector of rank 1 appears before the generalized eigenvector of rank 2 of the same chain, and so on).

One can show that AM = MJ (1), where J is a matrix in Jordan normal form. By premultiplying by M^{-1}, we obtain J = M^{-1}AM (2). Note that when computing these matrices, equation (1) is the easier of the two equations to verify, since it does not require inverting a matrix.[6]

This example illustrates a generalized modal matrix with four Jordan chains. Unfortunately, it is a little difficult to construct an interesting example of low order.[7] The example matrix (a 7×7 matrix given in the source) has a single eigenvalue λ_1 = 1 with algebraic multiplicity μ_1 = 7. A canonical basis for A will consist of one linearly independent generalized eigenvector of rank 3 (generalized eigenvector rank; see generalized eigenvector), two of rank 2 and four of rank 1; or equivalently, one chain of three vectors {x_3, x_2, x_1}, one chain of two vectors {y_2, y_1}, and two chains of one vector {z_1}, {w_1}.

An "almost diagonal" matrix J in Jordan normal form, similar to A, is obtained from a generalized modal matrix M for A, whose columns are a canonical basis for A, with AM = MJ.[8] Note that since generalized eigenvectors themselves are not unique, and since some of the columns of both M and J may be interchanged, it follows that neither M nor J is unique.[9]
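As a quick numerical illustration of the similarity transformation (the example matrix is mine, not the source's): numpy returns the eigenvectors of A as the columns of a matrix, which can serve as a modal matrix M, and M^{-1}AM is then the spectral matrix D, with the eigenvalues appearing in the same order as their eigenvectors.

```python
# Illustrative check of D = M^{-1} A M with eigenvector columns in M.
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])
eigvals, M = np.linalg.eig(A)       # columns of M are eigenvectors of A
D = np.linalg.inv(M) @ A @ M        # spectral matrix: eigenvalues on diagonal
assert np.allclose(D, np.diag(eigvals))
```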
https://en.wikipedia.org/wiki/Modal_matrix
In mathematics, in linear algebra, a Weyr canonical form (or Weyr form or Weyr matrix) is a square matrix which (in some sense) induces "nice" properties with matrices it commutes with. It also has a particularly simple structure, and the conditions for possessing a Weyr form are fairly weak, making it a suitable tool for studying classes of commuting matrices. A square matrix is said to be in the Weyr canonical form if it has the structure defined below.

The Weyr form was discovered by the Czech mathematician Eduard Weyr in 1885.[1][2][3] The Weyr form did not become popular among mathematicians and it was overshadowed by the closely related, but distinct, canonical form known by the name Jordan canonical form.[3] The Weyr form has been rediscovered several times since Weyr's original discovery in 1885.[4] This form has been variously called the modified Jordan form, reordered Jordan form, second Jordan form, and H-form.[4] The current terminology is credited to Shapiro, who introduced it in a paper published in The American Mathematical Monthly in 1999.[4][5]

Recently several applications have been found for the Weyr matrix. Of particular interest is an application of the Weyr matrix in the study of phylogenetic invariants in biomathematics.

A basic Weyr matrix with eigenvalue λ is an n×n matrix W of the following form: there is an integer partition n = n_1 + n_2 + ⋯ + n_r of n with n_1 ≥ n_2 ≥ ⋯ ≥ n_r ≥ 1 such that, when W is viewed as an r×r block matrix (W_{ij}), where the (i, j) block W_{ij} is an n_i × n_j matrix, the following three features are present: (1) the main diagonal blocks W_{ii} are the n_i × n_i scalar matrices λI, for i = 1, …, r; (2) the first superdiagonal blocks W_{i,i+1} are full column-rank n_i × n_{i+1} matrices in reduced row-echelon form (that is, an identity matrix followed by rows of zeros), for i = 1, …, r − 1; and (3) all other blocks of W are zero (that is, W_{ij} = 0 when j ≠ i, i + 1). In this case, we say that W has Weyr structure (n_1, n_2, …, n_r).

The following is an example of a basic Weyr matrix.

W = \begin{bmatrix} W_{11} & W_{12} & & \\ & W_{22} & W_{23} & \\ & & W_{33} & W_{34} \\ & & & W_{44} \end{bmatrix}

In this matrix, n = 9 and n_1 = 4, n_2 = 2, n_3 = 2, n_4 = 1. So W has the Weyr structure (4, 2, 2, 1). Also,

W_{11} = \lambda I_4, \quad W_{22} = \lambda I_2, \quad W_{33} = \lambda I_2, \quad W_{44} = \lambda I_1

and

W_{12} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}, \quad W_{23} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad W_{34} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}.

Let W be a square matrix and let λ_1, …, λ_k be the distinct eigenvalues of W. We say that W is in Weyr form (or is a Weyr matrix) if W has the following form:

W = \begin{bmatrix} W_1 & & & \\ & W_2 & & \\ & & \ddots & \\ & & & W_k \end{bmatrix}

where W_i is a basic Weyr matrix with eigenvalue λ_i for i = 1, …, k. The following image shows an example of a general Weyr matrix consisting of three basic Weyr matrix blocks.
The basic Weyr matrix in the top-left corner has the structure (4, 2, 1) with eigenvalue 4, the middle block has structure (2, 2, 1, 1) with eigenvalue −3, and the one in the lower-right corner has the structure (3, 2) with eigenvalue 0.

The Weyr canonical form W = P^{-1}JP is related to the Jordan form J by a simple permutation P for each basic Weyr block, as follows: the first index of each Weyr subblock forms the largest Jordan chain. After crossing out these rows and columns, the first index of each new subblock forms the second largest Jordan chain, and so forth.[6]

That the Weyr form is a canonical form of a matrix is a consequence of the following result:[3] each square matrix A over an algebraically closed field is similar to a Weyr matrix W which is unique up to permutation of its basic blocks. The matrix W is called the Weyr (canonical) form of A.

Let A be a square matrix of order n over an algebraically closed field and let the distinct eigenvalues of A be λ_1, λ_2, …, λ_k. The Jordan–Chevalley decomposition theorem states that A is similar to a block diagonal matrix of the form

A = \begin{bmatrix} \lambda_1 I + N_1 & & & \\ & \lambda_2 I + N_2 & & \\ & & \ddots & \\ & & & \lambda_k I + N_k \end{bmatrix} = \begin{bmatrix} \lambda_1 I & & & \\ & \lambda_2 I & & \\ & & \ddots & \\ & & & \lambda_k I \end{bmatrix} + \begin{bmatrix} N_1 & & & \\ & N_2 & & \\ & & \ddots & \\ & & & N_k \end{bmatrix} = D + N,

where D is a diagonal matrix, N is a nilpotent matrix, and [D, N] = 0, justifying the reduction of N into subblocks N_i. So the problem of reducing A to the Weyr form reduces to the problem of reducing the nilpotent matrices N_i to the Weyr form. This leads to the generalized eigenspace decomposition theorem.

Given a nilpotent square matrix A of order n over an algebraically closed field F, the following algorithm produces an invertible matrix C and a Weyr matrix W such that W = C^{-1}AC.

Step 1. Let A_1 = A.

Step 2.

Step 3. If A_2 is nonzero, repeat Step 2 on A_2.

Step 4. Continue the processes of Steps 1 and 2 to obtain increasingly smaller square matrices A_1, A_2, A_3, … and associated invertible matrices P_1, P_2, P_3, … until the first zero matrix A_r is obtained.

Step 5. The Weyr structure of A is (n_1, n_2, …, n_r), where n_i = nullity(A_i).

Step 6.

Step 7. Use elementary row operations to find an invertible matrix Y_{r−1} of appropriate size such that the product Y_{r−1}X_{r,r−1} is a matrix of the form I_{r,r−1} = \begin{bmatrix} I \\ O \end{bmatrix}.

Step 8. Set Q_1 = diag(I, I, …, Y_{r−1}^{-1}, I) and compute Q_1^{-1}XQ_1. In this matrix, the (r, r−1)-block is I_{r,r−1}.
Step 9. Find a matrix R_1 formed as a product of elementary matrices such that R_1^{-1}Q_1^{-1}XQ_1R_1 is a matrix in which all the blocks above the block I_{r,r−1} contain only 0's.

Step 10. Repeat Steps 8 and 9 on column r−1, converting the (r−1, r−2)-block to I_{r−1,r−2} via conjugation by some invertible matrix Q_2. Use this block to clear out the blocks above, via conjugation by a product R_2 of elementary matrices.

Step 11. Repeat these processes on columns r−2, r−3, …, 3, 2, using conjugations by Q_3, R_3, …, Q_{r−2}, R_{r−2}, Q_{r−1}. The resulting matrix W is now in Weyr form.

Step 12. Let C = P_1 diag(I, P_2) ⋯ diag(I, P_{r−1}) Q_1 R_1 Q_2 ⋯ R_{r−2} Q_{r−1}. Then W = C^{-1}AC.

Well-known applications of the Weyr form include the study of sets of commuting matrices and of phylogenetic invariants in biomathematics, as mentioned above.[3]
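The two basic ingredients above are easy to experiment with numerically. A hedged sketch (the helper names are mine): `basic_weyr` assembles a basic Weyr matrix from a Weyr structure following the block description given earlier, and `weyr_structure` recovers the structure of a nilpotent matrix via the classical Weyr characteristic n_i = nullity(A^i) − nullity(A^{i−1}), which agrees with the nullities computed in Step 5.

```python
import numpy as np

def basic_weyr(lam, structure):
    """Basic Weyr matrix with eigenvalue lam and structure (n_1 >= ... >= n_r)."""
    n = sum(structure)
    W = lam * np.eye(n)
    off = np.cumsum([0] + list(structure))
    for i in range(len(structure) - 1):
        k = structure[i + 1]                     # W_{i,i+1} = [I_k; 0]
        W[off[i]:off[i] + k, off[i + 1]:off[i + 1] + k] = np.eye(k)
    return W

def weyr_structure(A, tol=1e-10):
    """Weyr structure of a nilpotent A from nullities of its powers."""
    n = A.shape[0]
    out, prev, P = [], 0, np.eye(n)
    for _ in range(n):
        P = P @ A
        cur = n - np.linalg.matrix_rank(P, tol=tol)
        if cur == prev:
            break
        out.append(cur - prev)
        prev = cur
    return tuple(out)

W = basic_weyr(0.0, (4, 2, 2, 1))                # nilpotent basic Weyr matrix
print(weyr_structure(W))                         # (4, 2, 2, 1)
```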
https://en.wikipedia.org/wiki/Weyr_canonical_form
The spectrum of a linear operator T that operates on a Banach space X is a fundamental concept of functional analysis. The spectrum consists of all scalars λ such that the operator T − λ does not have a bounded inverse on X. The spectrum has a standard decomposition into three parts: the point spectrum, the continuous spectrum, and the residual spectrum. This decomposition is relevant to the study of differential equations, and has applications to many branches of science and engineering. A well-known example from quantum mechanics is the explanation for the discrete spectral lines and the continuous band in the light emitted by excited atoms of hydrogen.

Let X be a Banach space, B(X) the family of bounded operators on X, and T ∈ B(X). By definition, a complex number λ is in the spectrum of T, denoted σ(T), if T − λ does not have an inverse in B(X). If T − λ is one-to-one and onto, i.e. bijective, then its inverse is bounded; this follows directly from the open mapping theorem of functional analysis. So, λ is in the spectrum of T if and only if T − λ is not one-to-one or not onto. One distinguishes three separate cases: (1) T − λ is not injective; then λ is an eigenvalue of T and is said to lie in the point spectrum of T, denoted σ_p(T); (2) T − λ is injective and has dense range, but is not surjective; then λ is said to lie in the continuous spectrum of T, denoted σ_c(T); (3) T − λ is injective and does not have dense range; then λ is said to lie in the residual spectrum of T, denoted σ_r(T).

So σ(T) is the disjoint union of these three sets, σ(T) = σ_p(T) ∪ σ_c(T) ∪ σ_r(T). The complement of the spectrum σ(T) is known as the resolvent set ρ(T); that is, ρ(T) = C ∖ σ(T).

In addition, when T − λ does not have dense range, whether or not it is injective, λ is said to be in the compression spectrum of T, σ_cp(T). The compression spectrum consists of the whole residual spectrum and part of the point spectrum.

The spectrum of an unbounded operator can be divided into three parts in the same way as in the bounded case, but because the operator is not defined everywhere, the definitions of domain, inverse, etc. are more involved.

Given a σ-finite measure space (S, Σ, μ), consider the Banach space L^p(μ). A function h: S → C is called essentially bounded if h is bounded μ-almost everywhere. An essentially bounded h induces a bounded multiplication operator T_h on L^p(μ): (T_h f)(s) = h(s)·f(s).

The operator norm of T_h is the essential supremum of |h|. The essential range of h is defined in the following way: a complex number λ is in the essential range of h if for all ε > 0, the preimage of the open ball B_ε(λ) under h has strictly positive measure. We will show first that σ(T_h) coincides with the essential range of h and then examine its various parts.

If λ is not in the essential range of h, take ε > 0 such that h^{-1}(B_ε(λ)) has zero measure. The function g(s) = 1/(h(s) − λ) is bounded almost everywhere by 1/ε. The multiplication operator T_g satisfies T_g·(T_h − λ) = (T_h − λ)·T_g = I. So λ does not lie in the spectrum of T_h. On the other hand, if λ lies in the essential range of h, consider the sequence of sets {S_n = h^{-1}(B_{1/n}(λ))}. Each S_n has positive measure. Let f_n be the characteristic function of S_n. We can compute directly

‖(T_h − λ)f_n‖_p^p = ‖(h − λ)f_n‖_p^p = \int_{S_n} |h − λ|^p \, dμ ≤ \frac{1}{n^p} μ(S_n) = \frac{1}{n^p} ‖f_n‖_p^p.

This shows that T_h − λ is not bounded below, and therefore not invertible.

If λ is such that μ(h^{-1}({λ})) > 0, then λ lies in the point spectrum of T_h, as follows. Let f be the characteristic function of the measurable set h^{-1}(λ); then by considering two cases, we find that for all s ∈ S, (T_h f)(s) = λf(s), so λ is an eigenvalue of T_h.
Any λ in the essential range of h that does not have a positive measure preimage is in the continuous spectrum of T_h. To show this, we must show that T_h − λ has dense range. Given f ∈ L^p(μ), again we consider the sequence of sets {S_n = h^{-1}(B_{1/n}(λ))}. Let g_n be the characteristic function of S − S_n. Define

f_n(s) = \frac{1}{h(s) − λ} · g_n(s) · f(s).

Direct calculation shows that f_n ∈ L^p(μ), with ‖f_n‖_p ≤ n‖f‖_p. Then by the dominated convergence theorem, (T_h − λ)f_n → f in the L^p(μ) norm. Therefore, multiplication operators have no residual spectrum. In particular, by the spectral theorem, normal operators on a Hilbert space have no residual spectrum.

In the special case when S is the set of natural numbers and μ is the counting measure, the corresponding L^p(μ) is denoted by ℓ^p. This space consists of complex valued sequences {x_n} such that \sum_{n≥0} |x_n|^p < ∞.

For 1 < p < ∞, ℓ^p is reflexive. Define the left shift T: ℓ^p → ℓ^p by T(x_1, x_2, x_3, …) = (x_2, x_3, x_4, …).

T is a partial isometry with operator norm 1. So σ(T) lies in the closed unit disk of the complex plane.

T* is the right shift (or unilateral shift), which is an isometry on ℓ^q, where 1/p + 1/q = 1: T*(x_1, x_2, x_3, …) = (0, x_1, x_2, …).

For λ ∈ C with |λ| < 1, x = (1, λ, λ², …) ∈ ℓ^p and Tx = λx. Consequently, the point spectrum of T contains the open unit disk. Now, T* has no eigenvalues, i.e. σ_p(T*) is empty. Thus, invoking reflexivity and the theorem on the spectrum of the adjoint operator (that σ_p(T) ⊂ σ_r(T*) ∪ σ_p(T*)), we can deduce that the open unit disk lies in the residual spectrum of T*. The spectrum of a bounded operator is closed, which implies that the unit circle {|λ| = 1} ⊂ C is in σ(T). Again by reflexivity of ℓ^p and the theorem given above (this time, that σ_r(T) ⊂ σ_p(T*)), we have that σ_r(T) is also empty. Therefore, for a complex number λ with unit norm, one must have λ ∈ σ_p(T) or λ ∈ σ_c(T). Now if |λ| = 1 and Tx = λx, i.e. (x_2, x_3, x_4, …) = λ(x_1, x_2, x_3, …), then x = x_1(1, λ, λ², …), which cannot be in ℓ^p, a contradiction. This means the unit circle must lie in the continuous spectrum of T.

So for the left shift T, σ_p(T) is the open unit disk and σ_c(T) is the unit circle, whereas for the right shift T*, σ_r(T*) is the open unit disk and σ_c(T*) is the unit circle. For p = 1, one can perform a similar analysis. The results will not be exactly the same, since reflexivity no longer holds.

Hilbert spaces are Banach spaces, so the above discussion applies to bounded operators on Hilbert spaces as well. A subtle point concerns the spectrum of T*. For a Banach space, T* denotes the transpose and σ(T*) = σ(T). For a Hilbert space, T* normally denotes the adjoint of an operator T ∈ B(H), not the transpose, and σ(T*) is not σ(T) but rather its image under complex conjugation.

For a self-adjoint T ∈ B(H), the Borel functional calculus gives additional ways to break up the spectrum naturally. This subsection briefly sketches the development of this calculus. The idea is to first establish the continuous functional calculus, and then pass to measurable functions via the Riesz–Markov–Kakutani representation theorem.
For the continuous functional calculus, the key ingredients are the following: the polynomial functional calculus, assigning to each polynomial P the operator P(T); the spectral mapping theorem, σ(P(T)) = P(σ(T)); and, for self-adjoint T, the norm identity ‖P(T)‖ = sup_{λ∈σ(T)} |P(λ)|.

The family C(σ(T)) is a Banach algebra when endowed with the uniform norm. So the mapping P → P(T) is an isometric homomorphism from a dense subset of C(σ(T)) to B(H). Extending the mapping by continuity gives f(T) for f ∈ C(σ(T)): let P_n be polynomials such that P_n → f uniformly, and define f(T) = lim P_n(T). This is the continuous functional calculus.

For a fixed h ∈ H, we notice that f → ⟨h, f(T)h⟩ is a positive linear functional on C(σ(T)). According to the Riesz–Markov–Kakutani representation theorem, a unique measure μ_h on σ(T) exists such that

\int_{σ(T)} f \, dμ_h = ⟨h, f(T)h⟩.

This measure is sometimes called the spectral measure associated to h. The spectral measures can be used to extend the continuous functional calculus to bounded Borel functions. For a bounded function g that is Borel measurable, define, for a proposed g(T),

\int_{σ(T)} g \, dμ_h = ⟨h, g(T)h⟩.

Via the polarization identity, one can recover (since H is assumed to be complex) ⟨k, g(T)h⟩, and therefore g(T)h for arbitrary h.

In the present context, the spectral measures, combined with a result from measure theory, give a decomposition of σ(T). Let h ∈ H and μ_h be its corresponding spectral measure on σ(T). According to a refinement of Lebesgue's decomposition theorem, μ_h can be decomposed into three mutually singular parts: μ_h = μ_ac + μ_sc + μ_pp, where μ_ac is absolutely continuous with respect to the Lebesgue measure, μ_sc is singular with respect to the Lebesgue measure and atomless, and μ_pp is a pure point measure.[1][2]

All three types of measures are invariant under linear operations. Let H_ac be the subspace consisting of vectors whose spectral measures are absolutely continuous with respect to the Lebesgue measure. Define H_pp and H_sc in analogous fashion. These subspaces are invariant under T. For example, let h ∈ H_ac and k = Th. Let χ be the characteristic function of some Borel set in σ(T); then

⟨k, χ(T)k⟩ = \int_{σ(T)} χ(λ)·λ² \, dμ_h(λ) = \int_{σ(T)} χ(λ) \, dμ_k(λ).

So λ² dμ_h = dμ_k and k ∈ H_ac. Furthermore, applying the spectral theorem gives H = H_ac ⊕ H_sc ⊕ H_pp.

This leads to the following definitions: the spectrum of T restricted to H_ac is called the absolutely continuous spectrum of T, σ_ac(T); the spectrum of T restricted to H_sc is called the singular spectrum, σ_sc(T); and the set of eigenvalues of T is called the pure point spectrum, σ_pp(T). The closure of the eigenvalues is the spectrum of T restricted to H_pp.[3][nb 1] So

σ(T) = σ_ac(T) ∪ σ_sc(T) ∪ σ̄_pp(T).

A bounded self-adjoint operator on Hilbert space is, a fortiori, a bounded operator on a Banach space. Therefore, one can also apply to T the decomposition of the spectrum that was achieved above for bounded operators on a Banach space. Unlike the Banach space formulation, the union σ(T) = σ̄_pp(T) ∪ σ_ac(T) ∪ σ_sc(T) need not be disjoint. It is disjoint when the operator T is of uniform multiplicity, say m, i.e.
if T is unitarily equivalent to multiplication by λ on the direct sum \bigoplus_{i=1}^{m} L²(R, μ_i) for some Borel measures μ_i. When more than one measure appears in the above expression, we see that it is possible for the union of the three types of spectra to not be disjoint. If λ ∈ σ_ac(T) ∩ σ_pp(T), λ is sometimes called an eigenvalue embedded in the absolutely continuous spectrum.

When T is unitarily equivalent to multiplication by λ on L²(R, μ), the decomposition of σ(T) from the Borel functional calculus is a refinement of the Banach space case.

The preceding comments can be extended to the unbounded self-adjoint operators, since Riesz–Markov holds for locally compact Hausdorff spaces.

In quantum mechanics, observables are (often unbounded) self-adjoint operators and their spectra are the possible outcomes of measurements. The pure point spectrum corresponds to bound states in the following way: a particle is said to be in a bound state if it remains "localized" in a bounded region of space.[6] Intuitively one might therefore think that the "discreteness" of the spectrum is intimately related to the corresponding states being "localized". However, a careful mathematical analysis shows that this is not true in general.[7] For example, consider a function taking the value n on an interval of length n^{-4} at each integer n ≥ 1, and 0 elsewhere. This function is normalizable (i.e. f ∈ L²(R)), as its squared norm is \sum_{n=1}^{\infty} n² · n^{-4} = \sum_{n=1}^{\infty} n^{-2}. Known as the Basel problem, this series converges to π²/6. Yet f increases as x → ∞, i.e., the state "escapes to infinity". The phenomena of Anderson localization and dynamical localization describe when the eigenfunctions are localized in a physical sense. Anderson localization means that eigenfunctions decay exponentially as x → ∞. Dynamical localization is more subtle to define.

Sometimes, when performing quantum mechanical measurements, one encounters "eigenstates" that are not localized, e.g., quantum states that do not lie in L²(R). These are free states belonging to the absolutely continuous spectrum. In the spectral theorem for unbounded self-adjoint operators, these states are referred to as "generalized eigenvectors" of an observable with "generalized eigenvalues" that do not necessarily belong to its spectrum. Alternatively, if it is insisted that the notion of eigenvectors and eigenvalues survive the passage to the rigorous theory, one can consider operators on rigged Hilbert spaces.[8]

An example of an observable whose spectrum is purely absolutely continuous is the position operator of a free particle moving on the entire real line. Also, since the momentum operator is unitarily equivalent to the position operator via the Fourier transform, it has a purely absolutely continuous spectrum as well.

The singular spectrum corresponds to physically impossible outcomes. It was believed for some time that the singular spectrum was something artificial. However, examples such as the almost Mathieu operator and random Schrödinger operators have shown that all types of spectra arise naturally in physics.[9][10]

Let A: X → X be a closed operator defined on the domain D(A) ⊂ X which is dense in X. Then there is a decomposition of the spectrum of A into a disjoint union,[11] σ(A) = σ_ess,5(A) ⊔ σ_d(A), where σ_d(A) is the discrete spectrum of A (the set of its normal eigenvalues) and σ_ess,5(A) is its essential spectrum (of the fifth type).
https://en.wikipedia.org/wiki/Decomposition_of_spectrum_(functional_analysis)
In mathematics, specifically in spectral theory, a discrete spectrum of a closed linear operator is defined as the set of isolated points of its spectrum such that the rank of the corresponding Riesz projector is finite.

A point λ ∈ C in the spectrum σ(A) of a closed linear operator A: B → B in the Banach space B with domain D(A) ⊂ B is said to belong to the discrete spectrum σ_disc(A) of A if the following two conditions are satisfied:[1] (1) λ is an isolated point in σ(A); (2) the rank of the corresponding Riesz projector

P_λ = \frac{1}{2\pi i} \oint_Γ (zI_B − A)^{-1} \, dz

is finite. Here I_B is the identity operator in the Banach space B and Γ ⊂ C is a smooth simple closed counterclockwise-oriented curve bounding an open region Ω ⊂ C such that λ is the only point of the spectrum of A in the closure of Ω; that is, σ(A) ∩ Ω̄ = {λ}.

The discrete spectrum σ_disc(A) coincides with the set of normal eigenvalues of A.

In general, the rank of the Riesz projector can be larger than the dimension of the root lineal L_λ of the corresponding eigenvalue, and in particular it is possible to have dim L_λ < ∞ while rank P_λ = ∞. So there is the inclusion σ_disc(A) ⊂ {λ ∈ σ(A) : λ is isolated in σ(A), dim L_λ < ∞}, which may be strict. In particular, for a quasinilpotent operator Q one has L_λ(Q) = {0}, rank P_λ = ∞, σ(Q) = {0}, σ_disc(Q) = ∅.

The discrete spectrum σ_disc(A) of an operator A is not to be confused with the point spectrum σ_p(A), which is defined as the set of eigenvalues of A. While each point of the discrete spectrum belongs to the point spectrum, the converse is not necessarily true: the point spectrum does not necessarily consist of isolated points of the spectrum, as one can see from the example of the left shift operator, L: ℓ²(N) → ℓ²(N), L: (a_1, a_2, a_3, …) ↦ (a_2, a_3, a_4, …). For this operator, the point spectrum is the unit disc of the complex plane, the spectrum is the closure of the unit disc, while the discrete spectrum is empty: σ_disc(L) = ∅.
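The claim about the left shift's point spectrum is easy to see numerically; a small sketch (the truncation length is my choice): the geometric vector (1, λ, λ², …) satisfies Lx = λx exactly, and it is square-summable precisely when |λ| < 1.

```python
import numpy as np

lam, n = 0.5, 60
x = lam ** np.arange(n)                  # (1, lam, lam^2, ...), square-summable
print(np.allclose(x[1:], lam * x[:-1]))  # True: shifting the tail gives lam * x
print(np.sum(np.abs(x) ** 2))            # partial sums of ||x||^2 converge (|lam| < 1)
```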
https://en.wikipedia.org/wiki/Discrete_spectrum_(mathematics)
In mathematics, the essential spectrum of a bounded operator (or, more generally, of a densely defined closed linear operator) is a certain subset of its spectrum, defined by a condition of the type that says, roughly speaking, that it "fails badly to be invertible".

In formal terms, let X be a Hilbert space and let T be a self-adjoint operator on X. The essential spectrum of T, usually denoted σ_ess(T), is the set of all real numbers λ ∈ R such that T − λI_X is not a Fredholm operator, where I_X denotes the identity operator on X, so that I_X(x) = x for all x ∈ X. (An operator is Fredholm if its kernel and cokernel are finite-dimensional.)

The definition of the essential spectrum σ_ess(T) will remain unchanged if we allow it to consist of all those complex numbers λ ∈ C (instead of just real numbers) such that the above condition holds. This is due to the fact that the spectrum of a self-adjoint operator consists only of real numbers.

The essential spectrum is always closed, and it is a subset of the spectrum σ(T). As mentioned above, since T is self-adjoint, the spectrum is contained in the real axis.

The essential spectrum is invariant under compact perturbations. That is, if K is a compact self-adjoint operator on X, then the essential spectra of T and of T + K coincide, i.e. σ_ess(T) = σ_ess(T + K). This explains why it is called the essential spectrum: Weyl (1910) originally defined the essential spectrum of a certain differential operator to be the spectrum independent of boundary conditions.

Weyl's criterion is as follows. First, a number λ is in the spectrum σ(T) of the operator T if and only if there exists a sequence {ψ_k}_{k∈N} ⊆ X in the Hilbert space X such that ‖ψ_k‖ = 1 and

\lim_{k→∞} ‖Tψ_k − λψ_k‖ = 0.

Furthermore, λ is in the essential spectrum if there is a sequence satisfying this condition, but such that it contains no convergent subsequence (this is the case if, for example, {ψ_k}_{k∈N} is an orthonormal sequence); such a sequence is called a singular sequence. Equivalently, λ is in the essential spectrum σ_ess(T) if there exists a sequence satisfying the above condition which also converges weakly to the zero vector 0_X in X.
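Weyl's criterion can be visualised with a discretised multiplication operator (a hedged numerical sketch; the grid and the operator are my choices, not from the text): normalised indicator bumps shrinking around x = λ form a singular sequence, so every λ in the range of the multiplier lies in the essential spectrum.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200_001)       # grid for (T f)(x) = x * f(x)
lam = 0.5
for width in (1e-1, 1e-2, 1e-3):
    psi = (np.abs(x - lam) < width).astype(float)
    psi /= np.linalg.norm(psi)           # normalise: ||psi|| = 1
    print(width, np.linalg.norm(x * psi - lam * psi))   # -> 0 as width -> 0
```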
The essential spectrum σ_ess(T) is a subset of the spectrum σ(T), and its complement is called the discrete spectrum, so σ_disc(T) = σ(T) ∖ σ_ess(T). If T is self-adjoint, then, by definition, a number λ is in the discrete spectrum σ_disc of T if it is an isolated eigenvalue of finite multiplicity, meaning that the eigenspace ker(T − λI_X) has finite but non-zero dimension and that there is an ε > 0 such that μ ∈ σ(T) and |μ − λ| < ε imply that μ and λ are equal. (For general, non-self-adjoint operators S on Banach spaces, by definition, a complex number λ ∈ C is in the discrete spectrum σ_disc(S) if it is a normal eigenvalue; or, equivalently, if it is an isolated point of the spectrum and the rank of the corresponding Riesz projector is finite.)

Let X be a Banach space and let T: D(T) → X be a closed linear operator on X with dense domain D(T). There are several definitions of the essential spectrum, which are not equivalent.[1]

Each of the above-defined essential spectra σ_ess,k(T), 1 ≤ k ≤ 5, is closed. Furthermore, σ_ess,1(T) ⊂ σ_ess,2(T) ⊂ σ_ess,3(T) ⊂ σ_ess,4(T) ⊂ σ_ess,5(T), and any of these inclusions may be strict. For self-adjoint operators, all the above definitions of the essential spectrum coincide.

Define the radius of the essential spectrum by r_ess,k(T) = sup{|λ| : λ ∈ σ_ess,k(T)}. Even though the spectra may be different, the radius is the same for all k = 1, 2, 3, 4, 5.

The definition of the set σ_ess,2(T) is equivalent to Weyl's criterion: σ_ess,2(T) is the set of all λ for which there exists a singular sequence.

The essential spectrum σ_ess,k(T) is invariant under compact perturbations for k = 1, 2, 3, 4, but not for k = 5. The set σ_ess,4(T) gives the part of the spectrum that is independent of compact perturbations, that is,

σ_ess,4(T) = \bigcap_{K ∈ B_0(X)} σ(T + K),

where B_0(X) denotes the set of compact operators on X (D.E. Edmunds and W.D. Evans, 1987).

The spectrum of a closed, densely defined operator T can be decomposed into a disjoint union σ(T) = σ_ess,5(T) ⊔ σ_disc(T), where σ_disc(T) is the discrete spectrum of T.
https://en.wikipedia.org/wiki/Essential_spectrum
In mathematics, Fredholm operators are certain operators that arise in the Fredholm theory of integral equations. They are named in honour of Erik Ivar Fredholm. By definition, a Fredholm operator is a bounded linear operator T: X → Y between two Banach spaces with finite-dimensional kernel ker T and finite-dimensional (algebraic) cokernel coker T = Y/ran T, and with closed range ran T. The last condition is actually redundant.[1]

The index of a Fredholm operator is the integer ind T = dim ker T − codim ran T, or in other words, ind T = dim ker T − dim coker T.

Intuitively, Fredholm operators are those operators that are invertible "if finite-dimensional effects are ignored." The formally correct statement follows. A bounded operator T: X → Y between Banach spaces X and Y is Fredholm if and only if it is invertible modulo compact operators, i.e., if there exists a bounded linear operator S: Y → X such that id_X − ST and id_Y − TS are compact operators on X and Y respectively.

If a Fredholm operator is modified slightly, it stays Fredholm and its index remains the same. Formally: the set of Fredholm operators from X to Y is open in the Banach space L(X, Y) of bounded linear operators, equipped with the operator norm, and the index is locally constant. More precisely, if T_0 is Fredholm from X to Y, there exists ε > 0 such that every T in L(X, Y) with ‖T − T_0‖ < ε is Fredholm, with the same index as that of T_0.

When T is Fredholm from X to Y and U Fredholm from Y to Z, then the composition U∘T is Fredholm from X to Z and ind(U∘T) = ind(U) + ind(T).

When T is Fredholm, the transpose (or adjoint) operator T′ is Fredholm from Y′ to X′, and ind(T′) = −ind(T). When X and Y are Hilbert spaces, the same conclusion holds for the Hermitian adjoint T*.

When T is Fredholm and K a compact operator, then T + K is Fredholm. The index of T remains unchanged under such compact perturbations of T. This follows from the fact that the index i(s) of T + sK is an integer defined for every s in [0, 1], and i(s) is locally constant, hence i(1) = i(0).

Invariance by perturbation is true for larger classes than the class of compact operators. For example, when U is Fredholm and T a strictly singular operator, then T + U is Fredholm with the same index.[2] The class of inessential operators, which properly contains the class of strictly singular operators, is the "perturbation class" for Fredholm operators. This means an operator T ∈ B(X, Y) is inessential if and only if T + U is Fredholm for every Fredholm operator U ∈ B(X, Y).

Let H be a Hilbert space with an orthonormal basis {e_n} indexed by the non-negative integers. The (right) shift operator S on H is defined by S(e_n) = e_{n+1} for n ≥ 0. This operator S is injective (actually, isometric) and has a closed range of codimension 1, hence S is Fredholm with ind(S) = −1. The powers S^k, k ≥ 0, are Fredholm with index −k. The adjoint S* is the left shift, S*(e_0) = 0 and S*(e_n) = e_{n−1} for n ≥ 1. The left shift S* is Fredholm with index 1.

If H is the classical Hardy space H²(T) on the unit circle T in the complex plane, then the shift operator with respect to the orthonormal basis of complex exponentials is the multiplication operator M_φ with the function φ = e_1.
More generally, let φ be a complex continuous function on T that does not vanish on T, and let T_φ denote the Toeplitz operator with symbol φ, equal to multiplication by φ followed by the orthogonal projection P: L²(T) → H²(T), so that T_φ(f) = P(φf). Then T_φ is a Fredholm operator on H²(T), with index related to the winding number around 0 of the closed path t ∈ [0, 2π] ↦ φ(e^{it}): the index of T_φ, as defined in this article, is the opposite of this winding number.

Any elliptic operator on a closed manifold can be extended to a Fredholm operator. The use of Fredholm operators in partial differential equations is an abstract form of the parametrix method.

The Atiyah–Singer index theorem gives a topological characterization of the index of certain operators on manifolds.

The Atiyah–Jänich theorem identifies the K-theory K(X) of a compact topological space X with the set of homotopy classes of continuous maps from X to the space of Fredholm operators H → H, where H is the separable Hilbert space and the set of these operators carries the operator norm.

A bounded linear operator T is called semi-Fredholm if its range is closed and at least one of ker T, coker T is finite-dimensional. For a semi-Fredholm operator, the index is defined by ind T = dim ker T − codim ran T, a value in Z ∪ {−∞, +∞}.

One may also define unbounded Fredholm operators. Let X and Y be two Banach spaces. As it was noted above, the range of a closed operator is closed as long as the cokernel is finite-dimensional (Edmunds and Evans, Theorem I.3.2).
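The index formula for Toeplitz operators can be checked numerically for a simple symbol (a hedged sketch; the discretisation is mine): for φ(e^{it}) = e^{it}, the path winds once around 0, so the index of T_φ is −1, consistent with T_φ then being the right shift discussed above.

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 4001)
phi = np.exp(1j * t)                         # symbol values along the circle
theta = np.unwrap(np.angle(phi))             # continuous argument of the path
winding = round((theta[-1] - theta[0]) / (2.0 * np.pi))
print(winding, -winding)                     # winding number 1, index -1
```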
https://en.wikipedia.org/wiki/Fredholm_operator
In mathematics, the resolvent formalism is a technique for applying concepts from complex analysis to the study of the spectrum of operators on Banach spaces and more general spaces. Formal justification for the manipulations can be found in the framework of holomorphic functional calculus.

The resolvent captures the spectral properties of an operator in the analytic structure of the functional. Given an operator A, the resolvent may be defined as R(z; A) = (A − zI)^{-1}.

Among other uses, the resolvent may be used to solve the inhomogeneous Fredholm integral equations; a commonly used approach is a series solution, the Liouville–Neumann series.

The resolvent of A can be used to directly obtain information about the spectral decomposition of A. For example, suppose λ is an isolated eigenvalue in the spectrum of A. That is, suppose there exists a simple closed curve C_λ in the complex plane that separates λ from the rest of the spectrum of A. Then the residue

P_λ = −\frac{1}{2\pi i} \oint_{C_λ} R(z; A) \, dz

defines a projection operator onto the λ eigenspace of A.

The Hille–Yosida theorem relates the resolvent through a Laplace transform to an integral over the one-parameter group of transformations generated by A.[1] Thus, for example, if A is a skew-Hermitian matrix, then U(t) = exp(tA) is a one-parameter group of unitary operators. Whenever |z| > ‖A‖, the resolvent of A at z can be expressed as a Laplace transform of this group, where the integral is taken along the ray arg t = −arg λ.[2]

The first major use of the resolvent operator as a series in A (cf. Liouville–Neumann series) was by Ivar Fredholm, in a landmark 1903 paper in Acta Mathematica that helped establish modern operator theory. The name resolvent was given by David Hilbert.

For all z, w in ρ(A), the resolvent set of an operator A, we have that the first resolvent identity (also called Hilbert's identity) holds:[3]

R(z; A) − R(w; A) = (z − w) R(z; A) R(w; A).

(Note that Dunford and Schwartz, cited, define the resolvent as (zI − A)^{-1} instead, so that the formula above differs in sign from theirs.)

The second resolvent identity is a generalization of the first resolvent identity, above, useful for comparing the resolvents of two distinct operators. Given operators A and B, both defined on the same linear space, and z in ρ(A) ∩ ρ(B), the following identity holds:[4]

R(z; A) − R(z; B) = R(z; A) (B − A) R(z; B).

A one-line proof goes as follows: R(z; A)(B − A)R(z; B) = R(z; A)((B − zI) − (A − zI))R(z; B) = R(z; A) − R(z; B).

When studying a closed unbounded operator A: H → H on a Hilbert space H, if there exists z ∈ ρ(A) such that R(z; A) is a compact operator, we say that A has compact resolvent. The spectrum σ(A) of such A is a discrete subset of C. If furthermore A is self-adjoint, then σ(A) ⊂ R and there exists an orthonormal basis {v_i}_{i∈N} of eigenvectors of A with eigenvalues {λ_i}_{i∈N} respectively. Also, {λ_i} has no finite accumulation point.[5]
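Both resolvent identities are easy to sanity-check numerically with the convention R(z; A) = (A − zI)^{-1} used above (a hedged sketch; the random matrices are my choice):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))
I = np.eye(5)
R = lambda z, M: np.linalg.inv(M - z * I)    # R(z; M) = (M - zI)^{-1}
z, w = 2.0 + 1.0j, -1.0 + 0.5j               # points in both resolvent sets
assert np.allclose(R(z, A) - R(w, A), (z - w) * R(z, A) @ R(w, A))  # first identity
assert np.allclose(R(z, A) - R(z, B), R(z, A) @ (B - A) @ R(z, B))  # second identity
```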
https://en.wikipedia.org/wiki/Resolvent_formalism
In mathematics, or more specifically in spectral theory, the Riesz projector is the projector onto the eigenspace corresponding to a particular eigenvalue of an operator (or, more generally, a projector onto an invariant subspace corresponding to an isolated part of the spectrum). It was introduced by Frigyes Riesz in 1912.[1][2]

Let A be a closed linear operator in the Banach space 𝔅. Let Γ be a simple or composite rectifiable contour, which encloses some region G_Γ and lies entirely within the resolvent set ρ(A) (Γ ⊂ ρ(A)) of the operator A. Assuming that the contour Γ has a positive orientation with respect to the region G_Γ, the Riesz projector corresponding to Γ is defined by

P_Γ = (1/2πi) ∮_Γ (λ I_𝔅 − A)⁻¹ dλ;

here I_𝔅 is the identity operator in 𝔅. If λ ∈ σ(A) is the only point of the spectrum of A in G_Γ, then P_Γ is denoted by P_λ.

The operator P_Γ is a projector which commutes with A, and hence in the decomposition

𝔅 = 𝔏_Γ ⊕ 𝔑_Γ, with 𝔏_Γ = P_Γ 𝔅 and 𝔑_Γ = (I_𝔅 − P_Γ) 𝔅,

both terms 𝔏_Γ and 𝔑_Γ are invariant subspaces of the operator A. Moreover, the spectrum of the restriction of A to 𝔏_Γ is the part of σ(A) lying in G_Γ, while the spectrum of the restriction to 𝔑_Γ is the part lying outside G_Γ.

If Γ₁ and Γ₂ are two different contours having the properties indicated above, and the regions G_{Γ₁} and G_{Γ₂} have no points in common, then the projectors corresponding to them are mutually orthogonal:

P_{Γ₁} P_{Γ₂} = P_{Γ₂} P_{Γ₁} = 0.
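For matrices, the contour integral defining P_Γ can be approximated directly by quadrature. The following is a minimal sketch, assuming NumPy; the matrix, the contour center, and the radius are invented for illustration, with the circle chosen to enclose exactly one eigenvalue.

```python
import numpy as np

# Matrix with an isolated eigenvalue near 3 (invented for illustration).
rng = np.random.default_rng(1)
A = np.diag([3.0, 1.0, 0.5]) + 0.01 * rng.standard_normal((3, 3))
I = np.eye(3)

# Trapezoid rule for P = (1/(2*pi*i)) * contour integral of
# (lambda*I - A)^(-1) d(lambda) over a circle about the eigenvalue.
center, radius, m = 3.0, 1.0, 400
theta = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
lam = center + radius * np.exp(1j * theta)                 # contour points
dlam = 1j * radius * np.exp(1j * theta) * (2 * np.pi / m)  # d(lambda) weights
P = sum(np.linalg.inv(l * I - A) * dl for l, dl in zip(lam, dlam)) / (2j * np.pi)

print(np.allclose(P @ P, P, atol=1e-8))      # P is a projector
print(np.allclose(A @ P, P @ A, atol=1e-8))  # P commutes with A
print(round(np.trace(P).real))               # rank 1: one eigenvalue enclosed
```

The checks mirror the properties stated above: P_Γ is idempotent, commutes with A, and its rank counts the part of the spectrum enclosed by Γ.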
https://en.wikipedia.org/wiki/Riesz_projector
In mathematics, particularly in functional analysis, the spectrum of a bounded linear operator (or, more generally, an unbounded linear operator) is a generalisation of the set of eigenvalues of a matrix. Specifically, a complex number λ is said to be in the spectrum of a bounded linear operator T if T − λI is not invertible, that is, if it does not have a bounded everywhere-defined inverse. Here, I is the identity operator. By the closed graph theorem, λ is in the spectrum if and only if the bounded operator T − λI : V → V is non-bijective on V. The study of spectra and related properties is known as spectral theory, which has numerous applications, most notably the mathematical formulation of quantum mechanics.

The spectrum of an operator on a finite-dimensional vector space is precisely the set of eigenvalues. However, an operator on an infinite-dimensional space may have additional elements in its spectrum, and may have no eigenvalues. For example, consider the right shift operator R on the Hilbert space ℓ²,

(x₁, x₂, …) ↦ (0, x₁, x₂, …).

This has no eigenvalues, since if Rx = λx then by expanding this expression we see that x₁ = 0, x₂ = 0, etc. On the other hand, 0 is in the spectrum because although the operator R − 0 (i.e. R itself) is injective, its inverse is defined only on the range of R, which is not dense in ℓ². In fact every bounded linear operator on a complex Banach space must have a non-empty spectrum.

The notion of spectrum extends to unbounded (i.e. not necessarily bounded) operators. A complex number λ is said to be in the spectrum of an unbounded operator T : X → X defined on domain D(T) ⊆ X if there is no bounded inverse (T − λI)⁻¹ : X → D(T) defined on the whole of X. If T is closed (which includes the case when T is bounded), boundedness of (T − λI)⁻¹ follows automatically from its existence.

The space of bounded linear operators B(X) on a Banach space X is an example of a unital Banach algebra. Since the definition of the spectrum does not mention any properties of B(X) except those that any such algebra has, the notion of a spectrum may be generalised to this context by using the same definition verbatim.

Let T be a bounded linear operator acting on a Banach space X over the complex scalar field ℂ, and I be the identity operator on X. The spectrum of T is the set of all λ ∈ ℂ for which the operator T − λI does not have an inverse that is a bounded linear operator. Since T − λI is a linear operator, the inverse is linear if it exists; and, by the bounded inverse theorem, it is bounded. Therefore, the spectrum consists precisely of those scalars λ for which T − λI is not bijective. The spectrum of a given operator T is often denoted σ(T), and its complement, the resolvent set, is denoted ρ(T) = ℂ ∖ σ(T). (ρ(T) is sometimes also used to denote the spectral radius of T.)

If λ is an eigenvalue of T, then the operator T − λI is not one-to-one, and therefore its inverse (T − λI)⁻¹ is not defined.
However, the converse statement is not true: the operator T − λI may fail to have an inverse even if λ is not an eigenvalue. Thus the spectrum of an operator always contains all its eigenvalues, but is not limited to them.

For example, consider the Hilbert space ℓ²(ℤ), which consists of all bi-infinite sequences of real numbers that have a finite sum of squares Σ_{i=−∞}^{+∞} v_i². The bilateral shift operator T simply displaces every element of the sequence by one position; namely, if u = T(v) then u_i = v_{i−1} for every integer i. The eigenvalue equation T(v) = λv has no nonzero solution in this space, since it implies that all the values v_i have the same absolute value (if |λ| = 1) or are a geometric progression (if |λ| ≠ 1); either way, the sum of their squares would not be finite. However, the operator T − λI is not invertible if |λ| = 1. For example, the sequence u such that u_i = 1/(|i| + 1) is in ℓ²(ℤ); but there is no sequence v in ℓ²(ℤ) such that (T − I)v = u (that is, v_{i−1} = u_i + v_i for all i).

The spectrum of a bounded operator T is always a closed, bounded subset of the complex plane. If the spectrum were empty, then the resolvent function R(λ) = (T − λI)⁻¹ would be defined everywhere on the complex plane and bounded. But it can be shown that the resolvent function R is holomorphic on its domain. By the vector-valued version of Liouville's theorem, this function is constant, thus everywhere zero as it is zero at infinity. This would be a contradiction. The boundedness of the spectrum follows from the Neumann series expansion in λ; the spectrum σ(T) is bounded by ‖T‖. A similar result shows the closedness of the spectrum.

The bound ‖T‖ on the spectrum can be refined somewhat. The spectral radius, r(T), of T is the radius of the smallest circle in the complex plane which is centered at the origin and contains the spectrum σ(T) inside of it, i.e.

r(T) = sup{ |λ| : λ ∈ σ(T) }.

The spectral radius formula says[2] that for any element T of a Banach algebra,

r(T) = lim_{n→∞} ‖Tⁿ‖^{1/n}

(a short numerical check of this formula appears below).

One can extend the definition of spectrum to unbounded operators on a Banach space X. These operators are no longer elements in the Banach algebra B(X). Let X be a Banach space and T : D(T) → X be a linear operator defined on domain D(T) ⊆ X. A complex number λ is said to be in the resolvent set (also called regular set) of T if the operator T − λI : D(T) → X has a bounded everywhere-defined inverse, i.e. if there exists a bounded operator S : X → D(T) such that

S(T − λI) = I_{D(T)} and (T − λI)S = I_X.

A complex number λ is then in the spectrum if λ is not in the resolvent set.

For λ to be in the resolvent set (i.e. not in the spectrum), just like in the bounded case, T − λI must be bijective, since it must have a two-sided inverse. As before, if an inverse exists, then its linearity is immediate, but in general it may not be bounded, so this condition must be checked separately. By the closed graph theorem, boundedness of (T − λI)⁻¹ does follow directly from its existence when T is closed.
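As promised above, here is a short numerical check of the spectral radius formula. A minimal sketch, assuming NumPy; the matrix T is an arbitrary non-normal illustration whose operator norm (about 10) greatly exceeds its spectral radius (0.5).

```python
import numpy as np

# Check r(T) = lim ||T^n||^(1/n) for a non-normal matrix,
# where ||.|| is the operator 2-norm.
T = np.array([[0.5, 10.0],
              [0.0,  0.4]])
r = max(abs(np.linalg.eigvals(T)))  # spectral radius = 0.5

for n in (1, 5, 20, 80):
    norm_n = np.linalg.norm(np.linalg.matrix_power(T, n), 2)
    print(n, norm_n ** (1.0 / n))
# The values start near ||T|| (about 10) and decrease toward r = 0.5,
# showing the limit can lie far below the crude norm bound ||T||.
```

Returning now to unbounded operators: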
Then, just as in the bounded case, a complex number λ lies in the spectrum of a closed operator T if and only if T − λI is not bijective. Note that the class of closed operators includes all bounded operators.

The spectrum of an unbounded operator is in general a closed, possibly empty, subset of the complex plane. If the operator T is not closed, then σ(T) = ℂ.

The following example indicates that closed operators may have empty spectra. Let T denote the differentiation operator on L²([0,1]), whose domain is defined to be the closure of C_c^∞((0,1]) with respect to the H¹-Sobolev space norm. This space can be characterized as all functions in H¹([0,1]) that are zero at t = 0. Then T − z has trivial kernel on this domain, as any H¹([0,1])-function in its kernel is a constant multiple of e^{zt}, which is zero at t = 0 if and only if it is identically zero. Moreover, T − z is surjective, with bounded inverse given explicitly by ((T − z)⁻¹f)(t) = ∫₀^t e^{z(t−s)} f(s) ds. Therefore, the complement of the spectrum is all of ℂ.

A bounded operator T on a Banach space is invertible, i.e. has a bounded inverse, if and only if T is bounded below, i.e. ‖Tx‖ ≥ c‖x‖ for some c > 0, and has dense range. Accordingly, the spectrum of T can be divided into the following parts:

- the approximate point spectrum σ_ap(T), consisting of those λ for which T − λI is not bounded below;
- the compression spectrum σ_cp(T), consisting of those λ for which T − λI does not have dense range.

Note that the approximate point spectrum and residual spectrum are not necessarily disjoint[3] (however, the point spectrum and the residual spectrum are). The following subsections provide more details on the parts of σ(T) sketched above.

If an operator is not injective (so there is some nonzero x with T(x) = 0), then it is clearly not invertible. So if λ is an eigenvalue of T, one necessarily has λ ∈ σ(T). The set of eigenvalues of T is also called the point spectrum of T, denoted by σ_p(T). Some authors refer to the closure of the point spectrum as the pure point spectrum, σ_pp(T) = the closure of σ_p(T), while others simply consider σ_pp(T) := σ_p(T).[4][5]

More generally, by the bounded inverse theorem, T is not invertible if it is not bounded below; that is, if there is no c > 0 such that ‖Tx‖ ≥ c‖x‖ for all x ∈ X. So the spectrum includes the set of approximate eigenvalues, which are those λ such that T − λI is not bounded below; equivalently, it is the set of λ for which there is a sequence of unit vectors x₁, x₂, … for which

lim_{n→∞} ‖Tx_n − λx_n‖ = 0.

The set of approximate eigenvalues is known as the approximate point spectrum, denoted by σ_ap(T). It is easy to see that the eigenvalues lie in the approximate point spectrum.

For example, consider the right shift R on ℓ²(ℤ) defined by

R e_j = e_{j+1}, j ∈ ℤ,

where (e_j)_{j∈ℤ} is the standard orthonormal basis in ℓ²(ℤ). Direct calculation shows R has no eigenvalues, but every λ with |λ| = 1 is an approximate eigenvalue; letting x_n be the vector

x_n = (1/√n)(…, 0, 1, λ⁻¹, λ⁻², …, λ^{1−n}, 0, …),

one can see that ‖x_n‖ = 1 for all n, but

‖Rx_n − λx_n‖ = √(2/n) → 0.

Since R is a unitary operator, its spectrum lies on the unit circle. Therefore, the approximate point spectrum of R is its entire spectrum. This conclusion is also true for a more general class of operators. A unitary operator is normal.
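The construction of x_n above can be reproduced directly on a finite window of the sequence space. A minimal sketch, assuming NumPy; the value of λ on the unit circle is an arbitrary illustration.

```python
import numpy as np

# Approximate eigenvectors for the bilateral shift, following the
# construction above: x_n = (1/sqrt(n)) (..., 0, 1, lam^-1, ..., lam^(1-n), 0, ...).
lam = np.exp(1j * 0.7)  # any lambda with |lambda| = 1

for n in (10, 100, 1000):
    # Finite window of the bi-infinite sequence, padded so the shift
    # does not push the support off the ends of the array.
    x = np.zeros(n + 2, dtype=complex)
    x[1:n + 1] = lam ** (-np.arange(n)) / np.sqrt(n)
    Rx = np.zeros_like(x)
    Rx[1:] = x[:-1]  # (R x)_i = x_{i-1}
    print(n, np.linalg.norm(x), np.linalg.norm(Rx - lam * x))
# ||x_n|| = 1 while ||R x_n - lam x_n|| = sqrt(2/n) -> 0, as claimed.
```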
By the spectral theorem, a bounded operator on a Hilbert space H is normal if and only if it is equivalent (after identification of H with an L² space) to a multiplication operator. It can be shown that the approximate point spectrum of a bounded multiplication operator equals its spectrum.

The discrete spectrum is defined as the set of normal eigenvalues or, equivalently, as the set of isolated points of the spectrum such that the corresponding Riesz projector is of finite rank. As such, the discrete spectrum is a strict subset of the point spectrum, i.e., σ_d(T) ⊂ σ_p(T).

The set of all λ for which T − λI is injective and has dense range, but is not surjective, is called the continuous spectrum of T, denoted by σ_c(T). The continuous spectrum therefore consists of those approximate eigenvalues which are not eigenvalues and do not lie in the residual spectrum. That is,

σ_c(T) = σ_ap(T) ∖ (σ_r(T) ∪ σ_p(T)).

For example, A : ℓ²(ℕ) → ℓ²(ℕ), e_j ↦ e_j/j, j ∈ ℕ, is injective and has dense range, yet Ran(A) ⊊ ℓ²(ℕ). Indeed, if x = Σ_{j∈ℕ} c_j e_j ∈ ℓ²(ℕ) with c_j ∈ ℂ such that Σ_{j∈ℕ} |c_j|² < ∞, one does not necessarily have Σ_{j∈ℕ} |j c_j|² < ∞, and then Σ_{j∈ℕ} j c_j e_j ∉ ℓ²(ℕ), so such an x has no preimage in ℓ²(ℕ).

The set of λ ∈ ℂ for which T − λI does not have dense range is known as the compression spectrum of T and is denoted by σ_cp(T). The set of λ ∈ ℂ for which T − λI is injective but does not have dense range is known as the residual spectrum of T and is denoted by σ_r(T):

σ_r(T) = σ_cp(T) ∖ σ_p(T).

An operator may be injective, even bounded below, but still not invertible. The right shift on ℓ²(ℕ), R : ℓ²(ℕ) → ℓ²(ℕ), e_j ↦ e_{j+1}, j ∈ ℕ, is such an example. This shift operator is an isometry, therefore bounded below by 1. But it is not invertible, as it is not surjective (e₁ ∉ Ran(R)); moreover, Ran(R) is not dense in ℓ²(ℕ) (e₁ does not even lie in the closure of Ran(R)).

The peripheral spectrum of an operator is defined as the set of points in its spectrum which have modulus equal to its spectral radius.[6]

There are five similar definitions of the essential spectrum of a closed densely defined linear operator A : X → X which satisfy

σ_ess,1(A) ⊆ σ_ess,2(A) ⊆ σ_ess,3(A) ⊆ σ_ess,4(A) ⊆ σ_ess,5(A).

All these spectra σ_ess,k(A), 1 ≤ k ≤ 5, coincide in the case of self-adjoint operators.

The hydrogen atom provides an example of different types of the spectra.
The hydrogen atom Hamiltonian operator H = −Δ − Z/|x|, Z > 0, with domain D(H) = H¹(ℝ³), has a discrete set of eigenvalues (the discrete spectrum σ_d(H), which in this case coincides with the point spectrum σ_p(H), since there are no eigenvalues embedded into the continuous spectrum) that can be computed by the Rydberg formula. Their corresponding eigenfunctions are called eigenstates, or the bound states. The result of the ionization process is described by the continuous part of the spectrum (the energy of the collision/ionization is not "quantized"), represented by σ_cont(H) = [0, +∞) (it also coincides with the essential spectrum, σ_ess(H) = [0, +∞)).

Let X be a Banach space and T : X → X a closed linear operator with dense domain D(T) ⊂ X. If X* is the dual space of X, and T* : X* → X* is the hermitian adjoint of T, then

σ(T*) = conj(σ(T)), where conj(S) denotes the set of complex conjugates {λ̄ : λ ∈ S}.

Theorem. For a bounded (or, more generally, closed and densely defined) operator T,

σ_cp(T) = conj(σ_p(T*)).

In particular, σ_r(T) ⊂ conj(σ_p(T*)) ⊂ σ_r(T) ∪ σ_p(T).

Proof. Suppose that Ran(T − λI) is not dense in X. By the Hahn–Banach theorem, there exists a non-zero φ ∈ X* that vanishes on Ran(T − λI). For all x ∈ X,

⟨φ, (T − λI)x⟩ = ⟨(T* − λ̄I)φ, x⟩ = 0.

Therefore, (T* − λ̄I)φ = 0 ∈ X* and λ̄ is an eigenvalue of T*. Conversely, suppose that λ̄ is an eigenvalue of T*. Then there exists a non-zero φ ∈ X* such that (T* − λ̄I)φ = 0, i.e.

⟨φ, (T − λI)x⟩ = 0 for all x ∈ X.

If Ran(T − λI) is dense in X, then φ must be the zero functional, a contradiction. The claim is proved.

We also get σ_p(T) ⊂ conj(σ_r(T*) ∪ σ_p(T*)) by the following argument: X embeds isometrically into X**. Therefore, for every non-zero element in the kernel of T − λI there exists a non-zero element in X** which vanishes on Ran(T* − λ̄I). Thus Ran(T* − λ̄I) cannot be dense. Furthermore, if X is reflexive, we have conj(σ_r(T*)) ⊂ σ_p(T).

If T is a compact operator, or, more generally, an inessential operator, then it can be shown that the spectrum is countable, that zero is the only possible accumulation point, and that any nonzero λ in the spectrum is an eigenvalue.

A bounded operator A : X → X is quasinilpotent if ‖Aⁿ‖^{1/n} → 0 as n → ∞ (in other words, if the spectral radius of A equals zero).
Such operators could equivalently be characterized by the condition

σ(A) = {0}.

An example of such an operator is A : ℓ²(ℕ) → ℓ²(ℕ), e_j ↦ e_{j+1}/2^j for j ∈ ℕ (a numerical check of the decay of ‖Aⁿ‖^{1/n} for this example appears at the end of this excerpt).

If X is a Hilbert space and T is a self-adjoint operator (or, more generally, a normal operator), then a remarkable result known as the spectral theorem gives an analogue of the diagonalisation theorem for normal finite-dimensional operators (Hermitian matrices, for example). For self-adjoint operators, one can use spectral measures to define a decomposition of the spectrum into absolutely continuous, pure point, and singular parts.

The definitions of the resolvent and spectrum can be extended to any continuous linear operator T acting on a Banach space X over the real field ℝ (instead of the complex field ℂ) via its complexification T_ℂ. In this case we define the resolvent set ρ(T) as the set of all λ ∈ ℂ such that T_ℂ − λI is invertible as an operator acting on the complexified space X_ℂ; then we define σ(T) = ℂ ∖ ρ(T). The real spectrum of a continuous linear operator T acting on a real Banach space X, denoted σ_ℝ(T), is defined as the set of all λ ∈ ℝ for which T − λI fails to be invertible in the real algebra of bounded linear operators acting on X. In this case we have σ(T) ∩ ℝ = σ_ℝ(T). Note that the real spectrum may or may not coincide with the complex spectrum. In particular, the real spectrum could be empty.

Let B be a complex Banach algebra containing a unit e. Then we define the spectrum σ(x) (or more explicitly σ_B(x)) of an element x of B to be the set of those complex numbers λ for which λe − x is not invertible in B. This extends the definition for bounded linear operators B(X) on a Banach space X, since B(X) is a unital Banach algebra.
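Here is the numerical check of the quasinilpotent example promised above, via a finite truncation of the weighted shift. A minimal sketch, assuming NumPy; for powers n well below the truncation size N, the truncated matrix attains the same norm as the operator, and one can verify ‖Aⁿ‖^{1/n} = 2^{−(n+1)/2} → 0.

```python
import numpy as np

# Truncation of the weighted shift A e_j = e_{j+1} / 2^j from the example above.
N = 60
A = np.zeros((N, N))
for j in range(1, N):          # 0-based row j holds e_{j+1}, column j-1 holds e_j
    A[j, j - 1] = 2.0 ** (-j)  # weight 2^(-j) for the 1-based index j

for n in (1, 2, 5, 10, 20):
    val = np.linalg.norm(np.linalg.matrix_power(A, n), 2) ** (1.0 / n)
    print(n, val, 2.0 ** (-(n + 1) / 2.0))  # numerics vs the closed form
```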
https://en.wikipedia.org/wiki/Spectrum_(functional_analysis)
A geographic information system (GIS) consists of integrated computer hardware and software that store, manage, analyze, edit, output, and visualize geographic data.[1][2] Much of this often happens within a spatial database; however, this is not essential to meet the definition of a GIS.[1] In a broader sense, one may consider such a system also to include human users and support staff, procedures and workflows, the body of knowledge of relevant concepts and methods, and institutional organizations.

The uncounted plural, geographic information systems, also abbreviated GIS, is the most common term for the industry and profession concerned with these systems. The academic discipline that studies these systems and their underlying geographic principles may also be abbreviated as GIS, but the unambiguous GIScience is more common.[3] GIScience is often considered a subdiscipline of geography within the branch of technical geography.

Geographic information systems are utilized in multiple technologies, processes, techniques and methods. They are attached to various operations and numerous applications that relate to: engineering, planning, management, transport/logistics, insurance, telecommunications, and business,[4] as well as the natural sciences such as forestry, ecology, and Earth science. For this reason, GIS and location intelligence applications are at the foundation of location-enabled services, which rely on geographic analysis and visualization.

GIS provides the ability to relate previously unrelated information, through the use of location as the "key index variable". Locations and extents that are found in the Earth's spacetime are able to be recorded through the date and time of occurrence, along with x, y, and z coordinates, representing longitude (x), latitude (y), and elevation (z). All Earth-based, spatial–temporal, location and extent references should be relatable to one another, and ultimately, to a "real" physical location or extent. This key characteristic of GIS has begun to open new avenues of scientific inquiry and studies.

While digital GIS dates to the mid-1960s, when Roger Tomlinson first coined the phrase "geographic information system",[5] many of the geographic concepts and methods that GIS automates date back decades earlier. One of the first known instances in which spatial analysis was used came from the field of epidemiology in the Rapport sur la marche et les effets du choléra dans Paris et le département de la Seine (1832).[6] French cartographer and geographer Charles Picquet created a map outlining the forty-eight districts in Paris, using halftone color gradients, to provide a visual representation for the number of reported deaths due to cholera per every 1,000 inhabitants.

In 1854, John Snow, an epidemiologist and physician, was able to determine the source of a cholera outbreak in London through the use of spatial analysis. Snow achieved this through plotting the residence of each casualty on a map of the area, as well as the nearby water sources. Once these points were marked, he was able to identify the water source within the cluster that was responsible for the outbreak. This was one of the earliest successful uses of a geographic methodology in pinpointing the source of an outbreak in epidemiology. While the basic elements of topography and theme existed previously in cartography, Snow's map was unique due to his use of cartographic methods, not only to depict, but also to analyze clusters of geographically dependent phenomena.
The early 20th century saw the development of photozincography, which allowed maps to be split into layers, for example one layer for vegetation and another for water. This was particularly used for printing contours; drawing these was a labour-intensive task, but having them on a separate layer meant they could be worked on without the other layers to confuse the draughtsman. This work was initially drawn on glass plates, but later plastic film was introduced, with the advantages of being lighter, using less storage space and being less brittle, among others. When all the layers were finished, they were combined into one image using a large process camera. Once color printing came in, the layers idea was also used for creating separate printing plates for each color. While the use of layers much later became one of the typical features of a contemporary GIS, the photographic process just described is not considered a GIS in itself, as the maps were just images with no database to link them to.

Two additional developments are notable in the early days of GIS: Ian McHarg's publication Design with Nature[7] and its map overlay method, and the introduction of a street network into the U.S. Census Bureau's DIME (Dual Independent Map Encoding) system.[8]

The first publication detailing the use of computers to facilitate cartography was written by Waldo Tobler in 1959.[9] Further computer hardware development spurred by nuclear weapon research led to more widespread general-purpose computer "mapping" applications by the early 1960s.[10]

In 1963, the world's first true operational GIS was developed in Ottawa, Ontario, Canada, by the federal Department of Forestry and Rural Development. Developed by Roger Tomlinson, it was called the Canada Geographic Information System (CGIS) and was used to store, analyze, and manipulate data collected for the Canada Land Inventory, an effort to determine the land capability for rural Canada by mapping information about soils, agriculture, recreation, wildlife, waterfowl, forestry and land use at a scale of 1:50,000. A rating classification factor was also added to permit analysis.[11][12]

CGIS was an improvement over "computer mapping" applications as it provided capabilities for data storage, overlay, measurement, and digitizing/scanning. It supported a national coordinate system that spanned the continent, coded lines as arcs having a true embedded topology, and it stored the attribute and locational information in separate files. As a result of this, Tomlinson has become known as the "father of GIS", particularly for his use of overlays in promoting the spatial analysis of convergent geographic data.[13] CGIS lasted into the 1990s and built a large digital land resource database in Canada. It was developed as a mainframe-based system in support of federal and provincial resource planning and management. Its strength was continent-wide analysis of complex datasets. The CGIS was never available commercially.

In 1964, Howard T.
Fisher formed the Laboratory for Computer Graphics and Spatial Analysis at the Harvard Graduate School of Design (LCGSA 1965–1991), where a number of important theoretical concepts in spatial data handling were developed, and which by the 1970s had distributed seminal software code and systems, such as SYMAP, GRID, and ODYSSEY, to universities, research centers and corporations worldwide.[14] These programs were the first examples of general-purpose GIS software that was not developed for a particular installation, and they were very influential on future commercial software, such as Esri ARC/INFO, released in 1983.

By the late 1970s, two public domain GIS systems (MOSS and GRASS GIS) were in development, and by the early 1980s, M&S Computing (later Intergraph) along with Bentley Systems Incorporated for the CAD platform, Environmental Systems Research Institute (ESRI), CARIS (Computer Aided Resource Information System), and ERDAS (Earth Resource Data Analysis System) emerged as commercial vendors of GIS software, successfully incorporating many of the CGIS features, combining the first-generation approach to separation of spatial and attribute information with a second-generation approach to organizing attribute data into database structures.[15]

In 1986, Mapping Display and Analysis System (MIDAS), the first desktop GIS product,[16] was released for the DOS operating system. It was renamed MapInfo for Windows in 1990, when it was ported to the Microsoft Windows platform. This began the process of moving GIS from the research department into the business environment.

By the end of the 20th century, the rapid growth in various systems had been consolidated and standardized on relatively few platforms, and users were beginning to explore viewing GIS data over the Internet, requiring data format and transfer standards. More recently, a growing number of free, open-source GIS packages run on a range of operating systems and can be customized to perform specific tasks. The major trend of the 21st century has been the integration of GIS capabilities with other information technology and Internet infrastructure, such as relational databases, cloud computing, software as a service (SaaS), and mobile computing.[17]

The distinction must be made between a singular geographic information system, which is a single installation of software and data for a particular use, along with associated hardware, staff, and institutions (e.g., the GIS for a particular city government); and GIS software, a general-purpose application program that is intended to be used in many individual geographic information systems in a variety of application domains.[18]: 16 Starting in the late 1970s, many software packages have been created specifically for GIS applications. Esri's ArcGIS, which includes ArcGIS Pro and the legacy software ArcMap, currently dominates the GIS market. Other examples of GIS software include Autodesk and MapInfo Professional products and open-source programs such as QGIS, GRASS GIS, MapGuide, and Hadoop-GIS.[19] These and other desktop GIS applications include a full suite of capabilities for entering, managing, analyzing, and visualizing geographic data, and are designed to be used on their own.
Starting in the late 1990s with the emergence of the Internet, as computer network technology progressed, GIS infrastructure and data began to move to servers, providing another mechanism for providing GIS capabilities.[20]: 216 This was facilitated by standalone software installed on a server, similar to other server software such as HTTP servers and relational database management systems, enabling clients to have access to GIS data and processing tools without having to install specialized desktop software. These networks are known as distributed GIS.[21][22] This strategy has been extended through the Internet and the development of cloud-based GIS platforms such as ArcGIS Online and GIS-specialized software as a service (SaaS). The use of the Internet to facilitate distributed GIS is known as Internet GIS.[21][22]

An alternative approach is the integration of some or all of these capabilities into other software or information technology architectures. One example is a spatial extension to object-relational database software, which defines a geometry datatype so that spatial data can be stored in relational tables, and extensions to SQL for spatial analysis operations such as overlay. Another example is the proliferation of geospatial libraries and application programming interfaces (e.g., GDAL, Leaflet, D3.js) that extend programming languages to enable the incorporation of GIS data and processing into custom software, including web mapping sites and location-based services in smartphones (a small code sketch of such geometric operations appears at the end of this excerpt).

The core of any GIS is a database that contains representations of geographic phenomena, modeling their geometry (location and shape) and their properties or attributes. A GIS database may be stored in a variety of forms, such as a collection of separate data files or a single spatially-enabled relational database. Collecting and managing these data usually constitutes the bulk of the time and financial resources of a project, far more than other aspects such as analysis and mapping.[20]: 175

GIS uses spatio-temporal (space-time) location as the key index variable for all other information. Just as a relational database containing text or numbers can relate many different tables using common key index variables, GIS can relate otherwise unrelated information by using location as the key index variable. The key is the location and/or extent in space-time. Any variable that can be located spatially, and increasingly also temporally, can be referenced using a GIS. Locations or extents in Earth space–time may be recorded as dates/times of occurrence, and x, y, and z coordinates representing longitude, latitude, and elevation, respectively. These GIS coordinates may represent other quantified systems of temporo-spatial reference (for example, film frame number, stream gage station, highway mile-marker, surveyor benchmark, building address, street intersection, entrance gate, water depth sounding, POS or CAD drawing origin/units). Units applied to recorded temporal-spatial data can vary widely (even when using exactly the same data, see map projections), but all Earth-based spatial–temporal location and extent references should, ideally, be relatable to one another and ultimately to a "real" physical location or extent in space–time.
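To make the overlay and key-index ideas concrete, here is a minimal sketch using the shapely Python library (one of many geometry engines; the choice of library, the coordinates, and the layer contents are illustrative assumptions, not part of the source text).

```python
# Vector overlay and point-in-polygon queries with shapely;
# the layer geometries below are invented for illustration.
from shapely.geometry import Point, Polygon

parcel = Polygon([(0, 0), (4, 0), (4, 3), (0, 3)])        # a land parcel
flood_zone = Polygon([(2, -1), (6, -1), (6, 2), (2, 2)])  # a hazard layer

# Overlay: which part of the parcel lies in the flood zone, and how much?
at_risk = parcel.intersection(flood_zone)
print(at_risk.area, parcel.area, at_risk.area / parcel.area)

# Location as the key index: a point-in-polygon query relates a point
# feature to the parcel without any shared table key.
well = Point(1.0, 1.0)
print(parcel.contains(well), flood_zone.contains(well))
```

A spatial SQL extension expresses the same operations as functions over a geometry column instead of library calls; the underlying computation is the same.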
Related by accurate spatial information, an incredible variety of real-world and projected past or future data can be analyzed, interpreted and represented.[23] This key characteristic of GIS has begun to open new avenues of scientific inquiry into behaviors and patterns of real-world information that previously had not been systematically correlated.

GIS data represents phenomena that exist in the real world, such as roads, land use, elevation, trees, waterways, and states. The most common types of phenomena that are represented in data can be divided into two conceptualizations: discrete objects (e.g., a house, a road) and continuous fields (e.g., rainfall amount or population density).[20]: 62–65 Other types of geographic phenomena, such as events (e.g., locations of World War II battles), processes (e.g., extent of suburbanization), and masses (e.g., types of soil in an area) are represented less commonly or indirectly, or are modeled in analysis procedures rather than data.

Traditionally, there are two broad methods used to store data in a GIS for both kinds of abstraction: raster images and vector data, in which points, lines, and polygons represent mapped location attribute references (the sketch after this excerpt contrasts the two). A new hybrid method of storing data is that of identifying point clouds, which combine three-dimensional points with RGB information at each point, returning a 3D color image. GIS thematic maps are thus becoming more and more realistically visually descriptive of what they set out to show or determine.

GIS data acquisition includes several methods for gathering spatial data into a GIS database, which can be grouped into three categories: primary data capture, the direct measurement of phenomena in the field (e.g., remote sensing, the global positioning system); secondary data capture, the extraction of information from existing sources that are not in a GIS form, such as paper maps, through digitization; and data transfer, the copying of existing GIS data from external sources such as government agencies and private companies. All of these methods can consume significant time, finances, and other resources.[20]: 173

Survey data can be directly entered into a GIS from digital data collection systems on survey instruments using a technique called coordinate geometry (COGO). Positions from a global navigation satellite system (GNSS) like the Global Positioning System can also be collected and then imported into a GIS. A current trend in data collection gives users the ability to utilize field computers with the ability to edit live data using wireless connections or disconnected editing sessions.[24] Another trend is to utilize applications available on smartphones and PDAs in the form of mobile GIS.[25] This has been enhanced by the availability of low-cost mapping-grade GPS units with decimeter accuracy in real time, which eliminates the need to post-process, import, and update the data in the office after fieldwork has been collected, and includes the ability to incorporate positions collected using a laser rangefinder. New technologies also allow users to create maps as well as analysis directly in the field, making projects more efficient and mapping more accurate.

Remotely sensed data also plays an important role in data collection; it consists of measurements from sensors attached to a platform. Sensors include cameras, digital scanners and lidar, while platforms usually consist of aircraft and satellites.
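The two storage models mentioned above can be contrasted in a few lines of code. A minimal sketch, assuming NumPy; the layers, grid size, and attribute values are invented for illustration.

```python
import numpy as np

# The same area described in the two data models discussed above.

# Vector: discrete objects as geometry plus attributes.
wells = [
    {"id": 1, "xy": (1.5, 2.0), "depth_m": 30.0},
    {"id": 2, "xy": (3.0, 0.5), "depth_m": 12.5},
]

# Raster: a continuous field sampled on a regular grid; the origin and
# cell size tie array indices back to ground coordinates.
origin = (0.0, 0.0)  # lower-left corner in map units
cell = 0.5           # cell size in map units
rainfall = np.random.default_rng(0).uniform(0, 20, size=(8, 10))  # mm

def cell_value(grid, x, y):
    """Look up the raster value at map coordinates (x, y)."""
    col = int((x - origin[0]) / cell)
    row = int((y - origin[1]) / cell)
    return grid[row, col]

# Relate the two layers by location alone: rainfall at each well.
for w in wells:
    print(w["id"], cell_value(rainfall, *w["xy"]))
```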
In England in the mid-1990s, hybrid kite/balloons called helikites pioneered the use of compact airborne digital cameras as airborne geo-information systems. Aircraft measurement software, accurate to 0.4 mm, was used to link the photographs and measure the ground. Helikites are inexpensive and can gather more accurate data than aircraft. Helikites can be used over roads, railways and towns where unmanned aerial vehicles (UAVs) are banned. Recently, aerial data collection has become more accessible with miniature UAVs and drones. For example, the Aeryon Scout was used to map a 50-acre area with a ground sample distance of 1 inch (2.54 cm) in only 12 minutes.[26] The majority of digital data currently comes from photo interpretation of aerial photographs. Soft-copy workstations are used to digitize features directly from stereo pairs of digital photographs. These systems allow data to be captured in two and three dimensions, with elevations measured directly from a stereo pair using principles of photogrammetry. Analog aerial photos must be scanned before being entered into a soft-copy system; for high-quality digital cameras this step is skipped. Satellite remote sensing provides another important source of spatial data. Here satellites use different sensor packages to passively measure the reflectance from parts of the electromagnetic spectrum, or of radio waves that were sent out from an active sensor such as radar. Remote sensing collects raster data that can be further processed using different bands to identify objects and classes of interest, such as land cover. The most common method of data creation is digitization, where a hard copy map or survey plan is transferred into a digital medium through the use of a CAD program and geo-referencing capabilities. With the wide availability of ortho-rectified imagery (from satellites, aircraft, helikites and UAVs), heads-up digitizing is becoming the main avenue through which geographic data is extracted. Heads-up digitizing involves tracing geographic data directly on top of the aerial imagery, instead of the traditional method of tracing the geographic form on a separate digitizing tablet (heads-down digitizing). Heads-down digitizing, or manual digitizing, uses a special magnetic pen, or stylus, that feeds information into a computer to create an identical, digital map. Some tablets use a mouse-like tool, called a puck, instead of a stylus.[27][28] The puck has a small window with cross-hairs, which allows for greater precision in pinpointing map features. Though heads-up digitizing is more commonly used, heads-down digitizing is still useful for digitizing maps of poor quality.[28] Existing data printed on paper or PET film maps can be digitized or scanned to produce digital data. A digitizer produces vector data as an operator traces points, lines, and polygon boundaries from a map. Scanning a map results in raster data that can be further processed to produce vector data. When data is captured, the user should consider whether it should be captured with relative accuracy or absolute accuracy, since this can influence not only how the information will be interpreted but also the cost of data capture. After entering data into a GIS, the data usually requires editing to remove errors, or further processing. Vector data must be made "topologically correct" before it can be used for some advanced analysis. For example, in a road network, lines must connect with nodes at an intersection. Errors such as undershoots and overshoots must also be removed.
For scanned maps, blemishes on the source map may need to be removed from the resulting raster. For example, a fleck of dirt might connect two lines that should not be connected. The earth can be represented by various models, each of which may provide a different set of coordinates (e.g., latitude, longitude, elevation) for any given point on the Earth's surface. The simplest model is to assume the earth is a perfect sphere. As more measurements of the earth have accumulated, the models of the earth have become more sophisticated and more accurate. In fact, there are models called datums that apply to different areas of the earth to provide increased accuracy, such as the North American Datum of 1983 for U.S. measurements and the World Geodetic System for worldwide measurements. The latitude and longitude on a map made against a local datum may not be the same as those obtained from a GPS receiver. Converting coordinates from one datum to another requires a datum transformation such as a Helmert transformation, although in certain situations a simple translation may be sufficient.[29] In popular GIS software, data in latitude/longitude is often represented as a geographic coordinate system (GCS). For example, latitude/longitude data referenced to the North American Datum of 1983 is denoted 'GCS North American 1983'. While no digital model can be a perfect representation of the real world, it is important that GIS data be of high quality. In keeping with the principle of homomorphism, the data must be close enough to reality that the results of GIS procedures correctly correspond to the results of real-world processes. This means that there is no single standard for data quality, because the necessary degree of quality depends on the scale and purpose of the tasks for which it is to be used. Several elements of data quality are important to GIS data: The quality of a dataset depends heavily on its sources and the methods used to create it. Land surveyors have been able to provide a high level of positional accuracy utilizing high-end GPS equipment, but GPS locations on the average smartphone are much less accurate.[31] Common datasets such as digital terrain and aerial imagery[32] are available in a wide variety of levels of quality, especially spatial precision. Paper maps, which have been digitized for many years as a data source, can also be of widely varying quality. A quantitative analysis of maps brings accuracy issues into focus. The electronic and other equipment used to make measurements for GIS is far more precise than the machines of conventional map analysis, yet all geographical data are inherently inaccurate, and these inaccuracies will propagate through GIS operations in ways that are difficult to predict.[33] Data restructuring can be performed by a GIS to convert data into different formats. For example, a GIS may be used to convert a satellite image map to a vector structure by generating lines around all cells with the same classification, while determining the cell spatial relationships, such as adjacency or inclusion. More advanced data processing can occur with image processing, a technique developed in the late 1960s by NASA and the private sector to provide contrast enhancement, false color rendering and a variety of other techniques, including use of two-dimensional Fourier transforms. Since digital data is collected and stored in various ways, two data sources may not be entirely compatible, so a GIS must be able to convert geographic data from one structure to another.
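As a sketch of the datum-transformation step mentioned above, the following Python function applies a seven-parameter Helmert transformation in its common small-angle form. The parameter values in the usage line are placeholders, not an official transformation between any real datums.

```python
import numpy as np

def helmert(p, t, s_ppm, rx, ry, rz):
    """Seven-parameter Helmert transform (small-angle approximation).

    p: (3,) geocentric coordinates in metres; t: (3,) translation in metres;
    s_ppm: scale change in parts per million; rx, ry, rz: rotations in
    radians. All parameter values below are invented placeholders.
    """
    R = np.array([[1.0, -rz,  ry],
                  [ rz, 1.0, -rx],
                  [-ry,  rx, 1.0]])
    return np.asarray(t) + (1.0 + s_ppm * 1e-6) * (R @ np.asarray(p))

# Hypothetical usage on one geocentric (X, Y, Z) point.
p = np.array([3_875_000.0, 332_000.0, 5_028_000.0])
print(helmert(p, t=[0.1, -0.2, 0.3], s_ppm=1.5, rx=1e-6, ry=2e-6, rz=-1e-6))
```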
In doing such conversions, the implicit assumptions behind different ontologies and classifications require analysis.[34] Object ontologies have gained increasing prominence as a consequence of object-oriented programming and sustained work by Barry Smith and co-workers. Spatial ETL tools provide the data processing functionality of traditional extract, transform, load (ETL) software, but with a primary focus on the ability to manage spatial data. They provide GIS users with the ability to translate data between different standards and proprietary formats, whilst geometrically transforming the data en route. These tools can come in the form of add-ins to existing wider-purpose software such as spreadsheets. GIS spatial analysis is a rapidly changing field, and GIS packages are increasingly including analytical tools as standard built-in facilities, as optional toolsets, or as add-ins or 'analysts'. In many instances these are provided by the original software suppliers (commercial vendors or collaborative non-commercial development teams), while in other cases facilities have been developed and are provided by third parties. Furthermore, many products offer software development kits (SDKs), programming languages and language support, scripting facilities and/or special interfaces for developing one's own analytical tools or variants. The increased availability has created a new dimension to business intelligence termed "spatial intelligence" which, when openly delivered via intranet, democratizes access to geographic and social network data. Geospatial intelligence, based on GIS spatial analysis, has also become a key element for security. Geoprocessing is a GIS operation used to manipulate spatial data. A typical geoprocessing operation takes an input dataset, performs an operation on that dataset, and returns the result of the operation as an output dataset. Common geoprocessing operations include geographic feature overlay, feature selection and analysis, topology processing, raster processing, and data conversion. Geoprocessing allows for definition, management, and analysis of information used to form decisions.[35] Many geographic tasks involve the terrain, the shape of the surface of the earth, such as hydrology, earthworks, and biogeography. Thus, terrain data is often a core dataset in a GIS, usually in the form of a raster digital elevation model (DEM) or a triangulated irregular network (TIN). A variety of tools are available in most GIS software for analyzing terrain, often by creating derivative datasets that represent a specific aspect of the surface. Some of the most common include: Most of these are generated using algorithms that are discrete simplifications of vector calculus. Slope, aspect, and surface curvature in terrain analysis are all derived from neighborhood operations using elevation values of a cell's adjacent neighbours, as in the sketch below.[39] Each of these is strongly affected by the level of detail in the terrain data, such as the resolution of a DEM, which should be chosen carefully.[40] Distance is a key part of solving many geographic tasks, usually due to the friction of distance. Thus, a wide variety of analysis tools analyze distance in some form, such as buffers, Voronoi or Thiessen polygons, cost distance analysis, and network analysis.
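The sketch below illustrates such a neighborhood operation in Python with numpy: slope and aspect derived from a toy DEM by finite differences. The elevation values and cell size are invented, and production tools typically use more careful edge handling (e.g., Horn's method).

```python
import numpy as np

# Hypothetical 2D elevation grid (a DEM) with a cell size of 10 m.
dem = np.array([[10.0, 12.0, 14.0],
                [11.0, 13.0, 15.0],
                [12.0, 14.0, 16.0]])
cell = 10.0

# Central-difference gradients from each cell's adjacent neighbours;
# np.gradient falls back to one-sided differences at the grid edges.
dz_dy, dz_dx = np.gradient(dem, cell)

# Slope in degrees: the standard rise-over-run formulation.
slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# Aspect: downslope direction, expressed here simply as an angle in degrees.
aspect = np.degrees(np.arctan2(-dz_dy, -dz_dx))
print(slope.round(1))
```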
It is difficult to relate wetlands maps to rainfall amounts recorded at different points such as airports, television stations, and schools. A GIS, however, can be used to depict two- and three-dimensional characteristics of the Earth's surface, subsurface, and atmosphere from such information points. For example, a GIS can quickly generate a map with isopleth or contour lines that indicate differing amounts of rainfall. Such a map can be thought of as a rainfall contour map. Many sophisticated methods can estimate the characteristics of surfaces from a limited number of point measurements. A two-dimensional contour map created from the surface modeling of rainfall point measurements may be overlaid and analyzed with any other map in a GIS covering the same area. This GIS-derived map can then provide additional information, such as the viability of water power potential as a renewable energy source. Similarly, GIS can be used to compare other renewable energy resources to find the best geographic potential for a region.[41] Additionally, from a series of three-dimensional points, or a digital elevation model, isopleth lines representing elevation contours can be generated, along with slope analysis, shaded relief, and other elevation products. Watersheds can be easily defined for any given reach by computing all of the areas contiguous and uphill from any given point of interest. Similarly, the expected thalweg along which surface water would travel in intermittent and permanent streams can be computed from elevation data in the GIS. A GIS can recognize and analyze the spatial relationships that exist within digitally stored spatial data. These topological relationships allow complex spatial modelling and analysis to be performed. Topological relationships between geometric entities traditionally include adjacency (what adjoins what), containment (what encloses what), and proximity (how close something is to something else). Geometric networks are linear networks of objects that can be used to represent interconnected features, and to perform special spatial analysis on them. A geometric network is composed of edges, which are connected at junction points, similar to graphs in mathematics and computer science. Just like graphs, networks can have weights and flows assigned to their edges, which can be used to represent various interconnected features more accurately. Geometric networks are often used to model road networks and public utility networks, such as electric, gas, and water networks. Network modeling is also commonly employed in transportation planning, hydrology modeling, and infrastructure modeling; a minimal route-finding sketch follows below. Dana Tomlin coined the term cartographic modeling in his PhD dissertation (1983); he later used it in the title of his book, Geographic Information Systems and Cartographic Modeling (1990).[42] Cartographic modeling refers to a process in which several thematic layers of the same area are produced, processed, and analyzed. Tomlin used raster layers, but the overlay method (see below) can be used more generally. Operations on map layers can be combined into algorithms, and eventually into simulation or optimization models.
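Returning to the geometric networks described above, the following self-contained Python sketch runs Dijkstra's algorithm over a toy road network of weighted edges and junctions. The network itself is invented; real network analysis adds turn restrictions, one-way streets, and similar constraints.

```python
import heapq

# A toy road network: junction -> [(neighbour, edge weight)], echoing the
# edges-and-junctions structure described above; names are hypothetical.
roads = {
    "A": [("B", 4.0), ("C", 2.0)],
    "B": [("A", 4.0), ("D", 5.0)],
    "C": [("A", 2.0), ("D", 8.0)],
    "D": [("B", 5.0), ("C", 8.0)],
}

def shortest_path_cost(graph, start, goal):
    """Dijkstra's algorithm over the weighted network."""
    dist = {start: 0.0}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        for nxt, w in graph[node]:
            if d + w < dist.get(nxt, float("inf")):
                dist[nxt] = d + w
                heapq.heappush(queue, (d + w, nxt))
    return float("inf")

print(shortest_path_cost(roads, "A", "D"))   # 9.0 via A -> B -> D
```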
The combination of several spatial datasets (points, lines, or polygons) creates a new output vector dataset, visually similar to stacking several maps of the same region. These overlays are similar to mathematical Venn diagram overlays. A union overlay combines the geographic features and attribute tables of both inputs into a single new output. An intersect overlay defines the area where both inputs overlap and retains a set of attribute fields for each. A symmetric difference overlay defines an output area that includes the total area of both inputs except for the overlapping area. Data extraction is a GIS process similar to vector overlay, though it can be used in either vector or raster data analysis. Rather than combining the properties and features of both datasets, data extraction involves using a "clip" or "mask" to extract the features of one dataset that fall within the spatial extent of another dataset. In raster data analysis, the overlay of datasets is accomplished through a process known as "local operation on multiple rasters" or "map algebra", through a function that combines the values of each raster's matrix. This function may weigh some inputs more than others through use of an "index model" that reflects the influence of various factors upon a geographic phenomenon. Geostatistics is a branch of statistics that deals with field data, spatial data with a continuous index. It provides methods to model spatial correlation and predict values at arbitrary locations (interpolation). When phenomena are measured, the observation methods dictate the accuracy of any subsequent analysis. Due to the nature of the data (e.g., traffic patterns in an urban environment; weather patterns over the Pacific Ocean), a constant or dynamic degree of precision is always lost in the measurement. This loss of precision is determined by the scale and distribution of the data collection. To determine the statistical relevance of the analysis, an average is determined so that points (gradients) outside of any immediate measurement can be included to determine their predicted behavior. This is due to the limitations of the applied statistical and data collection methods, and interpolation is required to predict the behavior of particles, points, and locations that are not directly measurable. Interpolation is the process by which a surface is created, usually a raster dataset, through the input of data collected at a number of sample points. There are several forms of interpolation, each of which treats the data differently, depending on the properties of the data set. In comparing interpolation methods, the first consideration should be whether or not the source data will change (exact or approximate). Next is whether the method is subjective, a human interpretation, or objective. Then there is the nature of the transitions between points: are they abrupt or gradual? Finally, there is whether a method is global (it uses the entire data set to form the model) or local (an algorithm is repeated for each small section of terrain). Interpolation is a justified measurement because of the spatial autocorrelation principle, which recognizes that data collected at any position will have a great similarity to, or influence on, those locations within its immediate vicinity. Digital elevation models, triangulated irregular networks, edge-finding algorithms, Thiessen polygons, Fourier analysis, (weighted) moving averages, inverse distance weighting, kriging, splines, and trend surface analysis are all mathematical methods to produce interpolative data.
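Of those methods, inverse distance weighting is perhaps the simplest to sketch. The following Python function (gauge locations and values are invented) estimates a value at an unsampled location as a distance-weighted average of nearby point measurements, a local and gradual interpolator in the terms used above.

```python
import numpy as np

def idw(x, y, samples, power=2.0):
    """Inverse distance weighting over (sx, sy, value) point measurements.

    Returns the weighted average of sample values, weights ~ 1 / d**power.
    """
    pts = np.array([(sx, sy) for sx, sy, _ in samples], dtype=float)
    vals = np.array([v for _, _, v in samples], dtype=float)
    d = np.hypot(pts[:, 0] - x, pts[:, 1] - y)
    if np.any(d == 0):
        return float(vals[np.argmin(d)])  # exact at a sample point
    w = 1.0 / d ** power
    return float(np.sum(w * vals) / np.sum(w))

# Hypothetical rainfall gauges (x, y, mm) and an unsampled location.
gauges = [(0.0, 0.0, 10.0), (4.0, 0.0, 20.0), (0.0, 3.0, 30.0)]
print(round(idw(1.0, 1.0, gauges), 2))
```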
Geocoding is interpolating spatial locations (X, Y coordinates) from street addresses or any other spatially referenced data such as ZIP codes, parcel lots and address locations. A reference theme is required to geocode individual addresses, such as a road centerline file with address ranges. Individual address locations have historically been interpolated, or estimated, by examining address ranges along a road segment; these ranges are usually provided in the form of a table or database. The software will then place a dot approximately where that address belongs along the segment of centerline. For example, an address point of 500 will be at the midpoint of a line segment that starts with address 1 and ends with address 1,000. Geocoding can also be applied against actual parcel data, typically from municipal tax maps. In this case, the result of the geocoding will be an actual positioned location, as opposed to an interpolated point; this approach is being increasingly used to provide more precise location information. Reverse geocoding is the process of returning an estimated street address number as it relates to a given coordinate. For example, a user can click on a road centerline theme (thus providing a coordinate) and have information returned that reflects the estimated house number. This house number is interpolated from a range assigned to that road segment. If the user clicks at the midpoint of a segment that starts with address 1 and ends with 100, the returned value will be somewhere near 50. Note that reverse geocoding does not return actual addresses, only estimates of what should be there based on the predetermined range. Coupled with GIS, multi-criteria decision analysis (MCDA) methods support decision-makers in analysing a set of alternative spatial solutions, such as the most likely ecological habitat for restoration, against multiple criteria, such as vegetation cover or roads. MCDA uses decision rules to aggregate the criteria, which allows the alternative solutions to be ranked or prioritised.[43] GIS MCDA may reduce the costs and time involved in identifying potential restoration sites. GIS or spatial data mining is the application of data mining methods to spatial data. Data mining, which is the partially automated search for hidden patterns in large databases, offers great potential benefits for applied GIS-based decision making. Typical applications include environmental monitoring. A characteristic of such applications is that spatial correlation between data measurements requires the use of specialized algorithms for more efficient data analysis.[44] Cartography is the design and production of maps, or visual representations of spatial data. The vast majority of modern cartography is done with the help of computers, usually using GIS, but production of quality cartography is also achieved by importing layers into a design program to refine them. Most GIS software gives the user substantial control over the appearance of the data. Cartographic work serves two major functions: First, it produces graphics on the screen or on paper that convey the results of analysis to the people who make decisions about resources. Wall maps and other graphics can be generated, allowing the viewer to visualize and thereby understand the results of analyses or simulations of potential events. Web map servers facilitate distribution of generated maps through web browsers using various implementations of web-based application programming interfaces (AJAX, Java, Flash, etc.). Second, other database information can be generated for further analysis or use. An example would be a list of all addresses within one mile (1.6 km) of a toxic spill. An archeochrome is a new way of displaying spatial data. It is a thematic display on a 3D map that is applied to a specific building or a part of a building, and is suited to the visual display of heat-loss data.
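Returning to address-range geocoding: the following Python sketch shows both directions of the estimation described above, interpolating a coordinate from an address range and estimating a house number from a position along a segment. Coordinates and ranges are invented.

```python
def geocode(number, seg_start, seg_end, range_start, range_end):
    """Linear interpolation of an address along a street segment.

    seg_start/seg_end: (x, y) endpoints of the centerline segment;
    range_start/range_end: the address range assigned to the segment.
    """
    f = (number - range_start) / (range_end - range_start)
    return (seg_start[0] + f * (seg_end[0] - seg_start[0]),
            seg_start[1] + f * (seg_end[1] - seg_start[1]))

def reverse_geocode(point_fraction, range_start, range_end):
    """Estimate the house number at a fractional position along a segment."""
    return round(range_start + point_fraction * (range_end - range_start))

print(geocode(500, (0.0, 0.0), (100.0, 0.0), 1, 1000))  # near the midpoint
print(reverse_geocode(0.5, 1, 100))                     # ~50, as in the text
```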
Traditional maps are abstractions of the real world, a sampling of important elements portrayed on a sheet of paper with symbols to represent physical objects. People who use maps must interpret these symbols. Topographic maps show the shape of the land surface with contour lines or with shaded relief. Today, graphic display techniques such as shading based on altitude in a GIS can make relationships among map elements visible, heightening one's ability to extract and analyze information. For example, two types of data were combined in a GIS to produce a perspective view of a portion of San Mateo County, California. A GIS was used to register and combine the two images to render the three-dimensional perspective view looking down the San Andreas Fault, using the Thematic Mapper image pixels, but shaded using the elevation of the landforms. The GIS display depends on the viewing point of the observer and the time of day of the display, to properly render the shadows created by the sun's rays at that latitude, longitude, and time of day. In recent years there has been a proliferation of free-to-use and easily accessible mapping software such as the proprietary web applications Google Maps and Bing Maps, as well as the free and open-source alternative OpenStreetMap. These services give the public access to huge amounts of geographic data, perceived by many users to be as trustworthy and usable as professional information.[45] For example, during the COVID-19 pandemic, web maps hosted on dashboards were used to rapidly disseminate case data to the general public.[46] Some of them, like Google Maps and OpenLayers, expose an application programming interface (API) that enables users to create custom applications. These toolkits commonly offer street maps, aerial/satellite imagery, geocoding, searches, and routing functionality. Web mapping has also uncovered the potential of crowdsourcing geodata in projects like OpenStreetMap, which is a collaborative project to create a free editable map of the world. These mashup projects have proven to provide a high level of value and benefit to end users beyond that possible through traditional geographic information.[47][48] Web mapping is not without its drawbacks, as it allows for the creation and distribution of maps by people without proper cartographic training.[49] This has led to maps that ignore cartographic conventions and are potentially misleading, with one study finding that more than half of United States state government COVID-19 dashboards did not follow these conventions.[50][51] Since its origin in the 1960s, GIS has been used in an ever-increasing range of applications, corroborating the widespread importance of location and aided by the continuing reduction in the barriers to adopting geospatial technology. The perhaps hundreds of different uses of GIS can be classified in several ways: The implementation of a GIS is often driven by jurisdictional (such as a city), purpose, or application requirements. Generally, a GIS implementation may be custom-designed for an organization.
Hence, a GIS deployment developed for an application, jurisdiction, enterprise, or purpose may not be necessarily interoperable or compatible with a GIS that has been developed for some other application, jurisdiction, enterprise, or purpose.[62] GIS is also diverging into location-based services, which allow GPS-enabled mobile devices to display their location in relation to fixed objects (nearest restaurant, gas station, fire hydrant) or mobile objects (friends, children, police car), or to relay their position back to a central server for display or other processing. GIS is also used in digital marketing and SEO for audience segmentation based on location.[63][64] Geospatial disaster response uses geospatial data and tools to help emergency responders, land managers, and scientists respond to disasters. Geospatial data can help save lives, reduce damage, and improve communication. It can be used by federal authorities like FEMA to create maps that show the extent of a disaster, the location of people in need, and the location of debris; to create models that estimate the number of people at risk and the amount of damage; to improve communication between emergency responders, land managers, and scientists; to help determine where to allocate resources, such as emergency medical resources or search and rescue teams; and to plan evacuation routes and identify which areas are most at risk. In the United States, FEMA's Response Geospatial Office (RGO) is responsible for the agency's capture, analysis and development of GIS products to enhance situational awareness and enable expeditious and effective decision making. The RGO's mission is to support decision makers in understanding the size, scope, and extent of disaster impacts so they can deliver resources to the communities most in need.[67] The use of digital maps generated by GIS has also influenced the development of an academic field known as spatial humanities.[75] The Open Geospatial Consortium (OGC) is an international industry consortium of 384 companies, government agencies, universities, and individuals participating in a consensus process to develop publicly available geoprocessing specifications. Open interfaces and protocols defined by OpenGIS Specifications support interoperable solutions that "geo-enable" the Web, wireless and location-based services, and mainstream IT, and empower technology developers to make complex spatial information and services accessible and useful with all kinds of applications. Open Geospatial Consortium protocols include Web Map Service and Web Feature Service.[79] GIS products are broken down by the OGC into two categories, based on how completely and accurately the software follows the OGC specifications. Compliant products are software products that comply with OGC's OpenGIS Specifications. When a product has been tested and certified as compliant through the OGC Testing Program, the product is automatically registered as "compliant" on the OGC website. Implementing products are software products that implement OpenGIS Specifications but have not yet passed a compliance test. Compliance tests are not available for all specifications. Developers can register their products as implementing draft or approved specifications, though OGC reserves the right to review and verify each entry. The condition of the Earth's surface, atmosphere, and subsurface can be examined by feeding satellite data into a GIS.
GIS technology gives researchers the ability to examine the variations in Earth processes over days, months, and years through the use of cartographic visualizations.[80] As an example, the changes in vegetation vigor through a growing season can be animated to determine when drought was most extensive in a particular region. The resulting graphic represents a rough measure of plant health. Working with two variables over time would then allow researchers to detect regional differences in the lag between a decline in rainfall and its effect on vegetation. GIS technology and the availability of digital data on regional and global scales enable such analyses. The satellite sensor output used to generate a vegetation graphic is produced, for example, by the advanced very-high-resolution radiometer (AVHRR). This sensor system detects the amounts of energy reflected from the Earth's surface across various bands of the spectrum for surface areas of about 1 km2 (0.39 sq mi). The satellite sensor produces images of a particular location on the Earth twice a day. AVHRR and more recently the moderate-resolution imaging spectroradiometer (MODIS) are only two of many sensor systems used for Earth surface analysis. In addition to the integration of time in environmental studies, GIS is also being explored for its ability to track and model the progress of humans throughout their daily routines. A concrete example of progress in this area is the recent release of time-specific population data by the U.S. Census. In this data set, the populations of cities are shown for daytime and evening hours, highlighting the pattern of concentration and dispersion generated by North American commuting patterns. The manipulation and generation of these data would not have been possible without GIS. Using models to project the data held by a GIS forward in time has enabled planners to test policy decisions using spatial decision support systems. Tools and technologies emerging from the World Wide Web Consortium's Semantic Web are proving useful for data integration problems in information systems. Correspondingly, such technologies have been proposed as a means to facilitate interoperability and data reuse among GIS applications and also to enable new analysis mechanisms.[81][82][83][84] Ontologies are a key component of this semantic approach, as they allow a formal, machine-readable specification of the concepts and relationships in a given domain. This in turn allows a GIS to focus on the intended meaning of data rather than its syntax or structure. For example, reasoning that a land cover type classified as deciduous needleleaf trees in one dataset is a specialization or subset of the land cover type forest in another, more roughly classified dataset can help a GIS automatically merge the two datasets under the more general land cover classification. Tentative ontologies have been developed in areas related to GIS applications, for example the hydrology ontology[85] developed by the Ordnance Survey in the United Kingdom and the SWEET ontologies[86] developed by NASA's Jet Propulsion Laboratory.
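A toy version of that subsumption reasoning can be sketched in a few lines of Python. The hierarchy below echoes the land-cover example but is invented; real systems would encode it in a formal ontology language such as OWL.

```python
# A hypothetical subsumption hierarchy for land-cover terms.
parent = {
    "deciduous needleleaf trees": "forest",
    "evergreen needleleaf trees": "forest",
    "forest": "vegetated land",
}

def subsumed_by(term, ancestor):
    """True if `term` is `ancestor` or a specialization of it."""
    while term is not None:
        if term == ancestor:
            return True
        term = parent.get(term)   # climb one level; None at the root
    return False

# Merging two datasets under the coarser classification:
print(subsumed_by("deciduous needleleaf trees", "forest"))  # True
```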
Also, simpler ontologies and semantic metadata standards are being proposed by the W3C Geo Incubator Group[87] to represent geospatial data on the web. GeoSPARQL is a standard developed by the Ordnance Survey, United States Geological Survey, Natural Resources Canada, Australia's Commonwealth Scientific and Industrial Research Organisation and others to support ontology creation and reasoning using well-understood OGC literals (GML, WKT), topological relationships (Simple Features, RCC8, DE-9IM), RDF and the SPARQL database query protocols. Recent research results in this area can be seen in the International Conference on Geospatial Semantics[88] and the Terra Cognita – Directions to the Geospatial Semantic Web[89] workshop at the International Semantic Web Conference. With the popularization of GIS in decision making, scholars have begun to scrutinize the social and political implications of GIS.[90][91][45] GIS can also be misused to distort reality for individual and political gain.[92][93] It has been argued that the production, distribution, utilization, and representation of geographic information are largely related to the social context, and that geographic information has the potential to increase citizen trust in government.[94] Other related topics include discussion on copyright, privacy, and censorship. A more optimistic social approach to GIS adoption is to use it as a tool for public participation. At the end of the 20th century, GIS began to be recognized as a tool that could be used in the classroom.[95][96][97] The benefits of GIS in education seem focused on developing spatial cognition, but there is not enough bibliography or statistical data to show the concrete scope of the use of GIS in education around the world, although the expansion has been faster in those countries where the curriculum mentions it.[98]: 36 GIS seems to provide many advantages in teaching geography because it allows for analysis based on real geographic data and also helps raise research questions from teachers and students in the classroom. It also contributes to improvement in learning by developing spatial and geographical thinking and, in many cases, student motivation.[98]: 38 Courses in GIS are also offered by educational institutions.[99][100] GIS has proven to be an organization-wide, enterprise and enduring technology that continues to change how local government operates.[101] Government agencies have adopted GIS technology as a method to better manage the following areas of government organization: The open data initiative is pushing local government to take advantage of technology such as GIS, as it encompasses the requirements to fit the open data/open government model of transparency.[101] With open data, local government organizations can implement citizen engagement applications and online portals, allowing citizens to see land information, report potholes and signage issues, view and sort parks by assets, view real-time crime rates and utility repairs, and much more.[103][104] The push for open data within government organizations is driving the growth in local government GIS technology spending and database management.
https://en.wikipedia.org/wiki/GIS
GazoPa[1] was an image search engine, closed in 2011,[2] that used features from an image to search for and identify similar images. GazoPa debuted at TechCrunch50 in 2008 before launching into a state of open beta in 2009.[3] GazoPa branched out and released a flower photo community site called "GazoPa Bloom" in 2010. This site was for exploring flower images and, if users needed help identifying a flower, uploading images for other people to try to identify. Both sites closed to the public in 2011, when the company decided to focus on other areas of its business.[4]
https://en.wikipedia.org/wiki/GazoPa
An image retrieval system is a computer system used for browsing, searching and retrieving images from a large database of digital images. Most traditional and common methods of image retrieval utilize some method of adding metadata such as captioning, keywords, title or descriptions to the images, so that retrieval can be performed over the annotation words. Manual image annotation is time-consuming, laborious and expensive; to address this, there has been a large amount of research done on automatic image annotation. Additionally, the increase in social web applications and the semantic web has inspired the development of several web-based image annotation tools. The first microcomputer-based image database retrieval system was developed at MIT, in the 1990s, by Banireddy Prasaad, Amar Gupta, Hoo-min Toong, and Stuart Madnick.[1] A 2008 survey article documented progress after 2007.[2] Image search is a specialized data search used to find images. To search for images, a user may provide query terms such as a keyword or an image file/link, or click on some image, and the system will return images "similar" to the query. The similarity used for the search criteria could be meta tags, color distribution in images, region/shape attributes, etc. It is crucial to understand the scope and nature of image data in order to determine the complexity of image search system design. The design is also largely influenced by factors such as the diversity of the user base and the expected user traffic for a search system. Along this dimension, search data can be classified into the following categories: There are evaluation workshops for image retrieval systems aiming to investigate and improve the performance of such systems.
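A minimal sketch of the traditional metadata-driven approach, with invented filenames and keywords: an inverted index from annotation words to images, queried by keyword conjunction.

```python
from collections import defaultdict

# Hypothetical manual annotations: image -> set of keywords.
annotations = {
    "img001.jpg": {"beach", "sunset", "ocean"},
    "img002.jpg": {"mountain", "snow"},
    "img003.jpg": {"beach", "volleyball"},
}

index = defaultdict(set)                  # inverted index: keyword -> images
for image, words in annotations.items():
    for w in words:
        index[w].add(image)

def search(*terms):
    """Return the images annotated with every query term."""
    sets = [index.get(t, set()) for t in terms]
    return set.intersection(*sets) if sets else set()

print(search("beach"))                    # {'img001.jpg', 'img003.jpg'}
```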
https://en.wikipedia.org/wiki/Image_retrieval
This is a list of publicly availablecontent-based image retrieval(CBIR) engines. These image search engines look at the content (pixels) of images in order to return results that match a particular query.
https://en.wikipedia.org/wiki/List_of_CBIR_engines
Macroglossa was a visual search engine based on the comparison of images,[1][2] developed by an Italian group. Development of the project began in 2009, and in April 2010 the first public alpha was released.[3] Users could upload photos or images that they could not identify, to determine what the images contained. Macroglossa compared images to return search results based on specific search categories. The engine did not use technologies and solutions such as OCR, tags, or vocabulary trees; the comparison was based directly on the contents of the image about which the user wanted to know more. Included features were the categorization of the elements and the ability to search specific portions of the image or to start a search from a video file,[4] but the main function was to simulate a digital eye in trying to find similarities to an unknown subject. This technology allowed users to pull results from collections of visual content[5] without using tags for search. The visuals could be crowdsourced. In addition, Macroglossa could also be used as a reverse image search to find orphan works and possible violations of image copyright. Macroglossa supported all popular image extensions, such as JPEG, PNG, BMP and GIF, and video formats such as AVI, MOV, MP4, M4V, 3GP, WMV and MPEG. Macroglossa entered the beta stage in September 2011,[6] and at the same time opened to the public the opportunity to use its developed interfaces (APIs for web and mobile applications) in order to expand the use of the engine in the B2B and B2C fields; Macroglossa became a SaaS. The API was distributed on three levels: free, basic, and premium. The free API had limited use, but basic and premium did not. The premium API also offered custom services, allowing customers to extend and mold the features offered by the computer vision engine.[7] The service was discontinued; the site has been offline since February 2016.
https://en.wikipedia.org/wiki/Macroglossa_Visual_Search
MPEG-7 is a multimedia content description standard. It was standardized in ISO/IEC 15938 (Multimedia content description interface).[1][2][3][4] This description is associated with the content itself, to allow fast and efficient searching for material that is of interest to the user. MPEG-7 is formally called Multimedia Content Description Interface. Thus, it is not a standard which deals with the actual encoding of moving pictures and audio, like MPEG-1, MPEG-2 and MPEG-4. It uses XML to store metadata, and can be attached to timecode in order to tag particular events, or to synchronise lyrics to a song, for example. It was designed to standardize: The combination of MPEG-4 and MPEG-7 has sometimes been referred to as MPEG-47.[5] MPEG-7 is intended to complement the previous MPEG standards by standardizing multimedia metadata: information about the content, not the content itself. MPEG-7 can be used independently of the other MPEG standards; the description might even be attached to an analog movie. The representation that is defined within MPEG-4, i.e. the representation of audio-visual data in terms of objects, is however very well suited to what will be built on the MPEG-7 standard. This representation is basic to the process of categorization. In addition, MPEG-7 descriptions could be used to improve the functionality of previous MPEG standards. With these tools, we can build an MPEG-7 Description and deploy it. According to the requirements document, "a Description consists of a Description Scheme (structure) and the set of Descriptor Values (instantiations) that describe the Data." A Descriptor Value is "an instantiation of a Descriptor for a given data set (or subset thereof)." The Descriptor is the syntactic and semantic definition of the content. Extraction algorithms are outside the scope of the standard, because their standardization is not required to allow interoperability. The MPEG-7 standard (ISO/IEC 15938) consists of different parts, each of which covers a certain aspect of the whole specification. An MPEG-7 architecture requirement is that the description must be separate from the audiovisual content. On the other hand, there must be a relation between the content and its description; thus the description is multiplexed with the content itself. MPEG-7 uses the following tools: There are many applications and application domains which will benefit from the MPEG-7 standard. A few application examples are: The MPEG-7 standard was originally written in XML Schema (XSD), which constitutes semi-structured data. For example, the running time of a movie annotated using MPEG-7 in XML is machine-readable data, so software agents will know that the number expressing the running time is a positive integer, but such data is not machine-interpretable (it cannot be understood by agents), because it does not convey semantics (meaning), a problem known as the "semantic gap". To address this issue, there were many attempts to map the MPEG-7 XML Schema to the Web Ontology Language (OWL), which is a structured-data equivalent of the terms of the MPEG-7 standard (MPEG-7Ontos, COMM, SWIntO, etc.). However, these mappings did not really bridge the semantic gap, because low-level video features alone are inadequate for representing video semantics.[9] In other words, annotating an automatically extracted video feature, such as color distribution, does not provide the meaning of the actual visual content.[10]
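As a loose illustration of what such an XML description attached to content can look like, the Python sketch below assembles a small time-coded annotation. The element names follow the general shape of MPEG-7 descriptions but are simplified here and should not be read as the normative ISO/IEC 15938 schema.

```python
import xml.etree.ElementTree as ET

# Simplified, MPEG-7-style element names; not the normative schema.
mpeg7 = ET.Element("Mpeg7")
desc = ET.SubElement(mpeg7, "Description")
content = ET.SubElement(desc, "MultimediaContent")
segment = ET.SubElement(content, "VideoSegment", id="shot_01")

# A time-coded annotation: metadata attached to a timecode.
time = ET.SubElement(segment, "MediaTime")
ET.SubElement(time, "MediaTimePoint").text = "T00:01:30"
ET.SubElement(time, "MediaDuration").text = "PT15S"

ann = ET.SubElement(segment, "TextAnnotation")
ET.SubElement(ann, "FreeTextAnnotation").text = "goal scored"

print(ET.tostring(mpeg7, encoding="unicode"))
```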
https://en.wikipedia.org/wiki/MPEG-7
Cell lists (also sometimes referred to as cell linked-lists) are a data structure used in molecular dynamics simulations to find all pairs of atoms within a given cut-off distance of each other. These pairs are needed to compute the short-range non-bonded interactions in a system, such as Van der Waals forces or the short-range part of the electrostatic interaction when using Ewald summation. Cell lists work by subdividing the simulation domain into cells with an edge length greater than or equal to the cut-off radius of the interaction to be computed. The particles are sorted into these cells and the interactions are computed between particles in the same or neighbouring cells. In its most basic form, the non-bonded interactions for a cut-off distance $r_c$ are computed as follows: Since the cell length is at least $r_c$ in all dimensions, no particles within $r_c$ of each other can be missed. Given a simulation with $N$ particles with a homogeneous particle density, the number of cells $m$ is proportional to $N$ and inversely proportional to the cut-off radius (i.e. if $N$ increases, so does the number of cells). The average number of particles per cell $\overline{c}=N/m$ therefore does not depend on the total number of particles. The cost of interacting two cells is in $\mathcal{O}(\overline{c}^2)$. The number of cell pairs is proportional to the number of cells, which is again proportional to the number of particles $N$. The total cost of finding all pairwise distances within a given cut-off is in $\mathcal{O}(N\overline{c})\in \mathcal{O}(N)$, which is significantly better than computing the $\mathcal{O}(N^2)$ pairwise distances naively. In most simulations, periodic boundary conditions are used to avoid imposing artificial boundary conditions. Using cell lists, these boundaries can be implemented in two ways. In the ghost cells approach, the simulation box is wrapped in an additional layer of cells. These cells contain periodically wrapped copies of the corresponding simulation cells inside the domain. Although the data, and usually also the computational cost, is doubled for interactions over the periodic boundary, this approach has the advantage of being straightforward to implement and very easy to parallelize, since cells will only interact with their geographical neighbours. Instead of creating ghost cells, cell pairs that interact over a periodic boundary can also use a periodic correction vector $\mathbf{q}_{\alpha\beta}$. This vector, which can be stored or computed for every cell pair $(C_\alpha, C_\beta)$, contains the correction which needs to be applied to "wrap" one cell around the domain to neighbour the other. The pairwise distance between two particles $p_\alpha \in C_\alpha$ and $p_\beta \in C_\beta$ is then computed as $\|\mathbf{x}_\alpha - \mathbf{x}_\beta + \mathbf{q}_{\alpha\beta}\|$. This approach, although more efficient than using ghost cells, is less straightforward to implement (the cell pairs need to be identified over the periodic boundaries and the vector $\mathbf{q}_{\alpha\beta}$ needs to be computed/stored).
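A minimal Python sketch of the basic (non-periodic) scheme described above, with invented box size and cut-off: particles are binned into cells of edge length at least $r_c$, and distances are evaluated only against the same cell and its 26 neighbours.

```python
import numpy as np
from collections import defaultdict
from itertools import product

def cell_list_pairs(pos, box, r_c):
    """Return all particle pairs closer than r_c, via a cell list.

    pos: (N, 3) array of coordinates in [0, box)^3 (no periodic wrap).
    """
    n_cells = max(1, int(box // r_c))          # cells per dimension
    edge = box / n_cells                       # actual edge length >= r_c
    cells = defaultdict(list)
    for i, p in enumerate(pos):                # sort particles into cells
        cells[tuple((p // edge).astype(int))].append(i)

    pairs = []
    for (cx, cy, cz), members in cells.items():
        # Visit the cell itself and its (up to) 26 neighbours;
        # j > i keeps each pair exactly once.
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            other = (cx + dx, cy + dy, cz + dz)
            for i in members:
                for j in cells.get(other, ()):
                    if j > i and np.linalg.norm(pos[i] - pos[j]) < r_c:
                        pairs.append((i, j))
    return pairs

# Tiny usage example with random coordinates.
rng = np.random.default_rng(0)
print(len(cell_list_pairs(rng.uniform(0, 10.0, (100, 3)), 10.0, 2.5)))
```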
Despite reducing the computational cost of finding all pairs within a given cut-off distance from $\mathcal{O}(N^2)$ to $\mathcal{O}(N)$, the cell list algorithm listed above still has some inefficiencies. Consider a computational cell in three dimensions with edge length equal to the cut-off radius $r_c$. The pairwise distance between all particles in the cell and in one of the neighbouring cells is computed. The cell has 26 neighbours: 6 sharing a common face, 12 sharing a common edge and 8 sharing a common corner. Of all the pairwise distances computed, only about 16% will actually be less than or equal to $r_c$. In other words, 84% of all pairwise distance computations are spurious. One way of overcoming this inefficiency is to partition the domain into cells of edge length smaller than $r_c$. The pairwise interactions are then not just computed between neighboring cells, but between all cells within $r_c$ of each other (first suggested in[1] and implemented and analysed in[2][3] and[4]). This approach can be taken to the limit wherein each cell holds at most one single particle, thereby reducing the number of spurious pairwise distance evaluations to zero. This gain in efficiency, however, is quickly offset by the number of cells $C_\beta$ that need to be inspected for every interaction with a cell $C_\alpha$, which, for example in three dimensions, grows cubically with the inverse of the cell edge length. Setting the edge length to $r_c/2$, however, already reduces the number of spurious distance evaluations to 63%. Another approach is outlined and tested in Gonnet,[5] in which the particles are first sorted along the axis connecting the cell centers. This approach generates only about 40% spurious pairwise distance computations, yet carries an additional cost due to sorting the particles.
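The 16% figure follows from a quick volume argument: distances are evaluated against the $3\times 3\times 3$ block of cells around a particle, while only pairs inside a sphere of radius $r_c$ matter, so the useful fraction is roughly

$$\frac{V_{\text{sphere}}}{V_{3\times 3\times 3}}=\frac{\tfrac{4}{3}\pi r_c^{3}}{(3r_c)^{3}}=\frac{4\pi}{81}\approx 0.155,$$

leaving about 84% of the evaluations spurious.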
https://en.wikipedia.org/wiki/Cell_lists
Analogical modeling (AM) is a formal theory of exemplar-based analogical reasoning, proposed by Royal Skousen, professor of Linguistics and English language at Brigham Young University in Provo, Utah. It is applicable to language modeling and other categorization tasks. Analogical modeling is related to connectionism and nearest neighbor approaches, in that it is data-based rather than abstraction-based; but it is distinguished by its ability to cope with imperfect datasets (such as those caused by simulated short-term memory limits) and to base predictions on all relevant segments of the dataset, whether near or far. In language modeling, AM has successfully predicted empirically valid forms for which no theoretical explanation was known (see the discussion of Finnish morphology in Skousen et al. 2002). An exemplar-based model consists of a general-purpose modeling engine and a problem-specific dataset. Within the dataset, each exemplar (a case to be reasoned from, or an informative past experience) appears as a feature vector: a row of values for the set of parameters that define the problem. For example, in a spelling-to-sound task, the feature vector might consist of the letters of a word. Each exemplar in the dataset is stored with an outcome, such as a phoneme or phone to be generated. When the model is presented with a novel situation (in the form of an outcome-less feature vector), the engine algorithmically sorts the dataset to find exemplars that helpfully resemble it, and selects one, whose outcome is the model's prediction. The particulars of the algorithm distinguish one exemplar-based modeling system from another. In AM, we think of the feature values as characterizing a context, and the outcome as a behavior that occurs within that context. Accordingly, the novel situation is known as the given context. Given the known features of the context, the AM engine systematically generates all contexts that include it (all of its supracontexts), and extracts from the dataset the exemplars that belong to each. The engine then discards those supracontexts whose outcomes are inconsistent (this measure of consistency is discussed further below), leaving an analogical set of supracontexts, and probabilistically selects an exemplar from the analogical set with a bias toward those in large supracontexts. This multilevel search exponentially magnifies the likelihood of a behavior's being predicted as it occurs reliably in settings that specifically resemble the given context. AM performs the same process for each case it is asked to evaluate. The given context, consisting of $n$ variables, is used as a template to generate $2^n$ supracontexts. Each supracontext is a set of exemplars in which one or more variables have the same values that they do in the given context, and the other variables are ignored. In effect, each is a view of the data, created by filtering for some criteria of similarity to the given context, and the total set of supracontexts exhausts all such views. Alternatively, each supracontext is a theory of the task or a proposed rule whose predictive power needs to be evaluated. It is important to note that the supracontexts are not equal peers of one another; they are arranged by their distance from the given context, forming a hierarchy. If a supracontext specifies all of the variables that another one does and more, it is a subcontext of that other one, and it lies closer to the given context.
(The hierarchy is not strictly branching; each supracontext can itself be a subcontext of several others, and can have several subcontexts.) This hierarchy becomes significant in the next step of the algorithm. The engine now chooses the analogical set from among the supracontexts. A supracontext may contain exemplars that only exhibit one behavior; it is deterministically homogeneous and is included. It is a view of the data that displays regularity, or a relevant theory that has never yet been disproven. A supracontext may exhibit several behaviors, but contain no exemplars that occur in any more specific supracontext (that is, in any of its subcontexts); in this case it is non-deterministically homogeneous and is included. Here there is no great evidence that a systematic behavior occurs, but also no counterargument. Finally, a supracontext may be heterogeneous, meaning that it exhibits behaviors that are found in a subcontext (closer to the given context), and also behaviors that are not. Where the ambiguous behavior of the non-deterministically homogeneous supracontext was accepted, this is rejected, because the intervening subcontext demonstrates that there is a better theory to be found. The heterogeneous supracontext is therefore excluded. This guarantees that we see an increase in meaningfully consistent behavior in the analogical set as we approach the given context. With the analogical set chosen, each appearance of an exemplar (for a given exemplar may appear in several of the analogical supracontexts) is given a pointer to every other appearance of an exemplar within its supracontexts. One of these pointers is then selected at random and followed, and the exemplar to which it points provides the outcome. This gives each supracontext an importance proportional to the square of its size, and makes each exemplar likely to be selected in direct proportion to the sum of the sizes of all analogically consistent supracontexts in which it appears. Then, of course, the probability of predicting a particular outcome is proportional to the summed probabilities of all the exemplars that support it. (Skousen 2002, in Skousen et al. 2002, pp. 11–25, and Skousen 2003, both passim) Given a context with $n$ elements: This terminology is best understood through an example. In the example used in the second chapter of Skousen (1989), each context consists of three variables with potential values 0–3. The two outcomes for the dataset are e and r, and the exemplars are: We define a network of pointers like so: The solid lines represent pointers between exemplars with matching outcomes; the dotted lines represent pointers between exemplars with non-matching outcomes. The statistics for this example are as follows: Behavior can only be predicted for a given context; in this example, let us predict the outcome for the context "3 1 2". To do this, we first find all of the contexts containing the given context; these contexts are called supracontexts. We find the supracontexts by systematically eliminating the variables in the given context; with $m$ variables, there will generally be $2^m$ supracontexts. The following table lists each of the sub- and supracontexts; x̄ means "not x", and "-" means "anything". These contexts are shown in the Venn diagram below: The next step is to determine which exemplars belong to which contexts in order to determine which of the contexts are homogeneous.
The table below shows each of the subcontexts, their behavior in terms of the given exemplars, and the number of disagreements within the behavior: Analyzing the subcontexts in the table above, we see that there is only one subcontext with any disagreements: "3 1 2̄", which in the dataset consists of "3 1 0 e" and "3 1 1 r". There are 2 disagreements in this subcontext, one pointing from each of the exemplars to the other (see the pointer network pictured above). Therefore, only supracontexts containing this subcontext will contain any disagreements. We use a simple rule to identify the homogeneous supracontexts: if the number of disagreements in the supracontext is greater than the number of disagreements in the contained subcontext, we say that it is heterogeneous; otherwise, it is homogeneous. There are three situations that produce a homogeneous supracontext: The only two heterogeneous supracontexts are "- 1 -" and "- - -". In both of them, it is the combination of the non-deterministic "3 1 2̄" with other subcontexts containing the r outcome which causes the heterogeneity. There is actually a fourth type of homogeneous supracontext: it contains more than one non-empty subcontext and it is non-deterministic, but the frequency of outcomes in each subcontext is exactly the same. Analogical modeling does not consider this situation, however, for two reasons: Next we construct the analogical set, which consists of all of the pointers and outcomes from the homogeneous supracontexts. The figure below shows the pointer network with the homogeneous contexts highlighted. The pointers are summarized in the following table: 4 of the pointers in the analogical set are associated with the outcome e, and the other 9 are associated with r. In AM, a pointer is randomly selected and the outcome it points to is predicted. With a total of 13 pointers, the probability of the outcome e being predicted is 4/13, or 30.8%, and for outcome r it is 9/13, or 69.2%. We can create a more detailed account by listing the pointers for each of the occurrences in the homogeneous supracontexts: We can then see the analogical effect of each of the instances in the data set. Analogy has been considered useful in describing language at least since the time of Saussure. Noam Chomsky and others have more recently criticized analogy as too vague to be really useful (Bańko 1991), an appeal to a deus ex machina. Skousen's proposal appears to address that criticism by proposing an explicit mechanism for analogy, which can be tested for psychological validity. Analogical modeling has been employed in experiments ranging from phonology and morphology to orthography and syntax. Though analogical modeling aims to create a model free from rules seen as contrived by linguists, in its current form it still requires researchers to select which variables to take into consideration. This is necessary because of the so-called "exponential explosion" of the processing-power requirements of the computer software used to implement analogical modeling. Recent research suggests that quantum computing could provide the solution to such performance bottlenecks (Skousen et al. 2002, see pp. 45–47).
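The generation and matching steps lend themselves to a compact sketch. The Python snippet below enumerates the $2^n$ supracontexts of a given context and collects the matching exemplars; the mini-dataset is invented for illustration (only "310 e" and "311 r" appear in the article's example above), and the homogeneity test and pointer counting are omitted for brevity.

```python
from itertools import combinations

def supracontexts(given):
    """Enumerate all 2^n supracontexts of a given context.

    Each supracontext keeps some subset of the variables and ignores
    the rest ('-'), per the generation step described above.
    """
    n = len(given)
    out = []
    for k in range(n + 1):
        for keep in combinations(range(n), k):
            out.append(tuple(given[i] if i in keep else '-'
                             for i in range(n)))
    return out

def matches(context, exemplar):
    # An exemplar belongs to a supracontext if it agrees on every
    # variable the supracontext specifies.
    return all(c == '-' or c == e for c, e in zip(context, exemplar))

# Hypothetical (feature vector, outcome) pairs; not Skousen's full dataset.
data = [("310", "e"), ("032", "r"), ("210", "r"), ("212", "r"), ("311", "r")]

for sup in supracontexts("312"):          # the given context
    members = [(ex, out) for ex, out in data if matches(sup, ex)]
    print(sup, members)
```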
https://en.wikipedia.org/wiki/Analogical_modeling
Multidimensional Expressions (MDX) is a query language for online analytical processing (OLAP) using a database management system. Much like SQL, it is a query language for OLAP cubes.[1] It is also a calculation language, with syntax similar to spreadsheet formulae. The MultiDimensional eXpressions (MDX) language provides a specialized syntax for querying and manipulating the multidimensional data stored in OLAP cubes.[1] While it is possible to translate some of these into traditional SQL, it would frequently require the synthesis of clumsy SQL expressions even for very simple MDX expressions. MDX has been embraced by a wide majority of OLAP vendors and has become the standard for OLAP systems. MDX was first introduced as part of the OLE DB for OLAP specification in 1997 from Microsoft. It was invented by a group of SQL Server engineers, including Mosha Pasumansky. The specification was quickly followed by the commercial release of Microsoft OLAP Services 7.0 in 1998 and later by Microsoft Analysis Services. The latest version of the OLE DB for OLAP specification was issued by Microsoft in 1999. While it was not an open standard, but rather a Microsoft-owned specification, it was adopted by a wide range of OLAP vendors. The XML for Analysis specification referred back to the OLE DB for OLAP specification for details on the MDX Query Language. In Analysis Services 2005, Microsoft added some MDX Query Language extensions, such as subselects. Products like Microsoft Excel 2007 started to use these new MDX Query Language extensions. Some refer to this newer variant of MDX as MDX 2005. In 2001 the XMLA Council released the XML for Analysis (XMLA) standard, which included mdXML as a query language. In the XMLA 1.1 specification, mdXML is essentially MDX wrapped in the XML <Statement> tag. There are six primary data types in MDX. The following example, adapted from the SQL Server 2000 Books Online, shows a basic MDX query that uses the SELECT statement. This query returns a result set that contains the 2002 and 2003 store sales amounts for stores in the state of California. In this example, the query defines the following result set information. Note: you can specify up to 128 query axes in an MDX query. If you create two axes, one must be the column axis and one must be the row axis, although it does not matter in which order they appear within the query. If you create a query that has only one axis, it must be the column axis. The square brackets around the particular object identifier are optional as long as the object identifier is not one of the reserved words and does not otherwise contain any characters other than letters, numbers or underscores.
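The query text referenced above did not survive extraction. A plausible reconstruction of the Books Online sample it describes is sketched below; the cube and member names ([Date].[2002], [Store].[USA].[CA], and so on) are assumptions modeled on the usual FoodMart-style Sales cube, not a verbatim quotation:

```mdx
SELECT
   { [Measures].[Store Sales] } ON COLUMNS,
   { [Date].[2002], [Date].[2003] } ON ROWS
FROM Sales
WHERE ( [Store].[USA].[CA] )
```

Here the Store Sales measure fills the column axis, the two year members fill the row axis, and the WHERE clause (the slicer) restricts the result to stores in California.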
https://en.wikipedia.org/wiki/MultiDimensional_eXpressions
In econometrics, multidimensional panel data is data of a phenomenon observed over three or more dimensions. This is in contrast with panel data, observed over two dimensions (typically, time and cross-sections). An example is a data set containing forecasts of one or multiple macroeconomic variables produced by multiple individuals (the first dimension), in multiple series (the second dimension), at multiple time periods (the third dimension) and for multiple horizons (the fourth dimension). A multidimensional panel with four dimensions can have the form $x_{isth}$, where i is the individual dimension, s is the series dimension, t is the time dimension, and h is the horizon dimension. A general multidimensional panel data regression model is written as $y_{isth} = \alpha + \beta' x_{isth} + u_{isth}$. Complex assumptions can be made on the precise structure of the correlations among errors in this model. For example, serial correlation (error terms correlated across time) has multiple distinct meanings. Error terms can be correlated across time for the same series, individual, and horizon. They can be correlated across time and across series for the same individual and horizon, etc. Similarly, heteroskedasticity can be defined across individuals for the same series, time, and horizon, across individuals and different series for the same time and horizon, etc.
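To spell out the distinction just drawn, the two readings of serial correlation can be written as follows; this is a sketch in assumed notation, with $u_{isth}$ denoting the error term of the model above:

```latex
% Correlated across time for the same individual, series, and horizon:
E(u_{isth}\, u_{ist'h}) \neq 0 \quad \text{for some } t \neq t'

% Correlated across time and across series for the same individual and horizon:
E(u_{isth}\, u_{is't'h}) \neq 0 \quad \text{for some } s \neq s',\; t \neq t'
```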
https://en.wikipedia.org/wiki/Multidimensional_panel_data
A dimension is a structure that categorizes facts and measures in order to enable users to answer business questions. Commonly used dimensions are people, products, place and time.[1][2] (Note: People and time sometimes are not modeled as dimensions.) In a data warehouse, dimensions provide structured labeling information to otherwise unordered numeric measures. The dimension is a data set composed of individual, non-overlapping data elements. The primary functions of dimensions are threefold: to provide filtering, grouping and labelling. These functions are often described as "slice and dice". A common data warehouse example involves sales as the measure, with customer and product as dimensions. In each sale a customer buys a product. The data can be sliced by removing all customers except for a group under study, and then diced by grouping by product. A dimensional data element is similar to a categorical variable in statistics. Typically dimensions in a data warehouse are organized internally into one or more hierarchies. "Date" is a common dimension, with several possible hierarchies: A slowly changing dimension is a set of data attributes that change slowly over a period of time rather than changing regularly, e.g. address or name. Attributes that change in this way are combined into a slowly changing dimension. These dimensions can be classified in types: A conformed dimension is a set of data attributes that have been physically referenced in multiple database tables using the same key value to refer to the same structure, attributes, domain values, definitions and concepts. A conformed dimension cuts across many facts. Dimensions are conformed when they are either exactly the same (including keys) or one is a proper subset of the other. Most important, the row headers produced in two different answer sets from the same conformed dimension(s) must be able to match perfectly. Conformed dimensions are either identical or strict mathematical subsets of the most granular, detailed dimension. Dimension tables are not conformed if the attributes are labeled differently or contain different values. Conformed dimensions come in several different flavors. At the most basic level, conformed dimensions mean exactly the same thing with every possible fact table to which they are joined. The date dimension table connected to the sales facts is identical to the date dimension connected to the inventory facts.[4] A junk dimension is a convenient grouping of typically low-cardinality flags and indicators. By creating an abstract dimension, these flags and indicators are removed from the fact table while placing them into a useful dimensional framework.[5] A junk dimension is a dimension table consisting of attributes that do not belong in the fact table or in any of the existing dimension tables. The nature of these attributes is usually text or various flags, e.g. non-generic comments or just simple yes/no or true/false indicators. These kinds of attributes typically remain once all the obvious dimensions in the business process have been identified, and thus the designer is faced with the challenge of where to put attributes that do not belong in the other dimensions. One solution is to create a new dimension for each of the remaining attributes, but due to their nature, it could be necessary to create a vast number of new dimensions, resulting in a fact table with a very large number of foreign keys.
The designer could also decide to leave the remaining attributes in the fact table, but this could make the row length of the table unnecessarily large if, for example, the attribute is a long text string. The solution to this challenge is to identify all the attributes and then put them into one or several junk dimensions. One junk dimension can hold several true/false or yes/no indicators that have no correlation with each other, so it is convenient to convert each indicator into a more descriptive attribute. An example would be an indicator about whether a package had arrived: instead of indicating this as "yes" or "no", it would be converted into "arrived" or "pending" in the junk dimension. The designer can choose to build the dimension table so it ends up holding all the indicators occurring with every other indicator so that all combinations are covered. This sets up a fixed size for the table itself, which would be 2^x rows, where x is the number of indicators. This solution is appropriate in situations where the designer would expect to encounter a lot of different combinations and where the possible combinations are limited to an acceptable level. In a situation where the number of indicators is large, thus creating a very big table, or where the designer only expects to encounter a few of the possible combinations, it would be more appropriate to build each row in the junk dimension as new combinations are encountered. To limit the size of the tables, multiple junk dimensions might be appropriate in other situations, depending on the correlation between various indicators. Junk dimensions are also appropriate for placing attributes like non-generic comments from the fact table. Such attributes might consist of data from an optional comment field when a customer places an order and as a result will probably be blank in many cases. Therefore, the junk dimension should contain a single row representing the blanks as a surrogate key that will be used in the fact table for every row returned with a blank comment field.[6] A degenerate dimension is a key, such as a transaction number, invoice number, ticket number, or bill-of-lading number, that has no attributes and hence does not join to an actual dimension table. Degenerate dimensions are very common when the grain of a fact table represents a single transaction item or line item, because the degenerate dimension represents the unique identifier of the parent. Degenerate dimensions often play an integral role in the fact table's primary key.[7] Dimensions are often recycled for multiple applications within the same database. For instance, a "Date" dimension can be used for "Date of Sale", as well as "Date of Delivery" or "Date of Hire". This is often referred to as a "role-playing dimension". This can be implemented using a view over the same dimension table. Usually dimension tables do not reference other dimensions via foreign keys. When this happens, the referenced dimension is called an outrigger dimension. Outrigger dimensions should be considered a data warehouse anti-pattern: it is considered a better practice to use fact tables that relate the two dimensions.[8] A conformed dimension is said to be a shrunken dimension when it includes a subset of the rows and/or columns of the original dimension.[9] A special type of dimension can be used to represent dates with a granularity of a day. Dates would be referenced in a fact table as foreign keys to a date dimension.
The date dimension primary key could be a surrogate key or a number using the format YYYYMMDD. The date dimension can include other attributes like the week of the year, or flags representing work days, holidays, etc. It could also include special rows representing not-known dates or yet-to-be-defined dates. The date dimension should be initialized with all the required dates, say the next 10 years of dates, or more if required, or past dates if events in the past are handled. Time of day, instead, is usually best represented as a timestamp in the fact table.[10] When referencing data from a metadata registry such as ISO/IEC 11179, representation terms such as "Indicator" (a boolean true/false value) and "Code" (a set of non-overlapping enumerated values) are typically used as dimensions. For example, using the National Information Exchange Model (NIEM), the data element name would be "PersonGenderCode" and the enumerated values might be "male", "female" and "unknown". In data warehousing, a dimension table is one of the set of companion tables to a fact table. The fact table contains business facts (or measures), and foreign keys which refer to candidate keys (normally primary keys) in the dimension tables. Contrary to fact tables, dimension tables contain descriptive attributes (or fields) that are typically textual fields (or discrete numbers that behave like text). These attributes are designed to serve two critical purposes: query constraining and/or filtering, and query result set labeling. Dimension attributes should be: Dimension table rows are uniquely identified by a single key field. It is recommended that the key field be a simple integer, because a key value is meaningless, used only for joining fields between the fact and dimension tables. Dimension tables often use primary keys that are also surrogate keys. Surrogate keys are often auto-generated (e.g. a Sybase or SQL Server "identity column", a PostgreSQL or Informix serial, an Oracle SEQUENCE or a column defined with AUTO_INCREMENT in MySQL). The use of surrogate dimension keys brings several advantages, including: Although surrogate key use places a burden on the ETL system, pipeline processing can be improved, and ETL tools have built-in improved surrogate key processing. The goal of a dimension table is to create standardized, conformed dimensions that can be shared across the enterprise's data warehouse environment, and enable joining to multiple fact tables representing various business processes. Conformed dimensions are important to the enterprise nature of DW/BI systems because they promote: Over time, the attributes of a given row in a dimension table may change. For example, the shipping address for a company may change. Kimball refers to this phenomenon as a slowly changing dimension. Strategies for dealing with this kind of change are divided into three categories: Source:[11] Since many fact tables in a data warehouse are time series of observations, one or more date dimensions are often needed. One of the reasons to have date dimensions is to place calendar knowledge in the data warehouse instead of hard-coding it in an application. While a simple SQL date-timestamp is useful for providing accurate information about the time a fact was recorded, it cannot give information about holidays, fiscal periods, etc. An SQL date-timestamp can still be useful to store in the fact table, as it allows for precise calculations. Having both the date and time of day in the same dimension may easily result in a huge dimension with millions of rows.
If a high amount of detail is needed, it is usually a good idea to split date and time into two or more separate dimensions. A time dimension with a grain of seconds in a day will only have 86,400 rows. A more or less detailed grain for date/time dimensions can be chosen depending on needs. As examples, date dimensions can be accurate to year, quarter, month or day, and time dimensions can be accurate to hours, minutes or seconds. As a rule of thumb, a time-of-day dimension should only be created if hierarchical groupings are needed or if there are meaningful textual descriptions for periods of time within the day (e.g. "evening rush" or "first shift"). If the rows in a fact table are coming from several time zones, it might be useful to store date and time in both local time and a standard time. This can be done by having two dimensions for each date/time dimension needed – one for local time, and one for standard time. Storing date/time in both local and standard time will allow for analysis on when facts are created in a local setting and in a global setting as well. The standard time chosen can be a global standard time (e.g. UTC), it can be the local time of the business's headquarters (e.g. CET), or any other time zone that would make sense to use.
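As a concrete illustration of the date dimension described above, here is a minimal Python sketch of a generator for such a table; the column names are illustrative choices, the surrogate key follows the YYYYMMDD convention mentioned in the text, and holiday handling is omitted:

```python
# A minimal sketch of a date-dimension generator (column names illustrative).
from datetime import date, timedelta

def build_date_dimension(start: date, days: int):
    rows = []
    for n in range(days):
        d = start + timedelta(days=n)
        rows.append({
            "date_key": int(d.strftime("%Y%m%d")),   # surrogate key, e.g. 20250101
            "full_date": d.isoformat(),
            "year": d.year,
            "quarter": (d.month - 1) // 3 + 1,
            "month": d.month,
            "week_of_year": d.isocalendar()[1],
            "is_work_day": d.weekday() < 5,          # Mon-Fri; holidays omitted
        })
    return rows

# Initialize with, say, ten years of dates, as the text suggests:
dim = build_date_dimension(date(2020, 1, 1), 365 * 10)
print(dim[0])
```

Initializing the rows up front keeps ETL loads from ever encountering a missing date key.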
https://en.wikipedia.org/wiki/Dimension_(data_warehouse)
In computer programming contexts, a data cube (or datacube) is a multi-dimensional ("n-D") array of values. Typically, the term data cube is applied in contexts where these arrays are massively larger than the hosting computer's main memory; examples include multi-terabyte/petabyte data warehouses and time series of image data. The data cube is used to represent data (sometimes called facts) along some dimensions of interest. For example, in online analytical processing (OLAP) such dimensions could be the subsidiaries a company has, the products the company offers, and time; in this setup, a fact would be a sales event where a particular product has been sold in a particular subsidiary at a particular time. In satellite image time series, the dimensions would be latitude and longitude coordinates and time; a fact (sometimes called a measure) would be a pixel at a given space and time as taken by the satellite (following some processing that is not of concern here). Even though it is called a cube (and the examples provided above happen to be 3-dimensional for brevity), a data cube generally is a multi-dimensional concept which can be 1-dimensional, 2-dimensional, 3-dimensional, or higher-dimensional. In any case, every dimension divides the data into groups of cells, whereas each cell in the cube represents a single measure of interest. Sometimes cubes hold only a few values with the rest being empty, i.e. undefined, while sometimes most or all cube coordinates hold a cell value. In the first case such data are called sparse, and in the second case they are called dense, although there is no hard delineation between the two. Multi-dimensional arrays have long been familiar in programming languages. Fortran offers arbitrarily indexed 1-D arrays and arrays of arrays, which allows the construction of higher-dimensional arrays, up to 15 dimensions. APL supports n-D arrays with a rich set of operations. All these have in common that arrays must fit into main memory and are available only while the particular program maintaining them (such as image processing software) is running. A series of data exchange formats support storage and transmission of data cube-like data, often tailored towards particular application domains. Examples include MDX for statistical (in particular, business) data, Hierarchical Data Format for general scientific data, and TIFF for imagery. In 1992, Peter Baumann introduced management of massive data cubes with high-level user functionality combined with an efficient software architecture.[1] Datacube operations include subset extraction, processing, fusion, and in general queries in the spirit of data manipulation languages like SQL. Some years later, the data cube concept was applied to describe time-varying business data as data cubes by Jim Gray, et al.,[2] and by Venky Harinarayan, Anand Rajaraman and Jeff Ullman,[3] which rank among the top 500 most cited computer science articles over a 25-year period.[4] Around that time, a working group on Multi-Dimensional Databases ("Arbeitskreis Multi-Dimensionale Datenbanken") was established at the German Gesellschaft für Informatik.[5][6] Datacube Inc. was an image processing company selling hardware and software applications for the PC market in 1996, though it did not address data cubes as such.
The EarthServer initiative has established geo data cube service requirements.[7] In 2018, the ISO SQL database language was extended with data cube functionality as "SQL – Part 15: Multi-dimensional arrays (SQL/MDA)".[8] Web Coverage Processing Service is a geo data cube analytics language issued by the Open Geospatial Consortium in 2008. In addition to the common data cube operations, the language knows about the semantics of space and time and supports both regular and irregular grid data cubes, based on the concept of coverage data. An industry standard for querying business data cubes, originally developed by Microsoft, is MultiDimensional eXpressions. Many high-level computer languages treat data cubes and other large arrays as single entities distinct from their contents. These languages, of which Fortran, APL, IDL, NumPy, PDL, and S-Lang are examples, allow the programmer to manipulate complete film clips and other data en masse with simple expressions derived from linear algebra and vector mathematics. Some languages (such as PDL) distinguish between a list of images and a data cube, while many (such as IDL) do not. Array DBMSs (database management systems) offer a data model which generically supports definition, management, retrieval, and manipulation of n-dimensional data cubes. This database category has been pioneered by the rasdaman system since 1994.[9] Multi-dimensional arrays can meaningfully represent spatio-temporal sensor, image, and simulation data, but also statistics data where the semantics of dimensions is not necessarily of a spatial or temporal nature. Generally, any kind of axis can be combined with any other into a data cube. In mathematics, a one-dimensional array corresponds to a vector, a two-dimensional array resembles a matrix; more generally, a tensor may be represented as an n-dimensional data cube. For a time sequence of color images, the array is generally four-dimensional, with the dimensions representing image X and Y coordinates, time, and RGB (or other color space) color plane. For example, the EarthServer initiative[10] unites data centers from different continents offering 3-D x/y/t satellite image time series and 4-D x/y/z/t weather data for retrieval and server-side processing through the Open Geospatial Consortium WCPS geo data cube query language standard. A data cube is also used in the field of imaging spectroscopy, since a spectrally-resolved image is represented as a three-dimensional volume. Earth observation data cubes combine satellite imagery such as Landsat 8 and Sentinel-2 with geographic information system analytics.[11] In online analytical processing (OLAP), data cubes are a common arrangement of business data suitable for analysis from different perspectives through operations like slicing, dicing, pivoting, and aggregation.
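A minimal NumPy sketch of the operations just listed, on a small dense 3-D cube whose axes (subsidiary × product × month) follow the sales example earlier in the article; the data and axis names are illustrative:

```python
# A minimal sketch of OLAP-style slicing, dicing, and aggregation on a data cube.
import numpy as np

rng = np.random.default_rng(0)
cube = rng.integers(0, 100, size=(4, 3, 12))   # 4 subsidiaries, 3 products, 12 months

slice_ = cube[1, :, :]               # "slice": fix one dimension (subsidiary 1)
dice = cube[:2, :, 0:3]              # "dice": restrict several dimensions (Q1, two subsidiaries)
rollup_products = cube.sum(axis=1)   # aggregate away the product dimension
grand_total = cube.sum()             # full aggregation

print(slice_.shape, dice.shape, rollup_products.shape, grand_total)
```

A production OLAP engine or array DBMS adds indexing, sparsity handling, and a query language on top of exactly this kind of axis-wise selection and aggregation.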
https://en.wikipedia.org/wiki/Data_cube
Natural-neighbor interpolation or Sibson interpolation is a method of spatial interpolation, developed by Robin Sibson.[1] The method is based on Voronoi tessellation of a discrete set of spatial points. This has advantages over simpler methods of interpolation, such as nearest-neighbor interpolation, in that it provides a smoother approximation to the underlying "true" function. The basic equation is: $$G(x) = \sum_{i=1}^{n} w_i(x)\, f(x_i)$$ where $G(x)$ is the estimate at $x$, $w_i$ are the weights and $f(x_i)$ are the known data at $x_i$. The weights, $w_i$, are calculated by finding how much of each of the surrounding areas is "stolen" when inserting $x$ into the tessellation: $$w_i(x) = \frac{A(x_i)}{A(x)}$$ where $A(x)$ is the volume of the new cell centered in $x$, and $A(x_i)$ is the volume of the intersection between the new cell centered in $x$ and the old cell centered in $x_i$. An alternative formulation computes the weights as $$w_i(x) = \frac{l(x_i)/d(x_i)}{\sum_{k=1}^{n} l(x_k)/d(x_k)}$$ where $l(x_i)$ is the measure of the interface between the cells linked to $x$ and $x_i$ in the Voronoi diagram (length in 2D, surface in 3D) and $d(x_i)$ is the distance between $x$ and $x_i$. There are several useful properties of natural neighbor interpolation:[4] Natural neighbor interpolation has also been implemented in a discrete form, which has been demonstrated to be computationally more efficient in at least some circumstances.[5] A form of discrete natural neighbor interpolation has also been developed that gives a measure of interpolation uncertainty.[4]
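The discrete form mentioned above lends itself to a short implementation. The following Python sketch approximates Sibson weights on a pixel grid: each data site owns the grid cells of its Voronoi region, and the weight of a site is the fraction of those cells it loses when the query point is inserted. Grid resolution, domain, and data are illustrative assumptions:

```python
# A minimal sketch of discrete natural-neighbor (Sibson) interpolation in 2-D.
import numpy as np

def discrete_sibson(sites, values, query, grid_n=400):
    xs = np.linspace(0.0, 1.0, grid_n)
    gx, gy = np.meshgrid(xs, xs)
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    # Nearest site for every grid cell, before the query point is inserted.
    d_sites = ((grid[:, None, :] - sites[None, :, :]) ** 2).sum(axis=2)
    owner = d_sites.argmin(axis=1)
    # Cells "stolen" by the query point: now closer to it than to any site.
    d_query = ((grid - query) ** 2).sum(axis=1)
    stolen = d_query < d_sites.min(axis=1)
    if not stolen.any():                       # query coincides with a site
        return values[owner[d_query.argmin()]]
    w = np.bincount(owner[stolen], minlength=len(sites)).astype(float)
    w /= w.sum()                               # stolen-area fractions = weights
    return (w * values).sum()

sites = np.array([[0.2, 0.2], [0.8, 0.3], [0.5, 0.9], [0.1, 0.7]])
values = np.array([1.0, 2.0, 3.0, 4.0])
print(discrete_sibson(sites, values, np.array([0.4, 0.5])))
```

As the grid is refined, the cell counts converge to the stolen areas A(x_i), so the result approaches the exact Sibson interpolant.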
https://en.wikipedia.org/wiki/Natural_neighbor_interpolation
In computer graphics and digital imaging, image scaling refers to the resizing of a digital image. In video technology, the magnification of digital material is known as upscaling or resolution enhancement. When scaling a vector graphic image, the graphic primitives that make up the image can be scaled using geometric transformations with no loss of image quality. When scaling a raster graphics image, a new image with a higher or lower number of pixels must be generated. In the case of decreasing the pixel number (scaling down), this usually results in a visible quality loss. From the standpoint of digital signal processing, the scaling of raster graphics is a two-dimensional example of sample-rate conversion, the conversion of a discrete signal from a sampling rate (in this case, the local sampling rate) to another. Image scaling can be interpreted as a form of image resampling or image reconstruction from the view of the Nyquist sampling theorem. According to the theorem, downsampling to a smaller image from a higher-resolution original can only be carried out after applying a suitable 2D anti-aliasing filter to prevent aliasing artifacts. The image is reduced to the information that can be carried by the smaller image. In the case of upsampling, a reconstruction filter takes the place of the anti-aliasing filter. A more sophisticated approach to upscaling treats the problem as an inverse problem, solving the question of generating a plausible image that, when scaled down, would look like the input image. A variety of techniques have been applied for this, including optimization techniques with regularization terms and the use of machine learning from examples. An image size can be changed in several ways. One of the simpler ways of increasing image size is nearest-neighbor interpolation, replacing every pixel with the nearest pixel in the output; for upscaling, this means multiple pixels of the same color will be present. This can preserve sharp details but also introduce jaggedness in previously smooth images. 'Nearest' in nearest-neighbor does not have to be the mathematical nearest. One common implementation is to always round toward zero. Rounding this way produces fewer artifacts and is faster to calculate.[citation needed] This algorithm is often preferred for images which have little to no smooth edges. A common application of this can be found in pixel art. Bilinear interpolation works by interpolating pixel color values, introducing a continuous transition into the output even where the original material has discrete transitions. Although this is desirable for continuous-tone images, this algorithm reduces contrast (sharp edges) in a way that may be undesirable for line art. Bicubic interpolation yields substantially better results, with an increase in computational cost.[citation needed] Sinc resampling, in theory, provides the best possible reconstruction for a perfectly bandlimited signal. In practice, the assumptions behind sinc resampling are not completely met by real-world digital images. Lanczos resampling, an approximation to the sinc method, yields better results. Bicubic interpolation can be regarded as a computationally efficient approximation to Lanczos resampling.[citation needed] One weakness of bilinear, bicubic, and related algorithms is that they sample a specific number of pixels.
When downscaling below a certain threshold, such as more than twice for all bi-sampling algorithms, the algorithms will sample non-adjacent pixels, which results in both losing data and rough results.[citation needed] The trivial solution to this issue is box sampling, which is to consider the target pixel a box on the original image and sample all pixels inside the box. This ensures that all input pixels contribute to the output. The major weakness of this algorithm is that it is hard to optimize.[citation needed] Another solution to the downscale problem of bi-sampling scaling is mipmaps. A mipmap is a prescaled set of downscaled copies. When downscaling, the nearest larger mipmap is used as the origin to ensure no scaling below the useful threshold of bilinear scaling. This algorithm is fast and easy to optimize. It is standard in many frameworks, such as OpenGL. The cost is using more image memory, exactly one-third more in the standard implementation. Simple interpolation based on the Fourier transform pads the frequency domain with zero components (a smooth window-based approach would reduce the ringing). Besides the good conservation (or recovery) of details, notable are the ringing and the circular bleeding of content from the left border to the right border (and the other way around). Edge-directed interpolation algorithms aim to preserve edges in the image after scaling, unlike other algorithms, which can introduce staircase artifacts. Examples of algorithms for this task include New Edge-Directed Interpolation (NEDI),[1][2] Edge-Guided Image Interpolation (EGGI),[3] Iterative Curvature-Based Interpolation (ICBI),[4] and Directional Cubic Convolution Interpolation (DCCI).[5] A 2013 analysis found that DCCI had the best scores in peak signal-to-noise ratio and structural similarity on a series of test images.[6] For magnifying computer graphics with low resolution and/or few colors (usually from 2 to 256 colors), better results can be achieved by hqx or other pixel-art scaling algorithms. These produce sharp edges and maintain a high level of detail. Vector extraction, or vectorization, offers another approach. Vectorization first creates a resolution-independent vector representation of the graphic to be scaled. Then the resolution-independent version is rendered as a raster image at the desired resolution. This technique is used by Adobe Illustrator, Live Trace, and Inkscape.[7] Scalable Vector Graphics are well suited to simple geometric images, while photographs do not fare well with vectorization due to their complexity. This method uses machine learning for more detailed images, such as photographs and complex artwork. Programs that use this method include waifu2x, Imglarger and Neural Enhance. (Figure: demonstration of conventional vs. waifu2x upscaling with noise reduction, using a detail of Phosphorus and Hesperus by Evelyn De Morgan.) AI-driven software such as the MyHeritage Photo Enhancer allows detail and sharpness to be added to historical photographs, where it is not present in the original. Image scaling is used in, among other applications, web browsers,[8] image editors, image and file viewers, software magnifiers, digital zoom, the process of generating thumbnail images, and when outputting images through screens or printers. This application is the magnification of images for home theaters for HDTV-ready output devices from PAL-resolution content, for example, from a DVD player. Upscaling is performed in real time, and the output signal is not saved.
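For reference, here is a minimal NumPy sketch of the two interpolation schemes discussed above, nearest-neighbor and bilinear, for a single-channel image; the half-pixel coordinate convention and edge clamping used here are one choice among several, not the only correct one:

```python
# A minimal sketch of nearest-neighbor and bilinear image scaling.
import numpy as np

def scale(img, out_h, out_w, method="bilinear"):
    in_h, in_w = img.shape
    # Map output pixel centers back into input coordinates (half-pixel convention).
    ys = (np.arange(out_h) + 0.5) * in_h / out_h - 0.5
    xs = (np.arange(out_w) + 0.5) * in_w / out_w - 0.5
    if method == "nearest":
        yi = np.clip(np.round(ys).astype(int), 0, in_h - 1)
        xi = np.clip(np.round(xs).astype(int), 0, in_w - 1)
        return img[yi[:, None], xi[None, :]]
    # Bilinear: weighted average of the four surrounding pixels.
    y0 = np.clip(np.floor(ys).astype(int), 0, in_h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, in_w - 1)
    y1 = np.clip(y0 + 1, 0, in_h - 1)
    x1 = np.clip(x0 + 1, 0, in_w - 1)
    wy = np.clip(ys - y0, 0, 1)[:, None]
    wx = np.clip(xs - x0, 0, 1)[None, :]
    top = img[y0[:, None], x0[None, :]] * (1 - wx) + img[y0[:, None], x1[None, :]] * wx
    bot = img[y1[:, None], x0[None, :]] * (1 - wx) + img[y1[:, None], x1[None, :]] * wx
    return top * (1 - wy) + bot * wy

img = np.arange(16, dtype=float).reshape(4, 4)
print(scale(img, 8, 8, "nearest").shape, scale(img, 8, 8).shape)
```

Extending the same index arithmetic to a 4 × 4 neighborhood with cubic weights gives bicubic interpolation.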
As pixel-art graphics are usually low-resolution, they rely on careful placement of individual pixels, often with a limited palette of colors. This results in graphics that rely on stylized visual cues to define complex shapes with little resolution, down to individual pixels. This makes scaling pixel art a particularly difficult problem. Specialized algorithms[9] were developed to handle pixel-art graphics, as the traditional scaling algorithms do not take perceptual cues into account. Since a typical application is to improve the appearance of fourth-generation and earlier video games on arcade and console emulators, many are designed to run in real time for small input images at 60 frames per second. On fast hardware, these algorithms are suitable for gaming and other real-time image processing. These algorithms provide sharp, crisp graphics, while minimizing blur. Scaling art algorithms have been implemented in a wide range of emulators such as HqMAME and DOSBox, as well as 2D game engines and game engine recreations such as ScummVM. They gained recognition with gamers, for whom these technologies encouraged a revival of 1980s and 1990s gaming experiences.[citation needed] Such filters are currently used in commercial emulators on Xbox Live, Virtual Console, and PSN to allow classic low-resolution games to be more visually appealing on modern HD displays. Recently released games that incorporate these filters include Sonic's Ultimate Genesis Collection, Castlevania: The Dracula X Chronicles, Castlevania: Symphony of the Night, and Akumajō Dracula X Chi no Rondo. A number of companies have developed techniques to upscale video frames in real time, such as when they are drawn on screen in a video game. Nvidia's deep learning super sampling (DLSS) uses deep learning to upsample lower-resolution images to a higher resolution for display on higher-resolution computer monitors.[10] AMD's FidelityFX Super Resolution 1.0 (FSR) does not employ machine learning, instead using traditional hand-written algorithms to achieve spatial upscaling on traditional shading units. FSR 2.0 utilises temporal upscaling, again with a hand-tuned algorithm. FSR standardized presets are not enforced, and some titles such as Dota 2 offer resolution sliders.[11] Other technologies include Intel XeSS and Nvidia Image Scaler (NIS).[12][13]
https://en.wikipedia.org/wiki/Image_scaling
A kernel smoother is a statistical technique to estimate a real valued function $f: \mathbb{R}^p \to \mathbb{R}$ as the weighted average of neighboring observed data. The weight is defined by the kernel, such that closer points are given higher weights. The estimated function is smooth, and the level of smoothness is set by a single parameter. Kernel smoothing is a type of weighted moving average. Let $K_{h_\lambda}(X_0, X)$ be a kernel defined by $$K_{h_\lambda}(X_0, X) = D\!\left(\frac{\left\|X - X_0\right\|}{h_\lambda(X_0)}\right)$$ where $X, X_0 \in \mathbb{R}^p$, $\left\|\cdot\right\|$ is the Euclidean norm, $h_\lambda(X_0)$ is a parameter (kernel radius), and $D(t)$ is typically a positive real valued function whose value does not increase as the distance between $X$ and $X_0$ increases. Popular kernels used for smoothing include parabolic (Epanechnikov), tricube, and Gaussian kernels. Let $Y(X): \mathbb{R}^p \to \mathbb{R}$ be a continuous function of X. For each $X_0 \in \mathbb{R}^p$, the Nadaraya-Watson kernel-weighted average (smooth Y(X) estimation) is defined by $$\hat{Y}(X_0) = \frac{\sum_{i=1}^N K_{h_\lambda}(X_0, X_i)\, Y(X_i)}{\sum_{i=1}^N K_{h_\lambda}(X_0, X_i)}$$ where N is the number of observed points and $Y(X_i)$ are the observations at the points $X_i$. In the following sections, we describe some particular cases of kernel smoothers. The Gaussian kernel is one of the most widely used kernels, and is expressed with the equation below: $$K(x^*, x_i) = \exp\!\left(-\frac{(x^* - x_i)^2}{2b^2}\right)$$ Here, b is the length scale for the input space. The k-nearest neighbor algorithm can be used for defining a k-nearest neighbor smoother as follows. For each point $X_0$, take m nearest neighbors and estimate the value of $Y(X_0)$ by averaging the values of these neighbors. Formally, $h_m(X_0) = \left\|X_0 - X_{[m]}\right\|$, where $X_{[m]}$ is the m-th closest neighbor to $X_0$, and $$D(t) = \begin{cases} 1/m & \text{if } |t| \le 1 \\ 0 & \text{otherwise} \end{cases}$$ In this example, X is one-dimensional. For each $X_0$, $\hat{Y}(X_0)$ is the average value of the 16 points closest to $X_0$ (denoted by red). The idea of the kernel average smoother is the following. For each data point $X_0$, choose a constant distance size λ (kernel radius, or window width for p = 1 dimension), and compute a weighted average for all data points that are closer than $\lambda$ to $X_0$ (the closer to $X_0$, the higher the weight). Formally, $h_\lambda(X_0) = \lambda = \text{constant}$, and D(t) is one of the popular kernels. For each $X_0$ the window width is constant, and the weight of each point in the window is schematically denoted by the yellow figure in the graph. It can be seen that the estimation is smooth, but the boundary points are biased. The reason for that is the non-equal number of points (from the right and from the left of $X_0$) in the window, when $X_0$ is close enough to the boundary. In the two previous sections we assumed that the underlying Y(X) function is locally constant, and therefore we were able to use the weighted average for the estimation. The idea of local linear regression is to fit locally a straight line (or a hyperplane for higher dimensions), and not the constant (horizontal line). After fitting the line, the estimation $\hat{Y}(X_0)$ is provided by the value of this line at the point $X_0$. By repeating this procedure for each $X_0$, one can get the estimation function $\hat{Y}(X)$. Like in the previous section, the window width is constant: $h_\lambda(X_0) = \lambda = \text{constant}$. Formally, the local linear regression is computed by solving a weighted least squares problem.
For one dimension (p = 1): $$\min_{\alpha(X_0),\, \beta(X_0)} \sum_{i=1}^N K_{h_\lambda}(X_0, X_i) \left(Y(X_i) - \alpha(X_0) - \beta(X_0) X_i\right)^2$$ giving $$\hat{Y}(X_0) = \alpha(X_0) + \beta(X_0) X_0$$ The closed-form solution is given by: $$\hat{Y}(X_0) = b(X_0)^T \left(B^T W(X_0) B\right)^{-1} B^T W(X_0) y$$ where $b(x)^T = (1, x)$, $B$ is the $N \times 2$ regression matrix whose i-th row is $b(X_i)^T$, $W(X_0)$ is the $N \times N$ diagonal matrix of kernel weights $K_{h_\lambda}(X_0, X_i)$, and $y = (Y(X_1), \dots, Y(X_N))^T$. The resulting function is smooth, and the problem with the biased boundary points is reduced. Local linear regression can be applied to any-dimensional space, though the question of what is a local neighborhood becomes more complicated. It is common to use the k nearest training points to a test point to fit the local linear regression. This can lead to high variance of the fitted function. To bound the variance, the set of training points should contain the test point in their convex hull (see Gupta et al. reference). Instead of fitting locally linear functions, one can fit polynomial functions. For p = 1, one should minimize: $$\min_{\alpha(X_0),\, \beta_j(X_0)} \sum_{i=1}^N K_{h_\lambda}(X_0, X_i) \left(Y(X_i) - \alpha(X_0) - \sum_{j=1}^d \beta_j(X_0) X_i^j\right)^2$$ with $$\hat{Y}(X_0) = \alpha(X_0) + \sum_{j=1}^d \beta_j(X_0) X_0^j$$ In the general case (p > 1), one should minimize: $$\hat{\beta}(X_0) = \arg\min_{\beta(X_0)} \sum_{i=1}^N K_{h_\lambda}(X_0, X_i) \left(Y(X_i) - b(X_i)^T \beta(X_0)\right)^2$$ with $b(X)$ a polynomial basis in the components of $X$ and $\hat{Y}(X_0) = b(X_0)^T \hat{\beta}(X_0)$.
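A minimal Python sketch of the two estimators described above for p = 1, the Nadaraya-Watson average and local linear regression, both with the Gaussian kernel; the bandwidth b and the synthetic data are illustrative:

```python
# A minimal sketch of a Nadaraya-Watson smoother and local linear regression.
import numpy as np

def gaussian_kernel(x0, x, b=0.2):
    return np.exp(-((x - x0) ** 2) / (2 * b ** 2))

def nadaraya_watson(x0, x, y, b=0.2):
    # Kernel-weighted average of the observations around x0.
    w = gaussian_kernel(x0, x, b)
    return (w * y).sum() / w.sum()

def local_linear(x0, x, y, b=0.2):
    # Solve the weighted least-squares problem min sum w_i (y_i - a - c x_i)^2.
    w = gaussian_kernel(x0, x, b)
    X = np.column_stack([np.ones_like(x), x])
    A = X.T @ (w[:, None] * X)
    beta = np.linalg.solve(A, X.T @ (w * y))
    return beta[0] + beta[1] * x0            # evaluate the fitted line at x0

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 3, 60))
y = np.sin(2 * x) + rng.normal(0, 0.2, 60)
print(nadaraya_watson(1.5, x, y), local_linear(1.5, x, y))
```

Near the boundary of the data, local_linear corrects most of the bias that nadaraya_watson exhibits, which is the behavior described in the text.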
https://en.wikipedia.org/wiki/Nearest_neighbor_smoothing
The zero-order hold (ZOH) is a mathematical model of the practical signal reconstruction done by a conventional digital-to-analog converter (DAC).[1] That is, it describes the effect of converting a discrete-time signal to a continuous-time signal by holding each sample value for one sample interval. It has several applications in electrical communication. A zero-order hold reconstructs the following continuous-time waveform from a sample sequence x[n], assuming one sample per time interval T: $$x_{\mathrm{ZOH}}(t) = \sum_{n=-\infty}^{\infty} x[n] \cdot \mathrm{rect}\!\left(\frac{t - T/2 - nT}{T}\right)$$ where $\mathrm{rect}(\cdot)$ is the rectangular function. The function $\mathrm{rect}\!\left(\frac{t - T/2}{T}\right)$ is depicted in Figure 1, and $x_{\mathrm{ZOH}}(t)$ is the piecewise-constant signal depicted in Figure 2. The equation above for the output of the ZOH can also be modeled as the output of a linear time-invariant filter with impulse response equal to a rect function, and with input being a sequence of Dirac impulses scaled to the sample values. The filter can then be analyzed in the frequency domain, for comparison with other reconstruction methods such as the Whittaker–Shannon interpolation formula suggested by the Nyquist–Shannon sampling theorem, or such as the first-order hold or linear interpolation between sample values. In this method, a sequence of Dirac impulses, $x_s(t)$, representing the discrete samples, x[n], is low-pass filtered to recover a continuous-time signal, x(t). Even though this is not what a DAC does in reality, the DAC output can be modeled by applying the hypothetical sequence of Dirac impulses, $x_s(t)$, to a linear, time-invariant filter with such characteristics (which, for an LTI system, are fully described by the impulse response) so that each input impulse results in the correct constant pulse in the output. Begin by defining a continuous-time signal from the sample values, as above but using delta functions instead of rect functions: $$x_s(t) = \sum_{n=-\infty}^{\infty} x[n] \cdot \delta\!\left(\frac{t - nT}{T}\right) = T \sum_{n=-\infty}^{\infty} x[n] \cdot \delta(t - nT).$$ The scaling by $T$, which arises naturally by time-scaling the delta function, has the result that the mean value of $x_s(t)$ is equal to the mean value of the samples, so that the lowpass filter needed will have a DC gain of 1. Some authors use this scaling,[2] while many others omit the time-scaling and the T, resulting in a low-pass filter model with a DC gain of T, and hence dependent on the units of measurement of time. The zero-order hold is the hypothetical filter or LTI system that converts the sequence of modulated Dirac impulses $x_s(t)$ to the piecewise-constant signal (shown in Figure 2): $$x_{\mathrm{ZOH}}(t) = \sum_{n=-\infty}^{\infty} x[n] \cdot \mathrm{rect}\!\left(\frac{t - nT}{T} - \frac{1}{2}\right)$$ resulting in an effective impulse response (shown in Figure 4) of: $$h_{\mathrm{ZOH}}(t) = \frac{1}{T} \mathrm{rect}\!\left(\frac{t}{T} - \frac{1}{2}\right) = \begin{cases} \frac{1}{T} & \text{if } 0 \leq t < T \\ 0 & \text{otherwise} \end{cases}$$ The effective frequency response is the continuous Fourier transform of the impulse response.
$$H_{\mathrm{ZOH}}(f) = \mathcal{F}\{h_{\mathrm{ZOH}}(t)\} = \frac{1 - e^{-i 2\pi fT}}{i 2\pi fT} = e^{-i\pi fT}\, \mathrm{sinc}(fT)$$ where $\mathrm{sinc}(x)$ is the (normalized) sinc function $\frac{\sin(\pi x)}{\pi x}$ commonly used in digital signal processing. The Laplace transform transfer function of the ZOH is found by substituting $s = i 2\pi f$: $$H_{\mathrm{ZOH}}(s) = \mathcal{L}\{h_{\mathrm{ZOH}}(t)\} = \frac{1 - e^{-sT}}{sT}$$ The fact that practical digital-to-analog converters (DAC) do not output a sequence of Dirac impulses, $x_s(t)$ (that, if ideally low-pass filtered, would result in the unique underlying bandlimited signal before sampling), but instead output a sequence of rectangular pulses, $x_{\mathrm{ZOH}}(t)$ (a piecewise constant function), means that there is an inherent effect of the ZOH on the effective frequency response of the DAC, resulting in a mild roll-off of gain at the higher frequencies (a 3.9224 dB loss at the Nyquist frequency, corresponding to a gain of sinc(1/2) = 2/π). This drop is a consequence of the hold property of a conventional DAC, and is not due to the sample and hold that might precede a conventional analog-to-digital converter (ADC).
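The roll-off quoted above is easy to check numerically; a short NumPy sketch (sample interval and sample values are illustrative):

```python
# A minimal numerical check of the ZOH frequency response derived above:
# |H_ZOH(f)| = |sinc(fT)|, with a ~3.92 dB loss at the Nyquist frequency.
import numpy as np

T = 1.0                                  # sample interval
f_nyquist = 1 / (2 * T)
gain = np.sinc(f_nyquist * T)            # np.sinc is the normalized sinc
print(gain, 2 / np.pi)                   # both ~0.63662
print(20 * np.log10(gain))               # ~ -3.9224 dB

# The piecewise-constant ZOH waveform itself is just each sample held for
# one interval, e.g. 10 output points per sample:
samples = np.array([0.0, 1.0, 0.5, -0.3])
x_zoh = np.repeat(samples, 10)
```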
https://en.wikipedia.org/wiki/Zero-order_hold
Rounding or rounding off means replacing a number with an approximate value that has a shorter, simpler, or more explicit representation. For example, replacing $23.4476 with $23.45, the fraction 312/937 with 1/3, or the expression √2 with 1.414. Rounding is often done to obtain a value that is easier to report and communicate than the original. Rounding can also be important to avoid misleadingly precise reporting of a computed number, measurement, or estimate; for example, a quantity that was computed as 123456 but is known to be accurate only to within a few hundred units is usually better stated as "about 123500". On the other hand, rounding of exact numbers will introduce some round-off error in the reported result. Rounding is almost unavoidable when reporting many computations – especially when dividing two numbers in integer or fixed-point arithmetic; when computing mathematical functions such as square roots, logarithms, and sines; or when using a floating-point representation with a fixed number of significant digits. In a sequence of calculations, these rounding errors generally accumulate, and in certain ill-conditioned cases they may make the result meaningless. Accurate rounding of transcendental mathematical functions is difficult because the number of extra digits that need to be calculated to resolve whether to round up or down cannot be known in advance. This problem is known as "the table-maker's dilemma". Rounding has many similarities to the quantization that occurs when physical quantities must be encoded by numbers or digital signals. A wavy equals sign (≈, approximately equal to) is sometimes used to indicate rounding of exact numbers, e.g. 9.98 ≈ 10. This sign was introduced by Alfred George Greenhill in 1892.[1] Ideal characteristics of rounding methods include: Because it is not usually possible for a method to satisfy all ideal characteristics, many different rounding methods exist. As a general rule, rounding is idempotent;[2] i.e., once a number has been rounded, rounding it again to the same precision will not change its value. Rounding functions are also monotonic; i.e., rounding two numbers to the same absolute precision will not exchange their order (but may give the same value). In the general case of a discrete range, they are piecewise constant functions. Typical rounding problems include: The most basic form of rounding is to replace an arbitrary number by an integer. All the following rounding modes are concrete implementations of an abstract single-argument "round()" procedure. These are true functions (with the exception of those that use randomness). These four methods are called directed rounding to an integer, as the displacements from the original number x to the rounded value y are all directed toward or away from the same limiting value (0, +∞, or −∞). Directed rounding is used in interval arithmetic and is often required in financial calculations. If x is positive, round-down is the same as round-toward-zero, and round-up is the same as round-away-from-zero. If x is negative, round-down is the same as round-away-from-zero, and round-up is the same as round-toward-zero. In any case, if x is an integer, y is just x. Where many calculations are done in sequence, the choice of rounding method can have a very significant effect on the result. A famous instance involved a new index set up by the Vancouver Stock Exchange in 1982. It was initially set at 1000.000 (three decimal places of accuracy), and after 22 months had fallen to about 520, although the market appeared to be rising.
The problem was caused by the index being recalculated thousands of times daily, and always being truncated (rounded down) to 3 decimal places, in such a way that the rounding errors accumulated. Recalculating the index for the same period using rounding to the nearest thousandth rather than truncation corrected the index value from 524.811 up to 1098.892.[3] For the examples below, sgn(x) refers to the sign function applied to the original number, x. One may round down (or take the floor, or round toward negative infinity): y is the largest integer that does not exceed x. For example, 23.7 gets rounded to 23, and −23.2 gets rounded to −24. One may also round up (or take the ceiling, or round toward positive infinity): y is the smallest integer that is not less than x. For example, 23.2 gets rounded to 24, and −23.7 gets rounded to −23. One may also round toward zero (or truncate, or round away from infinity): y is the integer that is closest to x such that it is between 0 and x (included); i.e. y is the integer part of x, without its fraction digits. For example, 23.7 gets rounded to 23, and −23.7 gets rounded to −23. One may also round away from zero (or round toward infinity): y is the integer that is closest to 0 (or equivalently, to x) such that x is between 0 and y (included). For example, 23.2 gets rounded to 24, and −23.2 gets rounded to −24. The six methods described next are called rounding to the nearest integer. Rounding a number x to the nearest integer requires some tie-breaking rule for those cases when x is exactly half-way between two integers – that is, when the fraction part of x is exactly 0.5. If it were not for the 0.5 fractional parts, the round-off errors introduced by the round-to-nearest method would be symmetric: for every fraction that gets rounded down (such as 0.268), there is a complementary fraction (namely, 0.732) that gets rounded up by the same amount. When rounding a large set of fixed-point numbers with uniformly distributed fractional parts, the rounding errors from all values, with the omission of those having a 0.5 fractional part, would statistically compensate each other. This means that the expected (average) value of the rounded numbers is equal to the expected value of the original numbers when numbers with fractional part 0.5 are removed from the set. In practice, floating-point numbers are typically used, which have even more computational nuances because they are not equally spaced. One may round half up (or round half toward positive infinity), a tie-breaking rule that is widely used in many disciplines.[citation needed] That is, half-way values of x are always rounded up: if the fractional part of x is exactly 0.5, then y = x + 0.5. For example, 23.5 gets rounded to 24, and −23.5 gets rounded to −23. Some programming languages (such as Java and Python) use "half up" to refer to round half away from zero rather than round half toward positive infinity.[4][5] This method only requires checking one digit to determine the rounding direction in two's complement and similar representations. One may also round half down (or round half toward negative infinity), as opposed to the more common round half up: if the fractional part of x is exactly 0.5, then y = x − 0.5. For example, 23.5 gets rounded to 23, and −23.5 gets rounded to −24. Some programming languages (such as Java and Python) use "half down" to refer to round half toward zero rather than round half toward negative infinity.[4][5] One may also round half toward zero (or round half away from infinity), as opposed to the conventional round half away from zero.
If the fractional part of x is exactly 0.5, then y = x − 0.5 if x is positive, and y = x + 0.5 if x is negative. For example, 23.5 gets rounded to 23, and −23.5 gets rounded to −23. This method treats positive and negative values symmetrically, and therefore is free of overall positive/negative bias if the original numbers are positive or negative with equal probability. It does, however, still have bias toward zero. One may also round half away from zero (or round half toward infinity), a tie-breaking rule that is commonly taught and used, namely: if the fractional part of x is exactly 0.5, then y = x + 0.5 if x is positive, and y = x − 0.5 if x is negative. For example, 23.5 gets rounded to 24, and −23.5 gets rounded to −24. This can be more efficient on computers that use sign-magnitude representation for the values to be rounded, because only the first omitted digit needs to be considered to determine whether it rounds up or down. This is one method used when rounding to significant figures, due to its simplicity. This method, also known as commercial rounding,[citation needed] treats positive and negative values symmetrically, and therefore is free of overall positive/negative bias if the original numbers are positive or negative with equal probability. It does, however, still have bias away from zero. It is often used for currency conversions and price roundings (when the amount is first converted into the smallest significant subdivision of the currency, such as cents of a euro), as it is easy to explain by just considering the first fractional digit, independently of supplementary precision digits or the sign of the amount (for strict equivalence between the payer and recipient of the amount). One may also round half to even, a tie-breaking rule without positive/negative bias and without bias toward or away from zero. By this convention, if the fractional part of x is 0.5, then y is the even integer nearest to x. Thus, for example, 23.5 becomes 24, as does 24.5; however, −23.5 becomes −24, as does −24.5. This function minimizes the expected error when summing over rounded figures, even when the inputs are mostly positive or mostly negative, provided they are neither mostly even nor mostly odd. This variant of the round-to-nearest method is also called convergent rounding, statistician's rounding, Dutch rounding, Gaussian rounding, odd–even rounding,[6] or bankers' rounding.[7] This is the default rounding mode used in IEEE 754 operations for results in binary floating-point formats. By eliminating bias, repeated addition or subtraction of independent numbers, as in a one-dimensional random walk, will give a rounded result with an error that tends to grow in proportion to the square root of the number of operations rather than linearly. However, this rule distorts the distribution by increasing the probability of evens relative to odds. Typically this is less important[citation needed] than the biases that are eliminated by this method. One may also round half to odd, a tie-breaking rule similar to round half to even. In this approach, if the fractional part of x is 0.5, then y is the odd integer nearest to x. Thus, for example, 23.5 becomes 23, as does 22.5; while −23.5 becomes −23, as does −22.5. This method is also free from positive/negative bias and bias toward/away from zero, provided the numbers to be rounded are neither mostly even nor mostly odd. It also shares the round-half-to-even property of distorting the original distribution, as it increases the probability of odds relative to evens.
It was the method used for bank balances in the United Kingdom when it decimalized its currency[8][clarification needed]. This variant is almost never used in computations, except in situations where one wants to avoid increasing the scale of floating-point numbers, which have a limited exponent range. With round half to even, a non-infinite number would round to infinity, and a small denormal value would round to a normal non-zero value. Effectively, this mode prefers preserving the existing scale of tie numbers, avoiding out-of-range results when possible for numeral systems of even radix (such as binary and decimal).[clarification needed (see talk)] This rounding mode is used to avoid getting a potentially wrong result after multiple roundings. This can be achieved if all roundings except the final one are done using rounding to prepare for shorter precision ("RPSP"), and only the final rounding uses the externally requested mode. With decimal arithmetic, final digits of 0 and 5 are avoided; if there is a choice between numbers with the least significant digit 0 or 1, 4 or 5, 5 or 6, 9 or 0, then the digit different from 0 or 5 shall be selected; otherwise, the choice is arbitrary. IBM defines that, in the latter case, a digit with the smaller magnitude shall be selected.[9] RPSP can be applied with the step between two consecutive roundings as small as a single digit (for example, rounding to 1/10 can be applied after rounding to 1/100). For example, in the case from the "Double rounding" section below, rounding 9.46 with RPSP to one decimal gives 9.4, which rounding to integer in turn gives 9, the same result as rounding 9.46 to integer directly. With binary arithmetic, this rounding is also called "round to odd" (not to be confused with "round half to odd"): for example, when rounding to 1/4 (0.01 in binary), an inexact value is rounded to the neighbouring representable value whose last retained bit is 1. For correct results with binary arithmetic, each rounding step must remove at least 2 binary digits, otherwise wrong results may appear. For example, rounding 3.25 with RPSP to a single binary fraction digit gives 3.5 (the neighbour with an odd last bit), and rounding 3.5 to integer then gives 4; if the erroneous middle step is removed, the final rounding to integer rounds 3.25 to the correct value of 3. RPSP is implemented in hardware in IBM zSeries and pSeries. In the Python module "Decimal", the Tcl module "math", the Haskell package "decimal-arithmetic", and possibly others, this mode is called ROUND_05UP or round05up. One method, more obscure than most, is to alternate direction when rounding a number with a 0.5 fractional part; all other numbers are rounded to the closest integer. Whenever the fractional part is 0.5, alternate rounding up or down: for the first occurrence of a 0.5 fractional part, round up; for the second occurrence, round down; and so on. Alternatively, the direction of the first 0.5 rounding can be determined by a random seed. "Up" and "down" can be any two rounding methods that oppose each other: toward and away from positive infinity, or toward and away from zero. If 0.5 fractional parts occur significantly more often than restarts of the occurrence count, the method is effectively bias-free. With guaranteed zero bias, it is useful if the numbers are to be summed or averaged. Alternatively, one may break ties randomly: if the fractional part of x is 0.5, choose y randomly between x + 0.5 and x − 0.5, with equal probability; all other numbers are rounded to the closest integer. Like round half to even and round half to odd, this rule is essentially free of overall bias, but it is also fair among even and odd y values. An advantage over alternate tie-breaking is that the last direction of rounding on the 0.5 fractional part does not have to be "remembered".
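To make the deterministic tie-breaking rules above concrete, here is a minimal Python sketch; the function names are ours, not standard library APIs, and the arithmetic is exact for the half-integer ties used as examples in this section (general floating-point inputs carry the usual representation caveats):

import math

def round_half_up(x):
    # Ties go toward positive infinity: 23.5 -> 24, -23.5 -> -23.
    return math.floor(x + 0.5)

def round_half_down(x):
    # Ties go toward negative infinity: 23.5 -> 23, -23.5 -> -24.
    return math.ceil(x - 0.5)

def round_half_away_from_zero(x):
    # Ties go away from zero: 23.5 -> 24, -23.5 -> -24.
    return math.floor(x + 0.5) if x >= 0 else math.ceil(x - 0.5)

# Python's built-in round() implements round half to even (bankers' rounding):
assert round(23.5) == 24 and round(24.5) == 24
assert round_half_up(-23.5) == -23
assert round_half_down(23.5) == 23
assert round_half_away_from_zero(-23.5) == -24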
Rounding to the closest integer toward negative infinity or the closest integer toward positive infinity, with a probability dependent on the proximity, is called stochastic rounding and will give an unbiased result on average.[10] For example, 1.6 would be rounded to 1 with probability 0.4 and to 2 with probability 0.6. Stochastic rounding can be accurate in a way that a rounding function can never be. For example, suppose one started with 0 and added 0.3 to that one hundred times while rounding the running total between every addition. The result would be 0 with regular rounding, but with stochastic rounding, the expected result would be 30, which is the same value obtained without rounding. This can be useful in machine learning, where the training may use low-precision arithmetic iteratively.[10] Stochastic rounding is also a way to achieve 1-dimensional dithering. The most common type of rounding is to round to an integer; or, more generally, to an integer multiple of some increment – such as rounding to whole tenths of seconds, hundredths of a dollar, to whole multiples of 1/2 or 1/8 inch, to whole dozens or thousands, etc. In general, rounding a number x to a multiple of some specified positive value m entails dividing x by m, rounding the quotient to an integer by one of the methods above, and multiplying the result back by m. For example, rounding x = 2.1784 dollars to whole cents (i.e., to a multiple of 0.01) entails computing 2.1784 / 0.01 = 217.84, then rounding that to 218, and finally computing 218 × 0.01 = 2.18. When rounding to a predetermined number of significant digits, the increment m depends on the magnitude of the number to be rounded (or of the rounded result). The increment m is normally a finite fraction in whatever numeral system is used to represent the numbers. For display to humans, that usually means the decimal numeral system (that is, m is an integer times a power of 10, like 1/1000 or 25/100). For intermediate values stored in digital computers, it often means the binary numeral system (m is an integer times a power of 2). The abstract single-argument "round()" function that returns an integer from an arbitrary real value has at least a dozen distinct concrete definitions presented in the rounding to integer section. The abstract two-argument "roundToMultiple()" function is formally defined here, but in many cases it is used with the implicit value m = 1 for the increment, and then reduces to the equivalent abstract single-argument function, with the same dozen distinct concrete definitions. Rounding to a specified power is very different from rounding to a specified multiple; for example, it is common in computing to need to round a number to a whole power of 2. In general, rounding a positive number x to a power of some positive number b other than 1 proceeds analogously, by working with the exponent: one rounds log_b(x) and returns b raised to the result. Many of the caveats applicable to rounding to a multiple are applicable to rounding to a power. This type of rounding, which is also named rounding to a logarithmic scale, is a variant of rounding to a specified power. Rounding on a logarithmic scale is accomplished by taking the log of the amount and doing normal rounding to the nearest value on the log scale. For example, resistors are supplied with preferred numbers on a logarithmic scale. In particular, resistors with a 10% accuracy are supplied with nominal values 100, 120, 150, 180, 220, etc., rounded to multiples of 10 (the E12 series). If a calculation indicates a resistor of 165 ohms is required, then log(150) = 2.176, log(165) = 2.217 and log(180) = 2.255.
The logarithm of 165 is closer to the logarithm of 180, therefore a 180 ohm resistor would be the first choice if there are no other considerations. Whether a value x ∈ (a, b) rounds to a or b depends upon whether the squared value x² is greater than or less than the product ab. The value 165 rounds to 180 in the resistors example because 165² = 27225 is greater than 150 × 180 = 27000. In floating-point arithmetic, rounding aims to turn a given value x into a value y with a specified number of significant digits. In other words, y should be a multiple of a number m that depends on the magnitude of x. The number m is a power of the base (usually 2 or 10) of the floating-point representation. Apart from this detail, all the variants of rounding discussed above apply to the rounding of floating-point numbers as well. The algorithm for such rounding is presented in the scaled rounding section above, but with a constant scaling factor s = 1 and an integer base b > 1. Where the rounded result would overflow, the result for a directed rounding is either the appropriate signed infinity when "rounding away from zero", or the highest representable positive finite number (or the lowest representable negative finite number if x is negative) when "rounding toward zero". The result of an overflow for the usual case of round to nearest is always the appropriate infinity. In some contexts it is desirable to round a given number x to a "neat" fraction – that is, the nearest fraction y = m/n whose numerator m and denominator n do not exceed a given maximum. This problem is fairly distinct from that of rounding a value to a fixed number of decimal or binary digits, or to a multiple of a given unit m. This problem is related to Farey sequences, the Stern–Brocot tree, and continued fractions. Finished lumber, writing paper, electronic components, and many other products are usually sold in only a few standard values. Many design procedures describe how to calculate an approximate value, and then "round" to some standard size using phrases such as "round down to nearest standard value", "round up to nearest standard value", or "round to nearest standard value".[11][12] When a set of preferred values is equally spaced on a logarithmic scale, choosing the closest preferred value to any given value can be seen as a form of scaled rounding. Such rounded values can be directly calculated.[13] More general rounding rules can separate values at arbitrary break points, used for example in data binning. A related mathematically formalized tool is signpost sequences, which use notions of distance other than the simple difference – for example, a sequence may round to the integer with the smallest relative (percent) error. When digitizing continuous signals, such as sound waves, the overall effect of a number of measurements is more important than the accuracy of each individual measurement. In these circumstances, dithering, and a related technique, error diffusion, are normally used. A related technique called pulse-width modulation is used to achieve analog-type output from an inertial device by rapidly pulsing the power with a variable duty cycle. Error diffusion tries to ensure the error, on average, is minimized. When dealing with a gentle slope from one to zero, the output would be zero for the first few terms until the sum of the error and the current value becomes greater than 0.5, in which case a 1 is output and the difference subtracted from the error so far. Floyd–Steinberg dithering is a popular error diffusion procedure when digitizing images.
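One-dimensional error diffusion can be realized by rounding cumulative sums and emitting successive differences, which is the procedure of the worked example that follows; a minimal Python sketch (the function name is ours):

def diffuse_round(values, m=0.01):
    # Round the running total to a multiple of m and output differences,
    # so the accumulated rounding error never exceeds m/2.
    out, prev, total = [], 0.0, 0.0
    for v in values:
        total += v
        r = round(total / m) * m            # running total rounded to a multiple of m
        out.append(round(r - prev, 10))     # emit the difference as this value's output
        prev = r
    return out

print(diffuse_round([0.9677, 0.9204, 0.7451, 0.3091]))
# [0.97, 0.92, 0.74, 0.31] -- matching the worked example below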
As a one-dimensional example, suppose the numbers 0.9677, 0.9204, 0.7451, and 0.3091 occur in order and each is to be rounded to a multiple of 0.01. In this case the cumulative sums, 0.9677, 1.8881 = 0.9677 + 0.9204, 2.6332 = 0.9677 + 0.9204 + 0.7451, and 2.9423 = 0.9677 + 0.9204 + 0.7451 + 0.3091, are each rounded to a multiple of 0.01: 0.97, 1.89, 2.63, and 2.94. The first of these and the differences of adjacent values give the desired rounded values: 0.97, 0.92 = 1.89 − 0.97, 0.74 = 2.63 − 1.89, and 0.31 = 2.94 − 2.63. Monte Carlo arithmetic is a technique in Monte Carlo methods where the rounding is randomly up or down. Stochastic rounding can be used for Monte Carlo arithmetic, but in general, just rounding up or down with equal probability is more often used. Repeated runs will give a random distribution of results which can indicate the stability of the computation.[14] It is possible to use rounded arithmetic to evaluate the exact value of a function with integer domain and range. For example, if an integer n is known to be a perfect square, its square root can be computed by converting n to a floating-point value z, computing the approximate square root x of z with floating point, and then rounding x to the nearest integer y. If n is not too big, the floating-point round-off error in x will be less than 0.5, so the rounded value y will be the exact square root of n. This is essentially why slide rules could be used for exact arithmetic. Rounding a number twice in succession to different levels of precision, with the latter precision being coarser, is not guaranteed to give the same result as rounding once to the final precision, except in the case of directed rounding.[nb 2] For instance, rounding 9.46 to one decimal gives 9.5, and then 10 when rounding to integer using rounding half to even, but would give 9 when rounded to integer directly. Borman and Chatfield[15] discuss the implications of double rounding when comparing data rounded to one decimal place to specification limits expressed using integers. In Martinez v. Allstate and Sendejo v. Farmers, litigated between 1995 and 1997, the insurance companies argued that double rounding premiums was permissible and in fact required. The US courts ruled against the insurance companies and ordered them to adopt rules to ensure single rounding.[16] Some computer languages and the IEEE 754-2008 standard dictate that in straightforward calculations the result should not be rounded twice. This has been a particular problem with Java, as it is designed to be run identically on different machines; special programming tricks have had to be used to achieve this with x87 floating point.[17][18] The Java language was changed to allow different results where the difference does not matter and to require a strictfp qualifier when the results have to conform accurately; strict floating point has been restored in Java 17.[19] In some algorithms, an intermediate result is computed in a larger precision and then must be rounded to the final precision. Double rounding can be avoided by choosing an adequate rounding for the intermediate computation, which consists of avoiding rounding to midpoints for the final rounding (except when the midpoint is exact).
In binary arithmetic, the idea is to round the result toward zero, and set the least significant bit to 1 if the rounded result is inexact; this rounding is called sticky rounding.[20] Equivalently, it consists in returning the intermediate result when it is exactly representable, and the nearest floating-point number with an odd significand otherwise; this is why it is also known as rounding to odd.[21][22] A concrete implementation of this approach, for binary and decimal arithmetic, is implemented as rounding to prepare for shorter precision. William M. Kahan coined the term "The Table-Maker's Dilemma" for the unknown cost of rounding transcendental functions: Nobody knows how much it would cost to compute y^w correctly rounded for every two floating-point arguments at which it does not over/underflow. Instead, reputable math libraries compute elementary transcendental functions mostly within slightly more than half an ulp and almost always well within one ulp. Why can't y^w be rounded within half an ulp like SQRT? Because nobody knows how much computation it would cost... No general way exists to predict how many extra digits will have to be carried to compute a transcendental expression and round it correctly to some preassigned number of digits. Even the fact (if true) that a finite number of extra digits will ultimately suffice may be a deep theorem.[23] The IEEE 754 floating-point standard guarantees that add, subtract, multiply, divide, fused multiply–add, square root, and floating-point remainder will give the correctly rounded result of the infinite-precision operation. No such guarantee was given in the 1985 standard for more complex functions, and they are typically only accurate to within the last bit at best. However, the 2008 standard guarantees that conforming implementations will give correctly rounded results which respect the active rounding mode; implementation of the functions, however, is optional. Using the Gelfond–Schneider theorem and the Lindemann–Weierstrass theorem, many of the standard elementary functions can be proved to return transcendental results, except on some well-known arguments; therefore, from a theoretical point of view, it is always possible to correctly round such functions. However, for an implementation of such a function, determining a limit for a given precision on how accurately results need to be computed, before a correctly rounded result can be guaranteed, may demand a lot of computation time or may be out of reach.[24] In practice, when this limit is not known (or only a very large bound is known), some decision has to be made in the implementation; but according to a probabilistic model, correct rounding can be satisfied with a very high probability when using an intermediate accuracy of up to twice the number of digits of the target format plus some small constant (after taking special cases into account). Some programming packages offer correct rounding. The GNU MPFR package gives correctly rounded arbitrary-precision results, and some other libraries implement elementary functions with correct rounding in IEEE 754 double precision (binary64). There exist computable numbers for which a rounded value can never be determined no matter how many digits are calculated. Specific instances cannot be given, but this follows from the undecidability of the halting problem.
For instance, if Goldbach's conjecture is true but unprovable, then the result of rounding the following value n up to the next integer cannot be determined: either n = 1 + 10^−k, where k is the first even number greater than 4 which is not the sum of two primes, or n = 1 if there is no such number. The rounded result is 2 if such a number k exists and 1 otherwise. The value before rounding can however be approximated to any given precision even if the conjecture is unprovable. Rounding can adversely affect a string search for a number. For example, π rounded to four digits is "3.1416", but a simple search for this string will not discover "3.14159" or any other value of π rounded to more than four digits. In contrast, truncation does not suffer from this problem; for example, a simple string search for "3.1415", which is π truncated to four digits, will discover values of π truncated to more than four digits. The concept of rounding is very old, perhaps older than the concept of division itself. Some ancient clay tablets found in Mesopotamia contain tables with rounded values of reciprocals and square roots in base 60.[40] Rounded approximations to π, the length of the year, and the length of the month are also ancient – see base 60 examples. The round-half-to-even method has served as American Standard Z25.1 and ASTM standard E-29 since 1940.[41] The origins of the terms unbiased rounding and statistician's rounding are fairly self-explanatory. In the 1906 fourth edition of Probability and Theory of Errors, Robert Simpson Woodward called this "the computer's rule",[42] indicating that it was then in common use by human computers who calculated mathematical tables. For example, it was recommended in Simon Newcomb's c. 1882 book Logarithmic and Other Mathematical Tables.[43] Lucius Tuttle's 1916 Theory of Measurements called it a "universally adopted rule" for recording physical measurements.[44] Churchill Eisenhart indicated the practice was already "well established" in data analysis by the 1940s.[45] The origin of the term bankers' rounding remains more obscure. If this rounding method was ever a standard in banking, the evidence has proved extremely difficult to find. To the contrary, section 2 of the European Commission report The Introduction of the Euro and the Rounding of Currency Amounts[46] suggests that there had previously been no standard approach to rounding in banking, and it specifies that "half-way" amounts should be rounded up. Until the 1980s, the rounding method used in floating-point computer arithmetic was usually fixed by the hardware, poorly documented, inconsistent, and different for each brand and model of computer. This situation changed after the IEEE 754 floating-point standard was adopted by most computer manufacturers. The standard allows the user to choose among several rounding modes, and in each case specifies precisely how the results should be rounded. These features made numerical computations more predictable and machine-independent, and made possible the efficient and consistent implementation of interval arithmetic. Currently, much research focuses on rounding to multiples of 5 or 2; for example, Jörg Baten used age heaping in many studies to evaluate the numeracy level of ancient populations. He came up with the ABCC Index, which enables the comparison of numeracy among regions even without historical sources in which population literacy was measured.[47] Most programming languages provide functions or special syntax to round fractional numbers in various ways.
The earliest numeric languages, such as Fortran and C, would provide only one method, usually truncation (toward zero). This default method could be implied in certain contexts, such as when assigning a fractional number to an integer variable, or using a fractional number as an index of an array. Other kinds of rounding had to be programmed explicitly; for example, rounding a positive number to the nearest integer could be implemented by adding 0.5 and truncating. In recent decades, however, the syntax and the standard libraries of most languages have commonly provided at least the four basic rounding functions (up, down, to nearest, and toward zero). The tie-breaking method can vary depending on the language and version, or might be selectable by the programmer. Several languages follow the lead of the IEEE 754 floating-point standard, and define these functions as taking a double-precision float argument and returning the result of the same type, which then may be converted to an integer if necessary. This approach may avoid spurious overflows, because floating-point types have a larger range than integer types. Some languages, such as PHP, provide functions that round a value to a specified number of decimal digits (e.g., from 4321.5678 to 4321.57 or 4300). In addition, many languages provide a printf or similar string-formatting function, which allows one to convert a fractional number to a string, rounded to a user-specified number of decimal places (the precision). On the other hand, truncation (round to zero) is still the default rounding method used by many languages, especially for the division of two integer values. In contrast, CSS and SVG do not define any specific maximum precision for numbers and measurements, which they treat and expose in their DOM and in their IDL interface as strings as if they had infinite precision, and do not discriminate between integers and floating-point values; however, the implementations of these languages will typically convert these numbers into IEEE 754 double-precision floating-point values before exposing the computed digits with a limited precision (notably within standard JavaScript or ECMAScript[48] interface bindings). Some disciplines or institutions have issued standards or directives for rounding. In a guideline issued in mid-1966,[49] the U.S. Office of the Federal Coordinator for Meteorology determined that weather data should be rounded to the nearest round number, with the "round half up" tie-breaking rule. For example, 1.5 rounded to integer should become 2, and −1.5 should become −1. Prior to that date, the tie-breaking rule was "round half away from zero". Some meteorologists may write "−0" to indicate a temperature between 0.0 and −0.5 degrees (exclusive) that was rounded to an integer. This notation is used when the negative sign is considered important, no matter how small the magnitude; for example, when rounding temperatures in the Celsius scale, where below zero indicates freezing.[citation needed]
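As a concrete illustration of the language facilities discussed above, Python's standard decimal module names several of the modes from this article explicitly; a small sketch:

from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP, ROUND_FLOOR, ROUND_05UP

cent = Decimal("0.01")                 # rounding to a multiple of 0.01 (whole cents)
x = Decimal("2.1784")
print(x.quantize(cent, rounding=ROUND_HALF_EVEN))   # 2.18
print(x.quantize(cent, rounding=ROUND_FLOOR))       # 2.17
# ROUND_05UP is the "rounding to prepare for shorter precision" mode described
# earlier: it rounds away from zero only when the last retained digit would
# otherwise be 0 or 5, and truncates in every other case.
print(Decimal("9.46").quantize(Decimal("0.1"), rounding=ROUND_05UP))    # 9.4
print(Decimal("23.5").quantize(Decimal("1"), rounding=ROUND_HALF_UP))   # 24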
https://en.wikipedia.org/wiki/Rounding
UPGMA (unweighted pair group method with arithmetic mean) is a simple agglomerative (bottom-up) hierarchical clustering method. It also has a weighted variant, WPGMA, and they are generally attributed to Sokal and Michener.[1] Note that the term "unweighted" indicates that all distances contribute equally to each average that is computed, and does not refer to the math by which it is achieved. Thus the simple averaging in WPGMA produces a weighted result, and the proportional averaging in UPGMA produces an unweighted result (see the working example).[2] The UPGMA algorithm constructs a rooted tree (dendrogram) that reflects the structure present in a pairwise similarity matrix (or a dissimilarity matrix). At each step, the nearest two clusters are combined into a higher-level cluster. The distance between any two clusters A and B, each of size (i.e., cardinality) |A| and |B|, is taken to be the average of all distances d(x, y) between pairs of objects x in A and y in B, that is, the mean distance between elements of each cluster:

d_{A,B} = \frac{1}{|A| \cdot |B|} \sum_{x \in A} \sum_{y \in B} d(x, y)

In other words, at each clustering step, the updated distance between the joined clusters A ∪ B and a new cluster X is given by the proportional averaging of the d_{A,X} and d_{B,X} distances:

d_{(A \cup B), X} = \frac{|A| \cdot d_{A,X} + |B| \cdot d_{B,X}}{|A| + |B|}

The UPGMA algorithm produces rooted dendrograms and requires a constant-rate assumption – that is, it assumes an ultrametric tree in which the distances from the root to every branch tip are equal. When the tips are molecular data (i.e., DNA, RNA and protein) sampled at the same time, the ultrametricity assumption becomes equivalent to assuming a molecular clock. This working example is based on a JC69 genetic distance matrix computed from the 5S ribosomal RNA sequence alignment of five bacteria: Bacillus subtilis (a), Bacillus stearothermophilus (b), Lactobacillus viridescens (c), Acholeplasma modicum (d), and Micrococcus luteus (e).[3][4] Let us assume that we have five elements (a, b, c, d, e) and the following matrix D1 of pairwise distances between them:

      a    b    c    d    e
 a    0   17   21   31   23
 b   17    0   30   34   21
 c   21   30    0   28   39
 d   31   34   28    0   43
 e   23   21   39   43    0

In this example, D1(a,b) = 17 is the smallest value of D1, so we join elements a and b. Let u denote the node to which a and b are now connected. Setting δ(a,u) = δ(b,u) = D1(a,b)/2 ensures that elements a and b are equidistant from u. This corresponds to the expectation of the ultrametricity hypothesis.
The branches joining a and b to u then have lengths δ(a,u) = δ(b,u) = 17/2 = 8.5 (see the final dendrogram). We then proceed to update the initial distance matrix D1 into a new distance matrix D2, reduced in size by one row and one column because of the clustering of a with b. The new distances in D2 are calculated by averaging distances between each element of the first cluster (a,b) and each of the remaining elements:

D2((a,b),c) = (D1(a,c) × 1 + D1(b,c) × 1) / (1 + 1) = (21 + 30)/2 = 25.5
D2((a,b),d) = (D1(a,d) + D1(b,d)) / 2 = (31 + 34)/2 = 32.5
D2((a,b),e) = (D1(a,e) + D1(b,e)) / 2 = (23 + 21)/2 = 22

The remaining values in D2 are not affected by the matrix update, as they correspond to distances between elements not involved in the first cluster. We now reiterate the three previous steps, starting from the new distance matrix D2. Here, D2((a,b),e) = 22 is the smallest value of D2, so we join cluster (a,b) and element e. Let v denote the node to which (a,b) and e are now connected. Because of the ultrametricity constraint, the branches joining a or b to v, and e to v, are equal and have the following length: δ(a,v) = δ(b,v) = δ(e,v) = 22/2 = 11. We deduce the missing branch length: δ(u,v) = δ(e,v) − δ(a,u) = δ(e,v) − δ(b,u) = 11 − 8.5 = 2.5 (see the final dendrogram). We then proceed to update D2 into a new distance matrix D3, reduced in size by one row and one column because of the clustering of (a,b) with e. The new distances in D3 are calculated by proportional averaging:

D3(((a,b),e),c) = (D2((a,b),c) × 2 + D2(e,c) × 1) / (2 + 1) = (25.5 × 2 + 39 × 1)/3 = 30

Thanks to this proportional average, the calculation of this new distance accounts for the larger size of the (a,b) cluster (two elements) with respect to e (one element). Similarly:

D3(((a,b),e),d) = (D2((a,b),d) × 2 + D2(e,d) × 1) / (2 + 1) = (32.5 × 2 + 43 × 1)/3 = 36

Proportional averaging therefore gives equal weight to the initial distances of matrix D1. This is the reason why the method is unweighted, not with respect to the mathematical procedure but with respect to the initial distances. We again reiterate the three previous steps, starting from the updated distance matrix D3. Here, D3(c,d) = 28 is the smallest value of D3, so we join elements c and d.
Let w denote the node to which c and d are now connected. The branches joining c and d to w then have lengths δ(c,w) = δ(d,w) = 28/2 = 14 (see the final dendrogram). There is a single entry to update, keeping in mind that the two elements c and d each have a contribution of 1 in the average computation:

D4((c,d),((a,b),e)) = (D3(c,((a,b),e)) × 1 + D3(d,((a,b),e)) × 1) / (1 + 1) = (30 × 1 + 36 × 1)/2 = 33

The final D4 matrix thus contains the single distance D4((c,d),((a,b),e)) = 33, so we join clusters ((a,b),e) and (c,d). Let r denote the (root) node to which ((a,b),e) and (c,d) are now connected. The branches joining ((a,b),e) and (c,d) to r then have lengths:

δ(((a,b),e),r) = δ((c,d),r) = 33/2 = 16.5

We deduce the two remaining branch lengths:

δ(v,r) = δ(((a,b),e),r) − δ(e,v) = 16.5 − 11 = 5.5
δ(w,r) = δ((c,d),r) − δ(c,w) = 16.5 − 14 = 2.5

The dendrogram is now complete.[5] It is ultrametric because all tips (a to e) are equidistant from r:

δ(a,r) = δ(b,r) = δ(e,r) = δ(c,r) = δ(d,r) = 16.5

The dendrogram is therefore rooted by r, its deepest node. Alternative linkage schemes include single linkage clustering, complete linkage clustering, and WPGMA average linkage clustering. Implementing a different linkage is simply a matter of using a different formula to calculate inter-cluster distances during the distance matrix update steps of the above algorithm. Complete linkage clustering avoids a drawback of the alternative single linkage clustering method – the so-called chaining phenomenon, where clusters formed via single linkage clustering may be forced together due to single elements being close to each other, even though many of the elements in each cluster may be very distant from each other. Complete linkage tends to find compact clusters of approximately equal diameters.[6] A trivial implementation of the algorithm to construct the UPGMA tree has O(n³) time complexity, and using a heap for each cluster to keep its distances from the other clusters reduces its time to O(n² log n). Fionn Murtagh presented an O(n²) time and space algorithm.[10]
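The naive O(n³) algorithm traced in the example above fits in a few lines of Python; the following sketch uses our own representation (clusters as tuples of labels, distances keyed by frozensets) and is not taken from any particular library:

def upgma(labels, d0):
    # d0 maps an unordered pair of leaf labels (a, b) to their distance.
    size = {(l,): 1 for l in labels}                      # cluster -> leaf count
    d = {frozenset([(a,), (b,)]): v for (a, b), v in d0.items()}
    height = {}                                           # cluster -> node height
    while len(size) > 1:
        pair = min(d, key=d.get)                          # closest pair of clusters
        A, B = tuple(pair)
        AB = A + B
        height[AB] = d.pop(pair) / 2                      # ultrametric: tips equidistant
        for X in list(size):
            if X in (A, B):
                continue
            dAX, dBX = d.pop(frozenset([A, X])), d.pop(frozenset([B, X]))
            # Proportional (size-weighted) averaging: the UPGMA update rule.
            d[frozenset([AB, X])] = (size[A] * dAX + size[B] * dBX) / (size[A] + size[B])
        size[AB] = size.pop(A) + size.pop(B)
    return height

# Three of the taxa from the worked example:
print(upgma(["a", "b", "e"], {("a", "b"): 17, ("a", "e"): 23, ("b", "e"): 21}))
# node heights 8.5 and 11.0, matching u and v above (tuple ordering may vary)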
https://en.wikipedia.org/wiki/UPGMA
WPGMA (Weighted Pair Group Method with Arithmetic Mean) is a simple agglomerative (bottom-up) hierarchical clustering method, generally attributed to Sokal and Michener.[1] The WPGMA method is similar to its unweighted variant, the UPGMA method. The WPGMA algorithm constructs a rooted tree (dendrogram) that reflects the structure present in a pairwise distance matrix (or a similarity matrix). At each step, the nearest two clusters, say i and j, are combined into a higher-level cluster i ∪ j. Then, its distance to another cluster k is simply the arithmetic mean of the average distances between members of k and i and of k and j:

d_{(i \cup j), k} = \frac{d_{i,k} + d_{j,k}}{2}

The WPGMA algorithm produces rooted dendrograms and requires a constant-rate assumption: it produces an ultrametric tree in which the distances from the root to every branch tip are equal. This ultrametricity assumption is called the molecular clock when the tips involve DNA, RNA and protein data. This working example is based on a JC69 genetic distance matrix computed from the 5S ribosomal RNA sequence alignment of five bacteria: Bacillus subtilis (a), Bacillus stearothermophilus (b), Lactobacillus viridescens (c), Acholeplasma modicum (d), and Micrococcus luteus (e).[2][3] Let us assume that we have five elements (a, b, c, d, e) and the same matrix D1 of pairwise distances as in the UPGMA example above:

      a    b    c    d    e
 a    0   17   21   31   23
 b   17    0   30   34   21
 c   21   30    0   28   39
 d   31   34   28    0   43
 e   23   21   39   43    0

In this example, D1(a,b) = 17 is the smallest value of D1, so we join elements a and b. Let u denote the node to which a and b are now connected. Setting δ(a,u) = δ(b,u) = D1(a,b)/2 ensures that elements a and b are equidistant from u. This corresponds to the expectation of the ultrametricity hypothesis. The branches joining a and b to u then have lengths δ(a,u) = δ(b,u) = 17/2 = 8.5 (see the final dendrogram). We then proceed to update the initial distance matrix D1 into a new distance matrix D2, reduced in size by one row and one column because of the clustering of a with b. The new distances in D2 are calculated by averaging distances between each element of the first cluster (a,b) and each of the remaining elements:

D2((a,b),c) = (D1(a,c) + D1(b,c)) / 2 = (21 + 30)/2 = 25.5
D2((a,b),d) = (D1(a,d) + D1(b,d)) / 2 = (31 + 34)/2 = 32.5
D2((a,b),e) = (D1(a,e) + D1(b,e)) / 2 = (23 + 21)/2 = 22

The remaining values in D2 are not affected by the matrix update, as they correspond to distances between elements not involved in the first cluster.
We now reiterate the three previous steps, starting from the new distance matrix D2: here, D2((a,b),e) = 22 is the smallest value of D2, so we join cluster (a,b) and element e. Let v denote the node to which (a,b) and e are now connected. Because of the ultrametricity constraint, the branches joining a or b to v, and e to v, are equal and have the following length: δ(a,v) = δ(b,v) = δ(e,v) = 22/2 = 11. We deduce the missing branch length: δ(u,v) = δ(e,v) − δ(a,u) = δ(e,v) − δ(b,u) = 11 − 8.5 = 2.5 (see the final dendrogram). We then proceed to update the D2 matrix into a new distance matrix D3, reduced in size by one row and one column because of the clustering of (a,b) with e:

D3(((a,b),e),c) = (D2((a,b),c) + D2(e,c)) / 2 = (25.5 + 39)/2 = 32.25

Of note, this average calculation of the new distance does not account for the larger size of the (a,b) cluster (two elements) with respect to e (one element). Similarly:

D3(((a,b),e),d) = (D2((a,b),d) + D2(e,d)) / 2 = (32.5 + 43)/2 = 37.75

The averaging procedure therefore gives differential weight to the initial distances of matrix D1. This is the reason why the method is weighted, not with respect to the mathematical procedure but with respect to the initial distances. We again reiterate the three previous steps, starting from the updated distance matrix D3. Here, D3(c,d) = 28 is the smallest value of D3, so we join elements c and d. Let w denote the node to which c and d are now connected. The branches joining c and d to w then have lengths δ(c,w) = δ(d,w) = 28/2 = 14 (see the final dendrogram). There is a single entry to update:

D4((c,d),((a,b),e)) = (D3(c,((a,b),e)) + D3(d,((a,b),e))) / 2 = (32.25 + 37.75)/2 = 35

The final D4 matrix thus contains the single distance D4((c,d),((a,b),e)) = 35, so we join clusters ((a,b),e) and (c,d). Let r denote the (root) node to which ((a,b),e) and (c,d) are now connected. The branches joining ((a,b),e) and (c,d) to r then have lengths:

δ(((a,b),e),r) = δ((c,d),r) = 35/2 = 17.5

We deduce the two remaining branch lengths:

δ(v,r) = δ(((a,b),e),r) − δ(e,v) = 17.5 − 11 = 6.5
δ(w,r) = δ((c,d),r) − δ(c,w) = 17.5 − 14 = 3.5

The dendrogram is now complete.
It is ultrametric because all tips (a to e) are equidistant from r:

δ(a,r) = δ(b,r) = δ(e,r) = δ(c,r) = δ(d,r) = 17.5

The dendrogram is therefore rooted by r, its deepest node. Alternative linkage schemes include single linkage clustering, complete linkage clustering, and UPGMA average linkage clustering. Implementing a different linkage is simply a matter of using a different formula to calculate inter-cluster distances during the distance matrix update steps of the above algorithm. Complete linkage clustering avoids a drawback of the alternative single linkage clustering method – the so-called chaining phenomenon, where clusters formed via single linkage clustering may be forced together due to single elements being close to each other, even though many of the elements in each cluster may be very distant from each other. Complete linkage tends to find compact clusters of approximately equal diameters.[4]
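Relative to the UPGMA sketch given earlier, the only change WPGMA requires is the distance update rule, which ignores cluster sizes; a minimal side-by-side sketch in Python, with our own function names:

def upgma_update(dAX, dBX, size_a, size_b):
    # UPGMA: proportional, size-weighted average (unweighted w.r.t. initial distances).
    return (size_a * dAX + size_b * dBX) / (size_a + size_b)

def wpgma_update(dAX, dBX):
    # WPGMA: plain arithmetic mean (weighted w.r.t. initial distances).
    return (dAX + dBX) / 2

# After merging (a,b) with e in the worked examples:
print(upgma_update(25.5, 39, 2, 1))   # 30.0  = UPGMA value of D3(((a,b),e),c)
print(wpgma_update(25.5, 39))         # 32.25 = WPGMA value of D3(((a,b),e),c)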
https://en.wikipedia.org/wiki/WPGMA
Minimum evolution is a distance method employed in phylogenetics modeling. It shares with maximum parsimony the aspect of searching for the phylogeny that has the shortest total sum of branch lengths.[1][2] The theoretical foundations of the minimum evolution (ME) criterion lie in the seminal works of both Kidd and Sgaramella-Zonta (1971)[3] and Rzhetsky and Nei (1993).[4] In these frameworks, the molecular sequences from taxa are replaced by a set of measures of their dissimilarity (i.e., the so-called "evolutionary distances"), and a fundamental result states that if such distances were unbiased estimates of the true evolutionary distances from taxa (i.e., the distances that one would obtain if all the molecular data from taxa were available), then the true phylogeny of the taxa would have an expected length shorter than any other possible phylogeny T compatible with those distances. It is worth noting here a subtle difference between the maximum-parsimony criterion and the ME criterion: while maximum parsimony is based on an abductive heuristic, i.e., the plausibility of the simplest evolutionary hypothesis of taxa with respect to the more complex ones, the ME criterion is based on Kidd and Sgaramella-Zonta's conjectures, which were proven true 22 years later by Rzhetsky and Nei.[4] These mathematical results set the ME criterion free from the Occam's razor principle and confer on it a solid theoretical and quantitative basis. Similarly to ME, maximum parsimony becomes an NP-hard problem when trying to find the optimal tree[5] (that is, the one with the least total character-state changes). This is why heuristics are often utilized in order to select a tree, though this does not guarantee the tree will be an optimal selection for the input dataset. This method is often used when very similar sequences are analyzed, as part of the process is locating informative sites in the sequences where a notable number of substitutions can be found.[6] The maximum-parsimony criterion, which uses Hamming distance branch lengths, was shown to be statistically inconsistent in 1978. This led to an interest in statistically consistent alternatives such as ME.[7] Neighbor joining may be viewed as a greedy heuristic for the balanced minimum evolution (BME) criterion. Saitou and Nei's 1987 NJ algorithm far predates the BME criterion of 2000. For two decades, researchers used NJ without a firm theoretical basis for why it works.[8] While neighbor joining shares the same underlying principle of prioritizing minimal evolutionary steps, it differs from maximum parsimony in that it is a distance method, whereas maximum parsimony is a character-based method. Distance methods like neighbor joining are often simpler to implement and more efficient, which has led to their popularity for analyzing especially large datasets where computational speed is critical. Neighbor joining is a relatively fast phylogenetic tree-building method, though its worst-case time complexity can still be O(N³) without heuristic implementations to improve on this.[9] It also considers varying rates of evolution across branches, which many other methods do not account for. Neighbor joining is also a rather consistent method, in that an input distance matrix with little to no error will often produce an output tree with minimal inaccuracy.
However, using simple distance values rather than the full sequence information, as in maximum parsimony, does lend itself to a loss of information due to the simplification of the problem.[10] Maximum likelihood contrasts with minimum evolution in that maximum likelihood searches for the tree most likely to have produced the data. However, due to the nature of the mathematics involved, it is less accurate with smaller datasets but becomes far less biased as the sample size increases; this is due to the error rate being 1/log(n). Minimum evolution is similar, but it is less accurate with very large datasets. It is similarly powerful but overall much more complicated compared to UPGMA and other options.[11] UPGMA is a clustering method. It builds a collection of clusters that are then further clustered until the maximum potential cluster is obtained. This is then worked backwards to determine the relation of the groups. It specifically uses an arithmetic mean, enabling a more stable clustering. Overall, while it is less powerful than any of the other listed comparisons, it is far simpler and less complex to apply. Minimum evolution is overall more powerful but also more complicated to set up, and is also NP-hard.[12] The ME criterion is known to be statistically consistent whenever the branch lengths are estimated via Ordinary Least Squares (OLS) or via linear programming.[4][13][14] However, as observed in Rzhetsky and Nei's article, the phylogeny having the minimum length under the OLS branch length estimation model may be characterized, in some circumstances, by negative branch lengths, which unfortunately are empty of biological meaning.[4] To solve this drawback, Pauplin[15] proposed to replace OLS with a new branch length estimation model, known as balanced minimum evolution (BME). Richard Desper and Olivier Gascuel[16] showed that the BME branch length estimation model ensures the general statistical consistency of the minimum length phylogeny as well as the non-negativity of its branch lengths, whenever the estimated evolutionary distances from taxa satisfy the triangle inequality. Le Sy Vinh and Arndt von Haeseler[17] have shown, by means of massive and systematic simulation experiments, that the accuracy of the ME criterion under the BME branch length estimation model is by far the highest among distance methods and not inferior to that of alternative criteria based, e.g., on maximum likelihood or Bayesian inference. Moreover, as shown by Daniele Catanzaro, Martin Frohn and Raffaele Pesenti,[18] the minimum length phylogeny under the BME branch length estimation model can be interpreted as the (Pareto optimal) consensus tree between concurrent minimum entropy processes encoded by a forest of n phylogenies rooted on the n analyzed taxa. This particular information theory-based interpretation is conjectured to be shared by all distance methods in phylogenetics. François Denis and Olivier Gascuel[19] proved that the minimum evolution principle is not consistent in weighted least squares (WLS) and generalized least squares (GLS). They showed that in OLS models, where all weights are equal, there is an algorithm called EDGE_LENGTHS in which the lengths of two edges, 1u and 2u, can be computed without using the distances δij (i, j ≠ 1, 2). This property does not hold in WLS or GLS models, and without it the ME principle is not consistent in the WLS and GLS models.
The "minimum evolution problem" (MEP), in which a minimum-summed-length phylogeny is derived from a set of sequences under the ME criterion, is said to beNP-hard.[20][21]The "balanced minimum evolution problem" (BMEP), which uses the newer BME criterion, isAPX-hard.[20] A number of exact algorithms solving BMEP have been described.[22][23][24][25]The best known exact algorithm[26]remains impractical for more than a dozen taxa, even with multiprocessing.[20]There is only one approximation algorithm with proven error bounds, published in 2012. In practical use, BMEP is overwhelmingly implemented byheuristic search. The basic, aforementionedneighbor-joiningalgorithm implements a greedy version of BME.[27] FastME, the "state-of-the-art",[20]starts with a rough tree then improves it using a set of topological moves such as Nearest Neighbor Interchanges (NNI). Compared to NJ, it is about as fast and more accurate.[28] FastME operates on the Balanced Minimum Evolution principle, which calculates tree length using a weighted linear function of all pairwise distances. The BME score for a given topology is expressed as: wheredij{\displaystyle d_{ij}}represents the evolutionary distance between taxai{\displaystyle i}andj{\displaystyle j}, andwij{\displaystyle w_{ij}}is a topology-dependent weight that balances each pair’s contribution. This approach enables more accurate reconstructions than greedy algorithms like NJ. The algorithm improves tree topology through local rearrangements, primarily Subtree Prune and Regraft (SPR) and NNI operations. At each step, it checks if a rearranged tree has a lower BME score. If so, the change is retained. This iterative refinement enables FastME to converge toward near-optimal solutions efficiently, even for large datasets. Simplified pseudocode of FastME: Simulations reported by Desper and Gascuel demonstrate that FastME consistently outperforms NJ in terms of topological accuracy, particularly when evolutionary rates vary or distances deviate from strict additivity. It has also been successfully used on datasets with over 1,000 taxa.[29] Like most distance-based methods, BME assumes that the input distances are additive. When this assumption does not hold—due to noise, unequal rates, or other violations—the resulting trees may still be close to optimal, but accuracy can be affected. In addition to FastME,metaheuristicmethods such as genetic algorithms and simulated annealing have also been used to explore tree topologies under the minimum evolution criterion, particularly for very large datasets where traditional heuristics may struggle.[30]
https://en.wikipedia.org/wiki/Minimum_Evolution
Incomputer science, arange treeis anordered treedata structureto hold a list of points. It allows all points within a given range to bereportedefficiently, and is typically used in two or higher dimensions. Range trees were introduced byJon Louis Bentleyin 1979.[1]Similar data structures were discovered independently by Lueker,[2]Lee and Wong,[3]and Willard.[4]The range tree is an alternative to thek-d tree. Compared tok-d trees, range trees offer faster query times of (inBig O notation)O(logd⁡n+k){\displaystyle O(\log ^{d}n+k)}but worse storage ofO(nlogd−1⁡n){\displaystyle O(n\log ^{d-1}n)}, wherenis the number of points stored in the tree,dis the dimension of each point andkis the number of points reported by a given query. In 1990,Bernard Chazelleimproved this to query timeO(logd−1⁡n+k){\displaystyle O(\log ^{d-1}n+k)}and space complexityO(n(log⁡nlog⁡log⁡n)d−1){\displaystyle O\left(n\left({\frac {\log n}{\log \log n}}\right)^{d-1}\right)}.[5][6] A range tree on a set of 1-dimensional points is a balancedbinary search treeon those points. The points stored in the tree are stored in the leaves of the tree; each internal node stores the largest value of its left subtree. A range tree on a set of points ind-dimensions is arecursively definedmulti-levelbinary search tree. Each level of the data structure is a binary search tree on one of thed-dimensions. The first level is a binary search tree on the first of thed-coordinates. Each vertexvof this tree contains an associated structure that is a (d−1)-dimensional range tree on the last (d−1)-coordinates of the points stored in the subtree ofv. A 1-dimensional range tree on a set ofnpoints is a binary search tree, which can be constructed inO(nlog⁡n){\displaystyle O(n\log n)}time. Range trees in higher dimensions are constructed recursively by constructing a balanced binary search tree on the first coordinate of the points, and then, for each vertexvin this tree, constructing a (d−1)-dimensional range tree on the points contained in the subtree ofv. Constructing a range tree this way would requireO(nlogd⁡n){\displaystyle O(n\log ^{d}n)}time. This construction time can be improved for 2-dimensional range trees toO(nlog⁡n){\displaystyle O(n\log n)}.[7]LetSbe a set ofn2-dimensional points. IfScontains only one point, return a leaf containing that point. Otherwise, construct the associated structure ofS, a 1-dimensional range tree on they-coordinates of the points inS. Letxmbe the medianx-coordinate of the points. LetSLbe the set of points withx-coordinate less than or equal toxmand letSRbe the set of points withx-coordinate greater thanxm. Recursively constructvL, a 2-dimensional range tree onSL, andvR, a 2-dimensional range tree onSR. Create a vertexvwith left-childvLand right-childvR. If we sort the points by theiry-coordinates at the start of the algorithm, and maintain this ordering when splitting the points by theirx-coordinate, we can construct the associated structures of each subtree in linear time. This reduces the time to construct a 2-dimensional range tree toO(nlog⁡n){\displaystyle O(n\log n)}, and also reduces the time to construct ad-dimensional range tree toO(nlogd−1⁡n){\displaystyle O(n\log ^{d-1}n)}. Arange queryon a range tree reports the set of points that lie inside a given interval. To report the points that lie in the interval [x1,x2], we start by searching forx1andx2. At some vertex in the tree, the search paths tox1andx2will diverge. Letvsplitbe the last vertex that these two search paths have in common. 
For every vertexvin the search path fromvsplittox1, if the value stored atvis greater than or equal tox1, report every point in the right-subtree ofv. Ifvis a leaf, report the value stored atvif it is inside the query interval. Similarly, report all of the points stored in the left-subtrees of the vertices with values less thanx2along the search path fromvsplittox2, and report the leaf of this path if it lies within the query interval. Since the range tree is a balanced binary tree, the search paths tox1andx2have lengthO(log⁡n){\displaystyle O(\log n)}. Reporting all of the points stored in the subtree of a vertex can be done in linear time using anytree traversalalgorithm. It follows that the time to perform a range query isO(log⁡n+k){\displaystyle O(\log n+k)}, wherekis the number of points in the query interval. Range queries ind-dimensions are similar. Instead of reporting all of the points stored in the subtrees of the search paths, perform a (d−1)-dimensional range query on the associated structure of each subtree. Eventually, a 1-dimensional range query will be performed and the correct points will be reported. Since ad-dimensional query consists ofO(log⁡n){\displaystyle O(\log n)}(d−1)-dimensional range queries, it follows that the time required to perform ad-dimensional range query isO(logd⁡n+k){\displaystyle O(\log ^{d}n+k)}, wherekis the number of points in the query interval. This can be reduced toO(logd−1⁡n+k){\displaystyle O(\log ^{d-1}n+k)}using a variant offractional cascading.[2][4][7]
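A minimal sketch of the 1-dimensional case described above, with the points in the leaves and each internal node storing the largest value of its left subtree; the class and function names are illustrative, not from any particular library.

class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key            # leaf value, or max of left subtree if internal
        self.left = left
        self.right = right

def build(points):
    """`points` must be sorted and distinct; returns a balanced tree."""
    if len(points) == 1:
        return Node(points[0])
    mid = (len(points) + 1) // 2
    return Node(points[mid - 1], build(points[:mid]), build(points[mid:]))

def report_subtree(v, out):
    if v.left is None:
        out.append(v.key)
    else:
        report_subtree(v.left, out)
        report_subtree(v.right, out)

def query(v, x1, x2, out):
    # Descend to the split node, where the search paths to x1 and x2 diverge.
    while v.left is not None and (x2 <= v.key or x1 > v.key):
        v = v.left if x2 <= v.key else v.right
    if v.left is None:                     # split node is a leaf
        if x1 <= v.key <= x2:
            out.append(v.key)
        return
    # Walk the path to x1, reporting right subtrees hanging off it.
    w = v.left
    while w.left is not None:
        if x1 <= w.key:
            report_subtree(w.right, out)
            w = w.left
        else:
            w = w.right
    if x1 <= w.key <= x2:
        out.append(w.key)
    # Walk the path to x2, reporting left subtrees hanging off it.
    w = v.right
    while w.left is not None:
        if x2 > w.key:
            report_subtree(w.left, out)
            w = w.right
        else:
            w = w.left
    if x1 <= w.key <= x2:
        out.append(w.key)

tree = build([1, 3, 5, 7, 9])
found = []
query(tree, 2, 7, found)
print(sorted(found))  # [3, 5, 7]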
https://en.wikipedia.org/wiki/Range_tree
Arange queryis a commondatabaseoperation that retrieves allrecordswhere somevalueis between an upper and lower boundary.[1]For example, list all employees with 3 to 5 years' experience. Range queries are unusual because it is not generally known in advance how many entries a range query will return, or if it will return any at all. Many other queries, such as the top ten most senior employees, or the newest employee, can be done more efficiently because there is an upper bound to the number of results they will return. A query that returns exactly one result is sometimes called asingleton.
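In its simplest form, a range query is just an inclusive filter between two bounds. The record fields below are invented to match the employee example in the text.

employees = [
    {"name": "Ada", "years": 2},
    {"name": "Grace", "years": 4},
    {"name": "Alan", "years": 7},
]
low, high = 3, 5
print([e["name"] for e in employees if low <= e["years"] <= high])  # ['Grace']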
https://en.wikipedia.org/wiki/Range_query
Incomputational geometry, aDelaunay triangulationorDelone triangulationof a set of points in the plane subdivides theirconvex hull[1]into triangles whosecircumcirclesdo not contain any of the points; that is, each circumcircle has its generating points on its circumference, but all other points in the set are outside of it. This maximizes the size of the smallest angle in any of the triangles, and tends to avoidsliver triangles. The triangulation is named afterBoris Delaunayfor his work on it from 1934.[2] If the points all lie on a straight line, the notion of triangulation becomesdegenerateand there is no Delaunay triangulation. For four or more points on the same circle (e.g., the vertices of a rectangle) the Delaunay triangulation is not unique: each of the two possible triangulations that split thequadrangleinto two triangles satisfies the "Delaunay condition", i.e., the requirement that the circumcircles of all triangles have empty interiors. By considering circumscribed spheres, the notion of Delaunay triangulation extends to three and higher dimensions. Generalizations are possible tometricsother thanEuclidean distance. However, in these cases a Delaunay triangulation is not guaranteed to exist or be unique. The Delaunaytriangulationof adiscretepoint setPin general position corresponds to thedual graphof theVoronoi diagramforP. Thecircumcentersof Delaunay triangles are the vertices of the Voronoi diagram. In the 2D case, the Voronoi vertices are connected via edges, that can be derived from adjacency-relationships of the Delaunay triangles: If two triangles share an edge in the Delaunay triangulation, their circumcenters are to be connected with an edge in the Voronoi tesselation. Special cases where this relationship does not hold, or is ambiguous, include cases like: For a setPof points in the (d-dimensional)Euclidean space, aDelaunay triangulationis atriangulationDT(P)such that no point inPis inside thecircum-hypersphereof anyd-simplexinDT(P). It is known[2]that there exists a unique Delaunay triangulation forPifPis a set of points ingeneral position; that is, the affine hull ofPisd-dimensional and no set ofd+ 2points inPlie on the boundary of a ball whose interior does not intersectP. The problem of finding the Delaunay triangulation of a set of points ind-dimensionalEuclidean spacecan be converted to the problem of finding theconvex hullof a set of points in (d+ 1)-dimensional space. This may be done by giving each pointpan extra coordinate equal to|p|2, thus turning it into a hyper-paraboloid (this is termed "lifting"); taking the bottom side of the convex hull (as the top end-cap faces upwards away from the origin, and must be discarded); and mapping back tod-dimensional space by deleting the last coordinate. As the convex hull is unique, so is the triangulation, assuming all facets of the convex hull aresimplices. Nonsimplicial facets only occur whend+ 2of the original points lie on the samed-hypersphere, i.e., the points are not in general position.[3] Letnbe the number of points anddthe number of dimensions. From the above properties an important feature arises: Looking at two triangles△ABD, △BCDwith the common edgeBD(see figures), if the sum of the anglesα + γ ≤ 180°, the triangles meet the Delaunay condition. This is an important property because it allows the use of aflippingtechnique. 
If two triangles do not meet the Delaunay condition, switching the common edgeBDfor the common edgeACproduces two triangles that do meet the Delaunay condition: This operation is called aflip, and can be generalised to three and higher dimensions.[8] Many algorithms for computing Delaunay triangulations rely on fast operations for detecting when a point is within a triangle's circumcircle and an efficient data structure for storing triangles and edges. In two dimensions, one way to detect if pointDlies in the circumcircle ofA, B, Cis to evaluate thedeterminant:[9]{\displaystyle {\begin{vmatrix}A_{x}-D_{x}&A_{y}-D_{y}&(A_{x}-D_{x})^{2}+(A_{y}-D_{y})^{2}\\B_{x}-D_{x}&B_{y}-D_{y}&(B_{x}-D_{x})^{2}+(B_{y}-D_{y})^{2}\\C_{x}-D_{x}&C_{y}-D_{y}&(C_{x}-D_{x})^{2}+(C_{y}-D_{y})^{2}\end{vmatrix}}>0} WhenA, B, Care sorted in acounterclockwiseorder, this determinant is positive if and only ifDlies inside the circumcircle. As mentioned above, if a triangle is non-Delaunay, we can flip one of its edges. This leads to a straightforward algorithm: construct any triangulation of the points, and then flip edges until no triangle is non-Delaunay. Unfortunately, this can takeΩ(n2)edge flips.[10]While this algorithm can be generalised to three and higher dimensions, its convergence is not guaranteed in these cases, as it is conditioned to the connectedness of the underlyingflip graph: this graph is connected for two-dimensional sets of points, but may be disconnected in higher dimensions.[8] The most straightforward way of efficiently computing the Delaunay triangulation is to repeatedly add one vertex at a time, retriangulating the affected parts of the graph. When a vertexvis added, we split in three the triangle that containsv, then we apply the flip algorithm. Done naïvely, this will takeO(n)time: we search through all the triangles to find the one that containsv, then we potentially flip away every triangle. Then the overall runtime isO(n2). If we insert vertices in random order, it turns out (by a somewhat intricate proof) that each insertion will flip, on average, onlyO(1)triangles – although sometimes it will flip many more.[11]This still leaves the point location time to improve. We can store the history of the splits and flips performed: each triangle stores a pointer to the two or three triangles that replaced it. To find the triangle that containsv, we start at a root triangle, and follow the pointer that points to a triangle that containsv, until we find a triangle that has not yet been replaced. On average, this will also takeO(logn)time. Over all vertices, then, this takesO(nlogn)time.[12]While the technique extends to higher dimension (as proved by Edelsbrunner and Shah[13]), the runtime can be exponential in the dimension even if the final Delaunay triangulation is small. TheBowyer–Watson algorithmprovides another approach for incremental construction. It gives an alternative to edge flipping for computing the Delaunay triangles containing a newly inserted vertex. Unfortunately the flipping-based algorithms are generally hard to parallelize, since adding some certain point (e.g. the center point of a wagon wheel) can lead to up toO(n)consecutive flips. Blelloch et al.[14]proposed another version of incremental algorithm based on rip-and-tent, which is practical and highly parallelized with polylogarithmicspan. Adivide and conquer algorithmfor triangulations in two dimensions was developed by Lee and Schachter and improved byGuibasandStolfi[9][15]and later by Dwyer.[16]In this algorithm, one recursively draws a line to split the vertices into two sets. The Delaunay triangulation is computed for each set, and then the two sets are merged along the splitting line.
Using some clever tricks, the merge operation can be done in timeO(n), so the total running time isO(nlogn).[17] For certain types of point sets, such as a uniform random distribution, by intelligently picking the splitting lines the expected time can be reduced toO(nlog logn)while still maintaining worst-case performance. A divide and conquer paradigm to performing a triangulation inddimensions is presented in "DeWall: A fast divide and conquer Delaunay triangulation algorithm in Ed" by P. Cignoni, C. Montani, R. Scopigno.[18] The divide and conquer algorithm has been shown to be the fastest DT generation technique sequentially.[19][20] Sweephull[21]is a hybrid technique for 2D Delaunay triangulation that uses a radially propagating sweep-hull, and a flipping algorithm. The sweep-hull is created sequentially by iterating a radially-sorted set of 2D points, and connecting triangles to the visible part of the convex hull, which gives a non-overlapping triangulation. One can build a convex hull in this manner so long as the order of points guarantees no point would fall within the triangle. But, radially sorting should minimize flipping by being highly Delaunay to start. This is then paired with a final iterative triangle flipping step. TheEuclidean minimum spanning treeof a set of points is a subset of the Delaunay triangulation of the same points,[22]and this can be exploited to compute it efficiently. For modellingterrainor other objects given apoint cloud, the Delaunay triangulation gives a nice set of triangles to use as polygons in the model. In particular, the Delaunay triangulation avoids narrow triangles (as they have large circumcircles compared to their area). Seetriangulated irregular network. Delaunay triangulations can be used to determine the density or intensity of points samplings by means of theDelaunay tessellation field estimator (DTFE). Delaunay triangulations are often used togenerate meshesfor space-discretised solvers such as thefinite element methodand thefinite volume methodof physics simulation, because of the angle guarantee and because fast triangulation algorithms have been developed. Typically, the domain to be meshed is specified as a coarsesimplicial complex; for the mesh to be numerically stable, it must be refined, for instance by usingRuppert's algorithm. The increasing popularity offinite element methodandboundary element methodtechniques increases the incentive to improve automatic meshing algorithms. However, all of these algorithms can create distorted and even unusable grid elements. Fortunately, several techniques exist which can take an existing mesh and improve its quality. For example, smoothing (also referred to as mesh refinement) is one such method, which repositions nodes to minimize element distortion. Thestretched grid methodallows the generation of pseudo-regular meshes that meet the Delaunay criteria easily and quickly in a one-step solution. Constrained Delaunay triangulationhas found applications inpath planningin automated driving and topographic surveying.[23]
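The in-circle determinant used by the flip-based algorithms above can be transcribed directly. This is a bare floating-point sketch; a robust implementation would need exact or adaptive arithmetic, which is ignored here.

def in_circle(a, b, c, d):
    """Positive iff d lies inside the circumcircle of counterclockwise a, b, c."""
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    return ((ax * ax + ay * ay) * (bx * cy - cx * by)
            - (bx * bx + by * by) * (ax * cy - cx * ay)
            + (cx * cx + cy * cy) * (ax * by - bx * ay))

# Unit circle through (1,0), (0,1), (-1,0): the origin is inside, (2,0) is not.
print(in_circle((1, 0), (0, 1), (-1, 0), (0, 0)) > 0)   # True
print(in_circle((1, 0), (0, 1), (-1, 0), (2, 0)) > 0)   # False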
https://en.wikipedia.org/wiki/Delaunay_triangulation
Inmathematics, themap segmentationproblem is a kind ofoptimization problem. It involves a certain geographic region that has to be partitioned into smaller sub-regions in order to achieve a certain goal. Typical optimization objectives include:[1] Fair division of land has been an important issue since ancient times, e.g. inancient Greece.[2] There is a geographic region denoted by C ("cake"). A partition of C, denoted by X = (X1, ..., Xn), is a list of pairwise-disjoint subregions whose union is C:X1∪⋯∪Xn=C{\displaystyle X_{1}\cup \dots \cup X_{n}=C}. There is a certain set of additional parameters (such as: obstacles, fixed points or probability density functions), denoted by P. There is a real-valued function denoted by G ("goal") on the set of all partitions. The map segmentation problem is to find a partition attainingminXG(X){\displaystyle \min _{X}G(X)}, where the minimization is over the set of all partitions X of C. Often, there are geometric shape constraints on the partitions, e.g., it may be required that each part be aconvex setor aconnected setor at least ameasurable set. 1.Red-blue partitioning: there is a setPb{\displaystyle P_{b}}of blue points and a setPr{\displaystyle P_{r}}of red points. Divide the plane inton{\displaystyle n}regions such that each region contains approximately a fraction1/n{\displaystyle 1/n}of the blue points and1/n{\displaystyle 1/n}of the red points.
https://en.wikipedia.org/wiki/Map_segmentation
Thenatural element method (NEM)[1][2][3]is ameshless methodto solvepartial differential equations, where theelementsdo not have a predefined shape as in thefinite element method, but depend on the geometry.[4][5][6] AVoronoi diagrampartitioning the space is used to create each of these elements. Natural neighbor interpolation functionsare then used to model the unknown function within each element. When the simulation is dynamic, this method prevents the elements from becoming ill-formed, since they can easily be redefined at each time step based on the current geometry.
https://en.wikipedia.org/wiki/Natural_element_method
Incomputational geometry, apower diagram, also called aLaguerre–Voronoi diagram,Dirichlet cell complex,radical Voronoi tesselationor asectional Dirichlet tesselation, is a partition of theEuclidean planeintopolygonalcells defined from a set of circles. The cell for a given circleCconsists of all the points for which thepower distancetoCis smaller than the power distance to the other circles. The power diagram is a form of generalizedVoronoi diagram, and coincides with the Voronoi diagram of the circle centers in the case that all the circles have equal radii.[1][2][3][4] IfCis a circle andPis a point outsideC, then thepowerofPwith respect toCis the square of the length of a line segment fromPto a pointTof tangency withC. Equivalently, ifPhas distancedfrom the center of the circle, and the circle has radiusr, then (by thePythagorean theorem) the power isd2−r2. The same formulad2−r2may be extended to all points in the plane, regardless of whether they are inside or outside ofC: points onChave zero power, and points insideChave negative power.[2][3][4] The power diagram of a set ofncirclesCiis a partition of the plane intonregionsRi(called cells), such that a pointPbelongs toRiwhenever circleCiis the circle minimizing the power ofP.[2][3][4] In the casen= 2, the power diagram consists of twohalfplanes, separated by a line called theradical axisor chordale of the two circles. Along the radical axis, both circles have equal power. More generally, in any power diagram, each cellRiis aconvex polygon, the intersection of the halfspaces bounded by the radical axes of circleCiwith each other circle. Triples of cells meet atverticesof the diagram, which are the radical centers of the three circles whose cells meet at the vertex.[2][3][4] The power diagram may be seen as a weighted form of theVoronoi diagramof a set of point sites, a partition of the plane into cells within which one of the sites is closer than all the other sites. Other forms ofweighted Voronoi diagraminclude the additively weighted Voronoi diagram, in which each site has a weight that is added to its distance before comparing it to the distances to the other sites, and the multiplicatively weighted Voronoi diagram, in which the weight of a site is multiplied by its distance before comparing it to the distances to the other sites. In contrast, in the power diagram, we may view each circle center as a site, and each circle's squared radius as a weight that is subtracted from thesquared Euclidean distancebefore comparing it to other squared distances. In the case that all the circle radii are equal, this subtraction makes no difference to the comparison, and the power diagram coincides with the Voronoi diagram.[3][4] A planar power diagram may also be interpreted as a planar cross-section of an unweighted three-dimensional Voronoi diagram. In this interpretation, the set of circle centers in the cross-section plane are the perpendicular projections of the three-dimensional Voronoi sites, and the squared radius of each circle is a constantKminus the squared distance of the corresponding site from the cross-section plane, whereKis chosen large enough to make all these radii positive.[5] Like the Voronoi diagram, the power diagram may be generalized to Euclidean spaces of any dimension. 
The power diagram ofnspheres inddimensions is combinatorially equivalent to the intersection of a set ofnupward-facing halfspaces ind+ 1 dimensions, and vice versa.[3] Two-dimensional power diagrams may be constructed by an algorithm that runs in time O(nlogn).[2][3]More generally, because of the equivalence with higher-dimensional halfspace intersections,d-dimensional power diagrams (ford> 2) may be constructed by an algorithm that runs in timeO(n⌈d/2⌉){\displaystyle O(n^{\lceil d/2\rceil })}.[3] The power diagram may be used as part of an efficient algorithm for computing the volume of a union of spheres. Intersecting each sphere with its power diagram cell gives its contribution to the total union, from which the volume may be computed in time proportional to the complexity of the power diagram.[6] Other applications of power diagrams includedata structuresfor testing whether a point belongs to a union of disks,[2]algorithms for constructing the boundary of a union of disks,[2]and algorithms for finding the closest two balls in a set of balls.[7]It is also used for solving the semi-discreteoptimal transportationproblem[8]which in turn has numerous applications, such as early universe reconstruction[9]or fluid dynamics.[10] Aurenhammer (1987)traces the definition of the power distance to the work of 19th-century mathematiciansEdmond LaguerreandGeorgy Voronoy.[3]Fejes Tóth (1977)defined power diagrams and used them to show that the boundary of a union ofncircular disks can always be illuminated from at most 2npoint light sources.[11]Power diagrams have appeared in the literature under other names including the "Laguerre–Voronoi diagram", "Dirichlet cell complex", "radical Voronoi tesselation" and "sectional Dirichlet tesselation".[12]
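Locating the power diagram cell containing a query point reduces to minimizing the power distance d² − r² defined above. The function names and the two-circle example below are illustrative only.

def power(p, circle):
    (cx, cy), r = circle
    return (p[0] - cx) ** 2 + (p[1] - cy) ** 2 - r ** 2

def cell_index(p, circles):
    """Index of the power diagram cell containing p."""
    return min(range(len(circles)), key=lambda i: power(p, circles[i]))

# Radii matter: the radical axis of these two circles is the line x = 1,
# so (1.4, 0) belongs to the larger circle's cell even though it is
# closer to the first circle's center.
circles = [((0.0, 0.0), 1.0), ((3.0, 0.0), 2.0)]
print(cell_index((1.4, 0.0), circles))  # 1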
https://en.wikipedia.org/wiki/Power_diagram
Incomputational geometry, the positive and negativeVoronoi polesof acellin aVoronoi diagramare certain vertices of the diagram, chosen in pairs in each cell of the diagram to be far from the site generating that pair. They have applications insurface reconstruction. LetV{\displaystyle V}be the Voronoi diagram for a set of sitesP{\displaystyle P}, and letVp{\displaystyle V_{p}}be the Voronoi cell ofV{\displaystyle V}corresponding to a sitep∈P{\displaystyle p\in P}. IfVp{\displaystyle V_{p}}is bounded, then itspositive poleis the vertex of the boundary ofVp{\displaystyle V_{p}}that has maximal distance to the pointp{\displaystyle p}. If the cell is unbounded, then a positive pole is not defined.[1] Furthermore, letu¯{\displaystyle {\bar {u}}}be the vector fromp{\displaystyle p}to the positive pole, or, if the cell is unbounded, letu¯{\displaystyle {\bar {u}}}be a vector in the average direction of all unbounded Voronoi edges of the cell. Thenegative poleis then the Voronoi vertexv{\displaystyle v}inVp{\displaystyle V_{p}}with the largest distance top{\displaystyle p}such that the vectoru¯{\displaystyle {\bar {u}}}and the vector fromp{\displaystyle p}tov{\displaystyle v}make an angle larger thanπ2{\displaystyle {\tfrac {\pi }{2}}}.[1] The poles were introduced in 1998 in two papers byNina Amenta, Marshall Bern, andManolis Kellis, for the problem ofsurface reconstruction. As they showed, anysmooth surfacethat is sampled with sampling density inversely proportional to itscurvaturecan be accurately reconstructed, by constructing theDelaunay triangulationof the combined set of sample points and their poles, and then removing certain triangles that are nearly parallel to the line segments between pairs of nearby poles.[2][3]
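A sketch of pole extraction in 2-D following the definitions above, using SciPy's Voronoi diagram. Unbounded cells are simply skipped here instead of applying the average-direction rule; apart from scipy.spatial.Voronoi, all names are our own illustrative code.

import numpy as np
from scipy.spatial import Voronoi

def poles(points):
    vor = Voronoi(points)
    out = {}
    for i, p in enumerate(points):
        region = vor.regions[vor.point_region[i]]
        if not region or -1 in region:     # unbounded cell: positive pole undefined
            continue
        verts = vor.vertices[region]
        dists = np.linalg.norm(verts - p, axis=1)
        pos = verts[np.argmax(dists)]      # positive pole: farthest cell vertex
        u = pos - p
        # Negative pole: farthest vertex whose direction from p makes an
        # angle greater than pi/2 with u (i.e. negative dot product).
        mask = (verts - p) @ u < 0
        neg = verts[mask][np.argmax(dists[mask])] if mask.any() else None
        out[i] = (pos, neg)
    return out

pts = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]])
print(poles(pts))  # only the central site has a bounded cell in this example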
https://en.wikipedia.org/wiki/Voronoi_pole
Curveletsare a non-adaptivetechnique for multi-scaleobjectrepresentation. Being an extension of thewaveletconcept, they are becoming popular in similar fields, namely inimage processingandscientific computing. Wavelets generalize theFourier transformby using a basis that represents both location and spatial frequency. For 2D or 3D signals, directional wavelet transforms go further, by using basis functions that are also localized inorientation. A curvelet transform differs from other directional wavelet transforms in that the degree of localisation in orientation varies with scale. In particular, fine-scale basis functions are long ridges; the shape of the basis functions at scalejis2−j{\displaystyle 2^{-j}}by2−j/2{\displaystyle 2^{-j/2}}so the fine-scale bases are skinny ridges with a precisely determined orientation. Curvelets are an appropriate basis for representing images (or other functions) which are smooth apart from singularities along smooth curves,where the curves have bounded curvature, i.e. where objects in the image have a minimum length scale. This property holds for cartoons, geometrical diagrams, and text. As one zooms in on such images, the edges they contain appear increasingly straight. Curvelets take advantage of this property, by defining the higher resolution curvelets to be more elongated than the lower resolution curvelets. However, natural images (photographs) do not have this property; they have detail at every scale. Therefore, for natural images, it is preferable to use some sort of directional wavelet transform whose wavelets have the same aspect ratio at every scale. When the image is of the right type, curvelets provide a representation that is considerably sparser than other wavelet transforms. This can be quantified by considering the best approximation of a geometrical test image that can be represented using onlyn{\displaystyle n}wavelets, and analysing the approximation error as a function ofn{\displaystyle n}. For a Fourier transform, the squared error decreases only asO(1/n){\displaystyle O(1/{\sqrt {n}})}. For a wide variety of wavelet transforms, including both directional and non-directional variants, the squared error decreases asO(1/n){\displaystyle O(1/n)}. The extra assumption underlying the curvelet transform allows it to achieveO((log⁡n)3/n2){\displaystyle O({(\log n)}^{3}/{n^{2}})}. Efficient numerical algorithms exist for computing the curvelet transform of discrete data. The computational cost of the discrete curvelet transforms proposed by Candès et al. (Discrete curvelet transform based on unequally-spaced fast Fourier transforms and based on the wrapping of specially selected Fourier samples) is approximately 6–10 times that of an FFT, and has the same dependence ofO(n2log⁡n){\displaystyle O(n^{2}\log n)}for an image of sizen×n{\displaystyle n\times n}.[1] To construct a basic curveletϕ{\displaystyle \phi }and provide a tiling of the 2-D frequency space, two main ideas should be followed: The number of wedges isNj=4⋅2⌈j2⌉{\displaystyle N_{j}=4\cdot 2^{\left\lceil {\frac {j}{2}}\right\rceil }}at the scale2−j{\displaystyle 2^{-j}}, i.e., it doubles in each second circular ring. Letξ=(ξ1,ξ2)T{\displaystyle {\boldsymbol {\xi }}=\left(\xi _{1},\xi _{2}\right)^{T}}be the variable in frequency domain, andr=ξ12+ξ22,ω=arctan⁡ξ1ξ2{\displaystyle r={\sqrt {\xi _{1}^{2}+\xi _{2}^{2}}},\omega =\arctan {\frac {\xi _{1}}{\xi _{2}}}}be the polar coordinates in the frequency domain. 
We use theansatzfor thedilated basic curveletsin polar coordinates:ϕ^j,0,0:=2−3j4W(2−jr)V~Nj(ω),r≥0,ω∈[0,2π),j∈N0{\displaystyle {\hat {\phi }}_{j,0,0}:=2^{\frac {-3j}{4}}W(2^{-j}r){\tilde {V}}_{N_{j}}(\omega ),r\geq 0,\omega \in [0,2\pi ),j\in N_{0}} To construct a basic curvelet with compact support near a ″basic wedge″, the two windowsW{\displaystyle W}andV~Nj{\displaystyle {\tilde {V}}_{N_{j}}}need to have compact support. Here, we can simply takeW(r){\displaystyle W(r)}to cover(0,∞){\displaystyle (0,\infty )}with dilated curvelets andV~Nj{\displaystyle {\tilde {V}}_{N_{j}}}such that each circular ring is covered by the translations ofV~Nj{\displaystyle {\tilde {V}}_{N_{j}}}. Then the admissibility yields∑j=−∞∞|W(2−jr)|2=1,r∈(0,∞){\displaystyle \sum _{j=-\infty }^{\infty }\left|W(2^{-j}r)\right|^{2}=1,r\in (0,\infty )}(seeWindow Functionsfor more information). For tiling a circular ring intoN{\displaystyle N}wedges, whereN{\displaystyle N}is an arbitrary positive integer, we need a2π{\displaystyle 2\pi }-periodic nonnegative windowV~N{\displaystyle {\tilde {V}}_{N}}with support inside[−2πN,2πN]{\displaystyle \left[{\frac {-2\pi }{N}},{\frac {2\pi }{N}}\right]}such that∑l=0N−1V~N2(ω−2πlN)=1{\displaystyle \sum _{l=0}^{N-1}{\tilde {V}}_{N}^{2}\left(\omega -{\frac {2\pi l}{N}}\right)=1}for allω∈[0,2π){\displaystyle \omega \in \left[0,2\pi \right)}. Such aV~N{\displaystyle {\tilde {V}}_{N}}can be simply constructed as the2π{\displaystyle 2\pi }-periodization of a scaled windowV(Nω2π){\displaystyle V\left({\frac {N\omega }{2\pi }}\right)}. Then, it follows that∑l=0Nj−1|23j4ϕ^j,0,0(r,ω−2πlNj)|2=|W(2−jr)|2∑l=0Nj−1V~Nj2(ω−2πlN)=|W(2−jr)|2{\displaystyle \sum _{l=0}^{N_{j}-1}\left|2^{\frac {3j}{4}}{\hat {\phi }}_{j,0,0}\left(r,\omega -{\frac {2\pi l}{N_{j}}}\right)\right|^{2}=\left|W(2^{-j}r)\right|^{2}\sum _{l=0}^{N_{j}-1}{\tilde {V}}_{N_{j}}^{2}\left(\omega -{\frac {2\pi l}{N}}\right)=\left|W(2^{-j}r)\right|^{2}} For a complete covering of the frequency plane including the region around zero, we need to define a low pass elementϕ^−1:=W0(|ξ|){\displaystyle {\hat {\phi }}_{-1}:=W_{0}(\left|\xi \right|)}withW02(r):=1−∑j=0∞W(2−jr)2{\displaystyle W_{0}^{2}(r):=1-\sum _{j=0}^{\infty }W(2^{-j}r)^{2}}that is supported on the unit circle, and where we do not consider any rotation.
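The radial admissibility condition above can be checked numerically with a standard telescoping construction: set W(r)² = φ(log₂ r + 1) − φ(log₂ r) for a smooth nondecreasing ramp φ, so the dyadic sum collapses to 1. The smoothstep ramp below is one illustrative choice, not the window of any particular curvelet implementation.

import numpy as np

def phi(t):
    """Smooth ramp: 0 for t <= 0, 1 for t >= 1, C^1 in between."""
    t = np.clip(t, 0.0, 1.0)
    return t * t * (3 - 2 * t)

def W_squared(r, j):
    """|W(2**-j * r)|**2 under the telescoping construction."""
    x = np.log2(r) - j
    return phi(x + 1) - phi(x)

r = np.geomspace(0.01, 100.0, 7)               # sample radii in (0, inf)
total = sum(W_squared(r, j) for j in range(-20, 21))
print(np.allclose(total, 1.0))                 # True: the dyadic rings tile (0, inf)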
https://en.wikipedia.org/wiki/Curvelet
Digital cinemais thedigitaltechnology used within thefilm industrytodistributeorprojectmotion picturesas opposed to the historical use of reels ofmotion picture film, such as35 mm film. Whereas film reels have to be shipped tomovie theaters, a digital movie can be distributed to cinemas in a number of ways: over theInternetor dedicatedsatellitelinks, or by sendinghard drivesoroptical discssuch asBlu-raydiscs, then projected using a digital video projector instead of afilm projector. Typically, digital movies are shot usingdigital movie camerasor in animation transferred from a file and are edited using anon-linear editing system(NLE). The NLE is often a video editing application installed in one or more computers that may be networked to access the original footage from a remote server, share or gain access to computing resources for rendering the final video, and allow several editors to work on the same timeline or project. Alternatively a digital movie could be a film reel that has been digitized using amotion picture film scannerand then restored, or, a digital movie could be recorded using afilm recorderonto film stock for projection using a traditional film projector. Digital cinema is distinct fromhigh-definition televisionand does not necessarily use traditional television or other traditionalhigh-definition videostandards, aspect ratios, or frame rates. In digital cinema, resolutions are represented by the horizontal pixel count, usually2K(2048×1080 or 2.2megapixels) or4K(4096×2160 or 8.8 megapixels). The 2K and 4K resolutions used in digital cinema projection are often referred to as DCI 2K and DCI 4K. DCI stands for Digital Cinema Initiatives. As digital cinema technology improved in the early 2010s, most theaters across the world converted to digital video projection. Digital cinema technology has continued to develop over the years with 3D, RPX, 4DX and ScreenX, allowing moviegoers more immersive experiences. The transition from film todigital videowas preceded by cinema's transition from analog todigital audio, with the release of theDolby Digital(AC-3)audio coding standardin 1991.[1]Its main basis is themodified discrete cosine transform(MDCT), alossyaudio compressionalgorithm.[2]It is a modification of thediscrete cosine transform(DCT) algorithm, which was first proposed byNasir Ahmedin 1972 and was originally intended forimage compression.[3]The DCT was adapted into the MDCT by J.P. Princen, A.W. Johnson and Alan B. Bradley at theUniversity of Surreyin 1987,[4]and thenDolby Laboratoriesadapted the MDCT algorithm along withperceptual codingprinciples to develop the AC-3 audio format for cinema needs.[1]Cinema in the 1990stypically combined analog photochemical images with digital audio. Digital media playback of high-resolution 2K files has at least a 20-year history. Early video data storage units (RAIDs) fed custom frame buffer systems with large memories. In early digital video units, the content was usually restricted to several minutes of material. Transfer of content between remote locations was slow and had limited capacity. It was not until the late 1990s that feature-length films could be sent over the "wire" (Internet or dedicated fiber links). 
On October 23, 1998,Digital light processing(DLP) projector technology was publicly demonstrated with the release ofThe Last Broadcast, the first feature-length movie shot, edited and distributed digitally.[5][6][7]In conjunction with Texas Instruments, the movie was publicly demonstrated in five theaters across the United States (Philadelphia,Portland (Oregon),Minneapolis,Providence, andOrlando). In the United States, on June 18, 1999, Texas Instruments'DLP Cinemaprojector technology was publicly demonstrated on two screens in Los Angeles and New York for the release of Lucasfilm'sStar Wars Episode I: The Phantom Menace.[8][9]In Europe, on February 2, 2000, Texas Instruments'DLP Cinemaprojector technology was publicly demonstrated, by Philippe Binant, on one screen in Paris for the release ofToy Story 2.[10][11] From 1997 to 2000, theJPEG 2000image compressionstandard was developed by aJoint Photographic Experts Group(JPEG) committee chaired by Touradj Ebrahimi (later the JPEG president).[12]In contrast to the original 1992JPEGstandard, which is a DCT-basedlossy compressionformat for staticdigital images, JPEG 2000 is adiscrete wavelet transform(DWT) based compression standard that could be adapted for motion imagingvideo compressionwith theMotion JPEG 2000extension. JPEG 2000 technology was later selected as thevideo coding standardfor digital cinema in 2004.[13] On January 19, 2000, theSociety of Motion Picture and Television Engineers, in the United States, initiated the first standards group dedicated to developing digital cinema.[14]By December 2000, there were 15 digital cinema screens in the United States and Canada, 11 in Western Europe, 4 in Asia, and 1 in South America.[15]Digital Cinema Initiatives(DCI) was formed in March 2002 as a joint project of many motion picture studios (Disney,Fox,MGM,Paramount,Sony Pictures,UniversalandWarner Bros.) to develop a system specification for digital cinema.[16]The same month it was reported that the number of cinemas equipped with digital projectors had increased to about 50 in the US and 30 more in the rest of the world.[17] In April 2004, in cooperation with theAmerican Society of Cinematographers, DCI created standard evaluation material (the ASC/DCI StEM material) for testing of 2K and 4K playback and compression technologies. DCI selectedJPEG 2000as the basis for the compression in the system the same year.[18]Initial tests with JPEG 2000 producedbit ratesof around 75–125Mbit/sfor2K resolutionand 100–200 Mbit/s for4K resolution.[13] In China, in June 2005, an e-cinema system called "dMs" was established and was used in over 15,000 screens spread across China's 30 provinces. dMs estimated that the system would expand to 40,000 screens in 2009.[19]In 2005 the UK Film Council Digital Screen Network was launched in the UK by Arts Alliance Media, creating a chain of 250 2K digital cinema systems. The roll-out was completed in 2006. This was the first mass roll-out in Europe. AccessIT/Christie Digital also started a roll-out in the United States and Canada. By mid 2006, about 400 theaters were equipped with 2K digital projectors, with the number increasing every month. In August 2006, theMalayalamdigital movieMoonnamathoral, produced by Benzy Martin, was distributed via satellite to cinemas, thus becoming the first Indian digital cinema.
This was done by Emil and Eric Digital Films, a company based at Thrissur using the end-to-end digital cinema system developed by Singapore-based DG2L Technologies.[20] In January 2007,Gurubecame the firstIndian filmmastered in the DCI-compliant JPEG 2000 Interop format and also the first Indian film to be previewed digitally, internationally, at the Elgin Winter Garden in Toronto. This film was digitally mastered at Real Image Media Technologies in India. In 2007, the UK became home to Europe's first DCI-compliant fully digital multiplex cinemas; Odeon Hatfield and Odeon Surrey Quays (in London), with a total of 18 digital screens, were launched on 9 February 2007. By March 2007, with the release of Disney'sMeet the Robinsons, about 600 screens had been equipped with digital projectors. In June 2007, Arts Alliance Media announced the first European commercial digital cinemaVirtual Print Fee(VPF) agreements (with20th Century FoxandUniversal Pictures). In March 2009AMC Theatresannounced that it closed a $315 million deal withSonyto replace all of itsmovie projectorswith 4K digital projectors starting in the second quarter of 2009; it was anticipated that this replacement would be finished by 2012.[21] As digital cinema technology improved in the early 2010s, most theaters across the world converted to digital video projection.[22]In January 2011, the total number of digital screens worldwide was 36,242, up from 16,339 at end 2009, a growth rate of 121.8 percent during the year.[23]There were 10,083 d-screens in Europe as a whole (28.2 percent of the global figure), 16,522 in the United States and Canada (46.2 percent) and 7,703 in Asia (21.6 percent). Progress was slower in some territories, particularly Latin America and Africa.[24][25]As of 31 March 2015, 38,719 screens (out of a total of 39,789 screens) in the United States had been converted to digital, 3,007 screens in Canada had been converted, and 93,147 screens internationally had been converted.[26]By the end of 2017, virtually all of the world's cinema screens were digital (98%).[27]Digital cinema technology has continued to develop over the years with 3D, RPX, 4DX and ScreenX, allowing moviegoers more immersive experiences.[28] Although virtually all global movie theaters have converted their screens to digital cinema, some major motion pictures were still being shot on film as of 2019.[29][30]For example,Quentin Tarantinoreleased his filmOnce Upon a Time in Hollywoodin 70 mm and 35 mm in selected theaters across the United States and Canada.[31] In addition to the equipment already found in a film-based movie theatre (e.g., asound reinforcement system, screen, etc.), a DCI-compliant digital cinema requires a DCI-compliant[32]digital projector and a powerful computer known as aserver. Movies are supplied to the theatre as a set of digital files called aDigital Cinema Package(DCP).[33]For a typical feature film, these files will be anywhere between 90 GB and 300 GB of data (roughly two to six times the information of a Blu-ray disc) and may arrive as a physical delivery on a conventional computer hard drive or via satellite or fibre-optic broadband Internet.[34]As of 2013, physical deliveries of hard drives were most common in the industry. Promotional trailers arrive on a separate hard drive and range between 200 MB and 400 MB in size.
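A back-of-the-envelope check of the DCP sizes quoted above, assuming picture data encoded at the DCI maximum rate of 250 Mbit/s; the runtimes are illustrative.

MAX_RATE_MBIT_S = 250
for minutes in (90, 120, 160):
    gigabytes = MAX_RATE_MBIT_S / 8 * 60 * minutes / 1000   # Mbit/s -> MB/s -> GB
    print(f"{minutes} min at 250 Mbit/s ≈ {gigabytes:.0f} GB")
# 90 min ≈ 169 GB, 120 min ≈ 225 GB, 160 min ≈ 300 GB — consistent with the
# 90-300 GB figure, since real features compress below the maximum rate.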
Ingest of DCP files may be done at each cabin’s projector-server or may be stored in a central server called aDigital Cinema Librarysharing its content over the cinema local area network (LAN) and managed by the Theater Management System (software). Regardless of how the DCP arrives, it first needs to be copied onto the internal hard drives of the server, either via an eSATA connection, or via a closed network, a process known as "ingesting."[citation needed]DCPs can be, and in the case of feature films almost always are, encrypted, to prevent illegal copying and piracy. The necessary decryption keys are supplied separately, usually as email attachments or via download, and then "ingested" via USB. Keys are time-limited and will expire after the end of the period for which the title has been booked. They are also locked to the hardware (server and projector) that is to screen the film, so if the theatre wishes to move the title to another screen or extend the run, a new key must be obtained from the distributor.[35]Several versions of the same feature can be sent together. The original version (OV) is used as the basis of all the other playback options. Version files (VF) may have a different sound format (e.g. 7.1 as opposed to5.1 surround sound) or subtitles. 2D and 3D versions are often distributed on the same hard drive. The playback of the content is controlled by the server using a "playlist". As the name implies, this is a list of all the content that is to be played as part of the performance. The playlist will be created by a member of the theatre's staff using proprietary software that runs on the server. In addition to listing the content to be played the playlist also includes automation cues that allow the playlist to control the projector, the sound system, auditorium lighting, tab curtains and screen masking (if present), etc. The playlist can be started manually, by clicking the "play" button on the server's monitor screen, or automatically at pre-set times.[36] TheTheater Manager System(TMS) is a central system software for the whole cinema house, handling the central cinema content library and preparing the playback sessions (playlists) with the correct KDM keys and the selected cinema content moved to each projector. Digital Cinema Initiatives(DCI), ajoint ventureof the sixmajor studios, published the first version (V1.0) of a system specification for digital cinema in July 2005.[16]The main declared objectives of the specification were to define a digital cinema system that would "present a theatrical experience that is better than what one could achieve now with a traditional 35mm Answer Print", to provide global standards for interoperability such that any DCI-compliant content could play on any DCI-compliant hardware anywhere in the world and to provide robust protection for the intellectual property of the content providers. The DCI specification calls for picture encoding using the ISO/IEC 15444-1 "JPEG2000" (.j2c) standard and use of theCIE XYZcolor space at 12 bits per component encoded with a 2.6gammaapplied at projection. Two levels of resolution for both content and projectors are supported: 2K (2048×1080) or 2.2 MP at 24 or 48frames per second, and 4K (4096×2160) or 8.85 MP at 24 frames per second. The specification ensures that 2K content can play on 4K projectors and vice versa. Smaller resolutions in one direction are also supported (the image gets automatically centered). 
Later versions of the standard added additional playback rates (like 25 fps in SMPTE mode). For the sound component of the content the specification provides for up to 16 channels of uncompressed audio using the"Broadcast Wave" (.wav)format at 24 bits and 48 kHz or 96 kHz sampling. Playback is controlled by anXML-format Composition Playlist, into anMXF-compliant file at a maximum data rate of 250 Mbit/s. Details about encryption,key management, and logging are all discussed in the specification as are the minimum specifications for the projectors employed including thecolor gamut, thecontrast ratioand the brightness of the image. While much of the specification codifies work that had already been ongoing in the Society of Motion Picture and Television Engineers (SMPTE), the specification is important in establishing a content owner framework for the distribution and security of first-release motion-picture content. In addition to DCI's work, theNational Association of Theatre Owners(NATO) released its Digital Cinema System Requirements.[37]The document addresses the requirements of digital cinema systems from the operational needs of the exhibitor, focusing on areas not addressed by DCI, including access for the visually impaired and hearing impaired, workflow inside the cinema, and equipment interoperability. In particular, NATO's document details requirements for the Theatre Management System (TMS), the governing software for digital cinema systems within a theatre complex, and provides direction for the development of security key management systems. As with DCI's document, NATO's document is also important to the SMPTE standards effort. The Society of Motion Picture and Television Engineers (SMPTE) began work on standards for digital cinema in 2000. It was clear by that point in time that HDTV did not provide a sufficient technological basis for the foundation of digital cinema playback. In Europe, India and Japan however, there is still a significant presence of HDTV for theatrical presentations. Agreements within the ISO standards body have led to these non-compliant systems being referred to as Electronic Cinema Systems (E-Cinema). Only four manufacturers make DCI-approved digital cinema projectors; these areBarco,Christie, Sharp/NECandSony. Except for Sony, who used to use their ownSXRDtechnology, all use theDigital light processing(DLP) technology developed byTexas Instruments(TI). D-Cinema projectors are similar in principle to digital projectors used in industry, education, and domestic home cinemas, but differ in two important respects. First, projectors must conform to the strict performance requirements of the DCI specification. Second, projectors must incorporate anti-piracy devices intended to enforce copyright compliance such as licensing limits. For these reasons all projectors intended to be sold to theaters for screening current release moviesmustbe approved by the DCI before being put on sale. They now pass through a process called CTP (compliance test plan). Because feature films in digital form are encrypted and the decryption keys (KDMs) are locked to the serial number of the server used (linking to both the projector serial number and server is planned in the future), a system will allow playback of a protected feature only with the required KDM. Three manufacturers have licensed the DLP Cinema technology developed byTexas Instruments(TI):Christie Digital Systems,Barco, andNEC. 
While NEC is a relative newcomer to Digital Cinema, Christie is the main player in the U.S. and Barco takes the lead in Europe and Asia.[citation needed]Initially DCI-compliant DLP projectors were available in 2K only, but from early 2012, when TI's 4K DLP chip went into full production, DLP projectors have been available in both 2K and 4K versions. Manufacturers of DLP-based cinema projectors can now also offer 4K upgrades to some of the more recent 2K models.[38]EarlyDLP Cinema projectors, which were deployed primarily in the United States, used limited 1280×1024 resolution or the equivalent of 1.3 MP (megapixels). Digital Projection Incorporated (DPI) designed and sold a few DLP Cinema units (is8-2K) when TI's 2K technology debuted but then abandoned the D-Cinema market while continuing to offer DLP-based projectors for non-cinema purposes. Although based on the same 2K TI "light engine" as those of the major players they are so rare as to be virtually unknown in the industry. They are still widely used for pre-show advertising but not usually for feature presentations. TI's technology is based on the use of digital micromirror devices (DMDs).[39]These areMEMSdevices that are manufactured from silicon using similar technology to that of computer chips. The surface of these devices is covered by a very large number of microscopic mirrors, one for each pixel, so a 2K device has about 2.2 million mirrors and a 4K device about 8.8 million. Each mirror vibrates several thousand times a second between two positions: In one, light from the projector's lamp is reflected towards the screen, in the other away from it. The proportion of the time the mirror is in each position varies according to the required brightness of each pixel. Three DMD devices are used, one for each of the primary colors. Light from the lamp, usually aXenon arc lampsimilar to those used in film projectors with a power between 1 kW and 7 kW, is split by colored filters into red, green and blue beams which are directed at the appropriate DMD. The 'forward' reflected beam from the three DMDs is then re-combined and focused by the lens onto the cinema screen. Later projectors may use lasers instead of xenon lamps. Alone amongst the manufacturers of DCI-compliant cinema projectors Sony decided to develop its own technology rather than use TI's DLP technology.SXRD(Silicon X-tal (Crystal) Reflective Display) projectors have only ever been manufactured in 4K form and, until the launch of the 4K DLP chip by TI, Sony SXRD projectors were the only 4K DCI-compatible projectors on the market. Unlike DLP projectors, however, SXRD projectors do not present the left and right eye images of stereoscopic movies sequentially, instead they use half the available area on the SXRD chip for each eye image. Thus during stereoscopic presentations the SXRD projector functions as a sub 2K projector, the same for HFR 3D Content.[40] However, Sony decided in late April 2020 that they would no longer manufacture digital cinema projectors.[41][42] In late 2005, interest in digital 3Dstereoscopicprojection led to a new willingness on the part of theaters to co-operate in installing 2K stereo installations to show Disney'sChicken Littlein3D film. Six more digital 3D movies were released in 2006 and 2007 (includingBeowulf,Monster HouseandMeet the Robinsons). 
The technology combines a single digital projector fitted with either a polarizing filter (for use withpolarized glassesand silver screens), a filter wheel or an emitter for LCD glasses.RealDuses a "ZScreen" for polarisation and MasterImage uses a filter wheel that changes the polarity of the projector's light output several times per second to alternate quickly the left-and-right-eye views. Another system that uses a filter wheel isDolby 3D. The wheel changes the wavelengths of the colours being displayed, and tinted glasses filter these changes so the incorrect wavelength cannot enter the wrong eye.XpanDmakes use of an external emitter that sends a signal to the 3D glasses to block out the wrong image from the wrong eye. RGB laser projection produces the purestBT.2020 colorsand the brightest images.[43] In Asia, on July 13, 2017, an LED screen for digital cinema developed bySamsung Electronicswas publicly demonstrated on one screen atLotte CinemaWorld Tower inSeoul.[44]The first installation in Europe is in ArenaSihlcityCinema in Zürich.[45]These displays do not use a projector; instead they use aLEDvideo wall, and can offer higher contrast ratios, higher resolutions, and overall improvements in image quality. Sony already sells MicroLED displays as a replacement for conventional cinema screens.[46] Digital distributionof movies has the potential to save money for film distributors. Making thousands of prints for a wide-release movie can be expensive. In contrast, at the maximum 250 megabit-per-second data rate (as defined byDCIfor digital cinema), a feature-length movie can be stored on anoff-the-shelf300GBhard drive for $50 and a broad release of 4000 'digital prints' might cost $200,000.[citation needed]In addition hard drives can be returned to distributors for reuse. With several hundred movies distributed every year, the industry saves billions of dollars. The digital-cinema roll-out was stalled by the slow pace at which exhibitors acquired digital projectors, since the savings would be seen not by themselves but by distribution companies. TheVirtual Print Feemodel was created to address this by passing some of the saving on to the cinemas.[citation needed]As a consequence of the rapid conversion to digital projection, the number of theatrical releases exhibited on film is dwindling. As of 4 May 2014, 37,711 screens (out of a total of 40,048 screens) in the United States have been converted to digital, 3,013 screens in Canada have been converted, and 79,043 screens internationally have been converted.[26] The first digital cinema transmission bysatellitein Europe[47][48][49]of afeature filmwas realized and demonstrated on October 29, 2001, by Bernard Pauchon,[50]Alain Lorentz, Raymond Melwig[51]and Philippe Binant.[52][53] Subsequently, reliable file delivery of DCPs via the Internet emerged, thanks to higher-bandwidth connections in cinemas, first with bonded DSL lines, then with fiber connections to the Internet. Digital cinemas can deliver livebroadcasts(mainly via satellite digital television) or broadband Internet (streaming media) from performances or events. This began initially with live broadcasts from the New York Metropolitan Opera delivering regular live broadcasts into cinemas and has been widely imitated ever since. Leading territories providing the content are the UK, the US, France and Germany.
The Royal Opera House, Sydney Opera House, English National Opera and others have found new and returning audiences captivated by the detail offered by a live digital broadcast featuring handheld and cameras on cranes positioned throughout the venue to capture the emotion that might be missed in a live venue situation. In addition these providers all offer additional value during the intervals e.g. interviews with choreographers, cast members, a backstage tour which would not be on offer at the live event itself. Other live events in this field include live theatre from NT Live, Branagh Live, Royal Shakespeare Company, Shakespeare's Globe, the Royal Ballet, Mariinsky Ballet, the Bolshoi Ballet and the Berlin Philharmoniker. In the last ten years this initial offering of the arts has also expanded to include live and recorded music events such as Take That Live, One Direction Live, Andre Rieu, live musicals such as the recent Miss Saigon and a record-breaking Billy Elliot Live In Cinemas. Live sport, documentary with a live question and answer element such as the recent Oasis documentary, lectures, faith broadcasts, stand-up comedy, museum and gallery exhibitions, TV specials such as the record-breakingDoctor Whofiftieth anniversary specialThe Day Of The Doctor, have all contributed to creating a valuable revenue stream for cinemas large and small all over the world. Subsequently, live broadcasting, formerly known as Alternative Content, has become known as Event Cinema and a trade association now exists to that end. Ten years on the sector has become a sizeable revenue stream in its own right, earning a loyal following amongst fans of the arts, and the content limited only by the imagination of the producers it would seem. Theatre, ballet, sport, exhibitions, TV specials and documentaries are now established forms of Event Cinema. Worldwide estimations put the likely value of the Event Cinema industry at $1bn by 2019.[54] Event Cinema currently accounts for on average between 1-3% of overall box office for cinemas worldwide but anecdotally it's been reported that some cinemas attribute as much as 25%, 48% and even 51% (the Rio Bio cinema in Stockholm) of their overall box office. It is envisaged ultimately that Event Cinema will account for around 5% of the overall box office globally. Event Cinema saw six worldwide records set and broken over from 2013 to 2015 with notable successes Dr Who ($10.2m in three days at the box office – event was also broadcast on terrestrial TV simultaneously), Pompeii Live by the British Museum, Billy Elliot, Andre Rieu, One Direction, Richard III by the Royal Shakespeare Company. Event Cinema is defined more by the frequency of events rather than by the content itself. Event Cinema events typically appear in cinemas during traditionally quieter times in the cinema week such as the Monday-Thursday daytime/evening slot and are characterised by the One Night Only release, followed by one or possibly more 'Encore' releases a few days or weeks later if the event is successful and sold out. On occasion more successful events have returned to cinemas some months or even years later in the case of NT Live where the audience loyalty and company branding is so strong the content owner can be assured of a good showing at the box office. 
One advantage of the digital formation of sets and locations, especially in a time of growing film series and sequels, is that virtual sets, once computer-generated and stored, can easily be revived for future films.[55]: 62 Because digital film images are recorded as data files on hard disk or flash memory, different systems of edits can be executed by altering a few settings on the editing console, with the structure being composed virtually in the computer's memory. A broad choice of effects can be sampled simply and rapidly, without the physical constraints posed by traditional cut-and-stick editing.[55]: 63 Digital cinema allows national cinemas to construct films specific to their cultures in ways that the more constricting configurations and economics of customary film-making prevented. Low-cost cameras and computer-based editing software have gradually enabled films to be produced for minimal cost. The ability of digital cameras to allow film-makers to shoot limitless footage without wasting costly film has transformed film production in some Third World countries.[55]: 66 From the consumers' perspective, digital prints do not deteriorate with the number of showings. Unlike film, there is no projection mechanism or manual handling to add scratches or other physically generated artefacts. Provincial cinemas that would previously have received old prints can give consumers the same cinematographic experience (all other things being equal) as those attending the premiere. The use of non-linear editing systems (NLEs) in movies allows edits and cuts to be made non-destructively, without actually discarding any footage. A number of high-profile film directors, including Christopher Nolan,[56] Paul Thomas Anderson,[57] David O. Russell[58] and Quentin Tarantino,[58] have publicly criticized digital cinema and advocated the use of film and film prints. Most famously, Tarantino has suggested he may retire because, though he can still shoot on film, the rapid conversion to digital means he cannot project from 35 mm prints in the majority of American cinemas.[59] Steven Spielberg has stated that though digital projection produces a much better image than film if originally shot in digital, it is "inferior" when it has been converted to digital. He attempted at one stage to release Indiana Jones and the Kingdom of the Crystal Skull solely on film.[60] Paul Thomas Anderson was recently able to create 70-mm film prints for his film The Master.[citation needed] Film critic Roger Ebert criticized the use of DCPs after a cancelled film festival screening of Brian De Palma's film Passion at the New York Film Festival as a result of a lockup due to the coding system.[61] The theoretical resolution of 35 mm film is greater than that of 2K digital cinema.[62][63] 2K resolution (2048×1080) is also only slightly greater than that of consumer-based 1080p HD (1920×1080).[64] However, since digital post-production techniques became the standard in the early 2000s, the majority of movies, whether photographed digitally or on 35 mm film, have been mastered and edited at 2K resolution. Moreover, 4K post-production was becoming more common as of 2013. As projectors are replaced with 4K models,[65] the difference in resolution between digital and 35 mm film is somewhat reduced.[66] Digital cinema servers utilize far greater bandwidth than domestic "HD", allowing for a difference in quality (e.g., Blu-ray colour encoding is 4:2:0 at a maximum data rate of 48 Mbit/s, while DCI D-Cinema is 4:4:4 at 250 Mbit/s for 2D/3D and 500 Mbit/s for HFR 3D). Each frame has greater detail.
Owing to the smaller dynamic range of digital cameras, correcting poor digital exposures is more difficult than correcting poor film exposures during post-production. A partial solution to this problem is to add complex video-assist technology during the shooting process. However, such technologies are typically available only to high-budget production companies.[55]: 62 Digital cinema's efficiency at storing images has a downside. The speed and ease of modern digital editing processes threaten to give editors and their directors, if not an embarrassment of choice, then at least a confusion of options, potentially making the editing process, with this 'try it and see' philosophy, lengthier rather than shorter.[55]: 63 Because the equipment needed to produce digital feature films can be obtained more easily than film projectors, producers could inundate the market with cheap productions and potentially dominate the efforts of serious directors. Because of the speed with which they are filmed, these stories sometimes lack essential narrative structure.[55]: 66–67 The electronic transfer of digital film from central servers to servers in cinema projection booths is an inexpensive way of supplying copies of the newest releases to the vast number of cinema screens demanded by prevailing saturation-release strategies. There is a significant saving on print expenses in such cases: at a minimum cost per print of $1,200–2,000, the cost of film print production is between $5 million and $8 million per movie. With several thousand releases a year, the probable savings offered by digital distribution and projection are over $1 billion.[55]: 67 The cost savings and ease, together with the ability to store a film rather than having to send a print on to the next cinema, allow a larger range of films to be screened and watched by the public, including minority and small-budget films that would not otherwise get such a chance.[55]: 67 The initial costs for converting theaters to digital are high: $100,000 per screen, on average. Theaters have been reluctant to switch without a cost-sharing arrangement with film distributors. A solution is a temporary Virtual Print Fee system, where the distributor (who saves the money of producing and transporting a film print) pays a fee per copy to help finance the digital systems of the theaters.[67] A theater can purchase a film projector for as little as $10,000[68] (though projectors intended for commercial cinemas cost two to three times that, to which must be added the cost of a long-play system, also around $10,000, making a total of around $30,000–$40,000) and could expect from it an average life of 30–40 years. By contrast, a digital cinema playback system—including server, media block, and projector—can cost two to three times as much,[69] and would have a greater risk of component failure and obsolescence. (In Britain the cost of an entry-level projector including server, installation, etc., would be £31,000 [$50,000].) Archiving digital masters has also turned out to be both tricky and costly.
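The print-cost comparison can be made concrete with the figures quoted above (a rough sketch; taking the midpoint of the quoted per-print range is our own choice for illustration):

```python
# Film-print cost for a 4,000-screen release versus a digital release.
prints = 4000
cost_per_print = (1200 + 2000) / 2   # midpoint of the quoted $1,200-2,000 range

film_cost = prints * cost_per_print
digital_cost = 200_000               # broad digital release cost quoted earlier

print(f"film: ${film_cost:,.0f}  digital: ${digital_cost:,.0f}")
# -> film: $6,400,000  digital: $200,000 (within the quoted $5-8 million range)
```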
In a 2007 study, the Academy of Motion Picture Arts and Sciences found the cost of long-term storage of 4K digital masters to be "enormously higher – 1100% higher – than the cost of storing film masters."[70]: 43 This is because of the limited or uncertain lifespan of digital storage: no current digital medium—be it optical disc, magnetic hard drive or digital tape—can reliably store a motion picture for as long as a hundred years or more (a timeframe achievable for properly stored film).[70]: 35 The short history of digital storage media has been one of innovation and, therefore, of obsolescence. Archived digital content must therefore be periodically migrated from obsolete physical media to up-to-date media.[70]: 36 The expense of digital image capture is not necessarily less than that of capturing images onto film; indeed, it is sometimes greater.[citation needed]
https://en.wikipedia.org/wiki/Digital_cinema
In signal processing, a filter bank (or filterbank) is an array of bandpass filters that separates the input signal into multiple components, each one carrying a sub-band of the original signal.[1] One application of a filter bank is a graphic equalizer, which can attenuate the components differently and recombine them into a modified version of the original signal. The process of decomposition performed by the filter bank is called analysis (meaning analysis of the signal in terms of its components in each sub-band); the output of analysis is referred to as a subband signal, with as many subbands as there are filters in the filter bank. The reconstruction process is called synthesis, meaning reconstitution of a complete signal resulting from the filtering process. In digital signal processing, the term filter bank is also commonly applied to a bank of receivers. The difference is that receivers also down-convert the subbands to a low center frequency that can be re-sampled at a reduced rate. The same result can sometimes be achieved by undersampling the bandpass subbands. Another application of filter banks is lossy compression, when some frequencies are more important than others. After decomposition, the important frequencies can be coded with a fine resolution. Small differences at these frequencies are significant and a coding scheme that preserves these differences must be used. On the other hand, less important frequencies do not have to be exact. A coarser coding scheme can be used, even though some of the finer (but less important) details will be lost in the coding. The vocoder uses a filter bank to determine the amplitude information of the subbands of a modulator signal (such as a voice) and uses them to control the amplitude of the subbands of a carrier signal (such as the output of a guitar or synthesizer), thus imposing the dynamic characteristics of the modulator on the carrier. Some filter banks work almost entirely in the time domain, using a series of filters such as quadrature mirror filters or the Goertzel algorithm to divide the signal into smaller bands. Other filter banks use a fast Fourier transform (FFT). A bank of receivers can be created by performing a sequence of FFTs on overlapping segments of the input data stream. A weighting function (also known as a window function) is applied to each segment to control the shape of the frequency responses of the filters. The wider the shape, the more often the FFTs have to be done to satisfy the Nyquist sampling criterion.[A] For a fixed segment length, the amount of overlap determines how often the FFTs are done (and vice versa). Also, the wider the shape of the filters, the fewer filters are needed to span the input bandwidth. Eliminating unnecessary filters (i.e. decimation in frequency) is efficiently done by treating each weighted segment as a sequence of smaller blocks, and performing the FFT on only the sum of the blocks. This has been referred to as weight overlap-add (WOLA) and weighted pre-sum FFT (see § Sampling the DTFT). A special case occurs when, by design, the length of the blocks is an integer multiple of the interval between FFTs. Then the FFT filter bank can be described in terms of one or more polyphase filter structures where the phases are recombined by an FFT instead of a simple summation. The number of blocks per segment is the impulse response length (or depth) of each filter. The computational efficiencies of the FFT and polyphase structures, on a general-purpose processor, are identical.
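The FFT-based bank of receivers described above can be sketched in a few lines (a minimal NumPy illustration; the window, segment length, hop size and test signal are arbitrary example choices, and no WOLA block-summing optimization is attempted):

```python
import numpy as np

def fft_analysis_bank(x, seg_len=256, hop=64):
    """Split x into subband signals by FFT-ing overlapping windowed segments.

    Each FFT bin k behaves as one bandpass filter of the bank; row k of the
    returned array is that filter's (downsampled) output over time.
    """
    window = np.hanning(seg_len)            # weighting (window) function
    n_segs = (len(x) - seg_len) // hop + 1
    subbands = np.empty((seg_len // 2 + 1, n_segs), dtype=complex)
    for m in range(n_segs):
        seg = x[m * hop : m * hop + seg_len] * window
        subbands[:, m] = np.fft.rfft(seg)   # one sample per subband per hop
    return subbands

# Example: a 1 kHz tone sampled at 8 kHz concentrates in one subband.
fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t)
S = fft_analysis_bank(x)
print("strongest subband:", np.argmax(np.abs(S).mean(axis=1)))  # -> bin 32
```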
Synthesis (i.e. recombining the outputs of multiple receivers) is basically a matter of upsampling each one at a rate commensurate with the total bandwidth to be created, translating each channel to its new center frequency, and summing the streams of samples. In that context, the interpolation filter associated with upsampling is called the synthesis filter. The net frequency response of each channel is the product of the synthesis filter with the frequency response of the filter bank (analysis filter). Ideally, the frequency responses of adjacent channels sum to a constant value at every frequency between the channel centers. That condition is known as perfect reconstruction. In time–frequency signal processing, a filter bank is a special quadratic time–frequency distribution (TFD) that represents the signal in a joint time–frequency domain. It is related to the Wigner–Ville distribution by a two-dimensional filtering that defines the class of quadratic (or bilinear) time–frequency distributions.[3] The filter bank and the spectrogram are the two simplest ways of producing a quadratic TFD; they are in essence similar, as one (the spectrogram) is obtained by dividing the time domain into slices and then taking a Fourier transform, while the other (the filter bank) is obtained by dividing the frequency domain into slices, forming bandpass filters that are excited by the signal under analysis. A multirate filter bank divides a signal into a number of subbands, which can be analysed at different rates corresponding to the bandwidth of the frequency bands. The implementation makes use of downsampling (decimation) and upsampling (expansion). See Discrete-time Fourier transform § Properties and Z-transform § Properties for additional insight into the effects of those operations in the transform domains. One can define a narrow lowpass filter as a lowpass filter with a narrow passband. In order to create a multirate narrow lowpass FIR filter, one can replace the time-invariant FIR filter with a lowpass antialiasing filter and a decimator, along with an interpolator and a lowpass anti-imaging filter. In this way, the resulting multirate system is a time-varying linear-phase filter via the decimator and interpolator. The lowpass filter consists of two polyphase filters, one for the decimator and one for the interpolator.[4] A filter bank divides the input signal {\displaystyle x\left(n\right)} into a set of signals {\displaystyle x_{1}(n),x_{2}(n),x_{3}(n),...}. In this way each of the generated signals corresponds to a different region in the spectrum of {\displaystyle x\left(n\right)}. In this process the regions may overlap (or not, depending on the application). The generated signals {\displaystyle x_{1}(n),x_{2}(n),x_{3}(n),...} can be produced by a collection of bandpass filters with bandwidths {\displaystyle {\rm {BW_{1},BW_{2},BW_{3},...}}} and center frequencies {\displaystyle f_{c1},f_{c2},f_{c3},...} (respectively). A multirate filter bank uses a single input signal and then produces multiple outputs of the signal by filtering and subsampling. In order to split the input signal into two or more signals, an analysis-synthesis system can be used. The signal is split with the help of four filters {\displaystyle H_{k}(z)} for k = 0,1,2,3 into 4 bands of the same bandwidths (in the analysis bank), and then each sub-signal is decimated by a factor of 4.
By dividing the signal into bands in this way, each band has different signal characteristics. In the synthesis section the filters reconstruct the original signal: first, the 4 sub-signals at the output of the processing unit are upsampled by a factor of 4 and then filtered by 4 synthesis filters {\displaystyle F_{k}(z)} for k = 0,1,2,3. Finally, the outputs of these four filters are added. A discrete-time filter bank framework allows inclusion of desired input-signal-dependent features in the design, in addition to the more traditional perfect reconstruction property. Information-theoretic features like maximized energy compaction, perfect de-correlation of sub-band signals and other characteristics for the given input covariance/correlation structure are incorporated in the design of optimal filter banks.[5] These filter banks resemble the signal-dependent Karhunen–Loève transform (KLT), which is the optimal block transform where the length L of the basis functions (filters) and the subspace dimension M are the same. Multidimensional filtering, downsampling, and upsampling are the main parts of multirate systems and filter banks. A complete filter bank consists of the analysis and synthesis sides. The analysis filter bank divides an input signal into different subbands with different frequency spectra. The synthesis part reassembles the different subband signals and generates a reconstructed signal. Two of the basic building blocks are the decimator and the expander. For example, the input divides into four directional subbands, each of which covers one of the wedge-shaped frequency regions. In 1D systems, M-fold decimators keep only those samples that are multiples of M and discard the rest, while in multidimensional systems the decimator is a D × D nonsingular integer matrix; it keeps only those samples that lie on the lattice generated by the decimator. A commonly used decimator is the quincunx decimator, whose lattice is generated from the quincunx matrix, which is defined by {\displaystyle {\begin{bmatrix}\;\;\,1&1\\-1&1\end{bmatrix}}} The quincunx lattice generated by the quincunx matrix is as shown; the synthesis part is dual to the analysis part. Filter banks can be analyzed from a frequency-domain perspective in terms of subband decomposition and reconstruction. However, equally important is the Hilbert-space interpretation of filter banks, which plays a key role in geometrical signal representations. For a generic K-channel filter bank, with analysis filters {\displaystyle \left\{h_{k}[n]\right\}_{k=1}^{K}}, synthesis filters {\displaystyle \left\{g_{k}[n]\right\}_{k=1}^{K}}, and sampling matrices {\displaystyle \left\{M_{k}\right\}_{k=1}^{K}}, we can define on the analysis side vectors in {\displaystyle \ell ^{2}(\mathbf {Z} ^{d})}, {\displaystyle \varphi _{k,m}[n]{\stackrel {\rm {def}}{=}}h_{k}^{*}[M_{k}m-n]}, each indexed by two parameters: {\displaystyle 1\leq k\leq K} and {\displaystyle m\in \mathbf {Z} ^{2}}. Similarly, for the synthesis filters {\displaystyle g_{k}[n]} we can define {\displaystyle \psi _{k,m}[n]{\stackrel {\rm {def}}{=}}g_{k}^{*}[M_{k}m-n]}. Considering the definition of the analysis/synthesis sides, we can verify that[6] {\displaystyle c_{k}[m]=\langle x[n],\varphi _{k,m}[n]\rangle } and, for the reconstruction part, {\displaystyle {\hat {x}}[n]=\sum _{k=1}^{K}\sum _{m}c_{k}[m]\,\psi _{k,m}[n]}. In other words, the analysis filter bank calculates the inner products of the input signal with the vectors from the analysis set.
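A two-channel miniature of this analysis–decimate–upsample–synthesize chain can be written out with the Haar filters (a standard textbook pair, used here for illustration; the four-band system above works the same way with four filters and factor-4 resampling):

```python
import numpy as np

# Two-channel Haar filter bank: analysis filters h0 (lowpass), h1 (highpass),
# synthesis filters g0, g1 chosen so the bank is perfect-reconstruction.
h0 = np.array([1.0, 1.0]) / np.sqrt(2)
h1 = np.array([1.0, -1.0]) / np.sqrt(2)
g0 = np.array([1.0, 1.0]) / np.sqrt(2)
g1 = np.array([-1.0, 1.0]) / np.sqrt(2)

x = np.random.randn(64)

# Analysis: filter, then decimate by 2.
y0 = np.convolve(x, h0)[::2]
y1 = np.convolve(x, h1)[::2]

# Synthesis: upsample by 2, filter, and add.
u0 = np.zeros(2 * len(y0)); u0[::2] = y0
u1 = np.zeros(2 * len(y1)); u1[::2] = y1
xr = np.convolve(u0, g0) + np.convolve(u1, g1)

print(np.allclose(xr[1:1 + len(x)], x))   # True: perfect reconstruction (one-sample delay)
```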
Moreover, the reconstructed signal is a combination of the vectors from the synthesis set, with the computed inner products as combination coefficients. If there is no loss in the decomposition and the subsequent reconstruction, the filter bank is called a perfect reconstruction filter bank (in that case we would have {\displaystyle x[n]={\hat {x}}[n]}).[7] The figure shows a general multidimensional filter bank with N channels and a common sampling matrix M. The analysis part transforms the input signal {\displaystyle x[n]} into N filtered and downsampled outputs {\displaystyle y_{j}[n],} {\displaystyle j=0,1,...,N-1}. The synthesis part recovers the original signal from the {\displaystyle y_{j}[n]} by upsampling and filtering. This kind of setup is used in many applications such as subband coding, multichannel acquisition, and discrete wavelet transforms. We can use the polyphase representation, so the input signal {\displaystyle x[n]} can be represented by a vector of its polyphase components {\displaystyle x(z){\stackrel {\rm {def}}{=}}(X_{0}(z),...,X_{|M|-1}(z))^{T}}. Denote {\displaystyle y(z){\stackrel {\rm {def}}{=}}(Y_{0}(z),...,Y_{|N|-1}(z))^{T}.} So we would have {\displaystyle y(z)=H(z)x(z)}, where {\displaystyle H_{i,j}(z)} denotes the j-th polyphase component of the filter {\displaystyle H_{i}(z)}. Similarly, for the output signal we would have {\displaystyle {\hat {x}}(z)=G(z)y(z)}, where {\displaystyle {\hat {x}}(z){\stackrel {\rm {def}}{=}}({\hat {X}}_{0}(z),...,{\hat {X}}_{|M|-1}(z))^{T}}. Also, G(z) is a matrix where {\displaystyle G_{i,j}(z)} denotes the i-th polyphase component of the j-th synthesis filter G_j(z). The filter bank has perfect reconstruction if {\displaystyle x(z)={\hat {x}}(z)} for any input, or equivalently {\displaystyle I_{|M|}=G(z)H(z)}, which means that G(z) is a left inverse of H(z). 1-D filter banks are by now well developed. However, many signals, such as images, video, 3D sound, radar and sonar, are multidimensional and require the design of multidimensional filter banks. With the fast development of communication technology, signal processing systems need more room to store data during processing, transmission and reception. In order to reduce the data to be processed, save storage and lower the complexity, multirate sampling techniques were introduced to achieve these goals. Filter banks can be used in various areas, such as image coding, voice coding and radar. Many 1D filter issues have been well studied, and researchers have proposed many 1D filter bank design approaches. But there are still many multidimensional filter bank design problems that need to be solved.[8] Some methods may not reconstruct the signal well; others are complex and hard to implement. The simplest approach to designing a multidimensional filter bank is to cascade 1D filter banks in the form of a tree structure, where the decimation matrix is diagonal and data is processed in each dimension separately. Such systems are referred to as separable systems. However, the region of support for the filter banks might not be separable; in that case the design of the filter bank becomes complex. In most cases we deal with non-separable systems. A filter bank consists of an analysis stage and a synthesis stage. Each stage consists of a set of filters in parallel.
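To make the polyphase condition {\displaystyle I_{|M|}=G(z)H(z)} concrete in the simplest case: for the Haar pair from the earlier sketch, the polyphase matrices are constant in z, and the check collapses to a matrix identity (a toy check of ours, with delay conventions glossed over):

```python
import numpy as np

# Polyphase matrix of the Haar analysis bank: row i holds the polyphase
# components of filter h_i, so H(z) here does not actually depend on z.
Hp = np.array([[1.0, 1.0],
               [1.0, -1.0]]) / np.sqrt(2)

# Choosing G(z) = Hp works, since Hp is its own inverse:
print(np.allclose(Hp @ Hp, np.eye(2)))   # True: G(z)H(z) = I, perfect reconstruction
```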
The filter bank design is the design of the filters in the analysis and synthesis stages. The analysis filters divide the signal into overlapping or non-overlapping subbands, depending on the application requirements. The synthesis filters should be designed to reconstruct the input signal from the subbands when the outputs of these filters are combined. Processing is typically performed after the analysis stage. These filter banks can be designed with infinite impulse response (IIR) or finite impulse response (FIR) filters. In order to reduce the data rate, downsampling and upsampling are performed in the analysis and synthesis stages, respectively. Below are several approaches to the design of multidimensional filter banks; for more details, please check the original references. When it is necessary to reconstruct the divided signal back to the original one, perfect-reconstruction (PR) filter banks may be used. Let H(z) be the transfer function of a filter. The size of the filter is defined as the order of the corresponding polynomial in every dimension. The symmetry or anti-symmetry of a polynomial determines the linear phase property of the corresponding filter and is related to its size. As in the 1D case, the aliasing term A(z) and transfer function T(z) for a 2-channel filter bank are:[9] {\displaystyle A(z)={\tfrac {1}{2}}(H_{0}(-z)F_{0}(z)+H_{1}(-z)F_{1}(z))} and {\displaystyle T(z)={\tfrac {1}{2}}(H_{0}(z)F_{0}(z)+H_{1}(z)F_{1}(z))}, where H_0 and H_1 are decomposition filters, and F_0 and F_1 are reconstruction filters. The input signal can be perfectly reconstructed if the alias term is cancelled and T(z) is equal to a monomial. So the necessary condition is that T(z) is generally symmetric and of an odd-by-odd size. Linear-phase PR filters are very useful for image processing. This two-channel filter bank is relatively easy to implement, but two channels sometimes are not enough. Two-channel filter banks can be cascaded to generate multi-channel filter banks. M-dimensional directional filter banks (MDFB) are a family of filter banks that can achieve the directional decomposition of arbitrary M-dimensional signals with a simple and efficient tree-structured construction. They have many distinctive properties: directional decomposition, efficient tree construction, angular resolution and perfect reconstruction. In the general M-dimensional case, the ideal frequency supports of the MDFB are hypercube-based hyperpyramids. The first level of decomposition for the MDFB is achieved by an N-channel undecimated filter bank, whose component filters are M-D "hourglass"-shaped filters aligned with the w_1, ..., w_M axes respectively. After that, the input signal is further decomposed by a series of 2-D iteratively resampled checkerboard filter banks IRC_i^(L_i) (i = 2, 3, ..., M), where IRC_i^(L_i) operates on 2-D slices of the input signal represented by the dimension pair (n_1, n_i), and the superscript (L_i) denotes the number of levels of decomposition for the i-th level filter bank. Note that, starting from the second level, we attach an IRC filter bank to each output channel from the previous level, and hence the entire filter bank has a total of {\displaystyle 2^{(L_{1}+\cdots +L_{N})}} output channels.[10] Oversampled filter banks are multirate filter banks where the number of output samples at the analysis stage is larger than the number of input samples. They are proposed for robust applications. One particular class of oversampled filter banks is nonsubsampled filter banks, without downsampling or upsampling.
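The two-channel alias and transfer conditions above can also be verified numerically for the same Haar pair (coefficients listed in ascending powers of z^{-1}; a quick check of ours, not from the article):

```python
import numpy as np

h0 = np.array([1, 1]) / np.sqrt(2);  f0 = np.array([1, 1]) / np.sqrt(2)
h1 = np.array([1, -1]) / np.sqrt(2); f1 = np.array([-1, 1]) / np.sqrt(2)

flip = lambda h: h * (-1) ** np.arange(len(h))   # realizes H(z) -> H(-z)

A = 0.5 * (np.polymul(flip(h0), f0) + np.polymul(flip(h1), f1))
T = 0.5 * (np.polymul(h0, f0) + np.polymul(h1, f1))

print(np.round(A, 12))   # [ 0.  0.  0.]  the alias term cancels
print(np.round(T, 12))   # [ 0.  1.  0.]  T(z) = z^{-1}, a monomial (pure delay)
```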
The perfect reconstruction condition for an oversampled filter bank can be stated as a matrix inverse problem in the polyphase domain.[11] For IIR oversampled filter banks, perfect reconstruction has been studied by Wolovich[12] and Kailath[13] in the context of control theory, while for FIR oversampled filter banks different strategies have to be used in 1-D and M-D. FIR filters are more popular since they are easier to implement. For 1-D oversampled FIR filter banks, the Euclidean algorithm plays a key role in the matrix inverse problem.[14] However, the Euclidean algorithm fails for multidimensional (MD) filters. For MD filters, we can convert the FIR representation into a polynomial representation[15] and then use algebraic geometry and Gröbner bases to get the framework and the reconstruction condition of multidimensional oversampled filter banks.[11] Nonsubsampled filter banks are particular oversampled filter banks without downsampling or upsampling. The perfect reconstruction condition for nonsubsampled FIR filter banks leads to a vector inverse problem: the analysis filters {\displaystyle \{H_{1},...,H_{N}\}} are given and FIR, and the goal is to find a set of FIR synthesis filters {\displaystyle \{G_{1},...,G_{N}\}} satisfying {\displaystyle G_{1}(z)H_{1}(z)+\cdots +G_{N}(z)H_{N}(z)=1}.[11] As multidimensional filter banks can be represented by multivariate rational matrices, this method is a very effective tool that can be used to deal with multidimensional filter banks.[15] In Charo,[15] a multivariate polynomial matrix-factorization algorithm is introduced and discussed. The most common problem is multidimensional filter banks for perfect reconstruction. That paper discusses a method to achieve this goal while satisfying the constraint of linear phase. According to the description of the paper, some new results in factorization are discussed and applied to issues of multidimensional linear-phase perfect-reconstruction finite-impulse-response filter banks. The basic concept of Gröbner bases is given in Adams.[16] This approach based on multivariate matrix factorization can be used in different areas. The algorithmic theory of polynomial ideals and modules can be modified to address problems in the processing, compression, transmission, and decoding of multidimensional signals. The general multidimensional filter bank (Figure 7) can be represented by a pair of analysis and synthesis polyphase matrices {\displaystyle H(z)} and {\displaystyle G(z)} of size {\displaystyle N\times M} and {\displaystyle M\times N}, where N is the number of channels and {\displaystyle M{\stackrel {\rm {def}}{=}}|M|} is the absolute value of the determinant of the sampling matrix. Also {\displaystyle H(z)} and {\displaystyle G(z)} are the z-transforms of the polyphase components of the analysis and synthesis filters. Therefore, they are multivariate Laurent polynomials. To design perfect reconstruction filter banks, a Laurent polynomial matrix equation needs to be solved: {\displaystyle G(z)H(z)=I_{|M|}}. In the multidimensional case with multivariate polynomials, we need to use the theory and algorithms of Gröbner bases.[17] Gröbner bases can be used to characterize perfect reconstruction multidimensional filter banks, but the theory must first be extended from polynomial matrices to Laurent polynomial matrices.[18][19] The Gröbner-basis computation can be considered equivalently as Gaussian elimination for solving the polynomial matrix equation {\displaystyle G(z)H(z)=I_{|M|}}.
If we have a set of polynomial vectors {\displaystyle \left\{h_{1}(z),...,h_{N}(z)\right\}}, the module they generate consists of all combinations {\displaystyle c_{1}(z)h_{1}(z)+\cdots +c_{N}(z)h_{N}(z)}, where {\displaystyle c_{1}(z),...,c_{N}(z)} are polynomials. The module is analogous to the span of a set of vectors in linear algebra. The theory of Gröbner bases implies that the module has a unique reduced Gröbner basis for a given order of power products in polynomials. If we define the Gröbner basis as {\displaystyle \left\{b_{1}(z),...,b_{N}(z)\right\}}, it can be obtained from {\displaystyle \left\{h_{1}(z),...,h_{N}(z)\right\}} by a finite sequence of reduction (division) steps. Using reverse engineering, we can compute the basis vectors {\displaystyle b_{i}(z)} in terms of the original vectors {\displaystyle h_{j}(z)} through a {\displaystyle K\times N} transformation matrix {\displaystyle W_{ij}(z)}, as {\displaystyle b_{i}(z)=\sum _{j}W_{ij}(z)h_{j}(z)}. Designing filters with good frequency responses is challenging via the Gröbner bases approach. Mapping-based design is popularly used to design nonseparable multidimensional filter banks with good frequency responses.[20][21] The mapping approaches have certain restrictions on the kind of filters; however, they bring many important advantages, such as efficient implementation via lifting/ladder structures. Here we provide an example of two-channel filter banks in 2D with sampling matrix {\displaystyle D_{1}=\left[{\begin{array}{cc}2&0\\0&1\end{array}}\right]}. We would have several possible choices of ideal frequency responses of the channel filters {\displaystyle H_{0}(\xi )} and {\displaystyle G_{0}(\xi )}. (Note that the other two filters {\displaystyle H_{1}(\xi )} and {\displaystyle G_{1}(\xi )} are supported on complementary regions.) All the frequency regions in the figure can be critically sampled by the rectangular lattice spanned by {\displaystyle D_{1}}. Suppose the filter bank achieves perfect reconstruction with FIR filters. Then from the polyphase-domain characterization it follows that the filters H_1(z) and G_1(z) are completely specified by H_0(z) and G_0(z), respectively. Therefore, we need to design H_0(z) and G_0(z), which have the desired frequency responses and satisfy the polyphase-domain condition {\displaystyle H_{0}(z_{1},z_{2})G_{0}(z_{1},z_{2})+H_{0}(-z_{1},z_{2})G_{0}(-z_{1},z_{2})=2}. There are different mapping techniques that can be used to obtain the above result.[22] When perfect reconstruction is not needed, the design problem can be simplified by working in the frequency domain instead of using FIR filters.[23][24] Note that the frequency-domain method is not limited to the design of nonsubsampled filter banks.[25] Many of the existing methods for designing 2-channel filter banks are based on the transformation-of-variable technique. For example, the McClellan transform can be used to design 1-D 2-channel filter banks. Though the 2-D filter banks have many properties similar to the 1-D prototype, it is difficult to extend them to more than two channels.[26] In Nguyen,[26] the authors discuss the design of multidimensional filter banks by direct optimization in the frequency domain. The method proposed there is mainly focused on M-channel 2D filter bank design, and is flexible towards frequency support configurations. 2D filter banks designed by optimization in the frequency domain have been used in Wei[27] and Lu.[28] In Nguyen's paper,[26] the proposed method is not limited to two-channel 2D filter bank design; the approach is generalized to M-channel filter banks with any critical subsampling matrix.
According to the implementation in the paper, it can be used to achieve designs of up to 8-channel 2D filter banks. In Lee's 1999 paper,[29] the authors discuss multidimensional filter bank design using a reverse jacket matrix. Let H be a Hadamard matrix of order n; the transpose of H is closely related to its inverse, via {\displaystyle HH^{T}=nI_{n}}, where I_n is the n×n identity matrix and H^T is the transpose of H. In the 1999 paper,[29] the authors generalize the reverse jacket matrix [RJ]_N using Hadamard matrices and weighted Hadamard matrices.[30][31] In this paper, the authors proposed that an FIR filter with 128 taps be used as a basic filter, and the decimation factor is computed for RJ matrices. They ran simulations based on different parameters and achieved good-quality performance at low decimation factors. Bamberger and Smith proposed a 2D directional filter bank (DFB).[32] The DFB is efficiently implemented via an l-level tree-structured decomposition that leads to {\displaystyle 2^{l}} subbands with wedge-shaped frequency partition (see the figure). The original construction of the DFB involves modulating the input signal and using diamond-shaped filters. Moreover, in order to obtain the desired frequency partition, a complicated tree-expanding rule has to be followed.[33] As a result, the frequency regions for the resulting subbands do not follow a simple ordering based on the channel indices, as shown in Figure 9. The first advantage of the DFB is that it is not a redundant transform and it offers perfect reconstruction. Another advantage of the DFB is its directional selectivity and efficient structure. These advantages make the DFB an appropriate approach for many signal and image processing uses (e.g., the Laplacian pyramid, the construction of contourlets,[34] sparse image representation, medical imaging,[35] etc.). Directional filter banks can be developed in higher dimensions; they can be used in 3-D to achieve frequency sectioning. Filter banks are important elements for the physical layer in wideband wireless communication, where the problem is efficient base-band processing of multiple channels. A filter-bank-based transceiver architecture eliminates the scalability and efficiency issues observed in previous schemes in the case of non-contiguous channels. Appropriate filter design is necessary to reduce the performance degradation caused by the filter bank. In order to obtain universally applicable designs, mild assumptions can be made about the waveform format, channel statistics and the coding/decoding scheme. Both heuristic and optimal design methodologies can be used, and excellent performance is possible with low complexity as long as the transceiver operates with a reasonably large oversampling factor. A practical application is OFDM transmission, where filter banks provide very good performance with small additional complexity.[36]
https://en.wikipedia.org/wiki/Filter_bank
Fractal compression is a lossy compression method for digital images, based on fractals. The method is best suited for textures and natural images, relying on the fact that parts of an image often resemble other parts of the same image.[1] Fractal algorithms convert these parts into mathematical data called "fractal codes" which are used to recreate the encoded image. Fractal image representation may be described mathematically as an iterated function system (IFS).[2] We begin with the representation of a binary image, where the image may be thought of as a subset of {\displaystyle \mathbb {R} ^{2}}. An IFS is a set of contraction mappings {\displaystyle f_{1},...,f_{N}}, with {\displaystyle f_{i}\colon \mathbb {R} ^{2}\to \mathbb {R} ^{2}}. According to these mapping functions, the IFS describes a two-dimensional set S as the fixed point of the Hutchinson operator {\displaystyle H(A)=\bigcup _{i=1}^{N}f_{i}(A)}. That is, H is an operator mapping sets to sets, and S is the unique set satisfying H(S) = S. The idea is to construct the IFS such that this set S is the input binary image. The set S can be recovered from the IFS by fixed point iteration: for any nonempty compact initial set A_0, the iteration A_{k+1} = H(A_k) converges to S. The set S is self-similar because H(S) = S implies that S is a union of mapped copies of itself, {\displaystyle S=f_{1}(S)\cup f_{2}(S)\cup \cdots \cup f_{N}(S)}. So we see the IFS is a fractal representation of S. IFS representation can be extended to a grayscale image by considering the image's graph as a subset of {\displaystyle \mathbb {R} ^{3}}. For a grayscale image u(x,y), consider the set S = {(x,y,u(x,y))}. Then, similar to the binary case, S is described by an IFS using a set of contraction mappings {\displaystyle f_{1},...,f_{N}}, but in {\displaystyle \mathbb {R} ^{3}}. A challenging problem of ongoing research in fractal image representation is how to choose the {\displaystyle f_{1},...,f_{N}} such that their fixed point approximates the input image, and how to do this efficiently. A simple approach[2] for doing so is the following partitioned iterated function system (PIFS): partition the image into "range blocks"; for each range block, search the image for a similar "domain block" D_i (typically larger, and shrunk to the range block's size); and store, for each range block, the chosen domain block together with the contractive map (a spatial contraction plus brightness and contrast adjustments) that carries it approximately onto the range block. In the second step, it is important to find a similar block so that the IFS accurately represents the input image, so a sufficient number of candidate blocks for D_i need to be considered. On the other hand, a large search considering many blocks is computationally costly. This bottleneck of searching for similar blocks is why PIFS fractal encoding is much slower than, for example, DCT- and wavelet-based image representation. The initial square partitioning and brute-force search algorithm presented by Jacquin provides a starting point for further research and extensions in many possible directions—different ways of partitioning the image into range blocks of various sizes and shapes; fast techniques for quickly finding a close-enough matching domain block for each range block rather than brute-force searching, such as fast motion estimation algorithms; different ways of encoding the mapping from the domain block to the range block; etc.[3] Other researchers attempt to find algorithms to automatically encode an arbitrary image as RIFS (recurrent iterated function systems) or global IFS, rather than PIFS; and algorithms for fractal video compression including motion compensation and three-dimensional iterated function systems.[4][5] Fractal image compression has many similarities to vector quantization image compression.[6] With fractal compression, encoding is extremely computationally expensive because of the search used to find the self-similarities. Decoding, however, is quite fast.
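The PIFS scheme just outlined can be sketched in deliberately simplified form (grayscale only, fixed 4×4 range blocks, a non-overlapping 8×8 domain pool, a least-squares brightness/contrast fit, no rotations or quadtrees; image dimensions are assumed to be multiples of 8, and all names here are our own, not from any real codec):

```python
import numpy as np

def encode_pifs(img, r=4):
    """Toy PIFS encoder: match each r x r range block against every 2r x 2r
    domain block (shrunk to r x r), fitting contrast s and brightness o."""
    H, W = img.shape
    shrink = lambda d: d.reshape(r, 2, r, 2).mean(axis=(1, 3))
    domains = [((y, x), shrink(img[y:y + 2*r, x:x + 2*r].astype(float)))
               for y in range(0, H - 2*r + 1, 2*r)
               for x in range(0, W - 2*r + 1, 2*r)]
    codes = []
    for y in range(0, H, r):
        for x in range(0, W, r):
            rng = img[y:y + r, x:x + r].astype(float).ravel()
            best = None
            for pos, d in domains:
                dv = d.ravel()
                var = dv.var()
                s = 0.0 if var < 1e-12 else \
                    ((dv - dv.mean()) * (rng - rng.mean())).mean() / var
                s = float(np.clip(s, -0.9, 0.9))   # keep the map contractive
                o = rng.mean() - s * dv.mean()
                err = np.sum((s * dv + o - rng) ** 2)
                if best is None or err < best[0]:
                    best = (err, pos, s, o)
            codes.append(((y, x), best[1], best[2], best[3]))
    return codes

def decode_pifs(codes, shape, r=4, iters=8):
    """Decode by fixed-point iteration starting from an arbitrary flat image."""
    img = np.full(shape, 128.0)
    shrink = lambda d: d.reshape(r, 2, r, 2).mean(axis=(1, 3))
    for _ in range(iters):
        out = np.empty(shape)
        for (ry, rx), (dy, dx), s, o in codes:
            out[ry:ry + r, rx:rx + r] = s * shrink(img[dy:dy + 2*r, dx:dx + 2*r]) + o
        img = out
    return img

def decode_scaled(codes, shape, r=4, scale=2, iters=8):
    """Fractal 'zoom': rerun the same codes at a larger size."""
    big = [((ry*scale, rx*scale), (dy*scale, dx*scale), s, o)
           for (ry, rx), (dy, dx), s, o in codes]
    return decode_pifs(big, (shape[0]*scale, shape[1]*scale), r=r*scale, iters=iters)
```

The decode_scaled helper anticipates the resolution-independence property discussed below: because the codes are just contractive maps, the same fixed-point iteration can be run at any scale.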
While this asymmetry has so far made it impractical for real-time applications, when video is archived for distribution from disk storage or file downloads, fractal compression becomes more competitive.[7][8] At common compression ratios, up to about 50:1, fractal compression provides similar results to DCT-based algorithms such as JPEG.[9] At high compression ratios fractal compression may offer superior quality. For satellite imagery, ratios of over 170:1[10] have been achieved with acceptable results. Fractal video compression ratios of 25:1–244:1 have been achieved in reasonable compression times (2.4 to 66 sec/frame).[11] Compression efficiency increases with higher image complexity and color depth, compared to simple grayscale images. An inherent feature of fractal compression is that images become resolution independent[12] after being converted to fractal code. This is because the iterated function systems in the compressed file scale indefinitely. This indefinite scaling property of a fractal is known as "fractal scaling". The resolution independence of a fractal-encoded image can be used to increase the display resolution of an image. This process is also known as "fractal interpolation". In fractal interpolation, an image is encoded into fractal codes via fractal compression and subsequently decompressed at a higher resolution. The result is an up-sampled image in which iterated function systems have been used as the interpolant.[13] Fractal interpolation maintains geometric detail very well compared to traditional interpolation methods like bilinear interpolation and bicubic interpolation.[14][15][16] Since the interpolation cannot reverse Shannon entropy, however, it ends up sharpening the image by adding random rather than meaningful detail. One cannot, for example, enlarge an image of a crowd where each person's face is one or two pixels and hope to identify them. Michael Barnsley led the development of fractal compression from 1985 at the Georgia Institute of Technology (where both Barnsley and Sloan were professors in the mathematics department).[17] The work was sponsored by DARPA and the Georgia Tech Research Corporation. The project resulted in several patents from 1987.[18] Barnsley's graduate student Arnaud Jacquin implemented the first automatic algorithm in software in 1992.[19][20] All methods are based on the fractal transform using iterated function systems. Michael Barnsley and Alan Sloan formed Iterated Systems Inc.[21] in 1987, which was granted over 20 additional patents related to fractal compression. A major breakthrough for Iterated Systems Inc. was the automatic fractal transform process, which eliminated the need for the human intervention during compression that was required in early experimentation with fractal compression technology. In 1992, Iterated Systems Inc. received a US$2.1 million government grant[22] to develop a prototype digital image storage and decompression chip using fractal transform image compression technology. Fractal image compression has been used in a number of commercial applications: onOne Software, under license from Iterated Systems Inc., developed Genuine Fractals 5,[23] a Photoshop plugin capable of saving files in the compressed FIF (Fractal Image Format). To date the most successful use of still fractal image compression is by Microsoft in its Encarta multimedia encyclopedia,[24] also under license. Iterated Systems Inc. supplied a shareware encoder (Fractal Imager), a stand-alone decoder, a Netscape plug-in decoder and a development package for use under Windows.
The redistribution of the "decompressor DLL" provided by the ColorBox III SDK was governed by restrictive per-disk or year-by-year licensing regimes for proprietary software vendors, and by a discretionary scheme that entailed the promotion of Iterated Systems products for certain classes of other users.[25] ClearVideo – also known as RealVideo (Fractal) – and SoftVideo were early fractal video compression products. ClearFusion was Iterated's freely distributed streaming video plugin for web browsers. In 1994 SoftVideo was licensed to Spectrum Holobyte for use in its CD-ROM games, including Falcon Gold and Star Trek: The Next Generation – A Final Unity.[26] In 1996, Iterated Systems Inc. announced[27] an alliance with the Mitsubishi Corporation to market ClearVideo to their Japanese customers. The original ClearVideo 1.2 decoder driver is still supported[28] by Microsoft in Windows Media Player, although the encoder is no longer supported. Two firms, Total Multimedia Inc. and Dimension, both claim to own or have the exclusive licence to Iterated's video technology, but neither has yet released a working product. The technology basis appears to be Dimension's U.S. patents 8639053 and 8351509, which have been considerably analyzed.[29] In summary, it is a simple quadtree block-copying system with neither the bandwidth efficiency nor the PSNR quality of traditional DCT-based codecs. In January 2016, TMMI announced that it was abandoning fractal-based technology altogether. Research papers between 1997 and 2007 discussed possible solutions to improve fractal algorithms and encoding hardware.[30][31][32][33][34][35][36][37][38] A library called Fiasco was created by Ullrich Hafner. In 2001, Fiasco was covered in the Linux Journal.[39] According to the 2000-04 Fiasco manual, Fiasco can be used for video compression.[40] The Netpbm library includes the Fiasco library.[41][42] Femtosoft developed an implementation of fractal image compression in Object Pascal and Java.[43]
https://en.wikipedia.org/wiki/Fractal_compression
In mathematics, in the area of harmonic analysis, the fractional Fourier transform (FRFT) is a family of linear transformations generalizing the Fourier transform. It can be thought of as the Fourier transform to the n-th power, where n need not be an integer—thus, it can transform a function to any intermediate domain between time and frequency. Its applications range from filter design and signal analysis to phase retrieval and pattern recognition. The FRFT can be used to define fractional convolution, correlation, and other operations, and can also be further generalized into the linear canonical transformation (LCT). An early definition of the FRFT was introduced by Condon,[1] by solving for the Green's function for phase-space rotations, and also by Namias,[2] generalizing work of Wiener[3] on Hermite polynomials. However, it was not widely recognized in signal processing until it was independently reintroduced around 1993 by several groups.[4] Since then, there has been a surge of interest in extending Shannon's sampling theorem[5][6] for signals which are band-limited in the fractional Fourier domain. A completely different meaning for "fractional Fourier transform" was introduced by Bailey and Swartztrauber[7] as essentially another name for a z-transform, and in particular for the case that corresponds to a discrete Fourier transform shifted by a fractional amount in frequency space (multiplying the input by a linear chirp) and evaluating at a fractional set of frequency points (e.g. considering only a small portion of the spectrum). (Such transforms can be evaluated efficiently by Bluestein's FFT algorithm.) This terminology has fallen out of use in most of the technical literature, however, in preference to the FRFT. The remainder of this article describes the FRFT. The continuous Fourier transform {\displaystyle {\mathcal {F}}} of a function {\displaystyle f:\mathbb {R} \mapsto \mathbb {C} } is a unitary operator of {\displaystyle L^{2}} space that maps the function {\displaystyle f} to its frequential version {\displaystyle {\hat {f}}} (all expressions are taken in the {\displaystyle L^{2}} sense, rather than pointwise): {\displaystyle {\hat {f}}(\xi )=\int _{-\infty }^{\infty }f(x)\ e^{-2\pi ix\xi }\,\mathrm {d} x} and {\displaystyle f} is determined by {\displaystyle {\hat {f}}} via the inverse transform {\displaystyle {\mathcal {F}}^{-1}\,,} {\displaystyle f(x)=\int _{-\infty }^{\infty }{\hat {f}}(\xi )\ e^{2\pi i\xi x}\,\mathrm {d} \xi \,.} Let us study its n-th iterate {\displaystyle {\mathcal {F}}^{n}}, defined by {\displaystyle {\mathcal {F}}^{n}[f]={\mathcal {F}}[{\mathcal {F}}^{n-1}[f]]} and {\displaystyle {\mathcal {F}}^{-n}=({\mathcal {F}}^{-1})^{n}} when n is a non-negative integer, and {\displaystyle {\mathcal {F}}^{0}[f]=f}. Only finitely many distinct iterates arise, since {\displaystyle {\mathcal {F}}} is a 4-periodic automorphism: for every function {\displaystyle f}, {\displaystyle {\mathcal {F}}^{4}[f]=f}. More precisely, let us introduce the parity operator {\displaystyle {\mathcal {P}}} that inverts {\displaystyle x}, {\displaystyle {\mathcal {P}}[f]\colon x\mapsto f(-x)}.
Then the following properties hold: {\displaystyle {\mathcal {F}}^{0}=\mathrm {Id} ,\qquad {\mathcal {F}}^{1}={\mathcal {F}},\qquad {\mathcal {F}}^{2}={\mathcal {P}},\qquad {\mathcal {F}}^{4}=\mathrm {Id} } and {\displaystyle {\mathcal {F}}^{3}={\mathcal {F}}^{-1}={\mathcal {P}}\circ {\mathcal {F}}={\mathcal {F}}\circ {\mathcal {P}}.} The FRFT provides a family of linear transforms that further extends this definition to handle non-integer powers {\displaystyle n=2\alpha /\pi } of the FT. Note: some authors write the transform in terms of the "order a" instead of the "angle α", in which case the α is usually a times π/2. Although these two forms are equivalent, one must be careful about which definition the author uses. For any real α, the α-angle fractional Fourier transform of a function f is denoted by {\displaystyle {\mathcal {F}}_{\alpha }(u)} and defined by:[8][9][10] {\displaystyle {\mathcal {F}}_{\alpha }[f](u)={\sqrt {1-i\cot(\alpha )}}e^{i\pi \cot(\alpha )u^{2}}\int _{-\infty }^{\infty }e^{-2\pi i\left(\csc(\alpha )ux-{\frac {\cot(\alpha )}{2}}x^{2}\right)}f(x)\,\mathrm {d} x} For α = π/2, this becomes precisely the definition of the continuous Fourier transform, and for α = −π/2 it is the definition of the inverse continuous Fourier transform. The FRFT argument u is neither a spatial one x nor a frequency ξ. We will see why it can be interpreted as a linear combination of both coordinates (x, ξ). When we want to distinguish the α-angular fractional domain, we will let {\displaystyle x_{a}} denote the argument of {\displaystyle {\mathcal {F}}_{\alpha }}. Remark: with the angular frequency ω convention instead of the frequency one, the FRFT formula is the Mehler kernel, {\displaystyle {\mathcal {F}}_{\alpha }(f)(\omega )={\sqrt {\frac {1-i\cot(\alpha )}{2\pi }}}e^{i\cot(\alpha )\omega ^{2}/2}\int _{-\infty }^{\infty }e^{-i\csc(\alpha )\omega t+i\cot(\alpha )t^{2}/2}f(t)\,dt~.} The α-th order fractional Fourier transform operator, {\displaystyle {\mathcal {F}}_{\alpha }}, has the following properties. Additivity: for any real angles α, β, {\displaystyle {\mathcal {F}}_{\alpha +\beta }={\mathcal {F}}_{\alpha }\circ {\mathcal {F}}_{\beta }={\mathcal {F}}_{\beta }\circ {\mathcal {F}}_{\alpha }.} Linearity: {\displaystyle {\mathcal {F}}_{\alpha }\left[\sum \nolimits _{k}b_{k}f_{k}(u)\right]=\sum \nolimits _{k}b_{k}{\mathcal {F}}_{\alpha }\left[f_{k}(u)\right]} Integer orders: if α is an integer multiple of {\displaystyle \pi /2}, then {\displaystyle {\mathcal {F}}_{\alpha }={\mathcal {F}}_{k\pi /2}={\mathcal {F}}^{k}=({\mathcal {F}})^{k}}; moreover, {\displaystyle {\begin{aligned}{\mathcal {F}}^{2}&={\mathcal {P}}&&{\mathcal {P}}[f(u)]=f(-u)\\{\mathcal {F}}^{3}&={\mathcal {F}}^{-1}=({\mathcal {F}})^{-1}\\{\mathcal {F}}^{4}&={\mathcal {F}}^{0}={\mathcal {I}}\\{\mathcal {F}}^{i}&={\mathcal {F}}^{j}&&i\equiv j\mod 4\end{aligned}}} Inverse: {\displaystyle ({\mathcal {F}}_{\alpha })^{-1}={\mathcal {F}}_{-\alpha }} Commutativity: {\displaystyle {\mathcal {F}}_{\alpha _{1}}{\mathcal {F}}_{\alpha _{2}}={\mathcal {F}}_{\alpha _{2}}{\mathcal {F}}_{\alpha _{1}}} Associativity: {\displaystyle \left({\mathcal {F}}_{\alpha _{1}}{\mathcal {F}}_{\alpha _{2}}\right){\mathcal {F}}_{\alpha _{3}}={\mathcal {F}}_{\alpha _{1}}\left({\mathcal {F}}_{\alpha _{2}}{\mathcal {F}}_{\alpha _{3}}\right)}
Unitarity (Parseval): {\displaystyle \int f(t)g^{*}(t)dt=\int f_{\alpha }(u)g_{\alpha }^{*}(u)du} Commutation with parity: {\displaystyle {\mathcal {F}}_{\alpha }{\mathcal {P}}={\mathcal {P}}{\mathcal {F}}_{\alpha }}, i.e. {\displaystyle {\mathcal {F}}_{\alpha }[f(-u)]=f_{\alpha }(-u)} Define the shift and the phase shift operators as follows: {\displaystyle {\begin{aligned}{\mathcal {SH}}(u_{0})[f(u)]&=f(u+u_{0})\\{\mathcal {PH}}(v_{0})[f(u)]&=e^{j2\pi v_{0}u}f(u)\end{aligned}}} Then {\displaystyle {\begin{aligned}{\mathcal {F}}_{\alpha }{\mathcal {SH}}(u_{0})&=e^{j\pi u_{0}^{2}\sin \alpha \cos \alpha }{\mathcal {PH}}(u_{0}\sin \alpha ){\mathcal {SH}}(u_{0}\cos \alpha ){\mathcal {F}}_{\alpha },\end{aligned}}} that is, {\displaystyle {\begin{aligned}{\mathcal {F}}_{\alpha }[f(u+u_{0})]&=e^{j\pi u_{0}^{2}\sin \alpha \cos \alpha }e^{j2\pi uu_{0}\sin \alpha }f_{\alpha }(u+u_{0}\cos \alpha )\end{aligned}}} Define the scaling and chirp multiplication operators as follows: {\displaystyle {\begin{aligned}M(M)[f(u)]&=|M|^{-{\frac {1}{2}}}f\left({\tfrac {u}{M}}\right)\\Q(q)[f(u)]&=e^{-j\pi qu^{2}}f(u)\end{aligned}}} Then {\displaystyle {\begin{aligned}{\mathcal {F}}_{\alpha }M(M)&=Q\left(-\cot \left({\frac {1-\cos ^{2}\alpha '}{\cos ^{2}\alpha }}\alpha \right)\right)\times M\left({\frac {\sin \alpha }{M\sin \alpha '}}\right){\mathcal {F}}_{\alpha '}\\[6pt]{\mathcal {F}}_{\alpha }\left[|M|^{-{\frac {1}{2}}}f\left({\tfrac {u}{M}}\right)\right]&={\sqrt {\frac {1-j\cot \alpha }{1-jM^{2}\cot \alpha }}}e^{j\pi u^{2}\cot \left({\frac {1-\cos ^{2}\alpha '}{\cos ^{2}\alpha }}\alpha \right)}\times f_{a}\left({\frac {Mu\sin \alpha '}{\sin \alpha }}\right)\end{aligned}}} Notice that the fractional Fourier transform of {\displaystyle f(u/M)} cannot be expressed as a scaled version of {\displaystyle f_{\alpha }(u)}. Rather, the fractional Fourier transform of {\displaystyle f(u/M)} turns out to be a scaled and chirp-modulated version of {\displaystyle f_{\alpha '}(u)}, where {\displaystyle \alpha \neq \alpha '} is a different order.[11] The FRFT is an integral transform {\displaystyle {\mathcal {F}}_{\alpha }f(u)=\int K_{\alpha }(u,x)f(x)\,\mathrm {d} x} where the α-angle kernel is {\displaystyle K_{\alpha }(u,x)={\begin{cases}{\sqrt {1-i\cot(\alpha )}}\exp \left(i\pi (\cot(\alpha )(x^{2}+u^{2})-2\csc(\alpha )ux)\right)&{\mbox{if }}\alpha {\mbox{ is not a multiple of }}\pi ,\\\delta (u-x)&{\mbox{if }}\alpha {\mbox{ is a multiple of }}2\pi ,\\\delta (u+x)&{\mbox{if }}\alpha +\pi {\mbox{ is a multiple of }}2\pi .\\\end{cases}}} Here again the special cases are consistent with the limit behavior when α approaches a multiple of π. The FRFT has the same properties as its kernel. There also exist related fractional generalizations of similar transforms such as the discrete Fourier transform. The Fourier transform is essentially bosonic; it works because it is consistent with the superposition principle and related interference patterns.
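A brute-force numerical rendering of this kernel makes the definition and the properties above checkable (an O(N²) quadrature purely for illustration—practical FRFT implementations use fast chirp decompositions; the grid and test angles are arbitrary choices of ours, and the kernel degenerates when α is a multiple of π, as in the special cases above):

```python
import numpy as np

def frft_matrix(alpha, x):
    """Discretized alpha-angle FRFT kernel K_alpha(u, x) on the grid x.

    Returns a matrix F so that F @ f approximates the transform of samples f.
    Valid only when alpha is not a multiple of pi (kernel degenerates there).
    """
    dx = x[1] - x[0]
    u, xx = np.meshgrid(x, x, indexing="ij")
    cot, csc = 1 / np.tan(alpha), 1 / np.sin(alpha)
    K = np.sqrt(1 - 1j * cot) * np.exp(
        1j * np.pi * (cot * (xx**2 + u**2) - 2 * csc * u * xx))
    return K * dx

x = np.linspace(-4, 4, 1024)
f = np.exp(-np.pi * x**2)          # exp(-pi x^2) is its own Fourier transform

# alpha = pi/2 reduces to the ordinary Fourier transform:
print(np.allclose(frft_matrix(np.pi / 2, x) @ f, f, atol=1e-6))       # True

# Angle additivity F_{a+b} = F_a o F_b, and inversion (F_a)^{-1} = F_{-a}:
a, b = 0.4, 0.7
Fa, Fb, Fab = (frft_matrix(t, x) for t in (a, b, a + b))
print(np.max(np.abs(Fa @ (Fb @ f) - Fab @ f)))            # ~0 up to discretization
print(np.max(np.abs(frft_matrix(-a, x) @ (Fa @ f) - f)))  # ~0 up to discretization
```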
There is also a fermionic Fourier transform.[16] These have been generalized into a supersymmetric FRFT, and a supersymmetric Radon transform.[16] There is also a fractional Radon transform, a symplectic FRFT, and a symplectic wavelet transform.[17] Because quantum circuits are based on unitary operations, they are useful for computing integral transforms, as the latter are unitary operators on a function space. A quantum circuit has been designed which implements the FRFT.[18] The usual interpretation of the Fourier transform is as a transformation of a time-domain signal into a frequency-domain signal. On the other hand, the interpretation of the inverse Fourier transform is as a transformation of a frequency-domain signal into a time-domain signal. Fractional Fourier transforms transform a signal (either in the time domain or frequency domain) into the domain between time and frequency: the transform is a rotation in the time–frequency domain. This perspective is generalized by the linear canonical transformation, which generalizes the fractional Fourier transform and allows linear transforms of the time–frequency domain other than rotation. Take the figure below as an example. If the signal in the time domain is rectangular (as below), it becomes a sinc function in the frequency domain. But if one applies the fractional Fourier transform to the rectangular signal, the transformation output will be in the domain between time and frequency. The fractional Fourier transform is a rotation operation on a time–frequency distribution. From the definition above, for α = 0, there will be no change after applying the fractional Fourier transform, while for α = π/2, the fractional Fourier transform becomes a plain Fourier transform, which rotates the time–frequency distribution by π/2. For other values of α, the fractional Fourier transform rotates the time–frequency distribution according to α. The following figure shows the results of the fractional Fourier transform with different values of α. The diffraction of light can be calculated using integral transforms. The Fresnel diffraction integral is used to find the near-field diffraction pattern. In the far-field limit this equation becomes a Fourier transform and gives the equation for Fraunhofer diffraction. The fractional Fourier transform is equivalent to the Fresnel diffraction equation.[19][20] When the angle {\displaystyle \alpha } becomes {\displaystyle \pi /2}, the fractional Fourier transform is the standard Fourier transform and gives the far-field diffraction pattern. The near-field diffraction maps to values of {\displaystyle \alpha } between 0 and {\displaystyle \pi /2}. The fractional Fourier transform can be used in time–frequency analysis and DSP.[21] It is useful for filtering noise, on the condition that the noise does not overlap with the desired signal in the time–frequency domain. Consider the following example. We cannot apply a filter directly to eliminate the noise, but with the help of the fractional Fourier transform, we can rotate the signal (including the desired signal and noise) first. We then apply a specific filter, which will allow only the desired signal to pass. Thus the noise will be removed completely. Then we use the fractional Fourier transform again to rotate the signal back, and we can get the desired signal. Thus, using just truncation in the time domain, or equivalently low-pass filters in the frequency domain, one can cut out any convex set in time–frequency space.
In contrast, using time-domain or frequency-domain tools without a fractional Fourier transform would only allow cutting out rectangles parallel to the axes.

Fractional Fourier transforms also have applications in quantum physics. For example, they are used to formulate entropic uncertainty relations,[22] in high-dimensional quantum key distribution schemes with single photons,[23] and in observing spatial entanglement of photon pairs.[24] They are also useful in the design of optical systems and for optimizing holographic storage efficiency.[25][26]
https://en.wikipedia.org/wiki/Fractional_Fourier_transform
Gabor wavelets are wavelets invented by Dennis Gabor using complex functions constructed to serve as a basis for Fourier transforms in information theory applications. They are very similar to Morlet wavelets. They are also closely related to Gabor filters. The important property of the wavelet is that it minimizes the product of its standard deviations in the time and frequency domain (given by the variances defined below). Put another way, the uncertainty in information carried by this wavelet is minimized. However, they have the downside of being non-orthogonal, so efficient decomposition into the basis is difficult. Since their inception, various applications have appeared, from image processing to analyzing neurons in the human visual system.[1][2]

The motivation for Gabor wavelets comes from finding some function {\displaystyle f(x)} which minimizes its standard deviation in the time and frequency domains. More formally, the variance in the position domain is:

{\displaystyle (\Delta x)^{2}={\frac {\int (x-\mu )^{2}f(x)f^{*}(x)\,dx}{\int f(x)f^{*}(x)\,dx}},}

where {\displaystyle f^{*}(x)} is the complex conjugate of {\displaystyle f(x)} and {\displaystyle \mu } is the arithmetic mean, defined as:

{\displaystyle \mu ={\frac {\int xf(x)f^{*}(x)\,dx}{\int f(x)f^{*}(x)\,dx}}.}

The variance in the wave number domain is:

{\displaystyle (\Delta k)^{2}={\frac {\int (k-k_{0})^{2}F(k)F^{*}(k)\,dk}{\int F(k)F^{*}(k)\,dk}},}

where {\displaystyle k_{0}} is the arithmetic mean of the Fourier transform {\displaystyle F(k)} of {\displaystyle f(x)}:

{\displaystyle k_{0}={\frac {\int kF(k)F^{*}(k)\,dk}{\int F(k)F^{*}(k)\,dk}}.}

With these defined, the uncertainty is written as the product {\displaystyle (\Delta x)(\Delta k)}. This quantity has been shown to have a lower bound of {\displaystyle {\tfrac {1}{2}}}. The quantum mechanics view is to interpret {\displaystyle (\Delta x)} as the uncertainty in position and {\displaystyle \hbar (\Delta k)} as the uncertainty in momentum. A function {\displaystyle f(x)} that achieves the lowest theoretically possible uncertainty bound is the Gabor wavelet.[3]

The equation of a 1-D Gabor wavelet is a Gaussian modulated by a complex exponential, described as follows:[3]

{\displaystyle f(x)=e^{-(x-x_{0})^{2}/a^{2}}e^{-ik_{0}(x-x_{0})}.}

As opposed to other functions commonly used as bases in Fourier transforms, such as {\displaystyle \sin } and {\displaystyle \cos }, Gabor wavelets have the property that they are localized, meaning that as the distance from the center {\displaystyle x_{0}} increases, the value of the function becomes exponentially suppressed. Here {\displaystyle a} controls the rate of this exponential drop-off and {\displaystyle k_{0}} controls the rate of modulation. It is also worth noting that the Fourier transform (unitary, angular-frequency convention) of a Gabor wavelet is itself a Gabor wavelet.

When processing temporal signals, data from the future cannot be accessed, which leads to problems if attempting to use Gabor functions for processing real-time signals that depend upon the temporal dimension. A time-causal analogue of the Gabor filter has been developed in [4], based on replacing the Gaussian kernel in the Gabor function with a time-causal and time-recursive smoothing kernel referred to as the time-causal limit kernel. In this way, time-frequency analysis based on the resulting complex-valued extension of the time-causal limit kernel makes it possible to capture essentially similar transformations of a temporal signal as the Gabor wavelets can handle, corresponding to the Heisenberg group, while being carried out with strictly time-causal and time-recursive operations; see [4] for further details.
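As a concrete illustration, a minimal Python sketch of the 1-D Gabor wavelet described above (the parameterization follows the description: x0 is the center, a sets the Gaussian drop-off, k0 the modulation rate; sign and normalization conventions vary in the literature):

import numpy as np

def gabor_wavelet(x, x0=0.0, a=1.0, k0=5.0):
    # Gaussian envelope centered at x0, modulated by a complex exponential.
    return np.exp(-((x - x0) / a) ** 2) * np.exp(-1j * k0 * (x - x0))

x = np.linspace(-5.0, 5.0, 1001)
psi = gabor_wavelet(x)            # values decay exponentially away from x0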
https://en.wikipedia.org/wiki/Gabor_wavelet#Wavelet_space
The Huygens–Fresnel principle (named after Dutch physicist Christiaan Huygens and French physicist Augustin-Jean Fresnel) states that every point on a wavefront is itself the source of spherical wavelets, and the secondary wavelets emanating from different points mutually interfere.[1] The sum of these spherical wavelets forms a new wavefront. As such, the Huygens–Fresnel principle is a method of analysis applied to problems of luminous wave propagation both in the far-field limit and in near-field diffraction, as well as reflection.

In 1678, Huygens proposed[2] that every point reached by a luminous disturbance becomes a source of a spherical wave. The sum of these secondary waves determines the form of the wave at any subsequent time; the overall procedure is referred to as Huygens' construction.[3]: 132 He assumed that the secondary waves travelled only in the "forward" direction, and it is not explained in the theory why this is the case. He was able to provide a qualitative explanation of linear and spherical wave propagation, and to derive the laws of reflection and refraction using this principle, but could not explain the deviations from rectilinear propagation that occur when light encounters edges, apertures and screens, commonly known as diffraction effects.[4]

In 1818, Fresnel[5] showed that Huygens's principle, together with his own principle of interference, could explain both the rectilinear propagation of light and also diffraction effects. To obtain agreement with experimental results, he had to include additional arbitrary assumptions about the phase and amplitude of the secondary waves, and also an obliquity factor. These assumptions have no obvious physical foundation, but led to predictions that agreed with many experimental observations, including the Poisson spot.

Poisson was a member of the French Academy, which reviewed Fresnel's work. He used Fresnel's theory to predict that a bright spot ought to appear in the center of the shadow of a small disc, and deduced from this that the theory was incorrect. However, François Arago, another member of the committee, performed the experiment and showed that the prediction was correct.[3] This success was important evidence in favor of the wave theory of light over the then-predominant corpuscular theory.

In 1882, Gustav Kirchhoff analyzed Fresnel's theory in a rigorous mathematical formulation, as an approximate form of an integral theorem.[3]: 375 However, very few rigorous solutions to diffraction problems are known, and most problems in optics are adequately treated using the Huygens–Fresnel principle.[3]: 370 

In 1939, Edward Copson extended Huygens' original principle to consider the polarization of light, which requires a vector potential, in contrast to the scalar potential of a simple ocean wave or sound wave.[6][7]

In antenna theory and engineering, the reformulation of the Huygens–Fresnel principle for radiating current sources is known as the surface equivalence principle.[8][9]

Issues in Huygens–Fresnel theory continue to be of interest. In 1991, David A. B. Miller suggested that treating the source as a dipole (not the monopole assumed by Huygens) will cancel waves propagating in the reverse direction, making Huygens' construction quantitatively correct.[10] In 2021, Forrest L.
Anderson showed that treating the wavelets as Dirac delta functions, summing and differentiating the summation, is sufficient to cancel reverse-propagating waves.[11]

The apparent change in direction of a light ray as it enters a sheet of glass at an angle can be understood by the Huygens construction. Each point on the surface of the glass gives a secondary wavelet. These wavelets propagate at a slower velocity in the glass, making less forward progress than their counterparts in air. When the wavelets are summed, the resulting wavefront propagates at an angle to the direction of the wavefront in air.[12]: 56 

In an inhomogeneous medium with a variable index of refraction, different parts of the wavefront propagate at different speeds. Consequently, the wavefront bends in the direction of higher index.[12]: 68 

The Huygens–Fresnel principle provides a reasonable basis for understanding and predicting the classical wave propagation of light. However, there are limitations to the principle, namely the same approximations made in deriving Kirchhoff's diffraction formula and the near-field approximations due to Fresnel. These can be summarized in the requirement that the wavelength of light be much smaller than the dimensions of any optical components encountered.[3]

Kirchhoff's diffraction formula provides a rigorous mathematical foundation for diffraction, based on the wave equation. The arbitrary assumptions made by Fresnel to arrive at the Huygens–Fresnel equation emerge automatically from the mathematics in this derivation.[13]

A simple example of the operation of the principle can be seen when an open doorway connects two rooms and a sound is produced in a remote corner of one of them. A person in the other room will hear the sound as if it originated at the doorway. As far as the second room is concerned, the vibrating air in the doorway is the source of the sound.

Consider the case of a point source located at a point P0, vibrating at a frequency f. The disturbance may be described by a complex variable U0 known as the complex amplitude. It produces a spherical wave with wavelength λ and wavenumber k = 2π/λ. Within a constant of proportionality, the complex amplitude of the primary wave at the point Q located at a distance r0 from P0 is:

{\displaystyle U(r_{0})={\frac {U_{0}e^{ikr_{0}}}{r_{0}}}.}

Note that the magnitude decreases in inverse proportion to the distance traveled, and the phase changes as k times the distance traveled.

Using Huygens's theory and the principle of superposition of waves, the complex amplitude at a further point P is found by summing the contribution from each point on the sphere of radius r0. In order to get agreement with experimental results, Fresnel found that the individual contributions from the secondary waves on the sphere had to be multiplied by a constant, −i/λ, and by an additional inclination factor, K(χ). The first assumption means that the secondary waves oscillate at a quarter of a cycle out of phase with respect to the primary wave, and that the magnitude of the secondary waves is in a ratio of 1:λ to the primary wave. He also assumed that K(χ) had a maximum value when χ = 0, and was equal to zero when χ = π/2, where χ is the angle between the normal of the primary wavefront and the normal of the secondary wavefront. The complex amplitude at P, due to the contribution of secondary waves, is then given by:[14]

{\displaystyle U(P)=-{\frac {i}{\lambda }}U(r_{0})\int _{S}{\frac {e^{iks}}{s}}K(\chi )\,dS,}

where S describes the surface of the sphere, and s is the distance between Q and P.
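The summation over secondary sources lends itself to direct numerical experiment. The following Python sketch adds up the spherical wavelets e^{iks}/s from points across a slit (scalar theory, with the inclination factor and the −i/λ constant omitted, since they do not affect the shape of the intensity pattern); all parameter values are illustrative:

import numpy as np

lam = 500e-9                              # wavelength (m)
k = 2 * np.pi / lam
slit = np.linspace(-50e-6, 50e-6, 2000)   # secondary sources across a 100 um slit
screen = np.linspace(-5e-3, 5e-3, 1000)   # observation points on a screen 1 m away
z = 1.0

U = np.zeros(screen.size, dtype=complex)
for x0 in slit:
    s = np.sqrt(z**2 + (screen - x0)**2)  # distance from source point Q to screen point P
    U += np.exp(1j * k * s) / s           # sum of secondary wavelets

intensity = np.abs(U)**2                  # reproduces the single-slit diffraction pattern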
Fresnel used a zone construction method to find approximate values of K for the different zones,[3] which enabled him to make predictions that were in agreement with experimental results.

The integral theorem of Kirchhoff includes the basic idea of the Huygens–Fresnel principle. Kirchhoff showed that in many cases, the theorem can be approximated to a simpler form that is equivalent to the formation of Fresnel's formulation.[3]

For an aperture illumination consisting of a single expanding spherical wave, if the radius of curvature of the wave is sufficiently large, Kirchhoff gave the following expression for K(χ):[3]

{\displaystyle K(\chi )={\tfrac {1}{2}}(1+\cos \chi ).}

K has a maximum value at χ = 0, as in the Huygens–Fresnel principle; however, K is not equal to zero at χ = π/2, but at χ = π.

The above derivation of K(χ) assumed that the diffracting aperture is illuminated by a single spherical wave with a sufficiently large radius of curvature. However, the principle holds for more general illuminations.[14] An arbitrary illumination can be decomposed into a collection of point sources, and the linearity of the wave equation can be invoked to apply the principle to each point source individually; K(χ) can then be expressed in a general form that satisfies the conditions stated above (maximum value at χ = 0 and zero at χ = π/2).[14]

Many books and references – e.g. (Greiner, 2002)[15] and (Enders, 2009)[16] – refer to the Generalized Huygens' Principle using the definition in (Feynman, 1948).[17]

Feynman defines the generalized principle in the following way: "Actually Huygens' principle is not correct in optics. It is replaced by Kirchoff's [sic] modification which requires that both the amplitude and its derivative must be known on the adjacent surface. This is a consequence of the fact that the wave equation in optics is second order in the time. The wave equation of quantum mechanics is first order in the time; therefore, Huygens' principle is correct for matter waves, action replacing time."

This clarifies the fact that in this context the generalized principle reflects the linearity of quantum mechanics and the fact that the quantum mechanics equations are first order in time. Only in this case does the superposition principle fully apply, i.e., the wave function at a point P can be expanded as a superposition of waves on a border surface enclosing P. Wave functions can be interpreted in the usual quantum mechanical sense as probability densities, where the formalism of Green's functions and propagators applies. What is noteworthy is that this generalized principle is applicable for "matter waves" and not for light waves any more. The phase factor is now clarified as given by the action, and there is no further confusion as to why the phases of the wavelets differ from that of the original wave and are modified by the additional Fresnel parameters.

As per Greiner,[15] the generalized principle can be expressed for {\displaystyle t'>t} in the form:

{\displaystyle \psi (\mathbf {x} ',t')=i\int d^{3}x\,G(\mathbf {x} ',t';\mathbf {x} ,t)\,\psi (\mathbf {x} ,t),}

where G is the usual Green function that propagates the wave function {\displaystyle \psi } in time. This description resembles and generalizes Fresnel's initial formula from the classical model.

Huygens' theory served as a fundamental explanation of the wave nature of light interference and was further developed by Fresnel and Young, but it did not fully resolve all observations, such as the low-intensity double-slit experiment first performed by G. I. Taylor in 1909.
It was not until the early and mid-1900s that quantum theory addressed these questions, particularly in the early discussions at the 1927 Brussels Solvay Conference, where Louis de Broglie proposed his de Broglie hypothesis that the photon is guided by a wave function.[18]

The wave function presents a much different explanation of the observed light and dark bands in a double-slit experiment. In this conception, the photon follows a path which is a probabilistic choice of one of many possible paths in the electromagnetic field. These probable paths form the pattern: in dark areas, no photons are landing, and in bright areas, many photons are landing. The set of possible photon paths is consistent with Richard Feynman's path integral theory, with the paths determined by the surroundings: the photon's originating point (atom), the slit, and the screen, and by tracking and summing phases. The wave function is a solution to this geometry. The wave-function approach was further supported by additional double-slit experiments with electrons in Italy and Japan in the 1970s and 1980s.[19]

Huygens' principle can be seen as a consequence of the homogeneity of space: space is uniform in all locations.[20] Any disturbance created in a sufficiently small region of homogeneous space (or in a homogeneous medium) propagates from that region in all geodesic directions. The waves produced by this disturbance, in turn, create disturbances in other regions, and so on. The superposition of all the waves results in the observed pattern of wave propagation.

Homogeneity of space is fundamental to quantum field theory (QFT), where the wave function of any object propagates along all available unobstructed paths. When integrated along all possible paths, with a phase factor proportional to the action, the interference of the wave functions correctly predicts observable phenomena. Every point on the wavefront acts as the source of secondary wavelets that spread out in the light cone with the same speed as the wave. The new wavefront is found by constructing the surface tangent to the secondary wavelets.

In 1900, Jacques Hadamard observed that Huygens' principle fails when the number of spatial dimensions is even.[21][22][23] From this, he developed a set of conjectures that remain an active topic of research.[24][25] In particular, it has been discovered that Huygens' principle holds on a large class of homogeneous spaces derived from the Coxeter group (so, for example, the Weyl groups of simple Lie algebras).[20][26]

The traditional statement of Huygens' principle for the D'Alembertian gives rise to the KdV hierarchy; analogously, the Dirac operator gives rise to the AKNS hierarchy.[27][28]
https://en.wikipedia.org/wiki/Huygens%E2%80%93Fresnel_principle
Non-separable wavelets are multi-dimensional wavelets that are not directly implemented as tensor products of wavelets on some lower-dimensional space. They have been studied since 1992.[1] They offer a few important advantages: notably, using non-separable filters leads to more parameters in the design, and consequently better filters.[2] The main difference, when compared to one-dimensional wavelets, is that multi-dimensional sampling requires the use of lattices (e.g., the quincunx lattice). The wavelet filters themselves can be separable or non-separable, regardless of the sampling lattice; thus, in some cases, non-separable wavelets can be implemented in a separable fashion. Unlike separable wavelets, non-separable wavelets are capable of detecting structures that are not only horizontal, vertical, or diagonal (they show less anisotropy).
https://en.wikipedia.org/wiki/Non-separable_wavelet
In applied mathematical analysis, shearlets are a multiscale framework which allows efficient encoding of anisotropic features in multivariate problem classes. Originally, shearlets were introduced in 2006[1] for the analysis and sparse approximation of functions {\displaystyle f\in L^{2}(\mathbb {R} ^{2})}. They are a natural extension of wavelets, to accommodate the fact that multivariate functions are typically governed by anisotropic features such as edges in images, since wavelets, as isotropic objects, are not capable of capturing such phenomena.

Shearlets are constructed by parabolic scaling, shearing, and translation applied to a few generating functions. At fine scales, they are essentially supported within skinny and directional ridges following the parabolic scaling law, which reads length² ≈ width. Similar to wavelets, shearlets arise from the affine group and allow a unified treatment of the continuum and digital situation, leading to faithful implementations. Although they do not constitute an orthonormal basis for {\displaystyle L^{2}(\mathbb {R} ^{2})}, they still form a frame, allowing stable expansions of arbitrary functions {\displaystyle f\in L^{2}(\mathbb {R} ^{2})}.

One of the most important properties of shearlets is their ability to provide optimally sparse approximations (in the sense of optimality in[2]) for cartoon-like functions {\displaystyle f}. In imaging sciences, cartoon-like functions serve as a model for anisotropic features and are compactly supported in {\displaystyle [0,1]^{2}} while being {\displaystyle C^{2}} apart from a closed piecewise {\displaystyle C^{2}} singularity curve with bounded curvature. The decay rate of the {\displaystyle L^{2}}-error of the {\displaystyle N}-term shearlet approximation obtained by taking the {\displaystyle N} largest coefficients from the shearlet expansion is in fact optimal up to a log-factor:[3][4]

{\displaystyle \|f-f_{N}\|_{L^{2}}^{2}\leq CN^{-2}(\log N)^{3},\qquad N\to \infty ,}

where the constant {\displaystyle C} depends only on the maximum curvature of the singularity curve and the maximum magnitudes of {\displaystyle f}, {\displaystyle f'} and {\displaystyle f''}. This approximation rate significantly improves on the best {\displaystyle N}-term approximation rate of wavelets, which provide only {\displaystyle O(N^{-1})} for this class of functions. Shearlets are to date the only directional representation system that provides sparse approximation of anisotropic features while providing a unified treatment of the continuum and digital realm that allows faithful implementation. Extensions of shearlet systems to {\displaystyle L^{2}(\mathbb {R} ^{d}),d\geq 2} are also available. A comprehensive presentation of the theory and applications of shearlets can be found in [5].

The construction of continuous shearlet systems is based on parabolic scaling matrices

{\displaystyle A_{a}={\begin{pmatrix}a&0\\0&a^{1/2}\end{pmatrix}},\quad a>0,}

as a means to change the resolution, on shear matrices

{\displaystyle S_{s}={\begin{pmatrix}1&s\\0&1\end{pmatrix}},\quad s\in \mathbb {R} ,}

as a means to change the orientation, and finally on translations to change the positioning. In comparison to curvelets, shearlets use shearings instead of rotations; the advantage is that the shear operator {\displaystyle S_{s}} leaves the integer lattice invariant in case {\displaystyle s\in \mathbb {Z} }, i.e., {\displaystyle S_{s}\mathbb {Z} ^{2}\subseteq \mathbb {Z} ^{2}}. This indeed allows a unified treatment of the continuum and digital realm, thereby guaranteeing a faithful digital implementation.
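A small Python sketch of these two ingredients (the matrix forms follow the standard parabolic-scaling convention shown above); the final line illustrates the lattice invariance just mentioned:

import numpy as np

def A(a):                      # parabolic scaling: essential support obeys length^2 ~ width
    return np.array([[a, 0.0], [0.0, np.sqrt(a)]])

def S(s):                      # shearing: changes orientation without rotating
    return np.array([[1.0, s], [0.0, 1.0]])

z = np.array([3, -2])          # a point of the integer lattice Z^2
print(S(2) @ z)                # -> [-1. -2.]: an integer shear maps Z^2 into Z^2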
For {\displaystyle \psi \in L^{2}(\mathbb {R} ^{2})}, the continuous shearlet system generated by {\displaystyle \psi } is then defined as

{\displaystyle \operatorname {SH} _{\mathrm {cont} }(\psi )=\{\psi _{a,s,t}=a^{-{\frac {3}{4}}}\psi (A_{a}^{-1}S_{s}^{-1}(\cdot -t))\mid a>0,\,s\in \mathbb {R} ,\,t\in \mathbb {R} ^{2}\},}

and the corresponding continuous shearlet transform is given by the map

{\displaystyle f\mapsto \langle f,\psi _{a,s,t}\rangle ,\qquad f\in L^{2}(\mathbb {R} ^{2}).}

A discrete version of shearlet systems can be directly obtained from {\displaystyle \operatorname {SH} _{\mathrm {cont} }(\psi )} by discretizing the parameter set {\displaystyle \mathbb {R} _{>0}\times \mathbb {R} \times \mathbb {R} ^{2}}. There are numerous approaches for this, the most popular being dyadic sampling of the scale, shear, and translation parameters. From this, the discrete shearlet system associated with the shearlet generator {\displaystyle \psi }, and the associated discrete shearlet transform, are defined by restricting the continuous system and transform to the discrete parameter set.

Let {\displaystyle \psi _{1}\in L^{2}(\mathbb {R} )} be a function satisfying the discrete Calderón condition, i.e.,

{\displaystyle \sum _{j\in \mathbb {Z} }|{\hat {\psi }}_{1}(2^{-j}\xi )|^{2}=1\quad {\text{for a.e. }}\xi \in \mathbb {R} ,}

with {\displaystyle {\hat {\psi }}_{1}\in C^{\infty }(\mathbb {R} )} and {\displaystyle \operatorname {supp} {\hat {\psi }}_{1}\subseteq [-{\tfrac {1}{2}},-{\tfrac {1}{16}}]\cup [{\tfrac {1}{16}},{\tfrac {1}{2}}]}, where {\displaystyle {\hat {\psi }}_{1}} denotes the Fourier transform of {\displaystyle \psi _{1}}. For instance, one can choose {\displaystyle \psi _{1}} to be a Meyer wavelet.

Furthermore, let {\displaystyle \psi _{2}\in L^{2}(\mathbb {R} )} be such that {\displaystyle {\hat {\psi }}_{2}\in C^{\infty }(\mathbb {R} )}, {\displaystyle \operatorname {supp} {\hat {\psi }}_{2}\subseteq [-1,1]}, and

{\displaystyle \sum _{k=-1}^{1}|{\hat {\psi }}_{2}(\xi +k)|^{2}=1\quad {\text{for a.e. }}\xi \in [-1,1].}

One typically chooses {\displaystyle {\hat {\psi }}_{2}} to be a smooth bump function. Then {\displaystyle \psi \in L^{2}(\mathbb {R} ^{2})} given by

{\displaystyle {\hat {\psi }}(\xi )={\hat {\psi }}(\xi _{1},\xi _{2})={\hat {\psi }}_{1}(\xi _{1})\,{\hat {\psi }}_{2}\!\left({\tfrac {\xi _{2}}{\xi _{1}}}\right)}

is called a classical shearlet. It can be shown that the corresponding discrete shearlet system {\displaystyle \operatorname {SH} (\psi )} constitutes a Parseval frame for {\displaystyle L^{2}(\mathbb {R} ^{2})} consisting of bandlimited functions.[5]

Another example is given by compactly supported shearlet systems, where a compactly supported function {\displaystyle \psi \in L^{2}(\mathbb {R} ^{2})} can be chosen so that {\displaystyle \operatorname {SH} (\psi )} forms a frame for {\displaystyle L^{2}(\mathbb {R} ^{2})}.[4][6][7][8] In this case, all shearlet elements in {\displaystyle \operatorname {SH} (\psi )} are compactly supported, providing superior spatial localization compared to the classical shearlets, which are bandlimited. Although a compactly supported shearlet system does not generally form a Parseval frame, any function {\displaystyle f\in L^{2}(\mathbb {R} ^{2})} can be represented by the shearlet expansion due to its frame property.

One drawback of shearlets defined as above is the directional bias of shearlet elements associated with large shearing parameters. This effect is already recognizable in the frequency tiling of classical shearlets (see the figure in the Examples section), where the frequency support of a shearlet increasingly aligns along the {\displaystyle \xi _{2}}-axis as the shearing parameter {\displaystyle s} goes to infinity. This causes serious problems when analyzing a function whose Fourier transform is concentrated around the {\displaystyle \xi _{2}}-axis. To deal with this problem, the frequency domain is divided into a low-frequency region {\displaystyle {\mathcal {R}}} around the origin and two conic regions: a horizontal cone {\displaystyle {\mathcal {C}}_{\mathrm {h} }} of frequencies with {\displaystyle |\xi _{2}/\xi _{1}|\leq 1} and a vertical cone {\displaystyle {\mathcal {C}}_{\mathrm {v} }} with {\displaystyle |\xi _{1}/\xi _{2}|\leq 1} (see the figure). The associated cone-adapted discrete shearlet system consists of three parts, each one corresponding to one of these frequency domains.
It is generated by three functions {\displaystyle \phi ,\psi ,{\tilde {\psi }}\in L^{2}(\mathbb {R} ^{2})} and a lattice sampling factor {\displaystyle c=(c_{1},c_{2})\in (\mathbb {R} _{>0})^{2}}. The systems {\displaystyle \Psi (\psi )} and {\displaystyle {\tilde {\Psi }}({\tilde {\psi }})} basically differ in the reversed roles of {\displaystyle x_{1}} and {\displaystyle x_{2}}. Thus, they correspond to the conic regions {\displaystyle {\mathcal {C}}_{\mathrm {h} }} and {\displaystyle {\mathcal {C}}_{\mathrm {v} }}, respectively. Finally, the scaling function {\displaystyle \phi } is associated with the low-frequency part {\displaystyle {\mathcal {R}}}.
https://en.wikipedia.org/wiki/Shearlet
Ultra-wideband (UWB, ultra wideband, ultra-wide band and ultraband) is a radio technology that can use a very low energy level for short-range, high-bandwidth communications over a large portion of the radio spectrum.[1] UWB has traditional applications in non-cooperative radar imaging. Most recent applications target sensor data collection, precise locating,[2] and tracking.[3][4] UWB support started to appear in high-end smartphones in 2019.

Ultra-wideband is a technology for transmitting information across a wide bandwidth (more than 500 MHz). This allows for the transmission of a large amount of signal energy without interfering with conventional narrowband and carrier wave transmission in the same frequency band. Regulatory limits in many countries allow for this efficient use of radio bandwidth, and enable high-data-rate personal area network (PAN) wireless connectivity, longer-range low-data-rate applications, and the transparent co-existence of radar and imaging systems with existing communications systems.

Ultra-wideband was formerly known as pulse radio, but the FCC and the International Telecommunication Union Radiocommunication Sector (ITU-R) currently define UWB as an antenna transmission for which the emitted signal bandwidth exceeds the lesser of 500 MHz or 20% of the arithmetic center frequency.[5] Thus, pulse-based systems, where each transmitted pulse occupies the UWB bandwidth (or an aggregate of at least 500 MHz of a narrow-band carrier, as in orthogonal frequency-division multiplexing (OFDM)), can access the UWB spectrum under the rules.

A significant difference between conventional radio transmissions and UWB is that conventional systems transmit information by varying the power level, frequency, or phase (or a combination of these) of a sinusoidal wave, whereas UWB transmissions convey information by generating radio energy at specific time intervals and occupying a large bandwidth, thus enabling pulse-position or time modulation. Information can also be modulated onto UWB signals (pulses) by encoding the polarity of the pulse, its amplitude, and/or by using orthogonal pulses. UWB pulses can be sent sporadically at relatively low pulse rates to support time or position modulation, but can also be sent at rates up to the inverse of the UWB pulse bandwidth. Pulse-UWB systems have been demonstrated at channel pulse rates in excess of 1.3 billion pulses per second, using a continuous stream of UWB pulses (continuous-pulse UWB, or C-UWB), while supporting forward error-correction encoded data rates in excess of 675 Mbit/s.[6]

A UWB radio system can be used to determine the "time of flight" of the transmission at various frequencies. This helps overcome multipath propagation, since some of the frequencies have a line-of-sight trajectory, while other, indirect paths have longer delays. With a cooperative symmetric two-way metering technique, distances can be measured to high resolution and accuracy.[7] (A sketch of the basic two-way ranging computation follows below.)

Ultra-wideband (UWB) technology is utilised for real-time locating due to its precision and reliability. It plays a role in various industries such as logistics, healthcare, manufacturing, and transportation. UWB's centimeter-level accuracy is valuable in applications where traditional methods are unsuitable, such as indoor environments, where GPS precision is hindered. Its low power consumption ensures minimal interference and allows for coexistence with existing infrastructure.
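The basic time-of-flight arithmetic behind such ranging is simple; the Python sketch below shows a symmetric two-way (round-trip) computation with illustrative numbers, ignoring the clock-drift corrections real protocols apply:

C = 299_792_458.0                        # speed of light, m/s

def two_way_distance(t_round_s, t_reply_s):
    # Half the round-trip flight time, after removing the responder's
    # known reply latency, times the speed of light.
    return C * (t_round_s - t_reply_s) / 2.0

print(two_way_distance(67e-9, 0.0))      # ~10 m for a 67 ns round trip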
UWB performs well in challenging environments: its immunity to multipath interference provides consistent and accurate positioning. In logistics, UWB increases inventory-tracking efficiency, reducing losses and optimizing operations. Healthcare makes use of UWB in asset tracking, patient-flow optimization, and improving care coordination. In manufacturing, UWB is used for streamlining inventory management and enhancing production efficiency through accurate tracking of materials and tools. UWB supports route planning, fleet management, and vehicle security in transportation systems.[8] UWB uses multiple techniques for location detection.[9]

Apple launched the first three phones with ultra-wideband capabilities in September 2019, namely the iPhone 11, iPhone 11 Pro, and iPhone 11 Pro Max.[10][11][12] Apple also launched Series 6 of the Apple Watch in September 2020, which features UWB,[13] and its AirTags featuring this technology were revealed at a press event on April 20, 2021.[14][4] The Samsung Galaxy Note 20 Ultra, Galaxy S21+, and Galaxy S21 Ultra also began supporting UWB,[15] along with the Samsung Galaxy SmartTag+.[16] The Xiaomi MIX 4, released in August 2021, supports UWB and offers the capability of connecting to select AIoT devices.[17]

The FiRa Consortium was founded in August 2019 to develop interoperable UWB ecosystems, including mobile phones. Samsung, Xiaomi, and Oppo are currently members of the FiRa Consortium.[18] In November 2020, the Android Open Source Project received the first patches related to an upcoming UWB API; "feature-complete" UWB support (solely for ranging between supported devices) was released in version 13 of Android.[19]

Ultra-wideband gained widespread attention for its implementation in synthetic aperture radar (SAR) technology. Due to its high-resolution capabilities at lower frequencies, UWB SAR was heavily researched for its object-penetration ability.[23][24][25] Starting in the early 1990s, the U.S. Army Research Laboratory (ARL) developed various stationary and mobile ground-, foliage-, and wall-penetrating radar platforms that served to detect and identify buried IEDs and hidden adversaries at a safe distance. Examples include the railSAR, the boomSAR, the SIRE radar, and the SAFIRE radar.[26][27] ARL has also investigated whether UWB radar technology can incorporate Doppler processing to estimate the velocity of a moving target when the platform is stationary.[28] While a 2013 report highlighted the issue with the use of UWB waveforms due to target range migration during the integration interval, more recent studies have suggested that UWB waveforms can demonstrate better performance than conventional Doppler processing as long as a correct matched filter is used.[29]

Ultra-wideband pulse Doppler radars have also been used to monitor vital signs of the human body, such as heart rate and respiration signals, as well as for human gait analysis and fall detection. They serve as a potential alternative to continuous-wave radar systems, since they involve less power consumption and offer a high-resolution range profile.
However, their low signal-to-noise ratio has made them vulnerable to errors.[30][31]

Ultra-wideband is also used in "see-through-the-wall" precision radar-imaging technology,[32][33][34] precision locating and tracking (using distance measurements between radios), and precision time-of-arrival-based localization approaches.[35] UWB radar has been proposed as the active sensor component in an Automatic Target Recognition application, designed to detect humans or objects that have fallen onto subway tracks.[36]

Ultra-wideband characteristics are well-suited to short-range applications, such as PC peripherals, wireless monitors, camcorders, wireless printing, and file transfers to portable media players.[37] UWB was proposed for use in personal area networks, and appeared in the IEEE 802.15.3a draft PAN standard. However, after several years of deadlock, the IEEE 802.15.3a task group[38] was dissolved[39] in 2006. The work was completed by the WiMedia Alliance and the USB Implementer Forum. Slow progress in UWB standards development, the cost of initial implementation, and performance significantly lower than initially expected are several reasons for the limited use of UWB in consumer products (which caused several UWB vendors to cease operations in 2008 and 2009).[40]

UWB's precise positioning and ranging capabilities enable collision avoidance and centimeter-level localization accuracy, surpassing traditional GPS systems. Moreover, its high data rate and low latency facilitate seamless vehicle-to-vehicle communication, promoting real-time information exchange and coordinated actions. UWB also enables effective vehicle-to-infrastructure communication, integrating with infrastructure elements for optimized behavior based on precise timing and synchronized data. Additionally, UWB's versatility supports innovative applications such as high-resolution radar imaging for advanced driver-assistance systems, secure keyless entry via biometrics or device pairing, and occupant monitoring systems, potentially enhancing convenience, security, and passenger safety.[41]

In the U.S., ultra-wideband refers to radio technology with a bandwidth exceeding the lesser of 500 MHz or 20% of the arithmetic center frequency, according to the U.S. Federal Communications Commission (FCC). A February 14, 2002 FCC Report and Order[58] authorized the unlicensed use of UWB in the frequency range from 3.1 to 10.6 GHz. The FCC power spectral density (PSD) emission limit for UWB transmitters is −41.3 dBm/MHz. This limit also applies to unintentional emitters in the UWB band (the "Part 15" limit). However, the emission limit for UWB emitters may be significantly lower (as low as −75 dBm/MHz) in other segments of the spectrum.

Deliberations in the International Telecommunication Union Radiocommunication Sector (ITU-R) resulted in a Report and Recommendation on UWB in November 2005. The UK regulator Ofcom announced a similar decision[59] on 9 August 2007.

There has been concern over interference between narrowband and UWB signals that share the same spectrum. Earlier, the only radio technology that used pulses was the spark-gap transmitter, which international treaties banned because it interfered with medium-wave receivers. However, UWB uses much lower levels of power. The subject was extensively covered in the proceedings that led to the adoption of the FCC rules in the US, and in the meetings of the ITU-R leading to its Report and Recommendations on UWB technology.
Commonly used electrical appliances emit impulsive noise (for example, hair dryers), and proponents successfully argued that the noise floor would not be raised excessively by wider deployment of low-power wideband transmitters.[60]

In February 2002, the Federal Communications Commission (FCC) released an amendment (Part 15) that specifies the rules of UWB transmission and reception. According to this release, any signal with a fractional bandwidth greater than 20%, or with a bandwidth greater than 500 MHz, is considered a UWB signal. The FCC ruling also defines access to 7.5 GHz of unlicensed spectrum between 3.1 and 10.6 GHz that is made available for communication and measurement systems.[61] (A minimal sketch of this classification rule follows below.)

Narrowband signals that exist in the UWB range, such as IEEE 802.11a transmissions, may exhibit high PSD levels compared to UWB signals as seen by a UWB receiver. As a result, one would expect a degradation of UWB bit-error-rate performance.[62]
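A minimal Python sketch of the FCC-style classification rule described above (the function name is illustrative):

def is_uwb(f_low_hz: float, f_high_hz: float) -> bool:
    # UWB if the bandwidth exceeds the lesser of 500 MHz or 20% of the
    # arithmetic center frequency, per the definition above.
    bandwidth = f_high_hz - f_low_hz
    f_center = (f_high_hz + f_low_hz) / 2.0
    return bandwidth > min(500e6, 0.20 * f_center)

print(is_uwb(3.1e9, 10.6e9))   # True: the 7.5 GHz band authorized by the FCC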
https://en.wikipedia.org/wiki/Ultra_wideband
Wavelets are often used to analyse piece-wise smooth signals.[1] Wavelet coefficients can efficiently represent a signal, which has led to data compression algorithms using wavelets.[2] Wavelet analysis has also been extended to multidimensional signal processing. This article introduces a few methods for wavelet synthesis and analysis for multidimensional signals, as well as challenges that arise in the multidimensional case, such as directivity.

The discrete wavelet transform is extended to the multidimensional case using the tensor product of well-known 1-D wavelets. In 2-D, for example, the tensor product space is decomposed into four tensor product vector spaces[3] as

(φ(x) ⊕ ψ(x)) ⊗ (φ(y) ⊕ ψ(y)) = {φ(x)φ(y), φ(x)ψ(y), ψ(x)φ(y), ψ(x)ψ(y)}.

This leads to the concept of the multidimensional separable DWT, similar in principle to the multidimensional DFT. φ(x)φ(y) gives the approximation coefficients; the other subbands give the detail coefficients: φ(x)ψ(y) the low-high (LH) subband, ψ(x)φ(y) the high-low (HL) subband, and ψ(x)ψ(y) the high-high (HH) subband.

Wavelet coefficients can be computed by passing the signal to be decomposed through a series of filters. In the 1-D case, there are two filters at every level: one low-pass for the approximation and one high-pass for the details. In the multidimensional case, the number of filters at each level depends on the number of tensor product vector spaces; for M dimensions, 2^M filters are necessary at every level. Each of these is called a subband. The subband with all low-pass filters (LLL...) gives the approximation coefficients, and all the rest give the detail coefficients at that level. For example, for M = 3 and a signal of size N1 × N2 × N3, a separable DWT can be implemented as follows: applying the 1-D DWT analysis filterbank in the N1 dimension splits the signal into two chunks of size N1/2 × N2 × N3; applying the 1-D DWT in the N2 dimension splits each of these chunks into two more chunks of size N1/2 × N2/2 × N3; repeating this in the third dimension gives a total of 8 chunks of size N1/2 × N2/2 × N3/2.[4]

The wavelets generated by the separable DWT procedure are highly shift variant: a small shift in the input signal changes the wavelet coefficients to a large extent. Also, these wavelets are almost equal in magnitude in all directions and thus do not reflect the orientation or directivity that could be present in the multidimensional signal. For example, there could be an edge discontinuity in an image, or an object moving smoothly along a straight line in 4-D space-time; a separable DWT does not fully capture such structures.
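A minimal sketch of the separable 2-D DWT just described, using the PyWavelets package (assumed available); one level of decomposition yields the four tensor-product subbands:

import numpy as np
import pywt

image = np.random.rand(256, 256)
# One separable 2-D DWT level: approximation (LL) plus the three detail
# subbands (PyWavelets calls them horizontal, vertical, diagonal).
LL, (LH, HL, HH) = pywt.dwt2(image, "haar")
print(LL.shape)                # (128, 128): each subband halves each dimension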
In order to overcome these difficulties, a method of wavelet transform called the complex wavelet transform (CWT) was developed. Similar to the 1-D complex wavelet transform,[5] tensor products of complex wavelets are considered to produce complex wavelets for multidimensional signal analysis. With further analysis it is seen that these complex wavelets are oriented.[6] This sort of orientation helps to resolve the directional ambiguity of the signal.

The dual-tree CWT in 1-D uses two real DWTs, where the first gives the real part of the CWT and the second gives the imaginary part. The M-D dual-tree CWT is analyzed in terms of tensor products. However, it is possible to implement M-D CWTs efficiently using separable M-D DWTs and considering sums and differences of the subbands obtained. Additionally, these wavelets tend to be oriented in specific directions, and two types of oriented M-D CWTs can be implemented.

Considering only the real part of the tensor product of wavelets, real coefficients are obtained. All wavelets are oriented in different directions. This is 2^m times as expansive, where m is the number of dimensions. If both real and imaginary parts of the tensor products of complex wavelets are considered, a complex oriented dual-tree CWT, which is two times more expansive than the real oriented dual-tree CWT, is obtained; there are then two wavelets oriented in each of the directions. Although implementing the complex oriented dual-tree structure takes more resources, it is used in order to obtain the approximate shift-invariance property that a complex analytic wavelet can provide in 1-D. In the 1-D case, it is required that the real and imaginary parts of the wavelet be Hilbert transform pairs for the wavelet to be analytic and to exhibit shift invariance. Similarly, in the M-D case, the real and imaginary parts of the tensor products are made to be approximate Hilbert transform pairs in order to be analytic and shift invariant.[6][7]

Consider an example for the 2-D dual-tree real oriented CWT. Let ψ(x) and ψ(y) be complex wavelets:

ψ(x) = ψ(x)_h + j ψ(x)_g and ψ(y) = ψ(y)_h + j ψ(y)_g.

Then

ψ(x, y) = [ψ(x)_h + j ψ(x)_g][ψ(y)_h + j ψ(y)_g] = ψ(x)_h ψ(y)_h − ψ(x)_g ψ(y)_g + j [ψ(x)_h ψ(y)_g + ψ(x)_g ψ(y)_h].

The support of the Fourier spectrum of the wavelet above resides in the first quadrant. When just the real part is considered, Real(ψ(x, y)) = ψ(x)_h ψ(y)_h − ψ(x)_g ψ(y)_g has support in two opposite quadrants (see (a) in the figure). Both ψ(x)_h ψ(y)_h and ψ(x)_g ψ(y)_g correspond to the HH subbands of two different separable 2-D DWTs. This wavelet is oriented at −45°.

Similarly, by considering ψ2(x, y) = ψ(x) ψ(y)*, a wavelet oriented at 45° is obtained. To obtain 4 more oriented real wavelets, φ(x)ψ(y), ψ(x)φ(y), φ(x)ψ(y)* and ψ(x)φ(y)* are considered. The implementation of the complex oriented dual-tree structure is done as follows: two separable 2-D DWTs are implemented in parallel using the filterbank structure as in the previous section. Then, the appropriate sums and differences of the different subbands (LL, LH, HL, HH) give oriented wavelets, a total of 6 in all. Similarly, in 3-D, 4 separable 3-D DWTs in parallel are needed, and a total of 28 oriented wavelets are obtained.

Although the M-D CWT provides oriented wavelets, these orientations are only appropriate to represent the orientation along the (m−1)-th dimension of a signal with m dimensions. When singularities in manifolds[8] of lower dimension are considered, such as a bee moving in a straight line in 4-D space-time, oriented wavelets that are smooth in the direction of the manifold and change rapidly in the direction normal to it are needed. A new transform, the hypercomplex wavelet transform, was developed in order to address this issue.

The dual-tree hypercomplex wavelet transform (HWT) developed in[9] consists of a standard DWT tensor and 2^m − 1 wavelets obtained from combining the 1-D Hilbert transforms of these wavelets along the n-coordinates. In particular, a 2-D HWT consists of the standard 2-D separable DWT tensor and three additional components:

H_x{ψ(x)_h ψ(y)_h} = ψ(x)_g ψ(y)_h

H_y{ψ(x)_h ψ(y)_h} = ψ(x)_h ψ(y)_g

H_x H_y{ψ(x)_h ψ(y)_h} = ψ(x)_g ψ(y)_g

For the 2-D case, this is named the dual-tree quaternion wavelet transform (QWT).[10] The total redundancy in M-D is 2^m, and the system forms a tight frame. The hypercomplex transform described above serves as a building block to construct the directional hypercomplex wavelet transform (DHWT). A linear combination of the wavelets obtained using the hypercomplex transform gives a wavelet oriented in a particular direction. For the 2-D DHWT, it is seen that these linear combinations correspond exactly to the 2-D dual-tree CWT case.
For 3-D, the DHWT can be considered for two values of n: one DHWT for n = 1 and another for n = 2. For n = 2 we have n = m − 1, so, as in the 2-D case, this corresponds to the 3-D dual-tree CWT. But the case of n = 1 gives rise to a new DHWT transform. The combination of the 3-D HWT wavelets is done in a manner that ensures that the resultant wavelet is lowpass along one dimension and bandpass along the other two. In [9], this was used to detect line singularities in 3-D space.

Wavelet transforms for multidimensional signals are often computationally demanding, as is the case with most multidimensional signal processing. Also, the CWT and DHWT methods are redundant, even though they offer directivity and shift invariance.
https://en.wikipedia.org/wiki/Wavelet_for_multidimensional_signals_analysis
In mathematics, a sequence a = (a0, a1, ..., an) of nonnegative real numbers is called a logarithmically concave sequence, or a log-concave sequence for short, if

{\displaystyle a_{i}^{2}\geq a_{i-1}a_{i+1}}

holds for 0 < i < n.

Remark: some authors (explicitly or not) add two further conditions in the definition of log-concave sequences. These conditions mirror the ones required for log-concave functions. Sequences that fulfill the three conditions are also called Pólya frequency sequences of order 2 (PF2 sequences). Refer to chapter 2 of [1] for a discussion on the two notions. For instance, the sequence (1, 1, 0, 0, 1) satisfies the concavity inequalities but not the internal zeros condition.

Examples of log-concave sequences are given by the binomial coefficients along any row of Pascal's triangle and the elementary symmetric means of a finite sequence of real numbers.
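The binomial-coefficient example is easy to verify numerically; the following Python check confirms the defining inequality along one row of Pascal's triangle:

from math import comb

n = 10
row = [comb(n, k) for k in range(n + 1)]
# a_i^2 >= a_{i-1} a_{i+1} for 0 < i < n
assert all(row[i]**2 >= row[i-1] * row[i+1] for i in range(1, n))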
https://en.wikipedia.org/wiki/Logarithmically_concave_sequence
In mathematics, a Borel measure μ on n-dimensional Euclidean space {\displaystyle \mathbb {R} ^{n}} is called logarithmically concave (or log-concave for short) if, for any compact subsets A and B of {\displaystyle \mathbb {R} ^{n}} and 0 < λ < 1, one has

{\displaystyle \mu (\lambda A+(1-\lambda )B)\geq \mu (A)^{\lambda }\mu (B)^{1-\lambda },}

where λA + (1 − λ)B denotes the Minkowski sum of λA and (1 − λ)B.[1]

The Brunn–Minkowski inequality asserts that the Lebesgue measure is log-concave. The restriction of the Lebesgue measure to any convex set is also log-concave. By a theorem of Borell,[2] a probability measure on {\displaystyle \mathbb {R} ^{n}} is log-concave if and only if it has a density with respect to the Lebesgue measure on some affine hyperplane, and this density is a logarithmically concave function. Thus, any Gaussian measure is log-concave. The Prékopa–Leindler inequality shows that a convolution of log-concave measures is log-concave.
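For intervals on the real line, the defining inequality reduces to the arithmetic mean–geometric mean inequality, since the Minkowski combination of two intervals is again an interval whose length is the convex combination of the lengths. A quick numerical check in Python, with illustrative values:

lam, len_A, len_B = 0.3, 2.0, 5.0
lhs = lam * len_A + (1 - lam) * len_B     # measure of lam*A + (1-lam)*B
rhs = len_A**lam * len_B**(1 - lam)       # mu(A)^lam * mu(B)^(1-lam)
assert lhs >= rhs                         # AM-GM: Lebesgue measure on R is log-concave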
https://en.wikipedia.org/wiki/Logarithmically_concave_measure
"AI slop", often simply "slop", is a derogatory term for low-quality media, including writing and images, made usinggenerative artificial intelligencetechnology, characterized by an inherent lack of effort, logic, or purpose.[1][4][5]Coined in the 2020s, the term has a pejorative connotation akin to "spam".[4] It has been variously defined as "digital clutter", "filler content produced by AI tools that prioritize speed and quantity over substance and quality",[6]and "shoddy or unwanted AI content insocial media, art, books and, increasingly, in search results".[7] Jonathan Gilmore, a philosophy professor at theCity University of New York, describes the "incredibly banal, realistic style" of AI slop as being "very easy to process".[8] As earlylarge language models(LLMs) andimage diffusion modelsaccelerated the creation of high-volume but low-quality written content and images, discussion commenced among journalists and on social platforms for the appropriate term for the influx of material. Terms proposed included "AI garbage", "AI pollution", and "AI-generated dross".[5]Early uses of the term "slop" as a descriptor for low-grade AI material apparently came in reaction to the release of AI image generators in 2022. Its early use has been noted among4chan,Hacker News, andYouTubecommentators as a form of in-groupslang.[7] The British computer programmerSimon Willisonis credited with being an early champion of the term "slop" in the mainstream,[1][7]having used it on his personal blog in May 2024.[9]However, he has said it was in use long before he began pushing for the term.[7] The term gained increased popularity in the second quarter of 2024 in part because ofGoogle's use of itsGeminiAI model to generate responses to search queries,[7]and was widely criticized in media headlines during the fourth quarter of 2024.[1][4] Research found that training LLMs on slop causesmodel collapse: a consistent decrease in the lexical, syntactic, and semantic diversity of the model outputs through successive iterations, notably remarkable for tasks demanding high levels of creativity.[10]AI slop is similarly produced when the same content is continuously refined, paraphrased, or reprocessed through LLMs, with each output becoming the input for the next iteration. Research has shown that this process causes information to gradually distort as it passes through a chain of LLMs, a phenomenon reminiscent of a classic communication exercise known as thetelephone game.[11] AI image and video slop proliferated on social media in part because it was revenue-generating for its creators onFacebookandTikTok, with the issue affecting Facebook most notably. 
This incentivizes individuals from developing countries to create images that appeal to audiences in the United States, which attract higher advertising rates.[12][13][14]

The journalist Jason Koebler speculated that the bizarre nature of some of the content may be due to the creators using Hindi, Urdu, and Vietnamese prompts (languages which are underrepresented in the model's training data), or using erratic speech-to-text methods to translate their intentions into English.[12]

Speaking to New York magazine, a Kenyan creator of slop images described giving ChatGPT prompts such as "WRITE ME 10 PROMPT picture OF JESUS WHICH WILLING BRING HIGH ENGAGEMENT ON FACEBOOK [sic]", and then feeding those created prompts into a text-to-image AI service such as Midjourney.[4]

In August 2024, The Atlantic noted that AI slop was becoming associated with the political right in the United States, who were using it for shitposting and engagement farming on social media, with the technology offering "cheap, fast, on-demand fodder for content".[15]

AI slop is frequently used in political campaigns in an attempt at gaining attention through content farming.[16] In August 2024, the American politician Donald Trump posted a series of AI-generated images on his social media platform, Truth Social, that portrayed fans of the singer Taylor Swift in "Swifties for Trump" T-shirts, as well as a photo of the singer herself appearing to endorse Trump's 2024 presidential campaign. The images originated from the conservative Twitter account @amuse, which posted numerous AI slop images leading up to the 2024 United States elections that were shared by other high-profile figures within the American Republican Party, such as Elon Musk, who has publicly endorsed the utilization of generative AI, furthering this association.[17]

In the aftermath of Hurricane Helene in the United States, members of the Republican Party circulated an AI-generated image of a young girl holding a puppy in a flood, and used it as evidence of the failure of President Joe Biden to respond to the disaster.[18][3] Some, like Amy Kremer, shared the image on social media even while acknowledging that it was not genuine.[19][20]

In November 2024, Coca-Cola used artificial intelligence to create three commercials as part of its annual holiday campaign. These videos were immediately met with negative reception from both casual viewers and artists;[21] the animator Alex Hirsch, the creator of Gravity Falls, criticized the company's decision not to employ human artists to create the commercial.[22] In response to the negative feedback, the company defended its decision to use generative artificial intelligence, stating that "Coca-Cola will always remain dedicated to creating the highest level of work at the intersection of human creativity and technology".[23]

In March 2025, Paramount Pictures was criticized for using AI scripting and narration in an Instagram video promoting the film Novocaine.[24] The ad uses a robotic AI voice in a style similar to low-quality AI spam videos produced by content farms. A24 received similar backlash for releasing a series of AI-generated posters for the 2024 film Civil War.
One poster appears to depict a group of soldiers in a tank-like raft preparing to fire on a large swan, an image which does not resemble the events of the film.[25][26]

In the same month, Activision posted various advertisements and posters for fake video games such as "Guitar Hero Mobile", "Crash Bandicoot: Brawl", and "Call of Duty: Zombie Defender" that were all made using generative AI on platforms such as Facebook and Instagram, which many labelled as AI slop.[27] The intention of the posts was later stated to be a survey of interest in possible titles by the company.[28] The Italian brainrot AI trend was widely adopted by advertisers seeking to appeal to younger audiences.[29]

Fantastical promotional graphics for the 2024 Willy's Chocolate Experience event, characterized as "AI-generated slop",[30] misled audiences into attending an event that was held in a lightly decorated warehouse. Tickets were marketed through Facebook advertisements showing AI-generated imagery, with no genuine photographs of the venue.[31]

In October 2024, thousands of people were reported to have assembled for a non-existent Halloween parade in Dublin as a result of a listing on an aggregation listings website, MySpiritHalloween.com, which used AI-generated content.[32][33] The listing went viral on TikTok and Instagram.[34] While a similar parade had been held in Galway, and Dublin had hosted parades in prior years, there was no parade in Dublin in 2024.[33] One analyst characterized the website, which appeared to use AI-generated staff pictures, as likely using artificial intelligence "to create content quickly and cheaply where opportunities are found".[35] The site's owner said that "We asked ChatGPT to write the article for us, but it wasn't ChatGPT by itself." In the past the site had removed non-existent events when contacted by their venues, but in the case of the Dublin parade the site owner said that "no one reported that this one wasn't going to happen". MySpiritHalloween.com updated its page to say that the parade had been "canceled" when it became aware of the issue.[36]

Online booksellers and library vendors now have many titles that are written by AI and are not curated into collections by librarians. The digital media provider Hoopla, which supplies libraries with ebooks and downloadable content, has generative AI books with fictional authors and dubious quality, which cost libraries money when checked out by unsuspecting patrons.[37]

The 2024 video game Call of Duty: Black Ops 6 includes assets generated by artificial intelligence. Since the game's initial release, many players had accused Treyarch and Raven Software of using AI to create in-game assets, including loading screens, emblems, and calling cards. A particular example was a loading screen for the zombies game mode that depicted "Necroclaus", a zombified Santa Claus with six fingers on one hand, an image which also had other irregularities.[38] The previous entry in the Call of Duty franchise was also accused of selling AI-generated cosmetics.[39]

In February 2025, Activision disclosed Black Ops 6's usage of generative artificial intelligence to comply with Valve's policies on AI-generated or assisted products on Steam. Activision states on the game's product page on Steam that "Our team uses generative AI tools to help develop some in game assets."[40]

Foamstars, a multiplayer third-person shooter released by Square Enix in 2024, features in-game music with cover art that was generated using Midjourney.
Square Enix confirmed the use of AI, but defended the decision, saying that the company wanted to "experiment" with artificial intelligence technologies and claiming that the generated assets make up "about 0.01% or even less" of the game's content.[41][42][43] Previously, on January 1, 2024, Square Enix president Takashi Kiryu had stated in a new year letter that the company would be "aggressive in applying AI and other cutting-edge technologies to both [their] content development and [their] publishing functions".[44][45]

In 2024, Rovio Entertainment released a demo of a mobile game called Angry Birds: Block Quest on Android. The game featured AI-generated images for loading screens and backgrounds.[46] It was heavily criticized by players, who called it shovelware and disapproved of Rovio's use of AI images.[47][48] It was eventually discontinued and removed from the Play Store.

Some films have received backlash for including AI-generated content. The film Late Night with the Devil was notable for its use of AI, which some criticized as being AI slop.[49][50] Several low-quality AI-generated images were used as interstitial title cards, with one image featuring a skeleton with inaccurate bone structure and poorly generated fingers that appear disconnected from its hands.[51]

Some streaming services such as Amazon Prime Video have used AI to generate posters and thumbnail images in a manner that can be described as slop. A low-quality AI poster was used for the 1922 film Nosferatu, depicting Count Orlok in a way that does not resemble his look in the film.[52] A thumbnail image for 12 Angry Men on Amazon Freevee used AI to depict 19 men with smudged faces, none of whom appeared to bear any similarity to the characters in the film.[53][54] Additionally, some viewers have noticed that many plot descriptions appear to be generated by AI, which some people have characterized as slop. One synopsis briefly listed on the site for the film Dog Day Afternoon read: "A man takes hostages at a bank in Brooklyn. Unfortunately I do not have enough information to summarize further within the provided guidelines."[55]

In one case, Deutsche Telekom removed a series from its media catalog after viewers complained about the poor quality and monotonous German voice dubbing (translated from the original Polish), which turned out to have been produced via AI.[56]
https://en.wikipedia.org/wiki/AI_slop
Artificial intelligence (AI) has been used in applications throughout industry and academia. In a manner analogous to electricity or computers, AI serves as a general-purpose technology. AI programs are designed to simulate human perception and understanding. These systems are capable of adapting to new information and responding to changing situations. Machine learning has been used for various scientific and commercial purposes[1] including language translation, image recognition, decision-making,[2][3] credit scoring, and e-commerce. AI technologies are now used across many industries, transforming how they function and creating new opportunities. This article surveys applications of AI in fields such as health care, finance, and education, while also discussing the challenges and future prospects in these areas. Machine learning has been used for recommendation systems in determining which posts should show up in social media feeds.[4][5] Various types of social media analysis also make use of machine learning,[6][7] and there is research into its use for (semi-)automated tagging/enhancement/correction of online misinformation and related filter bubbles.[8][9][10] AI has been used to customize shopping options and personalize offers.[11] Online gambling companies have used AI for targeting gamblers.[12] Intelligent personal assistants use AI to understand many natural language requests in ways that go beyond rudimentary commands. Common examples are Apple's Siri, Amazon's Alexa, and a more recent AI, ChatGPT by OpenAI.[13] Bing Chat has used artificial intelligence as part of its search engine.[14] Machine learning can be used to combat spam, scams, and phishing. It can scrutinize the contents of spam and phishing attacks to attempt to identify malicious elements.[15] Some models built via machine learning algorithms have over 90% accuracy in distinguishing between spam and legitimate emails.[16] These models can be refined using new data and evolving spam tactics. Machine learning also analyzes traits such as sender behavior, email header information, and attachment types, potentially enhancing spam detection.[17] Speech translation technology attempts to convert one language's spoken words into another language. This potentially reduces language barriers in global commerce and cross-cultural exchange, enabling speakers of various languages to communicate with one another.[18] AI has been used to automatically translate spoken language and textual content in products such as Microsoft Translator, Google Translate, and DeepL Translator.[19] Additionally, research and development are in progress to decode and conduct animal communication.[20][21] Meaning is conveyed not only by text, but also through usage and context (see semantics and pragmatics). As a result, the two primary categorization approaches for machine translation are statistical machine translation (SMT) and neural machine translation (NMT). SMT uses statistical methods to forecast the most probable output with specific algorithms, while NMT employs neural networks to produce better translations based on context.[22] AI has been used in facial recognition systems.
Some examples are Apple's Face ID and Android's Face Unlock, which are used to secure mobile devices.[23] Image labeling has been used by Google Image Labeler to detect products in photos and to allow people to search based on a photo. Image labeling has also been demonstrated to generate speech to describe images to blind people.[19] Facebook's DeepFace identifies human faces in digital images.[citation needed] Social media sites and content aggregators use AI systems to build personalized news feeds by observing users' actions and engagement history.[24] Content moderation often relies on AI to spot harmful content, though these systems struggle to understand broader context. Search engines use rules to rank results and infer what people want, while virtual assistants like Siri and Alexa interpret user queries phrased in everyday language. Email services use machine learning to detect spam by checking message content and patterns.[25] Neural machine translation systems have become much better at translating text; they work by examining complete sentences, which helps maintain accuracy. Computer vision systems can identify people in images and videos, which assists with tasks like sorting photos and performing security checks. AI used for surveillance, credit scoring, targeted advertising, and automation can erode privacy and concentrate power. It can also lead to dystopian outcomes, such as autonomous systems making unaccountable decisions.[26] Games have been a major application[relevant?] of AI's capabilities since the 1950s. In the 21st century, AIs have beaten human players in many games, including chess (Deep Blue), Jeopardy! (Watson),[27] Go (AlphaGo),[28][29][30][31][32][33][34] poker (Pluribus[35] and Cepheus),[36] e-sports (StarCraft),[37][38] and general game playing (AlphaZero[39][40][41] and MuZero).[42][43][44][45] Kuki AI is a set of chatbots and other apps which were designed for entertainment and as a marketing tool.[46][47] Character.ai is another example of a chatbot being used for recreation.[citation needed] AI has changed gaming by enabling smart non-player characters (NPCs) that can adapt.[48] Algorithms can now create game worlds and situations on their own, which reduces development costs and increases replayability. In digital art and music, AI tools help people express themselves in fresh, new ways using generative algorithms.[49] Recommendation systems on streaming platforms analyze viewing habits to suggest content, strongly shaping how viewers experience media.[citation needed] AI for Good is a platform launched in 2017 by the International Telecommunication Union (ITU) agency of the United Nations (UN). The goal of the platform is to use AI to help achieve the UN's Sustainable Development Goals.[citation needed] The University of Southern California launched the Center for Artificial Intelligence in Society, with the goal of using AI to address problems such as homelessness. Stanford researchers use AI to analyze satellite images to identify high-poverty areas.[50] In agriculture, AI has been proposed as a way for farmers to identify areas that need irrigation, fertilization, or pesticide treatments to increase yields, thereby improving efficiency.[51] AI has been used to attempt to classify livestock pig call emotions,[20] automate greenhouses,[52] detect diseases and pests,[53] and optimize irrigation.[54] Precision farming uses machine learning and data from satellites, drones, and sensors to water, fertilize, and manage pests.
Computer vision helps monitor plant health, spot diseases, and even assist with automated harvesting of specific crops. With predictive analytics, farmers can make better decisions by forecasting weather patterns and knowing when to plant.[55] AI helps with livestock management by tracking animal health and production. These are the tools of "smart farming"; they make farming more efficient and more sustainable.[citation needed] Cybersecurity companies are adopting neural networks, machine learning, and natural language processing to improve their systems.[56] Applications of AI in cybersecurity include the following. Machine learning tools examine traffic patterns and flag unusual activity that might indicate a security breach. Automated systems gather and analyze data with the goal of finding new threats before they do significant damage.[62] User behavior analytics establish normal patterns for users and systems and raise alerts when a deviation might indicate a compromised account.[citation needed] AI also brings new challenges to cybersecurity: attackers are using the same tools to plan smarter attacks, producing an ongoing technological arms race.[citation needed] AI is being applied in teaching to significant issues such as access to knowledge and educational equality. The evolution of AI in education and technology should be used to augment human capabilities, not replace them. UNESCO recognizes the future of AI in education as an instrument to reach Sustainable Development Goal 4, called "Inclusive and Equitable Quality Education."[63] The World Economic Forum also stresses AI's contribution to students' overall improvement and to transforming teaching into a more enjoyable process.[63] AI-driven tutoring systems, such as Khan Academy, Duolingo and Carnegie Learning, are at the forefront of delivering personalized education.[64] These platforms leverage AI algorithms to analyze individual learning patterns, strengths, and weaknesses, enabling content and pacing to be customized to suit each student's style of learning.[64] In educational institutions, AI is increasingly used to automate routine tasks like attendance tracking and grading, which allows educators to devote more time to interactive teaching and direct student engagement.[65] Furthermore, AI tools are employed to monitor student progress, analyze learning behaviors, and predict academic challenges, facilitating timely and proactive interventions for students who may be at risk of falling behind.[65] Despite the benefits, the integration of AI in education raises significant ethical and privacy concerns, particularly regarding the handling of sensitive student data.[64] It is imperative that AI systems in education are designed and operated with a strong emphasis on transparency, security, and respect for privacy to maintain trust and uphold the integrity of educational practices.[64] Much of the regulation will be influenced by the AI Act, the world's first comprehensive AI law.[66] Intelligent tutoring systems provide personalized learning by adapting content based on how each student performs. Automated assessment tools check student work and give fast feedback, which reduces tutors' workload.[67] Learning analytics platforms can identify struggling students sooner by looking for patterns associated with learning difficulties.[citation needed] Content creation tools assist teachers in making learning materials that fit each student's needs, including translating text into several languages.
Even though these tools offer many benefits, concerns remain about data privacy and about the risk of widening existing gaps in education.[68] Financial institutions have long used artificial neural network systems to detect charges or claims outside of the norm, flagging these for human investigation. The use of AI in banking began in 1987 when Security Pacific National Bank launched a fraud prevention task-force to counter the unauthorized use of debit cards.[69] Banks use AI to organize operations for bookkeeping, investing in stocks, and managing properties, and AI can respond to changes outside business hours.[70] AI is used to combat fraud and financial crimes by monitoring behavioral patterns for any abnormal changes or anomalies.[71][72][73] The use of AI in applications such as online trading and decision-making has changed major economic theories.[74] For example, AI-based buying and selling platforms estimate personalized demand and supply curves, thus enabling individualized pricing. AI systems reduce information asymmetry in the market and thus make markets more efficient.[75] The application of artificial intelligence in the financial industry can alleviate the financing constraints of non-state-owned enterprises, especially for smaller and more innovative enterprises.[76] Algorithmic trading systems execute trades far faster and in larger volumes than human traders. Robo-advisors provide automated investment and money-management advice at a lower cost than human advisors. Insurance and lending companies use machine learning to assess risks and set prices.[citation needed] Financial groups use AI systems to screen transactions for money laundering by spotting unusual patterns.[77] Auditing benefits from detection algorithms that flag unusual financial transactions.[citation needed] Algorithmic trading involves using AI systems to make trading decisions at speeds orders of magnitude greater than any human is capable of, making millions of trades in a day without human intervention. Such high-frequency trading represents a fast-growing sector. Many banks, funds, and proprietary trading firms now have AI-managed portfolios. Automated trading systems are typically used by large institutional investors but include smaller firms trading with their own AI systems.[78] Large financial institutions use AI to assist with their investment practices. BlackRock's AI engine, Aladdin, is used both within the company and by clients to help with investment decisions. Its functions include the use of natural language processing to analyze text such as news, broker reports, and social media feeds. It then gauges the sentiment on the companies mentioned and assigns a score. Banks such as UBS and Deutsche Bank use SQREEM (Sequential Quantum Reduction and Extraction Model) to mine data to develop consumer profiles and match them with wealth management products.[79] Online lender Upstart uses machine learning for underwriting.[80] ZestFinance's Zest Automated Machine Learning (ZAML) platform is used for credit underwriting. This platform uses machine learning to analyze data, including purchase transactions and how a customer fills out a form, to score borrowers. The platform is useful for assigning credit scores to those with limited credit histories.[81] AI makes continuous auditing possible.
Potential benefits include reducing audit risk, increasing the level of assurance, and reducing audit duration.[82][quantify] Continuous auditing with AI allows real-time monitoring and reporting of financial activities and provides businesses with timely insights that can lead to quick decision-making.[83] AI software such as LaundroGraph, which works with currently suboptimal datasets, could be used for anti-money laundering (AML).[84][85] In the 1980s, AI started to become prominent in finance as expert systems were commercialized. For example, DuPont created 100 expert systems, which helped them to save almost $10 million per year.[86] One of the first systems was the Pro-trader expert system that predicted the 87-point drop in the Dow Jones Industrial Average in 1986. "The major junctions of the system were to monitor premiums in the market, determine the optimum investment strategy, execute transactions when appropriate and modify the knowledge base through a learning mechanism."[87] Among the first expert systems to help with financial planning were PlanPower and the Client Profiling System, created by Applied Expert Systems (APEX) and launched in 1986; they helped create personal financial plans for people.[88] In the 1990s, AI was applied to fraud detection. In 1993, the FinCEN Artificial Intelligence System (FAIS) was launched. It was able to review over 200,000 transactions per week, and over two years it helped identify 400 potential cases of money laundering involving $1 billion.[89] These expert systems were later replaced by machine learning systems.[90] AI can enhance entrepreneurial activity; it is one of the most dynamic areas for start-ups, with significant venture capital flowing into AI.[91] AI facial recognition systems are used for mass surveillance, notably in China.[92][93] In 2019, Bengaluru, India deployed AI-managed traffic signals. This system uses cameras to monitor traffic density and adjust signal timing based on the interval needed to clear traffic.[94] Various countries are deploying AI military applications.[95] The main applications enhance command and control, communications, sensors, integration and interoperability.[citation needed] Research is targeting intelligence collection and analysis, logistics, cyber operations, information operations, and semiautonomous and autonomous vehicles.[95] AI technologies enable coordination of sensors and effectors, threat detection and identification, marking of enemy positions, target acquisition, and coordination and deconfliction of distributed Joint Fires between networked combat vehicles involving manned and unmanned teams.[citation needed] AI has been used in military operations in Iraq, Syria, Israel and Ukraine.[95][96][97][98] AI in healthcare is often used for classification, to evaluate a CT scan or electrocardiogram, or to identify high-risk patients for population health. AI is being applied to the high-cost problem of dosing; one study suggested that AI could save $16 billion. In 2016, a study reported that an AI-derived formula determined the proper dose of immunosuppressant drugs to give to transplant patients.[99] Current research has indicated that non-cardiac vascular illnesses are also being treated with AI. For certain disorders, AI algorithms can aid in diagnosis, treatment recommendation, outcome prediction, and patient progress tracking.
As AI technology advances, it is anticipated that it will become more significant in the healthcare industry.[100] The early detection of diseases like cancer is made possible by AI algorithms, which diagnose diseases by analyzing complex sets of medical data. For example, the IBM Watson system might be used to comb through massive data such as medical records and clinical trials to help diagnose a problem.[101] Microsoft's AI project Hanover helps doctors choose cancer treatments from among the more than 800 medicines and vaccines.[102][103] Its goal is to memorize all the relevant papers to predict which (combinations of) drugs will be most effective for each patient. Myeloid leukemia is one target. Another study reported on an AI that was as good as doctors in identifying skin cancers.[104] Another project monitors multiple high-risk patients by asking each patient questions based on data acquired from doctor/patient interactions.[105] In one study done with transfer learning, an AI diagnosed eye conditions similarly to an ophthalmologist and recommended treatment referrals.[106] Another study demonstrated surgery with an autonomous robot. The team supervised the robot while it performed soft-tissue surgery, stitching together a pig's bowel; the result was judged better than a surgeon's work.[107] Artificial neural networks are used as clinical decision support systems for medical diagnosis,[108] such as in concept processing technology in EMR software. Other healthcare tasks thought suitable for AI are in development. AI-enabled chatbots decrease the need for humans to perform basic call center tasks.[124] Machine learning in sentiment analysis can spot fatigue in order to prevent overwork.[124] Similarly, decision support systems can prevent industrial disasters and make disaster response more efficient.[125] For manual workers in material handling, predictive analytics may be used to reduce musculoskeletal injury.[126] Data collected from wearable sensors can improve workplace health surveillance, risk assessment, and research.[125][how?] AI can auto-code workers' compensation claims.[127][128] AI-enabled virtual reality systems can enhance safety training for hazard recognition.[125] AI can more efficiently detect accident near misses, which are important in reducing accident rates, but are often underreported.[129] AlphaFold 2 can determine the 3D structure of a (folded) protein in hours rather than the months required by earlier automated approaches, and was used to provide the likely structures of all proteins in the human body and essentially all proteins known to science (more than 200 million).[130][131][132][133] Medical imaging analysis systems can spot patterns that indicate diseases such as cancer as well as human experts can. Predictive analytics can help identify patients at higher risk for specific conditions, which helps in starting treatments earlier.[134] Natural language processing extracts key information from electronic health records, helping doctors make better-informed choices. Machine learning helps find new drugs by predicting how molecules will interact, which can speed the development of new treatments.[135] Personalized medicine uses AI to tailor treatments to each patient's needs.
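To make the risk-stratification idea above concrete, the following is a minimal sketch in Python using scikit-learn. The patient features, the data, and the risk rule are all synthetic placeholders invented for illustration, not clinical values; a real system would be trained and validated on curated medical records.

```python
# Minimal sketch: flagging high-risk patients with a logistic-regression
# classifier, as in the population-health screening described above.
# All features and labels below are synthetic placeholders, not clinical data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features: age, systolic blood pressure, HbA1c, BMI
X = np.column_stack([
    rng.normal(55, 15, n),    # age
    rng.normal(130, 20, n),   # systolic BP
    rng.normal(6.0, 1.2, n),  # HbA1c
    rng.normal(27, 5, n),     # BMI
])
# Synthetic label: toy rule in which risk rises with each feature
logit = (0.04 * (X[:, 0] - 55) + 0.03 * (X[:, 1] - 130)
         + 0.8 * (X[:, 2] - 6.0) + 0.1 * (X[:, 3] - 27))
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

# Rank held-out patients by predicted risk so clinicians can review the top of the list
risk = model.predict_proba(X_te)[:, 1]
print("Ten highest-risk patient indices:", np.argsort(risk)[::-1][:10])
```

The point of the sketch is the output: a ranked list that lets limited clinical attention be directed at the patients the model considers highest-risk, which is how the early-intervention use case described above is typically framed.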
Machine learning has been used for drug design.[136] It has also been used for predicting molecular properties and exploring large chemical/reaction spaces.[137] Computer-planned syntheses via computational reaction networks, described as a platform that combines "computational synthesis with AI algorithms to predict molecular properties",[138] have been used to explore the origins of life on Earth,[139] drug syntheses, and developing routes for recycling 200 industrial waste chemicals into important drugs and agrochemicals (chemical synthesis design).[140] There is research about which types of computer-aided chemistry would benefit from machine learning.[141] It can also be used for "drug discovery and development, drug repurposing, improving pharmaceutical productivity, and clinical trials".[142] It has been used for the design of proteins with prespecified functional sites.[143][144] It has been used with databases in the development of a 46-day process to design, synthesize and test a drug which inhibits enzymes of a particular gene, DDR1. DDR1 is involved in cancers and fibrosis, which is one reason high-quality datasets existed to enable these results.[145] There are various types of applications for machine learning in decoding human biology, such as helping to map gene expression patterns to functional activation patterns[146] or identifying functional DNA motifs.[147] It is widely used in genetic research.[148] There also is some use of machine learning in synthetic biology,[149][150] disease biology,[150] nanotechnology (e.g. nanostructured materials and bionanotechnology),[151][152] and materials science.[153][154][155] There are also prototype robot scientists, including robot-embodied ones like the two Robot Scientists, which show a form of "machine learning" not commonly associated with the term.[156][157] Similarly, there is research and development of biological "wetware computers" that can learn (e.g. for use as biosensors) and/or be implanted into an organism's body (e.g. for use in controlling prosthetics).[158][159][160] Polymer-based artificial neurons operate directly in biological environments and define biohybrid neurons made of artificial and living components.[161][162] Moreover, whole brain emulation – scanning and replicating at least the biochemical brain in digital form, as premised in The Age of Em and possibly implemented using physical neural networks – may have applications as extensive as, or more extensive than, valued human activities, and may imply that society would face substantial moral choices, societal risks and ethical problems,[163][164] such as whether and how such emulations are built, sent through space and used, compared with potentially competing types of more synthetic, less human, or non- or less-sentient artificial or semi-artificial intelligence.[additional citation(s) needed] An alternative or complementary approach to scanning is reverse engineering of the brain.[165][166] A subcategory of artificial intelligence is embodied AI,[167][168] some instances of which are mobile robotic systems, each consisting of one or more robots able to learn in the physical world. Additionally, biological computers, even if both artificial and highly intelligent, are typically distinguishable from synthetic, predominantly silicon-based, computers. The two technologies could, however, be combined and used for the design of either.
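The molecular-property prediction mentioned at the start of this section can be illustrated with a toy sketch: molecules are mapped to numeric fingerprints and a regressor learns to predict a property from them. The character-count "fingerprint" and the (SMILES, value) training pairs below are made-up stand-ins; real pipelines use chemistry toolkits such as RDKit for featurization and measured property datasets.

```python
# Toy sketch of molecular property prediction: featurize molecules, fit a
# regressor, predict a property for a new molecule. The "fingerprint" here
# is just a character count over SMILES strings, chosen for self-containment.
from collections import Counter
from sklearn.ensemble import RandomForestRegressor

ALPHABET = "CNOScnos()=#123"  # hypothetical feature alphabet

def toy_fingerprint(smiles: str) -> list:
    """Count occurrences of each alphabet character in a SMILES string."""
    counts = Counter(smiles)
    return [counts[ch] for ch in ALPHABET]

# Hypothetical training data: (SMILES, property value) pairs, invented here
train = [("CCO", -0.24), ("CCCCCC", 2.5), ("c1ccccc1", 1.9),
         ("CC(=O)O", -0.17), ("CCN", -0.13), ("CCCCO", 0.88)]
X = [toy_fingerprint(s) for s, _ in train]
y = [p for _, p in train]

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print("Predicted property for CCCO:", model.predict([toy_fingerprint("CCCO")])[0])
```

The design choice being illustrated is the two-stage pattern common across the chemistry applications above: a fixed featurization that turns variable-sized molecules into fixed-length vectors, followed by an off-the-shelf learner.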
Moreover, many tasks may be poorly carried out by AI even if it uses algorithms that are transparent, understood, bias-free, apparently effective and goal-aligned, in addition to having training data sets that are sufficiently large and cleansed. This may occur, for instance, when the underlying data, available metrics, values or training methods are incorrect, flawed or used inappropriately. Computer-aided is a phrase used to describe human activities that make use of computing as a tool in more comprehensive activities and systems, such as AI for narrow tasks, or making use of such without substantially relying on its results (see also: human-in-the-loop).[citation needed] One study described the biological component as a limitation of AI, stating that "as long as the biological system cannot be understood, formalized, and imitated, we will not be able to develop technologies that can mimic it" and that, even if it were understood, this does not necessarily mean there will be "a technological solution to imitate natural intelligence".[169] Technologies that integrate biology and AI include biorobotics. Artificial intelligence is used in astronomy to analyze increasing amounts of available data,[170][171] mainly for "classification, regression, clustering, forecasting, generation, discovery, and the development of new scientific insights", for example for discovering exoplanets, forecasting solar activity, and distinguishing between signals and instrumental effects in gravitational wave astronomy.[172] It could also be used for activities in space such as space exploration, including analysis of data from space missions, real-time science decisions of spacecraft, space debris avoidance,[173] and more autonomous operation.[174][175][176][171] In the search for extraterrestrial intelligence (SETI), machine learning has been used in attempts to identify artificially generated electromagnetic waves in available data[177][178] – such as real-time observations[179] – and other technosignatures, e.g. via anomaly detection.[180] In ufology, the SkyCAM-5 project headed by Prof. Hakan Kayal[181] and the Galileo Project headed by Avi Loeb use machine learning to attempt to detect and classify types of UFOs.[182][183][184][185][186] The Galileo Project also seeks to detect two further types of potential extraterrestrial technological signatures with the use of AI: 'Oumuamua-like interstellar objects, and non-manmade artificial satellites.[187][188] Machine learning can also be used to produce datasets of spectral signatures of molecules that may be involved in the atmospheric production or consumption of particular chemicals – such as phosphine possibly detected on Venus – which could prevent misassignments and, if accuracy is improved, be used in future detections and identifications of molecules on other planets.[189] In April 2024, the Scientific Advice Mechanism to the European Commission published advice[190] including a comprehensive evidence review of the opportunities and challenges posed by artificial intelligence in scientific research.
The evidence review[191] highlighted a range of both benefits and challenges. Machine learning can help to restore and attribute ancient texts.[192] It can help to index texts, for example to enable better and easier searching and classification of fragments.[193] Artificial intelligence can also be used to investigate genomes to uncover genetic history, such as interbreeding between archaic and modern humans, from which, for example, the past existence of a ghost population, neither Neanderthal nor Denisovan, was inferred.[194] It can also be used for "non-invasive and non-destructive access to internal structures of archaeological remains".[195] A deep learning system was reported to learn intuitive physics from visual data (of virtual 3D environments) based on an unpublished approach inspired by studies of visual cognition in infants.[196][197] Other researchers have developed a machine learning algorithm that could discover sets of basic variables of various physical systems and predict the systems' future dynamics from video recordings of their behavior.[198][199] In the future, such methods might be used to automate the discovery of physical laws of complex systems.[198] AI could be used for materials optimization and discovery, such as the discovery of stable materials and the prediction of their crystal structure.[200][201][202] In November 2023, researchers at Google DeepMind and Lawrence Berkeley National Laboratory announced that they had developed an AI system known as GNoME. This system has contributed to materials science by discovering over 2 million new materials within a relatively short timeframe. GNoME employs deep learning techniques to efficiently explore potential material structures, achieving a significant increase in the identification of stable inorganic crystal structures. The system's predictions were validated through autonomous robotic experiments, demonstrating a noteworthy success rate of 71%. Data on the newly discovered materials is publicly available through the Materials Project database, offering researchers the opportunity to identify materials with desired properties for various applications. This development has implications for the future of scientific discovery and the integration of AI in materials science research, potentially expediting material innovation and reducing costs in product development. The use of AI and deep learning suggests the possibility of minimizing or eliminating manual lab experiments and allowing scientists to focus more on the design and analysis of unique compounds.[203][204][205] Machine learning is used in diverse types of reverse engineering. For example, machine learning has been used to reverse engineer a composite material part, enabling unauthorized production of high-quality parts,[206] and for quickly understanding the behavior of malware.[207][208][209] It can be used to reverse engineer artificial intelligence models.[210] It can also design components by engaging in a type of reverse engineering of not-yet-existent virtual components, such as inverse molecular design for particular desired functionality[211] or protein design for prespecified functional sites.[143][144] Biological network reverse engineering could model interactions in a human-understandable way, e.g. based on time-series data of gene expression levels.[212] AI is a mainstay of law-related professions.
Algorithms and machine learning do some tasks previously done by entry-level lawyers.[213] While its use is common, it is not expected to replace most work done by lawyers in the near future.[214] The electronic discovery industry uses machine learning to reduce manual searching.[215] Law enforcement has begun using facial recognition systems (FRS) to identify suspects from visual data. FRS results have proven to be more accurate than eyewitness results. Furthermore, FRS has been shown to identify individuals much better than human participants when video clarity and visibility are low.[216] COMPAS is a commercial system used by U.S. courts to assess the likelihood of recidivism.[217] One concern relates to algorithmic bias: AI programs may become biased after processing data that exhibits bias.[218] ProPublica claims that the average COMPAS-assigned recidivism risk level of black defendants is significantly higher than that of white defendants.[217] In 2019, the city of Hangzhou, China established a pilot artificial intelligence-based Internet Court to adjudicate disputes related to e-commerce and internet-related intellectual property claims.[219]: 124 Parties appear before the court via videoconference and AI evaluates the evidence presented and applies relevant legal standards.[219]: 124 Another application of AI is in human resources. AI can screen resumes and rank candidates based on their qualifications, predict candidate success in given roles, and automate repetitive communication tasks via chatbots.[citation needed] AI has simplified the recruiting/job search process for both recruiters and job seekers. According to Raj Mukherjee from Indeed, 65% of job searchers search again within 91 days after hire. An AI-powered engine streamlines the complexity of job hunting by assessing information on job skills, salaries, and user tendencies, matching job seekers to the most relevant positions. Machine intelligence calculates appropriate wages and highlights resume information for recruiters using NLP, which extracts relevant words and phrases from text. Another application is an AI resume builder that compiles a CV in 5 minutes.[citation needed] Chatbots assist website visitors and refine workflows. AI underlies avatars (automated online assistants) on web pages.[220] It can reduce operation and training costs.[220] Pypestream automated customer service for its mobile application to streamline communication with customers.[221] A Google app analyzes language and converts speech into text. The platform can identify angry customers through their language and respond appropriately.[222] Amazon uses a chatbot for customer service that can perform tasks like checking the status of an order, cancelling orders, offering refunds and connecting the customer with a human representative.[223] Generative AI (GenAI), such as ChatGPT, is increasingly used in business to automate tasks and enhance decision-making.[224] In the hospitality industry, AI is used to reduce repetitive tasks, analyze trends, interact with guests, and predict customer needs.[225] AI hotel services come in the form of chatbots,[226] applications, virtual voice assistants and service robots. AI applications analyze media content such as movies, TV programs, advertisement videos or user-generated content. The solutions often involve computer vision.
Typical scenarios include the analysis of images using object recognition or face recognition techniques, or the analysis of video for recognizing scenes, objects or faces. AI-based media analysis can facilitate media search, the creation of descriptive keywords for content, content policy monitoring (such as verifying the suitability of content for a particular TV viewing time), speech to text for archival or other purposes, and the detection of logos, products or celebrity faces for ad placement. Deep-fakes can be used for comedic purposes but are better known for fake news and hoaxes. Deepfakes can portray individuals in harmful or compromising situations, causing significant reputational damage and emotional distress, especially when the content is defamatory or violates personal ethics. While defamation and false light laws offer some recourse, their focus on false statements rather than fabricated images or videos often leaves victims with limited legal protection and a challenging burden of proof.[240] In January 2016,[241] the Horizon 2020 program financed the InVID Project[242][243] to help journalists and researchers detect fake documents; the tools were made available as browser plugins.[244][245] In June 2016, the visual computing groups of the Technical University of Munich and Stanford University developed Face2Face,[246] a program that animates photographs of faces, mimicking the facial expressions of another person. The technology has been demonstrated animating the faces of people including Barack Obama and Vladimir Putin. Other methods have been demonstrated based on deep neural networks, from which the name deep fake was taken. In September 2018, U.S. Senator Mark Warner proposed to penalize social media companies that allow sharing of deep-fake documents on their platforms.[247] In 2018, Darius Afchar and Vincent Nozick found a way to detect faked content by analyzing the mesoscopic properties of video frames.[248] DARPA gave $68 million to work on deep-fake detection.[248] Audio deepfakes[249][250] and AI software capable of detecting deep-fakes and cloning human voices have been developed.[251][252] Respeecher is a program that enables one person to speak with the voice of another. AI algorithms have been used to detect deepfake videos.[253][254] Artificial intelligence is also starting to be used in video production, with tools and software being developed that utilize generative AI to create new video or alter existing video. Some of the major tools currently used in these processes are DALL-E, Midjourney, and Runway.[255] Waymark Studios utilized the tools offered by both DALL-E and Midjourney to create a fully AI-generated film called The Frost in the summer of 2023.[255] Waymark Studios is experimenting with using these AI tools to generate advertisements and commercials for companies in mere seconds.[255] Yves Bergquist, a director of the AI & Neuroscience in Media Project at USC's Entertainment Technology Center, says post-production crews in Hollywood are already using generative AI, and predicts that in the future more companies will embrace this new technology.[256] AI has been used to compose music of various genres.
David Cope created an AI called Emily Howell that managed to become well known in the field of algorithmic computer music.[257] The algorithm behind Emily Howell is registered as a US patent.[258] In 2012, the AI Iamus created the first complete classical album.[259] AIVA (Artificial Intelligence Virtual Artist) composes symphonic music, mainly classical music for film scores.[260] It achieved a world first by becoming the first virtual composer to be recognized by a musical professional association.[261] Melomics creates computer-generated music for stress and pain relief.[262] At Sony CSL Research Laboratory, the Flow Machines software creates pop songs by learning music styles from a huge database of songs. It can compose in multiple styles. The Watson Beat uses reinforcement learning and deep belief networks to compose music from a simple seed input melody and a selected style. The software was open sourced,[263] and musicians such as Taryn Southern[264] collaborated with the project to create music. South Korean singer Hayeon's debut song, "Eyes on You", was composed using AI supervised by human composers, including NUVO.[265] Narrative Science sells computer-generated news and reports. It summarizes sporting events based on statistical data from the game. It also creates financial reports and real estate analyses.[266] Automated Insights generates personalized recaps and previews for Yahoo Sports Fantasy Football.[267] Yseop uses AI to turn structured data into natural language comments and recommendations. Yseop writes financial reports, executive summaries, personalized sales or marketing documents and more in multiple languages, including English, Spanish, French, and German.[268] TALE-SPIN made up stories similar to the fables of Aesop. The program started with a set of characters who wanted to achieve certain goals. The story narrated their attempts to satisfy these goals.[citation needed] Mark Riedl and Vadim Bulitko asserted that the essence of storytelling was experience management, or "how to balance the need for a coherent story progression with user agency, which is often at odds".[269] While AI storytelling focuses on story generation (character and plot), story communication has also received attention. In 2002, researchers developed an architectural framework for narrative prose generation. They faithfully reproduced text variety and complexity on stories such as Little Red Riding Hood.[270] In 2016, a Japanese AI co-wrote a short story and almost won a literary prize.[271] South Korean company Hanteo Global uses a journalism bot to write articles.[272] Literary authors are also exploring uses of AI. An example is David Jhave Johnston's work ReRites (2017–2019), in which the poet created a daily rite of editing the poetic output of a neural network to create a series of performances and publications. In 2010, artificial intelligence used baseball statistics to automatically generate news articles. This was launched by The Big Ten Network using software from Narrative Science.[273] Unable to cover every Minor League Baseball game even with a large team, the Associated Press collaborated with Automated Insights in 2016 to create game recaps automated by artificial intelligence.[274] UOL in Brazil expanded the use of AI in its writing. Rather than just generating news stories, it programmed the AI to include words commonly searched on Google.[274] El Pais, a Spanish news site that covers many subjects including sports, allows users to comment on each news article.
They use the Perspective API to moderate these comments, and if the software deems a comment to contain toxic language, the commenter must modify it in order to publish it.[274] A local Dutch media group used AI to create automatic coverage of amateur soccer, set to cover 60,000 games in a single season. NDC partnered with United Robots to create this algorithm and cover a volume of games that would never have been possible without an extremely large team.[274] In 2023, Lede AI was used to take scores from high school football games and generate stories automatically for local newspapers. This was met with significant criticism from readers for the very robotic diction that was published. With some descriptions of games being a "close encounter of the athletic kind," readers were not pleased and let the publishing company, Gannett, know on social media. Gannett has since halted its use of Lede AI until it comes up with a solution for what it calls an experiment.[275] Millions of Wikipedia's articles have been edited by bots,[279] which, however, are usually not artificial intelligence software. Many AI platforms use Wikipedia data,[280] mainly for training machine learning applications. There is research and development of various artificial intelligence applications for Wikipedia, such as identifying outdated sentences,[281] detecting covert vandalism[282] or recommending articles and tasks to new editors. Machine translation (see above) has also been used for translating Wikipedia articles and could play a larger role in creating, updating, expanding, and generally improving articles in the future. A content translation tool allows editors of some Wikipedias to more easily translate articles across several select languages.[283][284] In video games, AI is routinely used to generate behavior in non-player characters (NPCs). In addition, AI is used for pathfinding. Some researchers consider NPC AI in games to be a "solved problem" for most production tasks.[who?] Games with less typical AI include the AI director of Left 4 Dead (2008) and the neuroevolutionary training of platoons in Supreme Commander 2 (2010).[285][286] AI is also used in Alien: Isolation (2014) as a way to control the actions the Alien will perform next.[287] Kinect, which provides a 3D body-motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from AI research.[288][which?] AI has been used to produce visual art. The first AI art program, called AARON, was developed by Harold Cohen in 1968[289] with the goal of being able to code the act of drawing. It started by creating simple black-and-white drawings, and later moved to painting with special brushes and dyes that were chosen by the program itself without mediation from Cohen.[290] AI platforms such as DALL-E,[291] Stable Diffusion,[291] Imagen,[292] and Midjourney[293] have been used for generating visual images from inputs such as text or other images.[294] Some AI tools allow users to input images and output changed versions of that image, such as to display an object or product in different environments. AI image models can also attempt to replicate the specific styles of artists, and can add visual complexity to rough sketches. Since their design in 2014, generative adversarial networks (GANs) have been used by AI artists. GAN-based programs generate images through machine learning frameworks without the need for human operators.[289] Examples of GAN programs that generate art include Artbreeder and DeepDream.
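The adversarial setup behind GANs can be sketched compactly: a generator maps random noise to samples while a discriminator learns to tell real data from generated data, and the two networks are trained against each other. The toy below (in Python with PyTorch, assumed available) learns a one-dimensional Gaussian rather than images, but the shape of the training loop is the same one used by art-generating GANs.

```python
# Minimal GAN sketch (PyTorch): a generator learns to mimic a 1-D Gaussian.
# Illustrative only; art GANs use deep convolutional networks and image data.
import torch
import torch.nn as nn

torch.manual_seed(0)

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # noise -> sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # sample -> logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0  # "real" data drawn from N(4, 1.5)
    fake = G(torch.randn(64, 8))

    # Discriminator step: label real samples 1, generated samples 0
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to fool the discriminator into labeling fakes as 1
    g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

samples = G(torch.randn(1000, 8)).detach()
print(f"generated mean={samples.mean():.2f} std={samples.std():.2f} (target 4.00, 1.50)")
```

The two opposing loss terms are the whole trick: the generator never sees real data directly, only the gradient signal from the discriminator's judgments.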
In addition to the creation of original art, research methods that utilize AI have been developed to quantitatively analyze digital art collections. Although the main goal of the large-scale digitization of artwork in the past few decades was to allow for accessibility and exploration of these collections, the use of AI in analyzing them has brought about new research perspectives.[295] Two computational methods, close reading and distant viewing, are the typical approaches used to analyze digitized art.[296] While distant viewing includes the analysis of large collections, close reading involves one piece of artwork. In animation, AI has been in use since the early 2000s, most notably in a system designed by Pixar called "Genesis".[297] It was designed to learn algorithms and create 3D models for its characters and props. Notable movies that used this technology included Up and The Good Dinosaur.[298] AI has been used with less fanfare in recent years. In 2023, it was revealed that Netflix of Japan had used AI to generate background images for an upcoming show, which was met with backlash online.[299] In recent years, motion capture has become an easily accessible form of AI animation. For example, Move AI is a program built to capture any human movement and reanimate it in its animation program using learning AI.[300] Power electronics converters are used in renewable energy, energy storage, electric vehicles and high-voltage direct current transmission. These converters are failure-prone, which can interrupt service and require costly maintenance, or have catastrophic consequences in mission-critical applications.[citation needed] AI can guide the design process for reliable power electronics converters by calculating exact design parameters that ensure the required lifetime.[301] The U.S. Department of Energy underscores AI's pivotal role in realizing national climate goals. With AI, the ambitious target of achieving net-zero greenhouse gas emissions across the economy becomes more feasible. AI also helps make room for wind and solar on the grid by avoiding congestion and increasing grid reliability.[302] Machine learning can be used for energy consumption prediction and scheduling, e.g. to help with renewable energy intermittency management (see also: smart grid and climate change mitigation in the power grid).[303][304][305][306][136] Many telecommunications companies make use of heuristic search to manage their workforces. For example, BT Group deployed heuristic search[307] in an application that schedules 20,000 engineers. Machine learning is also used for speech recognition (SR), including of voice-controlled devices, and SR-related transcription, including of videos.[308][309] Artificial intelligence has been combined with digital spectrometry by IdeaCuria Inc.[310][311] to enable applications such as at-home water quality monitoring. In the 1990s, early artificial intelligence featured in toys such as Tamagotchis and Giga Pets, on the Internet, and in the first widely released robot, Furby. Aibo was a domestic robot in the form of a robotic dog with intelligent features and autonomy. Mattel created an assortment of AI-enabled toys that "understand" conversations, give intelligent responses, and learn.[312] Oil and gas companies have used artificial intelligence tools to automate functions, foresee equipment issues, and increase oil and gas output.[313][314] Industrial sensors and AI tools work together to monitor manufacturing processes and equipment in real time, and anomaly detection programs look for unusual patterns.
Such patterns may indicate quality issues or equipment faults. Supply chain management improves with better demand prediction and inventory management.[315] AI in transport is expected to provide safe, efficient, and reliable transportation while minimizing the impact on the environment and communities. The major development challenge is the complexity of transportation systems, which involve independent components and parties with potentially conflicting objectives.[316] AI-based fuzzy logic controllers operate gearboxes. For example, the 2006 Audi TT, VW Touareg[citation needed] and VW Caravelle feature the DSP transmission. A number of Škoda variants (Škoda Fabia) include a fuzzy logic-based controller. Cars have AI-based driver-assist features such as self-parking and adaptive cruise control. There are also prototypes of autonomous automotive public transport vehicles such as electric mini-buses,[317][318][319][320] as well as autonomous rail transport in operation.[321][322][323] There also are prototypes of autonomous delivery vehicles, sometimes including delivery robots.[324][325][326][327][328][329][330] Transportation's complexity means that in most cases training an AI in a real-world driving environment is impractical. Simulator-based testing can reduce the risks of on-road training.[331] AI underpins self-driving vehicles. Companies involved with AI include Tesla, Waymo, and General Motors. AI-based systems control functions such as braking, lane changing, collision prevention, navigation and mapping.[332] Autonomous trucks are in the testing phase. The UK government passed legislation to begin testing of autonomous truck platoons in 2018,[333] in which a group of autonomous trucks follow closely behind each other. German corporation Daimler is testing its Freightliner Inspiration.[334] Autonomous vehicles require accurate maps to be able to navigate between destinations.[335] Some autonomous vehicles do not allow human drivers (they have no steering wheels or pedals).[336] AI has been used to optimize traffic management, which reduces wait times, energy use, and emissions by as much as 25 percent.[337] Smart traffic lights have been developed at Carnegie Mellon since 2009. Professor Stephen Smith has since founded a company, Surtrac, that has installed smart traffic control systems in 22 cities. Installation costs about $20,000 per intersection. Drive time has been reduced by 25% and traffic-jam waiting time by 40% at the intersections where it has been installed.[338] The Royal Australian Air Force (RAAF) Air Operations Division (AOD) uses AI for expert systems. AIs operate as surrogate operators for combat and training simulators, mission management aids, support systems for tactical decision making, and post-processing of simulator data into symbolic summaries.[339] Aircraft simulators use AI for training aviators. Flight conditions can be simulated that allow pilots to make mistakes without risking themselves or expensive aircraft. Air combat can also be simulated. AI can also be used to operate planes, analogously to its control of ground vehicles. Autonomous drones can fly independently or in swarms.[340] AOD uses the Interactive Fault Diagnosis and Isolation System, or IFDIS, a rule-based expert system using information from TF-30 documents and expert advice from mechanics that work on the TF-30. This system was designed to be used for the development of the TF-30 for the F-111C. The system replaced specialized workers.
The system allowed regular workers to communicate with the system and avoid mistakes, miscalculations, or having to speak to one of the specialized workers. Speech recognition allows traffic controllers to give verbal directions to drones. Artificial-intelligence-supported design of aircraft,[341] or AIDA, is used to help designers in the process of creating conceptual designs of aircraft. This program allows the designers to focus more on the design itself and less on the design process. The software also allows the user to focus less on the software tools. AIDA uses rule-based systems to compute its data. Although simple, the program is proving effective. In 2003, a Dryden Flight Research Center project created software that could enable a damaged aircraft to continue flight until a safe landing could be achieved.[342] The software compensated for damaged components by relying on the remaining undamaged components.[343] The 2016 Intelligent Autopilot System combined apprenticeship learning and behavioral cloning, whereby the autopilot observed low-level actions required to maneuver the airplane and the high-level strategy used to apply those actions.[344] Neural networks are used by situational awareness systems in ships and boats.[345] There also are autonomous boats. The development of self-driving cars is progressing. Machine learning systems use sensor data to help navigate tricky areas.[346] Advanced driver assistance systems provide features such as keeping the car in its lane and preventing accidents.[347] City traffic management systems adjust traffic lights based on current traffic conditions. Maritime shipping uses AI to find the best routes, accounting for weather and fuel usage. Automated navigation systems help operate the ships, and AI also improves the loading and placement of containers at ports.[citation needed] Autonomous ships that monitor the ocean, AI-driven satellite data analysis, passive acoustics[348] or remote sensing and other applications of environmental monitoring make use of machine learning.[349][350][351][176] For example, "Global Plastic Watch" is an AI-based satellite monitoring platform for analysis and tracking of plastic waste sites, helping to prevent plastic pollution – primarily ocean pollution – by identifying who mismanages plastic waste and where it is dumped into oceans.[352][353] Machine learning can be used to spot early-warning signs of disasters and environmental issues, possibly including natural pandemics,[354][355] earthquakes,[356][357][358] landslides,[359] heavy rainfall,[360] long-term water supply vulnerability,[361] tipping points of ecosystem collapse,[362] cyanobacterial bloom outbreaks,[363] and droughts.[364][365][366] AI early warning systems can warn of natural disasters like floods, wildfires, and earthquakes.[367] Climate change monitoring uses machine learning to spot patterns in temperature, rainfall, and other environmental indicators. Wildlife conservation benefits from automated tools that identify animals in camera-trap pictures and sound recordings. Ocean-monitoring tools track key indicators of the health of ocean ecosystems.[368] AI can be used for real-time code completion, chat, and automated test generation. These tools are typically integrated with editors and IDEs as plugins. They differ in functionality, quality, speed, and approach to privacy.[369] Code suggestions could be incorrect and should be carefully reviewed by software developers before being accepted.
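As an illustration of the predict-the-next-token idea behind such tools, here is a toy bigram-based completer over tokenized source code. The assistants listed below use large neural language models rather than anything this simple, and the tiny training corpus here is a made-up example.

```python
# Toy sketch of next-token code completion using a bigram model.
# Real assistants use large neural language models; this only illustrates
# the underlying predict-the-next-token idea on a tiny, invented corpus.
import re
from collections import Counter, defaultdict

corpus = """
for i in range(10):
    print(i)
for item in items:
    print(item)
"""

# Split into word-like tokens and single punctuation characters
tokens = re.findall(r"\w+|[^\w\s]", corpus)

# Count which token follows which
following = defaultdict(Counter)
for cur, nxt in zip(tokens, tokens[1:]):
    following[cur][nxt] += 1

def suggest(prefix_token: str, k: int = 3):
    """Return up to k most likely next tokens after prefix_token."""
    return [tok for tok, _ in following[prefix_token].most_common(k)]

print(suggest("for"))    # -> ['i', 'item']
print(suggest("print"))  # -> ['(']
```

Ranking candidate continuations by observed frequency and surfacing the top few is, in miniature, what a completion plugin does on every keystroke; the quality gap comes entirely from the model behind the ranking.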
GitHub Copilot is an artificial intelligence model developed by GitHub and OpenAI that is able to autocomplete code in multiple programming languages.[370] Pricing for individuals is $10/month or $100/year, with a one-month free trial. Tabnine was created by Jacob Jackson and was originally owned by the Tabnine company. In late 2019, Tabnine was acquired by Codota.[371] The Tabnine tool is available as a plugin for most popular IDEs. It offers multiple pricing options, including a limited free "starter" version.[372] CodiumAI, by a small startup of the same name in Tel Aviv, offers automated test creation; it currently supports Python, JS, and TS.[373] Ghostwriter by Replit offers code completion and chat.[374] It has multiple pricing plans, including a free one and a "Hacker" plan for $7/month. CodeWhisperer by Amazon collects individual users' content, including files open in the IDE; Amazon claims to focus on security both during transmission and in storage.[375] The individual plan is free; the professional plan is $19/user/month. Other tools include SourceGraph Cody, CodeComplete, FauxPilot, and Tabby.[369] AI can be used to create other AIs. For example, around November 2017, Google's AutoML project to evolve new neural net topologies created NASNet, a system optimized for ImageNet and COCO. NASNet's performance exceeded all previously published performance on ImageNet.[376] Machine learning has been used for noise-cancelling in quantum technology,[377] including quantum sensors.[378] Moreover, there is substantial research and development on using quantum computers with machine learning algorithms. For example, there is a prototype photonic quantum memristive device for neuromorphic (quantum) computers (NC) and artificial neural networks, and NC using quantum materials with a variety of potential neuromorphic-computing-related applications,[379][380] and quantum machine learning is a field with a variety of applications under development. AI could be used for quantum simulators, which may have applications in solving physics and chemistry[381][382] problems, as well as for quantum annealers for training neural networks for AI applications.[383] There may also be some usefulness in chemistry, e.g. for drug discovery, and in materials science, e.g. for materials optimization/discovery (with possible relevance to quantum materials manufacturing[201][202]).[384][385][386][better source needed] AI researchers have created many tools to solve the most difficult problems in computer science. Many of their inventions, all originally developed in AI laboratories, have been adopted by mainstream computer science and are no longer considered AI.[387] An optical character reader is used in the extraction of data from business documents like invoices and receipts. It can also be used on business contract documents, e.g. employment agreements, to extract critical data like employment terms, delivery terms, and termination clauses.[388] Artificial intelligence in architecture describes the use of artificial intelligence in automation, design and planning in the architectural process or in assisting human skills in the field of architecture.[389] Artificial intelligence is thought to potentially lead to major changes in architecture.[390][391][392] AI in architecture has created ways for architects to create things beyond human understanding.
Machine-learning text-to-image technologies, such as DALL-E and Stable Diffusion, give architects powerful tools for visualizing complex designs.[393] AI allows designers to demonstrate their creativity and even invent new ideas while designing. In the future, AI is expected not to replace architects but to speed up the translation of ideas into sketches.[393]

The use of AI raises important ethical issues such as privacy, bias, and accountability. When algorithms are trained on biased data, they can end up reinforcing existing inequalities; for example, facial recognition technology often performs poorly or fails for certain demographics.[394] AI's use in surveillance also raises concerns about personal rights and data privacy. The integration of these technologies raises several issues that need attention. First, the data used must be accurate and fair. Issues of bias must also be addressed to ensure that AI systems treat everyone equally. Creating rules and guidelines for how AI is used is another important step, and protecting data privacy and security is essential to maintaining trust. While adopting these technologies can make work more efficient, it may also pose challenges for the human workforce.[citation needed]

Looking ahead, one aim is to develop AI that can explain its decisions clearly so that people can understand how it works. Another goal is to create more advanced AI that can handle a wider range of problems. Researchers increasingly emphasize the importance of working together across different fields to develop better AI technologies that provide fair benefits for everyone.[395]
https://en.wikipedia.org/wiki/Applications_of_artificial_intelligence#Art
Artificial intelligence in architecture describes the use of artificial intelligence in automation, design and planning in the architectural process or in assisting human skills in the field of architecture.[1] Artificial intelligence is thought to potentially lead to major changes in architecture.[2][3][4]

AI's potential in the optimization of design, planning and productivity has been noted as an accelerator in the field of architectural work. The ability of AI to potentially amplify an architect's design process has also been noted. Fears of the replacement of aspects or core processes of the architectural profession by artificial intelligence have also been raised, as well as concerns about the philosophical implications for the profession and for creativity.[2][3][4]

Artificial intelligence, according to ArchDaily, is said to potentially significantly augment the architectural profession through its ability to improve the design and planning process as well as to increase productivity. Through its ability to handle large amounts of data, AI is said to potentially allow architects a range of design choices with criteria considerations such as budget, requirements adjusted to space, and sustainability goals calculated as part of the design process. ArchDaily said this may allow the design of optimized alternatives that can then undergo human review. AI tools are also said to potentially allow architects to assimilate urban and environmental data to inform their designs, streamlining the initial stages of project planning and increasing efficiency and productivity.[4][6]

The advances in generative design through the input of specific prompts allow architects to produce visual designs, including photorealistic images, and thus to render and explore various material choices and spatial configurations. ArchDaily noted this could speed the creative process as well as allow for experimentation and sophistication in the design. Additionally, AI's capacity for pattern recognition and coding could aid architects in organizing design resources and developing custom applications, thus enhancing efficiency and the collaboration between architects and AI.[4][6]

AI is also thought to be able to contribute to the sustainability of buildings by analyzing various factors and recommending energy-efficient modifications, thus pushing the industry towards greener practices. The use of AI in building maintenance, project management, and the creation of immersive virtual reality experiences is also thought of as potentially augmenting the architectural design process and workflow.[4][6]

Examples include the use of text-to-image systems such as Midjourney to create detailed architectural images, and the use of AI optimization systems from companies such as Finch3D and Autodesk to automatically generate floor plans from simple programmatic inputs.[7][8]

The architect Kudless, in an interview with Dezeen, recounted that he uses AI to innovate in architectural design by incorporating materials and scenes not usually present in initial plans, which he believes can significantly alter client presentations. He told Dezeen he believes one should show clients renderings from the outset, with AI assisting in this work, arguing that changes in design should be a positive aspect of the client-designer relationship by actively involving clients in the process.
Additionally, Kudless highlighted AI's potential to facilitate labor in architectural firms, particularly by automating rendering tasks, thus reducing the workload on junior staff while maintaining control over the creative output.[9]

In an interview for the AItopia series with Dezeen, the designer Tim Fu discussed the transformative potential of artificial intelligence (AI) in architecture, proposing a future where AI could herald a "neoclassical futurist" style, blending the grandeur of classical aesthetics with futuristic design. Through his collaborative project The AI Stone Carver, Fu showcased how AI can innovate traditional practices by generating design concepts that are then realized through human craftsmanship, such as stone carving by the mason Till Apfel. This approach, he believed, celebrated the fusion of diverse architectural styles and also emphasized the unique capabilities of AI in enhancing creative design processes.[10]

Fu told Dezeen he envisions the integration of AI in design as a means to revive the ornamentation and detailed aesthetics characteristic of classical architecture, moving away from the minimalism that he said dominates contemporary architecture. He argued that AI's involvement in the ideation phase of design allows for a reversal in the roles of machine and human, enabling architects and designers to focus on creating more intricate and ornamental structures. Fu's optimistic outlook extended to the broader impact of AI on the architectural field: he sees it as an indispensable tool that will shift rather than replace human roles, enriching the field with innovative designs that pay homage to the beauty and qualities of classical architecture absent from contemporary work while embracing new technologies.[10]

This perspective resonates with designers like Manas Bhatia, whose explorations similarly embrace generative AI as a co-creator and a medium to express ideas, blend architectural traditions, and speculate about spatial futures. Bhatia's work, informed by a lifelong fascination with natural geometries and organic structures, reimagines architecture at the intersection of ecology and computation. Projects such as Fluid Mughal Marvels, Symbiotic Architecture, and Future Cities explore how AI can help transcend conventional boundaries of aesthetics, function, and context, shaping speculative built environments that fuse cultural memory with biomorphic imagination.[11][12][13]

As artificial intelligence continues to expand its presence across various industries, its impact on the architectural profession has become a topic of growing discussion. These discussions focus on how AI processes may influence traditional architectural practices, potentially altering job roles and shaping the nature of creativity. While AI-driven processes may increase efficiency in some aspects of the profession, they also raise questions about the potential loss of unique design perspectives. These concerns have been countered by many prominent creative figures in the realm of AI architecture, such as Stephen Coorlas, Tim Fu, Hassan Ragab, and Manas Bhatia, who have showcased the amplification of creativity in design and potential benefits in terms of restoring creative power to the designer.[14][15]

One concern is that AI-powered tools may reduce the demand for human input in certain tasks.
There is speculation that this may result in a shift toward managerial or supervisory roles for architects.[16] In some design scenarios, algorithmically generated solutions can be adjusted to prioritize efficiency and cost-effectiveness, which some argue may overshadow the creative and contextual nuances that define individual architectural styles.[17] As with any discipline, though, proponents note that AI can be configured to provide beneficial results based on the inputs and end goals the architect or designer assigns it.

There are also concerns about the potential for AI to exacerbate inequalities within the architectural profession. For instance, larger firms with greater resources to invest in advanced AI technologies may gain a competitive edge over smaller firms and independent architects.[18] This dynamic could contribute to industry consolidation, potentially limiting the diversity of architectural practice and stifling innovation.

Ethical considerations with regard to cultural sensitivity have also been raised because of the datasets used to train AI. Without proper vetting of data or failsafe overrides, AI-generated outcomes can skew toward whatever content is most heavily documented and prioritized in the training data.[19]
https://en.wikipedia.org/wiki/Artificial_intelligence_in_architecture
Computational creativity (also known as artificial creativity, mechanical creativity, creative computing or creative computation) is a multidisciplinary endeavour located at the intersection of the fields of artificial intelligence, cognitive psychology, philosophy, and the arts (e.g., computational art as part of computational culture[1]). It is the application of computer systems to emulate human-like creative processes, facilitating the generation of artistic and design outputs that mimic innovation and originality.

The goal of computational creativity is to model, simulate or replicate creativity using a computer, to achieve one of several ends.[2] The field of computational creativity concerns itself with theoretical and practical issues in the study of creativity. Theoretical work on the nature and proper definition of creativity is performed in parallel with practical work on the implementation of systems that exhibit creativity, with one strand of work informing the other. The applied form of computational creativity is known as media synthesis.

Theoretical approaches concern the essence of creativity: in particular, under what circumstances is it possible to call a model "creative" if eminent creativity is about rule-breaking or the disavowal of convention? This is a variant of Ada Lovelace's objection to machine intelligence, as recapitulated by modern theorists such as Teresa Amabile.[3] If a machine can do only what it was programmed to do, how can its behavior ever be called creative? Indeed, not all computer theorists would agree with the premise that computers can only do what they are programmed to do[4], a key point in favor of computational creativity.

Because no single perspective or definition seems to offer a complete picture of creativity, the AI researchers Newell, Shaw and Simon[5] developed the combination of novelty and usefulness into the cornerstone of a multi-pronged view of creativity, one that uses four criteria to categorize a given answer or solution as creative: the answer is novel and useful (for the individual or for society); it demands that previously accepted ideas be rejected; it results from intense motivation and persistence; and it comes from clarifying a problem that was originally vague.

Margaret Boden focused on the first two of these criteria, arguing instead that creativity (at least when asking whether computers could be creative) should be defined as "the ability to come up with ideas or artifacts that are new, surprising, and valuable".[6]

Mihaly Csikszentmihalyi argued that creativity had to be considered instead in a social context, and his DIFI (Domain-Individual-Field Interaction) framework has since strongly influenced the field.[7] In DIFI, an individual produces works whose novelty and value are assessed by the field (other people in society), which provides feedback and ultimately adds the work, now deemed creative, to the domain of societal works from which an individual might later draw influence.

Whereas the above reflects a top-down approach to computational creativity, an alternative thread has developed among bottom-up computational psychologists involved in artificial neural network research. During the late 1980s and early 1990s, for example, such generative neural systems were driven by genetic algorithms.[8] Experiments involving recurrent nets[9] were successful in hybridizing simple musical melodies and predicting listener expectations.

The use of computational processes to generate creative artifacts has a long history.
During the late 1700s, methods for composing music combinatorially were explored, an approach associated with prominent figures such as Mozart, Bach, Haydn, and Kirnberger.[10] This approach extended to analytical endeavors as early as 1934, when simple mechanical models were built to explore mathematical problem solving.[11] Professional interest in the creative aspect of computation was also commonly addressed in early discussions of artificial intelligence. The 1956 Dartmouth Conference listed creativity, invention, and discovery as key goals for artificial intelligence.[12]

As the development of computers allowed systems of greater complexity, the 1970s and 1980s saw the invention of early systems that modelled creativity using symbolic or rule-based approaches. The field of creative storytelling investigated several such models. Meehan's TALE-SPIN (1977) generated narratives through simulation of character goals and decision trees. Dehn's AUTHOR (1981) approached generation by simulating an author's process for crafting a story.[13]

Beyond narrative generation, computational creativity expanded into artistic and scientific domains. Artistic image generation was one of the disciplines that saw early potential in artifacts generated through computational creativity. One of the most prominent examples was Harold Cohen's AARON, which produced art through the composition and adaptation of figures based on a large set of symbolic rules and heuristics for visual composition. Some systems also tackled creativity in scientific endeavors: BACON was said to rediscover natural laws like Boyle's law and Kepler's laws through hypothesis testing in constrained spaces.

By the 1990s, modeling techniques became more adaptive, attempting to implement cognitive rules of creativity for generation. Turner's MINSTREL (1993) introduced TRAMs (Transform Recall Adapt Methods) to simulate creative re-use of prior material for generative storytelling. Meanwhile, Pérez y Pérez's MEXICA (1999) modeled the creative writing process using cycles of engagement and reflection. As systems increasingly incorporated models of internal evaluation, another approach emerged that combined symbolic generation with domain-specific evaluation metrics, modeling the generative and selective steps of creativity.

In the field of generative humor, the JAPE system (1994) generated pun-based riddles using Prolog and WordNet, applying symbolic pattern-matching rules over the large lexical database to compose riddles involving wordplay.[14] WordNet is a system developed by George Miller and his team at Princeton; its platform and the word-mapping structures it inspired have been used as the backbone of several syntactic and semantic AI programs. A notable system for music generation was David Cope's EMI (Experiments in Musical Intelligence), or Emmy, which was trained in the styles of artists like Bach, Beethoven, or Chopin and generated novel pieces in their style through pattern abstraction and recomposition.

In the 2000s and beyond, machine learning began influencing creative system design. Researchers such as Mihalcea and Strapparava trained classifiers to distinguish humorous from non-humorous text, using stylistic and semantic features; a toy sketch of this kind of classifier appears below.
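The following minimal sketch, using scikit-learn, illustrates the generic shape of such a text classifier. The tiny training set is invented for illustration, and the TF-IDF features are a crude stand-in for the stylistic and semantic features used in the actual studies.

```python
# Toy humorous/non-humorous text classifier in the spirit of the work by
# Mihalcea and Strapparava. Real studies used thousands of one-liners and
# non-humorous sentences; the four examples here are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I used to be a banker, but I lost interest.",                # humorous
    "Velcro: what a rip-off.",                                    # humorous
    "The committee will meet on Tuesday to review the budget.",   # not
    "Rainfall totals were slightly above average this month.",    # not
]
labels = [1, 1, 0, 0]  # 1 = humorous, 0 = non-humorous

# Word and bigram TF-IDF features feed a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Why don't scientists trust atoms? They make up everything."]))
```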
Meanwhile, custom computational approaches led to chess systems such as Deep Blue generating quasi-creative gameplay strategies through search algorithms and parallel processing, constrained by specific rules and evaluation patterns.[15]

The institutional development of computational creativity grew alongside its technical advances. Dedicated workshops such as the IJWCC emerged in the 1990s, growing out of interdisciplinary conferences focused on AI and creativity. By the early 2000s, the field coalesced around annual conferences like the International Conference on Computational Creativity (ICCC).[16] Recently, with the advent of deep learning, transformers, and further refinement of machine-learning architectures, the field has gained new tools for development.

While traditional computational approaches to creativity rely on the explicit formulation of prescriptions by developers and a certain degree of randomness in computer programs, machine learning methods allow computer programs to learn heuristics from input data, enabling creative capacities within those programs.[17] In particular, deep artificial neural networks can learn patterns from input data that allow for the non-linear generation of creative artefacts.

As early as 1989, artificial neural networks were used to model certain aspects of creativity. Peter Todd (1989) first trained a neural network to reproduce musical melodies from a training set of musical pieces. Then he used a change algorithm to modify the network's input parameters. The network was able to randomly generate new music in a highly uncontrolled manner.[9][18][19] In 1992, Todd[20] extended this work, using the so-called distal teacher approach that had been developed by Paul Munro,[21] Paul Werbos,[22] D. Nguyen and Bernard Widrow,[23] and Michael I. Jordan and David Rumelhart.[24] In the new approach there are two neural networks, one of which supplies training patterns to the other. In later efforts by Todd, a composer would select a set of melodies that define the melody space, position them on a 2-D plane with a mouse-based graphic interface, train a connectionist network to produce those melodies, and then listen to the new "interpolated" melodies the network generated for intermediate points on the plane.

Language models like GPT and LSTM-based systems are used to generate texts for creative purposes, such as novels and scripts; a minimal sampling sketch appears below. These models demonstrate hallucination from time to time, where erroneous material is presented as factual, and creators make use of this hallucinatory tendency to capture unintended results. Ross Goodwin's 1 the Road, for example, uses an LSTM model trained on literature corpora to generate a novel that refers to Jack Kerouac's On the Road, based on multimodal input captured by a camera, a microphone, a laptop's internal clock, and a GPS throughout the road trip.[25][26] Brian Merchant commented on the novel as "pixelated poetry in its ragtag assemblage of modern American imagery".[26] Oscar Sharp and Ross Goodwin created the experimental sci-fi short film Sunspring in 2016, written with an LSTM model trained on scripts of sci-fi movies from the 1980s and 1990s.[25][27] Rodica Gotca critiqued the works' overall lack of focus on narrative and of intention to create from the background of human culture.[25]

Nevertheless, researchers highlight the positive side of language models' hallucination for generating novel solutions, given that the correctness and consistency of the response can be controlled.
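As a minimal sketch of how a language model is sampled for creative text, the snippet below uses the small open GPT-2 model through the Hugging Face transformers pipeline. Note that the systems described above used custom LSTM models rather than GPT-2, and the prompt, which loosely echoes the reported opening line of 1 the Road, is only illustrative.

```python
# Minimal sketch: sampling a language model for creative text generation.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "It was nine seventeen in the morning, and the house was heavy."
outputs = generator(
    prompt,
    max_new_tokens=60,
    do_sample=True,          # sampling (not greedy decoding) produces varied,
    temperature=1.1,         # sometimes "hallucinatory" continuations that
    num_return_sequences=3,  # creators may choose to keep
)
for i, out in enumerate(outputs):
    print(f"--- continuation {i + 1} ---")
    print(out["generated_text"])
```

Raising the temperature increases the chance of surprising, loosely grounded continuations; lowering it pushes the model back toward safe, predictable text, which is exactly the trade-off the hallucination discussion above turns on.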
Jiang et al. propose the divergence-convergence flow model for harnessing these hallucinatory effects. They summarize the types of such effects in current research into factuality hallucinations and faithfulness hallucinations, which can be divided into smaller classes like factual fabrication and instruction inconsistency. While the divergence stage involves generating potentially hallucinatory content, the convergence stage focuses on filtering that content, using intent recognition and evaluation metrics to retain the hallucinations that are useful to the user.[28]

Some high-level and philosophical themes recur throughout the field of computational creativity, for example as follows.

Margaret Boden[6][29] refers to creativity that is novel merely to the agent that produces it as "P-creativity" (or "psychological creativity"), and refers to creativity that is recognized as novel by society at large as "H-creativity" (or "historical creativity"). Boden also distinguishes between the creativity that arises from an exploration within an established conceptual space and the creativity that arises from a deliberate transformation or transcendence of this space. She labels the former exploratory creativity and the latter transformational creativity, seeing the latter as a form of creativity far more radical, challenging, and rarer than the former. Following the criteria from Newell and Simon elaborated above, both forms of creativity should produce results that are appreciably novel and useful (criterion 1), but exploratory creativity is more likely to arise from a thorough and persistent search of a well-understood space (criterion 3), while transformational creativity should involve the rejection of some of the constraints that define this space (criterion 2) or some of the assumptions that define the problem itself (criterion 4). Boden's insights have guided work in computational creativity at a very general level, providing more an inspirational touchstone for development work than a technical framework of algorithmic substance. However, Boden's insights are also the subject of formalization, most notably in the work by Geraint Wiggins.[30]

The criterion that creative products should be novel and useful means that creative computational systems are typically structured into two phases, generation and evaluation; a minimal sketch of this generate-and-evaluate loop appears at the end of this passage. In the first phase, novel (to the system itself, thus P-creative) constructs are generated; unoriginal constructs that are already known to the system are filtered at this stage. This body of potentially creative constructs is then evaluated, to determine which are meaningful and useful and which are not. This two-phase structure conforms to the Geneplore model of Finke, Ward and Smith,[31] which is a psychological model of creative generation based on empirical observation of human creativity.

Jordanous and Keller emphasize the need for a "tractable and well-articulated model of creativity." They extracted 694 creativity words from a corpus of empirical studies in psychology and creativity research spanning 60 years and clustered them based on lexical similarity. As a result, they identify 14 key components of creativity, which form the basis of the framework "Standardised Procedure for Evaluating Creative Systems" (SPECS). These components include aspects like "dealing with uncertainty," "independence and freedom," "social interaction and communication," and "spontaneity & subconscious processing".[32]
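The two-phase generate-and-evaluate structure described above can be sketched in a few lines of Python. The candidate generator, novelty filter, and value metric below are toy inventions rather than components of any published system.

```python
# Minimal sketch of the generation/evaluation two-phase structure.
import itertools
import random

known_artifacts = {"red circle", "blue square"}   # what the system has seen

def generate(colors, shapes, n=10):
    """Phase 1: propose candidate artifacts (here, colour-shape pairings)."""
    pool = [f"{c} {s}" for c, s in itertools.product(colors, shapes)]
    return random.sample(pool, min(n, len(pool)))

def is_novel(artifact):
    """Filter out constructs already known to the system (P-creativity)."""
    return artifact not in known_artifacts

def value(artifact):
    """Phase 2: a toy 'usefulness' score; real systems use domain metrics."""
    return len(set(artifact.split())) + random.random()

candidates = [a for a in generate(["red", "blue", "gold"],
                                  ["circle", "square", "spiral"])
              if is_novel(a)]
best = max(candidates, key=value)
print("novel candidates:", candidates)
print("most valued:", best)
```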
While much of computational creativity research focuses on independent and automatic machine-based creativity generation, many researchers are inclined towards a collaborative approach.[33] This human-computer interaction is sometimes categorized under the development of creativity support tools. These systems aim to provide an ideal framework for research, integration, decision-making, and idea generation.[34][35] Recently, deep learning approaches to imaging, sound and natural language processing have resulted in the modeling of productive creativity-development frameworks.[36][37]

Computational creativity is increasingly being discussed in the innovation and management literature, as recent developments in AI may disrupt entire innovation processes and fundamentally change how innovations will be created.[38][36] Philip Hutchinson[33] highlights the relevance of computational creativity for creating innovation and introduced the concept of "self-innovating artificial intelligence" (SAI) to describe how companies make use of AI in innovation processes to enhance their innovative offerings. SAI is defined as the organizational utilization of AI with the aim of incrementally advancing existing products or developing new ones, based on insights from continuously combining and analyzing multiple data sources. As AI becomes a general-purpose technology, the spectrum of products to be developed with SAI will broaden from simple to increasingly complex. This implies that computational creativity leads to a shift in creativity-related skills for humans.

Veale and Pérez y Pérez consider the "optimal innovation" hypothesis proposed by Giora et al. a useful foundation for developing computational creativity.[39] Giora et al.'s experiment asks participants to rate the pleasure and familiarity of verbal stimuli (e.g., "body and soul" vs. "body and sole") and non-verbal stimuli (e.g., a peace dove vs. a peace dove vertically posed so that it looks like a waving hand). It reveals that pleasing stimuli need to be innovative while preserving the salient meaning of the literal form. Veale and Pérez y Pérez highlight the need to develop computational systems that capture how meaning changes due to formal changes.[40]

A great deal, perhaps all, of human creativity can be understood as a novel combination of pre-existing ideas or objects.[41] The combinatorial perspective allows us to model creativity as a search process through the space of possible combinations. The combinations can arise from composition or concatenation of different representations, or through a rule-based or stochastic transformation of initial and intermediate representations. Genetic algorithms and neural networks can be used to generate blended or crossover representations that capture a combination of different inputs.

Mark Turner and Gilles Fauconnier[42][43] propose a model called Conceptual Integration Networks that elaborates upon Arthur Koestler's ideas about creativity[44] as well as work by Lakoff and Johnson,[45] by synthesizing ideas from cognitive-linguistic research into mental spaces and conceptual metaphors. Their basic model defines an integration network as four connected spaces: two input spaces, a generic space capturing the structure the inputs share, and a blended space in which material projected from the inputs is combined. A toy sketch of such blending appears below.
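A toy version of this kind of blending can be sketched by treating concepts as simple attribute dictionaries. The two inputs, the projection rule, and the clash-resolution policy below are invented for illustration and are far simpler than the conceptual relations real blending theory operates on.

```python
# Toy conceptual-blending sketch: concepts are attribute dictionaries.
import random

def blend(input1: dict, input2: dict) -> dict:
    blended = {}
    for key in set(input1) | set(input2):
        if key in input1 and key in input2:
            # Shared structure passes through unchanged (the generic space);
            # clashes are resolved by selective projection (random here).
            if input1[key] == input2[key]:
                blended[key] = input1[key]
            else:
                blended[key] = random.choice([input1[key], input2[key]])
        else:
            # Attributes unique to one input are projected into the blend.
            blended[key] = input1.get(key, input2.get(key))
    return blended

horse = {"legs": 4, "habitat": "land", "can_fly": False, "sound": "neigh"}
bird = {"legs": 2, "habitat": "air", "can_fly": True, "sound": "chirp"}

print(blend(horse, bird))  # e.g. a flying, neighing quadruped: a toy "pegasus"
```

Running the blend repeatedly yields different selective projections, which loosely mirrors how a blending system can explore a space of candidate composite concepts such as the mythical monsters discussed below.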
Fauconnier and Turner describe a collection of optimality principles that are claimed to guide the construction of a well-formed integration network. In essence, they see blending as a compression mechanism in which two or more input structures are compressed into a single blend structure. This compression operates on the level of conceptual relations. For example, a series of similarity relations between the input spaces can be compressed into a single identity relationship in the blend.

Some computational success has been achieved with the blending model by extending pre-existing computational models of analogical mapping that are compatible by virtue of their emphasis on connected semantic structures.[46] In 2006, Francisco Câmara Pereira[47] presented an implementation of blending theory that employs ideas both from symbolic AI and genetic algorithms to realize some aspects of blending theory in a practical form; his example domains range from the linguistic to the visual, and the latter most notably includes the creation of mythical monsters by combining 3-D graphical models.

Language provides continuous opportunity for creativity, evident in the generation of novel sentences, phrasings, puns, neologisms, rhymes, allusions, sarcasm, irony, similes, metaphors, analogies, witticisms, and jokes.[48] Native speakers of morphologically rich languages frequently create new word-forms that are easily understood, and some have found their way into the dictionary.[49] The area of natural language generation has been well studied, but these creative aspects of everyday language have yet to be incorporated with any robustness or scale.

In his seminal work, the applied linguist Ronald Carter hypothesized two main types of creativity involving words and word patterns: pattern-reforming creativity and pattern-forming creativity.[48] Pattern-reforming creativity refers to creativity by the breaking of rules, reforming and reshaping patterns of language, often through individual innovation, while pattern-forming creativity refers to creativity via conformity to language rules rather than breaking them, creating convergence, symmetry and greater mutuality between interlocutors through their interactions in the form of repetitions.[50]

Substantial work has been conducted in this area of linguistic creation since the 1970s, with the development of James Meehan's TALE-SPIN[51] system. TALE-SPIN viewed stories as narrative descriptions of a problem-solving effort, and created stories by first establishing a goal for the story's characters so that their search for a solution could be tracked and recorded; a toy sketch of this goal-driven approach follows below. The MINSTREL[52] system represents a complex elaboration of this basic approach, distinguishing a range of character-level goals in the story from a range of author-level goals for the story. Systems like Bringsjord's BRUTUS[53] elaborate these ideas further to create stories with complex interpersonal themes like betrayal. Notably, MINSTREL explicitly models the creative process with a set of Transform Recall Adapt Methods (TRAMs) to create novel scenes from old ones. The MEXICA[54] model of Rafael Pérez y Pérez and Mike Sharples is more explicitly interested in the creative process of storytelling, and implements a version of the engagement-reflection cognitive model of creative writing.
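The toy sketch below follows the goal-driven recipe described for TALE-SPIN: establish a character's goal, simulate the problem-solving steps, and narrate the trace. The world model and the Joe Bear scenario are drastically simplified illustrations, not TALE-SPIN's actual representation.

```python
# Toy goal-driven story generation in the spirit of TALE-SPIN.
WORLD = {
    "joe_bear": {"at": "cave", "has": []},
    "honey": {"at": "tree"},
}

def move(agent, place):
    """Change an agent's location and narrate the step."""
    WORLD[agent]["at"] = place
    return f"{agent} walked to the {place}."

def take(agent, item):
    """Pick up an item if agent and item share a location."""
    if WORLD[agent]["at"] == WORLD[item]["at"]:
        WORLD[agent]["has"].append(item)
        return f"{agent} took the {item}."
    return None

def tell_story(agent, goal_item):
    """The story is the recorded trace of the character's problem solving."""
    story = [f"{agent} was hungry and wanted the {goal_item}."]
    if WORLD[agent]["at"] != WORLD[goal_item]["at"]:
        story.append(move(agent, WORLD[goal_item]["at"]))
    sentence = take(agent, goal_item)
    if sentence:
        story.append(sentence)
        story.append(f"{agent} was no longer hungry. The end.")
    return " ".join(story)

print(tell_story("joe_bear", "honey"))
```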
An example of a metaphor: "She was an ape." An example of a simile: "Felt like a tiger-fur blanket." The computational study of these phenomena has mainly focused on interpretation as a knowledge-based process. Computationalists such as Yorick Wilks, James Martin,[55] Dan Fass, John Barnden,[56] and Mark Lee have developed knowledge-based approaches to the processing of metaphors, either at a linguistic level or a logical level.

Tony Veale and Yanfen Hao have developed a system, called Sardonicus, that acquires a comprehensive database of explicit similes from the web; these similes are then tagged as bona-fide (e.g., "as hard as steel") or ironic (e.g., "as hairy as a bowling ball", "as pleasant as a root canal"); similes of either type can be retrieved on demand for any given adjective. They use these similes as the basis of an on-line metaphor generation system called Aristotle[57] that can suggest lexical metaphors for a given descriptive goal (e.g., to describe a supermodel as skinny, the source terms "pencil", "whip", "whippet", "rope", "stick-insect" and "snake" are suggested).

The process of analogical reasoning has been studied from both a mapping and a retrieval perspective, the latter being key to the generation of novel analogies. The dominant school of research, as advanced by Dedre Gentner, views analogy as a structure-preserving process; this view has been implemented in the structure mapping engine or SME,[58] the MAC/FAC retrieval engine (Many Are Called, Few Are Chosen), ACME (Analogical Constraint Mapping Engine) and ARCS (Analogical Retrieval Constraint System). Other mapping-based approaches include Sapper,[46] which situates the mapping process in a semantic-network model of memory. Analogy is a very active sub-area of creative computation and creative cognition; active figures in this sub-area include Douglas Hofstadter, Paul Thagard, and Keith Holyoak. Also worthy of note here is Peter Turney and Michael Littman's machine learning approach to the solving of SAT-style analogy problems; their approach achieves a score that compares well with average scores achieved by humans on these tests.

Humour is an especially knowledge-hungry process, and the most successful joke-generation systems to date have focussed on pun-generation, as exemplified by the work of Kim Binsted and Graeme Ritchie.[59] This work includes the JAPE system, which can generate a wide range of puns that are consistently evaluated as novel and humorous by young children. An improved version of JAPE has been developed in the guise of the STANDUP system, which has been experimentally deployed as a means of enhancing linguistic interaction with children with communication disabilities. Some limited progress has been made in generating humour that involves other aspects of natural language, such as the deliberate misunderstanding of pronominal reference (in the work of Hans Wim Tinholt and Anton Nijholt), as well as in the generation of humorous acronyms in the HAHAcronym system[60] of Oliviero Stock and Carlo Strapparava.

The blending of multiple word forms is a dominant force for new word creation in language; these new words are commonly called "blends" or "portmanteau words" (after Lewis Carroll). Tony Veale has developed a system called ZeitGeist[61] that harvests neological headwords from Wikipedia and interprets them relative to their local context in Wikipedia and relative to specific word senses in WordNet. ZeitGeist has been extended to generate neologisms of its own; the approach combines elements from an inventory of word parts that are harvested from WordNet, and simultaneously determines likely glosses for these new words (e.g., "food traveller" for "gastronaut" and "time traveller" for "chrononaut").
It then uses Web search to determine which glosses are meaningful and which neologisms have not been used before; this search identifies the subset of generated words that are both novel ("H-creative") and useful. A corpus-linguistic approach to the search and extraction of neologisms has also been shown to be possible. Using the Corpus of Contemporary American English as a reference corpus, Locky Law performed an extraction of neologisms, portmanteaus and slang words using the hapax legomena that appeared in the scripts of the American TV drama House M.D.[62]

In terms of linguistic research on neologism, Stefan Th. Gries performed a quantitative analysis of blend structure in English and found that "the degree of recognizability of the source words and that the similarity of source words to the blend plays a vital role in blend formation." The results were validated through a comparison of intentional blends to speech-error blends.[63]

More than iron, more than lead, more than gold I need electricity. I need it more than I need lamb or pork or lettuce or cucumber. I need it for my dreams.

Like jokes, poems involve a complex interaction of different constraints, and no general-purpose poem generator adequately combines the meaning, phrasing, structure and rhyme aspects of poetry. Nonetheless, Pablo Gervás[64] has developed a noteworthy system called ASPERA that employs a case-based reasoning (CBR) approach to generating poetic formulations of a given input text via a composition of poetic fragments that are retrieved from a case-base of existing poems. Each poem fragment in the ASPERA case-base is annotated with a prose string that expresses the meaning of the fragment, and this prose string is used as the retrieval key for each fragment. Metrical rules are then used to combine these fragments into a well-formed poetic structure. Racter is an example of such a software project.

Computational creativity in the music domain has focused both on the generation of musical scores for use by human musicians and on the generation of music for performance by computers. The domain of generation has included classical music (with software that generates music in the style of Mozart and Bach) and jazz.[65] Most notably, David Cope[66] has written a software system called "Experiments in Musical Intelligence" (or "EMI")[67] that is capable of analyzing and generalizing from existing music by a human composer to generate novel musical compositions in the same style; a much-simplified sketch of style imitation by recombination appears below. EMI's output is convincing enough to persuade human listeners that its music is human-generated to a high level of competence.[68]

In the field of contemporary classical music, Iamus is the first computer that composes from scratch and produces final scores that professional interpreters can play. The London Symphony Orchestra played a piece for full orchestra, included in Iamus' debut CD,[69] which New Scientist described as "The first major work composed by a computer and performed by a full orchestra".[70] Melomics, the technology behind Iamus, is able to generate pieces in different styles of music at a similar level of quality.
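A much-simplified sketch of style imitation by recombination: a first-order Markov chain learns note-to-note transitions from a tiny "corpus" and generates a new melody in a similar style. Real systems like EMI perform far deeper pattern abstraction and recomposition; the corpus melodies here are invented.

```python
# Toy Markov-chain melody generator: recombination of learned transitions.
import random
from collections import defaultdict

corpus = [
    ["C", "E", "G", "E", "C", "D", "E", "C"],
    ["C", "D", "E", "G", "E", "D", "C", "C"],
]

# Learn transition counts from each melody in the corpus.
transitions = defaultdict(list)
for melody in corpus:
    for a, b in zip(melody, melody[1:]):
        transitions[a].append(b)

def generate(start="C", length=8):
    """Walk the learned transition table to produce a new melody."""
    melody = [start]
    while len(melody) < length:
        options = transitions.get(melody[-1])
        if not options:          # dead end: restart from the tonic
            options = ["C"]
        melody.append(random.choice(options))
    return melody

print(" ".join(generate()))
```

Because every generated transition was observed in the corpus, the output stays "in style", while the random walk produces sequences not present in any single source melody, which is the essence of recombination.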
Creativity research in jazz has focused on the process of improvisation and the cognitive demands that this places on a musical agent: reasoning about time, remembering and conceptualizing what has already been played, and planning ahead for what might be played next.[71] The robot Shimon, developed by Gil Weinberg of Georgia Tech, has demonstrated jazz improvisation.[72] Virtual improvisation software based on research on stylistic modeling carried out by Gerard Assayag and Shlomo Dubnov, including OMax, SoMax and PyOracle, is used to create improvisations in real time by re-injecting variable-length sequences learned on the fly from the live performer.[73]

In the field of musical composition, the patented works[74] by René-Louis Baron made it possible to build a robot that can create and play a multitude of orchestrated melodies, so-called "coherent" ones, in any musical style. Any external physical parameter associated with one or more specific musical parameters can influence and develop each of these songs in real time while the song is being listened to. The patented invention Medal-Composer raises problems of copyright.

Computational creativity in the generation of visual art has had some notable successes in the creation of both abstract art and representational art. A well-known program in this domain is Harold Cohen's AARON,[75] which has been continuously developed and augmented since 1973. Though formulaic, AARON exhibits a range of outputs, generating black-and-white drawings or colour paintings that incorporate human figures (such as dancers), potted plants, rocks, and other elements of background imagery. These images are of a sufficiently high quality to be displayed in reputable galleries.

Other software artists of note include the NEvAr system (for "Neuro-Evolutionary Art") of Penousal Machado.[76] NEvAr uses a genetic algorithm to derive a mathematical function that is then used to generate a coloured three-dimensional surface. A human user is allowed to select the best pictures after each phase of the genetic algorithm, and these preferences are used to guide successive phases, thereby pushing NEvAr's search into pockets of the search space that are considered most appealing to the user; a toy sketch of this user-guided evolutionary loop appears at the end of this passage.

The Painting Fool, developed by Simon Colton, originated as a system for overpainting digital images of a given scene in a choice of different painting styles, colour palettes and brush types. Given its dependence on an input source image to work with, the earliest iterations of the Painting Fool raised questions about the extent of, or lack of, creativity in a computational art system. Nonetheless, The Painting Fool has been extended to create novel images, much as AARON does, from its own limited imagination. Images in this vein include cityscapes and forests, which are generated by a process of constraint satisfaction from some basic scenarios provided by the user (e.g., these scenarios allow the system to infer that objects closer to the viewing plane should be larger and more color-saturated, while those further away should be less saturated and appear smaller). Artistically, the images now created by the Painting Fool appear on a par with those created by AARON, though the extensible mechanisms employed by the former (constraint satisfaction, etc.) may well allow it to develop into a more elaborate and sophisticated painter.
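The user-guided evolutionary loop described above for NEvAr can be sketched as follows. The function family, mutation scheme, and the automatic scoring function standing in for a human's aesthetic choices are all inventions for illustration, not NEvAr's actual machinery.

```python
# Toy interactive-evolution sketch: genomes are coefficients of a function
# over (x, y); the best-scored genomes seed each successive generation.
import math
import random

def make_genome():
    return [random.uniform(-2, 2) for _ in range(4)]

def pixel(genome, x, y):
    """Evaluate the genome's mathematical function at one image point."""
    a, b, c, d = genome
    return math.sin(a * x + b * y) * math.cos(c * x * y + d)

def mutate(genome, sigma=0.3):
    return [g + random.gauss(0, sigma) for g in genome]

def score(genome, size=16):
    """Stand-in for human judgement: reward images with high variance."""
    values = [pixel(genome, x / size, y / size)
              for x in range(size) for y in range(size)]
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

population = [make_genome() for _ in range(8)]
for generation in range(10):
    population.sort(key=score, reverse=True)   # the "user" picks favourites
    parents = population[:2]
    population = parents + [mutate(random.choice(parents)) for _ in range(6)]

population.sort(key=score, reverse=True)
print("best genome:", [round(g, 2) for g in population[0]])
```

In the real system the scoring step is the human in the loop: the user's picks, not a formula, steer the search toward regions of the space they find appealing.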
The artist Krasi Dimtch (Krasimira Dimtchevska) and the software developer Svillen Ranev have created a computational system combining a rule-based generator of English sentences and a visual composition builder that converts sentences generated by the system into abstract art.[77] The software automatically generates an indefinite number of different images using different color, shape and size palettes. It also allows the user to select the subject of the generated sentences and/or one or more of the palettes used by the visual composition builder.

An emerging area of computational creativity is that of video games. ANGELINA, developed by Michael Cook, is a system for creatively developing video games in Java. One important aspect is Mechanic Miner, a system that can generate short segments of code that act as simple game mechanics.[78] ANGELINA can evaluate these mechanics for usefulness by playing simple unsolvable game levels and testing whether the new mechanic makes the level solvable. Sometimes Mechanic Miner discovers bugs in the code and exploits these to make new mechanics for the player to solve problems with.[79]

In July 2015, Google released DeepDream, an open source[80] computer vision program created to detect faces and other patterns in images with the aim of automatically classifying images. It uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia, thus creating a dreamlike psychedelic appearance in the deliberately over-processed images.[81][82][83]

In August 2015, researchers from Tübingen, Germany created a convolutional neural network that uses neural representations to separate and recombine the content and style of arbitrary images, and which is able to turn images into stylistic imitations of works of art by artists such as Picasso or Van Gogh in about an hour. Their algorithm is put to use on the website DeepArt, which allows users to create unique artistic images with it.[84][85][86][87]

In early 2016, a global team of researchers explained how a new computational creativity approach known as the Digital Synaptic Neural Substrate (DSNS) could be used to generate original chess puzzles that were not derived from endgame databases.[88] The DSNS is able to combine features of different objects (e.g. chess problems, paintings, music) using stochastic methods in order to derive new feature specifications which can be used to generate objects in any of the original domains. The generated chess puzzles have also been featured on YouTube.[89]

Creativity is also useful in allowing for unusual solutions in problem solving. In psychology and cognitive science, this research area is called creative problem solving. The Explicit-Implicit Interaction (EII) theory of creativity has been implemented using a CLARION-based computational model that allows for the simulation of incubation and insight in problem-solving.[90] The emphasis of this computational creativity project is not on performance per se (as in artificial intelligence projects) but rather on the explanation of the psychological processes leading to human creativity and the reproduction of data collected in psychology experiments. So far, this project has been successful in providing an explanation for incubation effects in simple memory experiments, insight in problem solving, and reproducing the overshadowing effect in problem solving.

Some researchers feel that creativity is a complex phenomenon whose study is further complicated by the plasticity of the language we use to describe it.
We can describe not just the agent of creativity as "creative" but also the product and the method. Consequently, it could be claimed that it is unrealistic to speak of a general theory of creativity.[citation needed] Nonetheless, some generative principles are more general than others, leading some advocates to claim that certain computational approaches are "general theories". Stephen Thaler, for instance, proposes that certain modalities of neural networks are generative enough, and general enough, to manifest a high degree of creative capabilities.[citation needed]

Traditional computers, as mainly used in computational creativity applications, do not support creativity, as they fundamentally transform a discrete, limited domain of input parameters into a discrete, limited domain of output parameters using a limited set of computational functions.[citation needed] As such, the argument goes, a computer cannot be creative, as everything in the output must have been already present in the input data or the algorithms.[citation needed] Related discussions and references to related work are captured in work on the philosophical foundations of simulation.[91] Mathematically, the same set of arguments against creativity has been made by Chaitin.[92] Similar observations come from a model-theory perspective. All this criticism emphasizes that computational creativity is useful and may look like creativity, but that it is not real creativity, as nothing new is created, only transformed by well-defined algorithms.

According to researchers like Mark Riedl, human creativity and computational creativity at their current state differ in several dimensions. While creativity can be viewed in the context of morality, Riedl considers the "educational, moralizing" aspect of stories one of the challenges in developing narrative-generating AI models, as it may contribute to the underlying reasoning coherence of the text.[25] The lack of intention in AI models hinders them from making morally responsible choices, which often figure in human creativity.[93]

Michele Loi and Eleonora Vigano identified some potential threats to human creativity caused by AI development. For example, they considered the openness to "experiments of life," introduced by John Stuart Mill, an important factor in creativity. Society's overreliance on algorithms for making decisions would constrain utility functions, which may discourage people from exploring riskier solutions and decrease the diversity of exploration, and thus creativity.[94]

The International Conference on Computational Creativity (ICCC) occurs annually, organized by the Association for Computational Creativity.[95] Previously, the community of computational creativity held a dedicated workshop, the International Joint Workshop on Computational Creativity, every year since 1999.[citation needed] The 1st Conference on Computer Simulation of Musical Creativity will be held
https://en.wikipedia.org/wiki/Computational_creativity
Cybernetic art is contemporary art that builds upon the legacy of cybernetics, where the feedback involved in the work takes precedence over traditional aesthetic and material concerns. The relationship between cybernetics and art can be summarised in three ways: cybernetics can be used to study art, to create works of art, or may itself be regarded as an art form in its own right.[1]

Nicolas Schöffer's CYSP I (1956) was perhaps the first artwork to explicitly employ cybernetic principles (CYSP is an acronym that joins the first two letters of the words "CYbernetic" and "SPatiodynamic").[2] The artist Roy Ascott elaborated an extensive theory of cybernetic art in "Behaviourist Art and the Cybernetic Vision" (Cybernetica, Journal of the International Association for Cybernetics (Namur), Volume IX, No. 4, 1966; Volume X, No. 1, 1967) and in "The Cybernetic Stance: My Process and Purpose" (Leonardo, Vol. 1, No. 2, 1968). The art historian Edward A. Shanken has written about the history of art and cybernetics in essays including "Cybernetics and Art: Cultural Convergence in the 1960s"[3] and "From Cybernetics to Telematics: The Art, Pedagogy, and Theory of Roy Ascott" (2003),[4] which traces the trajectory of Ascott's work from cybernetic art to telematic art (art using computer networking as its medium, a precursor to net.art).

Audio feedback and the use of tape loops, sound synthesis, and computer-generated compositions reflected a cybernetic awareness of information, systems, and cycles. Such techniques became widespread in the music industry in the 1960s. The visual effects of electronic feedback became a focus of artistic research in the late 1960s, when video equipment first reached the consumer market. Steina and Woody Vasulka, for example, used "all manner and combination of audio and video signals to generate electronic feedback in their respective or corresponding media."[5]

With related work by Edward Ihnatowicz, Wen-Ying Tsai and the cybernetician Gordon Pask, and the animist kinetics of Robert Breer and Jean Tinguely, the 1960s produced a strain of cyborg art that was very much concerned with the shared circuits within and between the living and the technological. A line of cyborg art theory also emerged during the late 1960s. Writers like Jonathan Benthall and Gene Youngblood drew on cybernetics and cybernetic thought. The most substantial contributors here were the British artist and theorist Roy Ascott, with his essay "Behaviourist Art and the Cybernetic Vision" in the journal Cybernetica (1966-67), and the American critic and theorist Jack Burnham. In Beyond Modern Sculpture (1968), Burnham builds cybernetic art into an extensive theory that centers on art's drive to imitate and ultimately reproduce life.[6]

Cybernetic Serendipity: The Computer and the Arts, curated by Jasia Reichardt at the Institute of Contemporary Arts, London, England in 1968, is credited as one of the first exhibitions of cybernetic art.[7] The composer Herbert Brün participated in the Biological Computer Laboratory and was later involved in the founding of the School for Designing a Society.[8]

Leading art theorists and historians in this field include Christiane Paul, Frank Popper, Christine Buci-Glucksmann, Dominique Moulon, Robert C. Morgan, Roy Ascott, Margot Lovejoy, Edmond Couchot, Fred Forest and Edward A. Shanken. Others in the creative arts who are associated with cybernetics include Roland Kayn, Ruairi Glynn, Pauline Oliveros, Tom Scholte, and Stephen Willats.
https://en.wikipedia.org/wiki/Cybernetic_art
Deepfakes (a portmanteau of 'deep learning' and 'fake'[1]) are images, videos, or audio that have been edited or generated using artificial intelligence, AI-based tools, or AV editing software. They may depict real or fictional people and are considered a form of synthetic media, that is, media usually created by artificial intelligence systems by combining various media elements into a new media artifact.[2][3]

While the act of creating fake content is not new, deepfakes uniquely leverage machine learning and artificial intelligence techniques,[4][5][6] including facial recognition algorithms and artificial neural networks such as variational autoencoders (VAEs) and generative adversarial networks (GANs).[5][7] In turn, the field of image forensics develops techniques to detect manipulated images.[8] Deepfakes have garnered widespread attention for their potential use in creating child sexual abuse material, celebrity pornographic videos, revenge porn, fake news, hoaxes, bullying, and financial fraud.[9][10][11][12]

Academics have raised concerns about the potential for deepfakes to promote disinformation and hate speech, as well as to interfere with elections. In response, the information technology industry and governments have proposed recommendations and methods to detect and mitigate their use. Academic research has also delved deeper into the factors driving deepfake engagement online as well as potential countermeasures to malicious applications of deepfakes. From traditional entertainment to gaming, deepfake technology has evolved to be increasingly convincing[13] and available to the public, allowing for the disruption of the entertainment and media industries.[14]

Photo manipulation was developed in the 19th century and soon applied to motion pictures. Technology steadily improved during the 20th century, and more quickly with the advent of digital video. Deepfake technology has been developed by researchers at academic institutions beginning in the 1990s, and later by amateurs in online communities.[15][16] More recently, the methods have been adopted by industry.[17]

Academic research related to deepfakes is split between the field of computer vision, a sub-field of computer science,[15] which develops techniques for creating and identifying deepfakes, and humanities and social science approaches that study the social, ethical, and aesthetic implications as well as the journalistic and informational implications of deepfakes.[18] As deepfakes have risen in prominence with innovations in AI tools, significant research has gone into detection methods and into defining the factors that drive engagement with deepfakes on the internet.[19][20] Deepfakes have been shown to appear on social media platforms and other parts of the internet for purposes ranging from entertainment and education to misinformation intended to elicit strong reactions.[21] There are gaps in research related to the propagation of deepfakes on social media. Negativity and emotional response are the primary driving factors for users sharing deepfakes.[22]

Age and lack of deepfake literacy are further factors that drive engagement. Older users who are less technologically literate may not recognize deepfakes as falsified content and may share it because they believe it to be true.
Alternatively, younger users accustomed to the entertainment value of deepfakes are more likely to share them with an awareness of their falsified content.[23] Despite cognitive ability being a factor in successfully detecting deepfakes, individuals who are aware that a video is a deepfake may be just as likely to share it on social media as someone who does not know.[24] Within scholarship focused on detecting deepfakes, deep-learning methods that identify software-induced artifacts have been found to be the most effective in separating a deepfake from an authentic product.[25]

Due to the capabilities of deepfakes, concerns have developed related to regulation of, and literacy about, the technology.[26] The potential of deepfakes for malicious application, their capacity to affect public figures and reputations, and their ability to promote misleading narratives are the primary drivers of these concerns.[27] These potential malicious applications have led some experts to label deepfakes a potential danger to democratic societies, one that would benefit from a regulatory framework to mitigate the risks.[28]

In cinema studies, deepfakes illustrate how "the human face is emerging as a central object of ambivalence in the digital age".[29] Video artists have used deepfakes to "playfully rewrite film history by retrofitting canonical cinema with new star performers".[30] The film scholar Christopher Holliday analyses how altering the gender and race of performers in familiar movie scenes destabilizes gender classifications and categories.[30] The concept of "queering" deepfakes is also discussed in Oliver M. Gingrich's discussion of media artworks that use deepfakes to reframe gender,[31] including British artist Jake Elwes' Zizi: Queering the Dataset, an artwork that uses deepfakes of drag queens to intentionally play with gender.

The aesthetic potentials of deepfakes are also beginning to be explored. The theatre historian John Fletcher notes that early demonstrations of deepfakes are presented as performances, and situates these in the context of theater, discussing "some of the more troubling paradigm shifts" that deepfakes represent as a performance genre.[32]

Philosophers and media scholars have discussed the ethical implications of deepfakes in the dissemination of disinformation.
Amina Vatreš from the Department of Communication Studies at the University of Sarajevo identifies three factors contributing to the widespread acceptance of deepfakes, and to where their greatest danger lies: 1) convincing visual and auditory presentation, 2) widespread accessibility, and 3) the inability to draw a clear line between truth and falsehood.[33] Another area of discussion on deepfakes concerns pornography made with deepfakes.[34] The media scholar Emily van der Nagel draws upon research in photography studies on manipulated images to discuss verification systems that allow women to consent to uses of their images.[35]

Beyond pornography, deepfakes have been framed by philosophers as an "epistemic threat" to knowledge and thus to society.[36] There are several other suggestions for how to deal with the risks deepfakes give rise to beyond pornography; these risks, for corporations, politicians and others, include "exploitation, intimidation, and personal sabotage",[37] and there are several scholarly discussions of potential legal and regulatory responses in both legal studies and media studies.[38] In psychology and media studies, scholars discuss the effects of disinformation that uses deepfakes,[39][40] and the social impact of deepfakes.[41]

While most English-language academic studies of deepfakes focus on Western anxieties about disinformation and pornography, the digital anthropologist Gabriele de Seta has analyzed the Chinese reception of deepfakes, which are known as huanlian, which translates to "changing faces". The Chinese term does not contain the "fake" of the English deepfake, and de Seta argues that this cultural context may explain why the Chinese response has centered on practical regulatory measures to address "fraud risks, image rights, economic profit, and ethical imbalances".[42]

A landmark early project was the "Video Rewrite" program, published in 1997. The program modified existing video footage of a person speaking to depict that person mouthing the words from a different audio track.[43] It was the first system to fully automate this kind of facial reanimation, and it did so using machine learning techniques to make connections between the sounds produced by a video's subject and the shape of the subject's face.[43]

Contemporary academic projects have focused on creating more realistic videos and improving deepfake techniques.[44][45] The "Synthesizing Obama" program, published in 2017, modifies video footage of former president Barack Obama to depict him mouthing the words contained in a separate audio track.[44] The project lists as its main research contribution a photorealistic technique for synthesizing mouth shapes from audio.[44] The "Face2Face" program, published in 2016, modifies video footage of a person's face to depict them mimicking another person's facial expressions.[45] The project highlights as its primary research contribution the first method for re-enacting facial expressions in real time using a camera that does not capture depth, enabling the technique to work with common consumer cameras.
In August 2018, researchers at the University of California, Berkeley published a paper introducing a deepfake dancing app that can create the impression of masterful dancing ability using AI.[46] This project expands the application of deepfakes to the entire body; previous works had focused on the head or parts of the face.[47] Researchers have also shown that deepfakes are expanding into other domains such as medical imagery.[48] In this work, it was shown how an attacker can automatically inject or remove lung cancer in a patient's 3D CT scan. The result was so convincing that it fooled three radiologists and a state-of-the-art lung cancer detection AI. To demonstrate the threat, the authors successfully performed the attack on a hospital in a white-hat penetration test.[49] A survey of deepfakes, published in May 2020, provides a timeline of how the creation and detection of deepfakes have advanced over recent years.[50] The survey identifies that researchers have been focusing on resolving several key challenges of deepfake creation. Overall, deepfakes are expected to have several implications in media and society, media production, media representations, media audiences, gender, law and regulation, and politics.[51] The term deepfake originated in late 2017 from a Reddit user named "deepfakes".[52] He, along with other members of Reddit's "r/deepfakes", shared deepfakes they created; many videos involved celebrities' faces swapped onto the bodies of actors in pornographic videos,[52] while non-pornographic content included many videos with actor Nicolas Cage's face swapped into various movies.[53] Other online communities remain, including Reddit communities that do not share pornography, such as "r/SFWdeepfakes" (short for "safe for work deepfakes"), in which community members share deepfakes depicting celebrities, politicians, and others in non-pornographic scenarios.[54] Other online communities continue to share pornography on platforms that have not banned deepfake pornography.[55] In January 2018, a proprietary desktop application called "FakeApp" was launched.[56] This app allows users to easily create and share videos with their faces swapped with each other.[57] As of 2019, "FakeApp" had been largely replaced by open-source alternatives such as "Faceswap", the command line-based "DeepFaceLab", and web-based apps such as DeepfakesWeb.com.[58][59][60] Larger companies also started to use deepfakes.[17] Corporate training videos can be created using deepfaked avatars and their voices, for example by Synthesia, which uses deepfake technology with avatars to create personalized videos.[61] The mobile app Momo created the application Zao, which allows users to superimpose their face on television and movie clips using a single picture.[17] In 2019, the Japanese AI company DataGrid made a full-body deepfake that could create a person from scratch.[62] As of 2020, audio deepfakes and AI software capable of detecting deepfakes and cloning human voices after five seconds of listening time also exist.[63][64][65][66][67][68] A mobile deepfake app, Impressions, was launched in March 2020. It was the first app for the creation of celebrity deepfake videos from mobile phones.[69][70] Deepfake technology's ability to fabricate the messages and actions of others can extend to deceased individuals.
On 29 October 2020, Kim Kardashian posted a video featuring a hologram of her late father Robert Kardashian created by the company Kaleida, which used a combination of performance, motion tracking, SFX, VFX and deepfake technologies to create the illusion.[71][72] In 2020, a deepfake video of Joaquin Oliver, a victim of the Parkland shooting, was created as part of a gun safety campaign. Oliver's parents partnered with the nonprofit Change the Ref and McCann Health to produce a video in which Oliver appears to encourage people to support gun safety legislation and the politicians who back it.[73] In 2022, a deepfake video of Elvis Presley was used on the program America's Got Talent 17.[74] A TV commercial used a deepfake video of Beatles member John Lennon, who was murdered in 1980.[75] Deepfakes rely on a type of neural network called an autoencoder.[76] These consist of an encoder, which reduces an image to a lower-dimensional latent space, and a decoder, which reconstructs the image from the latent representation.[77] Deepfakes utilize this architecture by having a universal encoder which encodes a person into the latent space.[citation needed] The latent representation contains key information about the person's facial features and body posture. This can then be decoded with a model trained specifically for the target. This means the target's detailed information will be superimposed on the underlying facial and body features of the original video, represented in the latent space.[citation needed] A popular upgrade to this architecture attaches a generative adversarial network to the decoder. A GAN trains a generator, in this case the decoder, and a discriminator in an adversarial relationship. The generator creates new images from the latent representation of the source material, while the discriminator attempts to determine whether or not the image is generated.[citation needed] This causes the generator to create images that mimic reality extremely well, as any defects would be caught by the discriminator.[78] Both algorithms improve constantly in a zero-sum game.
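The shared-encoder design described above can be sketched in a few dozen lines. The following is a minimal illustration, assuming PyTorch; the layer sizes, the 64 x 64 input resolution, and the toy training step are illustrative choices, not the code of any particular deepfake tool.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a low-dimensional latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent vector; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        h = self.fc(z).view(-1, 128, 8, 8)
        return self.net(h)

# One shared (universal) encoder, two identity-specific decoders.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# Training: each decoder learns to reconstruct its own identity from the
# shared latent space (reconstruction loss shown for identity A only).
faces_a = torch.rand(8, 3, 64, 64)  # stand-in for real face crops of person A
recon_a = decoder_a(encoder(faces_a))
loss_a = nn.functional.mse_loss(recon_a, faces_a)

# The swap: encode a face of A, decode with B's decoder, so B's appearance
# is rendered with A's pose and expression captured in the latent code.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))

The key design point is that both identities share one encoder, so the latent code captures pose and expression in a person-independent way; feeding that code to the other identity's decoder is what performs the swap, and attaching a discriminator (as in the GAN upgrade described above) further sharpens the decoded output.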
This constant adversarial improvement makes deepfakes difficult to combat, as they are continually evolving; any time a defect is identified, it can be corrected.[78] Digital clones of professional actors have appeared in films before, and progress in deepfake technology is expected to further the accessibility and effectiveness of such clones.[79] The use of AI technology was a major issue in the 2023 SAG-AFTRA strike, as new techniques enabled the generation and storage of a digital likeness for use in place of actors.[80] Disney has improved its visual effects using high-resolution deepfake face-swapping technology.[81] Disney improved its technology through progressive training programmed to identify facial expressions, implementing a face-swapping feature, and iterating in order to stabilize and refine the output.[81] This high-resolution deepfake technology saves significant operational and production costs.[82] Disney's deepfake generation model can produce AI-generated media at 1024 x 1024 resolution, as opposed to common models that produce media at 256 x 256 resolution.[82] The technology allows Disney to de-age characters or revive deceased actors.[83] Similar technology was initially used by fans to unofficially insert faces into existing media, such as overlaying Harrison Ford's young face onto Han Solo's face in Solo: A Star Wars Story.[84] Disney used deepfakes for the characters of Princess Leia and Grand Moff Tarkin in Rogue One.[85][86] The 2020 documentary Welcome to Chechnya used deepfake technology to obscure the identities of the people interviewed, so as to protect them from retaliation.[87] Creative Artists Agency has developed a facility to capture the likeness of an actor "in a single day", to develop a digital clone of the actor, which would be controlled by the actor or their estate alongside other personality rights.[88] Companies which have used digital clones of professional actors in advertisements include Puma, Nike and Procter & Gamble.[89] Deepfakes allowed David Beckham to appear in a campaign in nine languages to raise awareness of the fight against malaria.[90] In the 2024 Indian Tamil science fiction action thriller The Greatest of All Time, the teenage version of Vijay's character Jeevan is portrayed by Ayaz Khan, with Vijay's teenage face then applied using AI deepfake technology.[91] Deepfakes are also being used in education and media to create realistic videos and interactive content, which offer new ways to engage audiences. However, they also bring risks, especially for spreading false information, which has led to calls for responsible use and clear rules. In March 2018, the multidisciplinary artist Joseph Ayerle published the video artwork Un'emozione per sempre 2.0 (English title: The Italian Game). The artist worked with deepfake technology to create an AI actor, a synthetic version of 1980s movie star Ornella Muti, traveling in time from 1978 to 2018. The Massachusetts Institute of Technology referred to this artwork in the study "Collective Wisdom".[92] The artist used Ornella Muti's time travel to explore generational reflections, while also investigating questions about the role of provocation in the world of art.[93] For the technical realization, Ayerle used scenes of photo model Kendall Jenner. The program replaced Jenner's face with an AI-calculated face of Ornella Muti. As a result, the AI actor has the face of the Italian actor Ornella Muti and the body of Kendall Jenner. Deepfakes have been widely used in satire or to parody celebrities and politicians.
The 2020 webseries Sassy Justice, created by Trey Parker and Matt Stone, heavily features the use of deepfaked public figures to satirize current events and raise awareness of deepfake technology.[94] Deepfakes can be used to generate blackmail materials that falsely incriminate a victim. A report by the American Congressional Research Service warned that deepfakes could be used to blackmail elected officials or those with access to classified information for espionage or influence purposes.[95] Alternatively, since the fakes cannot reliably be distinguished from genuine materials, victims of actual blackmail can now claim that the true artifacts are fakes, granting them plausible deniability. The effect is to void the credibility of existing blackmail materials, which erases loyalty to blackmailers and destroys the blackmailer's control. This phenomenon can be termed "blackmail inflation", since it "devalues" real blackmail, rendering it worthless.[96] Commodity GPU hardware and a small software program can generate such blackmail content for any number of subjects in huge quantities, driving up the supply of fake blackmail content limitlessly and in a highly scalable fashion.[97] On June 8, 2022,[98] Daniel Emmet, a former AGT contestant, teamed up with the AI startup[99][100] Metaphysic AI to create a hyperrealistic deepfake of Simon Cowell. Cowell, notorious for severely critiquing contestants,[101] appeared to perform "You're the Inspiration" by Chicago: Emmet sang on stage as an image of Simon Cowell emerged on the screen behind him in flawless synchronicity.[102] On August 30, 2022, Metaphysic AI presented deepfaked versions of Simon Cowell, Howie Mandel and Terry Crews singing opera on stage.[103] On September 13, 2022, Metaphysic AI performed with a synthetic version of Elvis Presley for the finals of America's Got Talent.[104] The MIT artificial intelligence project 15.ai has been used for content creation for multiple Internet fandoms, particularly on social media.[105][106][107] In 2023, the bands ABBA and KISS partnered with Industrial Light & Magic and Pophouse Entertainment to develop deepfake avatars capable of performing virtual concerts.[108] Fraudsters and scammers make use of deepfakes to trick people into fake investment schemes, financial fraud, cryptocurrency scams, sending money, and following endorsements. The likenesses of celebrities and politicians have been used for large-scale scams, as have those of private individuals, which are used in spearphishing attacks. According to the Better Business Bureau, deepfake scams are becoming more prevalent.[109] These scams are responsible for an estimated $12 billion in fraud losses globally.[110] According to a recent report, these numbers are expected to reach $40 billion over the next three years.[110] Fake endorsements have misused the identities of celebrities like Taylor Swift,[111][109] Tom Hanks,[112] Oprah Winfrey,[113] and Elon Musk;[114] news anchors[115] like Gayle King[112] and Sally Bundock;[116] and politicians like Lee Hsien Loong[117] and Jim Chalmers.[118][119] Videos of them have appeared in online advertisements on YouTube, Facebook, and TikTok, which have policies against synthetic and manipulated media.[120][111][121] Ads running these videos are seen by millions of people.
A single Medicare fraud campaign had been viewed more than 195 million times across thousands of videos.[120][122] Deepfakes have been used for: a fake giveaway of Le Creuset cookware in which victims paid a "shipping fee" without receiving the products, apart from hidden monthly charges;[111] weight-loss gummies that charge significantly more than advertised;[113] a fake iPhone giveaway;[111][121] and fraudulent get-rich-quick,[114][123] investment,[124] and cryptocurrency schemes.[117][125] Many ads pair AI voice cloning with "decontextualized video of the celebrity" to mimic authenticity. Others use a whole clip from a celebrity before moving to a different actor or voice.[120] Some scams may involve real-time deepfakes.[121] Celebrities have been warning people about these fake endorsements and urging vigilance against them.[109][111][113] Celebrities are unlikely to file lawsuits against every person operating deepfake scams, as "finding and suing anonymous social media users is resource intensive," though cease-and-desist letters to social media companies work in getting videos and ads taken down.[126] Audio deepfakes have been used as part of social engineering scams, fooling people into thinking they are receiving instructions from a trusted individual.[127] In 2019, a U.K.-based energy firm's CEO was scammed over the phone when he was ordered to transfer €220,000 into a Hungarian bank account by an individual who reportedly used audio deepfake technology to impersonate the voice of the firm's parent company's chief executive.[128][129] As of 2023, the combination of advances in deepfake technology, which could clone an individual's voice from a recording of a few seconds to a minute, and new text generation tools enabled automated impersonation scams targeting victims with a convincing digital clone of a friend or relative.[130] Audio deepfakes can also be used to mask a user's real identity. In online gaming, for example, a player may want to choose a voice that sounds like their in-game character when speaking to other players. Those who are subject to harassment, such as women, children, and transgender people, can use these "voice skins" to hide their gender or age.[131] In 2020, an internet meme emerged utilizing deepfakes to generate videos of people singing the chorus of "Baka Mitai" (ばかみたい), a song from the game Yakuza 0 in the video game series Like a Dragon. In the series, the melancholic song is sung by the player in a karaoke minigame. Most iterations of this meme use a 2017 video uploaded by user Dobbsyrules, who lip-syncs the song, as a template.[132][133] Deepfakes have been used to misrepresent well-known politicians in videos. In 2017, deepfake pornography prominently surfaced on the Internet, particularly on Reddit.[156] As of 2019, many deepfakes on the internet featured pornography of female celebrities whose likeness is typically used without their consent.[157] A report published in October 2019 by the Dutch cybersecurity startup Deeptrace estimated that 96% of all deepfakes online were pornographic.[158] In 2018, a Daisy Ridley deepfake first captured attention,[156] among others.[159][160][161] As of October 2019, most of the deepfake subjects on the internet were British and American actors.[157] However, around a quarter of the subjects were South Korean, the majority of which were K-pop stars.[157][162] In June 2019, a downloadable Windows and Linux application called DeepNude was released that used neural networks, specifically generative adversarial networks, to remove clothing from images of women.
The app had both a paid and an unpaid version, the paid version costing $50.[163][164] On 27 June, the creators removed the application and refunded consumers.[165] Female celebrities are often a main target of deepfake pornography. In 2023, deepfake porn videos of Emma Watson and Scarlett Johansson, made with a face-swapping app, appeared online.[166] In 2024, deepfake porn images of Taylor Swift circulated online.[167] Academic studies have reported that women, LGBT people and people of colour (particularly activists, politicians and those questioning power) are at higher risk of being targets of deepfake pornography.[168] Deepfakes have begun to see use on popular social media platforms, notably through Zao, a Chinese deepfake app that allows users to substitute their own faces onto those of characters in scenes from films and television shows such as Romeo + Juliet and Game of Thrones.[169] The app originally faced scrutiny over its invasive user data and privacy policy, after which the company put out a statement claiming it would revise the policy.[17] In January 2020, Facebook announced that it was introducing new measures to counter this on its platforms.[170] The Congressional Research Service cited unspecified evidence as showing that foreign intelligence operatives used deepfakes to create social media accounts with the purpose of recruiting individuals with access to classified information.[95] In 2021, realistic deepfake videos of actor Tom Cruise were released on TikTok, which went viral and garnered tens of millions of views. The deepfake videos featured an "artificial intelligence-generated doppelganger" of Cruise doing various activities such as teeing off at the golf course, showing off a coin trick, and biting into a lollipop. The creator of the clips, Belgian VFX artist Chris Umé,[171] said he first got interested in deepfakes in 2018 and saw their "creative potential".[172][173] Deepfake photographs can be used to create sockpuppets, non-existent people who are active both online and in traditional media. A deepfake photograph appears to have been generated together with a legend for an apparently non-existent person named Oliver Taylor, whose identity was described as a university student in the United Kingdom. The Oliver Taylor persona submitted opinion pieces to several newspapers and was active in online media attacking a British legal academic and his wife as "terrorist sympathizers." The academic had drawn international attention in 2018 when he commenced a lawsuit in Israel against NSO, a surveillance company, on behalf of people in Mexico who alleged they were victims of NSO's phone hacking technology. Reuters could find only scant records for Oliver Taylor, and "his" university had no records for him. Many experts agreed that the profile photo is a deepfake. Several newspapers have not retracted articles attributed to him or removed them from their websites. It is feared that such techniques are a new battleground in disinformation.[174] Collections of deepfake photographs of non-existent people on social networks have also been deployed as part of Israeli partisan propaganda. The Facebook page "Zionist Spring" featured photos of non-existent persons along with their "testimonies" purporting to explain why they had abandoned their left-leaning politics to embrace right-wing politics, and the page also contained large numbers of posts from Prime Minister of Israel Benjamin Netanyahu and his son and from other Israeli right-wing sources.
The photographs appear to have been generated by "human image synthesis" technology, computer software that takes data from photos of real people to produce a realistic composite image of a non-existent person. In many of the "testimonies," the reason given for embracing the political right was the shock of learning of alleged incitement to violence against the prime minister. Right-wing Israeli television broadcasters then broadcast the "testimonies" of these non-existent people on the basis that they were being "shared" online. The broadcasters aired these "testimonies" despite being unable to find such people, explaining "Why does the origin matter?" Other fake Facebook profiles of fictitious individuals contained material that allegedly contained such incitement against the right-wing prime minister, in response to which the prime minister complained that there was a plot to murder him.[175][176] Though fake photos have long been plentiful, faking motion pictures has been more difficult, and the presence of deepfakes increases the difficulty of classifying videos as genuine or not.[134] AI researcher Alex Champandard has said people should know how fast things can be corrupted with deepfake technology, and that the problem is not a technical one, but rather one to be solved by trust in information and journalism.[134] Computer science associate professor Hao Li of the University of Southern California states that deepfakes created for malicious use, such as fake news, will be even more harmful if nothing is done to spread awareness of deepfake technology.[177] Li predicted that genuine videos and deepfakes would become indistinguishable in as soon as half a year, as of October 2019, due to rapid advancement in artificial intelligence and computer graphics.[177] Former Google fraud czar Shuman Ghosemajumder has called deepfakes an area of "societal concern" and said that they will inevitably evolve to a point at which they can be generated automatically, and an individual could use that technology to produce millions of deepfake videos.[178] A primary pitfall is that humanity could fall into an age in which it can no longer be determined whether a medium's content corresponds to the truth.[134][179] Deepfakes are one of a number of tools for disinformation attack, creating doubt and undermining trust. They have the potential to interfere with democratic functions in societies, such as identifying collective agendas, debating issues, informing decisions, and solving problems through the exercise of political will.[180] People may also start to dismiss real events as fake.[131] Deepfakes possess the ability to damage individual entities tremendously.[181] This is because deepfakes are often targeted at one individual, and/or their relations to others, in hopes of creating a narrative powerful enough to influence public opinion or beliefs. This can be done through deepfake voice phishing, which manipulates audio to create fake phone calls or conversations.[181] Another method of deepfake use is fabricated private remarks, in which media is manipulated to convey individuals voicing damaging comments.[181] The quality of a negative video or audio recording does not need to be high.
As long as someone's likeness and actions are recognizable, a deepfake can hurt their reputation.[131] In September 2020, Microsoft announced that it was developing a deepfake detection software tool.[182] Beyond public-figure defamation, deepfakes are increasingly used in K–12 and higher-education settings to target peers with false portrayals that extend well beyond static images or text messages.[183] Students have reported encountering synthetic videos depicting classmates in non-consensual intimate acts and experiencing academic or social disruption as a result. Schools and universities frequently find themselves ill-equipped to address these incidents, as most anti-bullying policies predate AI-generated media and focus narrowly on on-campus behavior or student-owned devices. When deepfake harassment originates off-campus, via private social networks, encrypted messaging, or servers hosted overseas, administrators struggle to determine jurisdiction, balance free-speech concerns, and coordinate with law enforcement. To close these gaps, legal scholars recommend updating both state and federal anti-bullying statutes to explicitly include "non-consensual synthetic imagery" alongside traditional definitions of harassment. Concurrently, integrating AI-literacy curricula, covering deepfake detection techniques, ethical considerations, and responsible digital citizenship, can empower students to recognize, report, and resist deepfake bullying before it escalates.[184] With the rise of accessible deepfake tools, ranging from open-source platforms like DeepFaceLab and Faceswap to user-friendly mobile apps such as Zao and Impressions, malicious actors have begun weaponizing synthetic media as a novel form of cyberbullying.[185] Termed "deepfake bullying," these attacks can take many forms: perpetrators may superimpose a student's face onto videos of underage drinking or drug use, create fabricated nude images without consent, or manufacture scenes of criminal behavior to falsely implicate victims. Because modern generative adversarial networks can produce hyper-realistic results, targets often cannot distinguish real from fake, causing intense embarrassment and panic when content first appears. Once released, deepfake content tends to circulate rapidly across messaging apps, social feeds, and even private chat groups, where it can be downloaded, re-edited, and recirculated, subjecting victims to repeated retraumatization and a persistent fear of new exposures. Psychological assessments of affected students report elevated levels of anxiety, depression, and social withdrawal; some victims have withdrawn from school activities or changed schools entirely to escape ongoing harassment. In response, educators, school counselors, and child-psychology experts are calling for explicit policies that define deepfake bullying as a disciplinary offense, mandate immediate content takedowns, and require trauma-informed support structures, such as specialized counseling services, peer-support hotlines, and restorative-justice circles, to help victims regain a sense of safety and agency.[186] Detecting fake audio is a highly complex task that requires careful attention to the audio signal in order to achieve good performance.
Using deep learning, preprocessing for feature design and masking augmentation have been proven effective in improving performance.[187] Most of the academic research surrounding deepfakes focuses on the detection of deepfake videos.[188] One approach to deepfake detection is to use algorithms to recognize patterns and pick up subtle inconsistencies that arise in deepfake videos.[188] For example, researchers have developed automatic systems that examine videos for errors such as irregular blinking patterns or inconsistent lighting.[189][15] This approach has been criticized because deepfake detection is characterized by a "moving goal post" where the production of deepfakes continues to change and improve as algorithms to detect deepfakes improve.[188] In order to assess the most effective algorithms for detecting deepfakes, a coalition of leading technology companies hosted the Deepfake Detection Challenge to accelerate the technology for identifying manipulated content.[190] The winning model of the Deepfake Detection Challenge was 65% accurate on the holdout set of 4,000 videos.[191] A team at the Massachusetts Institute of Technology published a paper in December 2021 demonstrating that ordinary humans are 69–72% accurate at identifying a random sample of 50 of these videos.[192] A team at the University at Buffalo published a paper in October 2020 outlining their technique of using reflections of light in the eyes of those depicted to spot deepfakes with a high rate of success, even without the use of an AI detection tool, at least for the time being.[193] In the case of well-documented individuals such as political leaders, algorithms have been developed to distinguish identity-based features such as patterns of facial, gestural, and vocal mannerisms and detect deepfake impersonators.[194] Another team, led by Wael AbdAlmageed at the Visual Intelligence and Multimedia Analytics Laboratory (VIMAL) of the Information Sciences Institute at the University of Southern California, developed two generations[195][196] of deepfake detectors based on convolutional neural networks. The first generation[195] used recurrent neural networks to spot spatio-temporal inconsistencies and identify visual artifacts left by the deepfake generation process. The algorithm achieved 96% accuracy on FaceForensics++, the only large-scale deepfake benchmark available at that time. The second generation[196] used end-to-end deep networks to differentiate between artifacts and high-level semantic facial information using two-branch networks. The first branch propagates colour information, while the other branch suppresses facial content and amplifies low-level frequencies using a Laplacian of Gaussian (LoG) filter, as sketched below. Further, they included a new loss function that learns a compact representation of bona fide faces, while dispersing the representations (i.e. features) of deepfakes.
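The frequency-separation idea behind that second branch can be illustrated briefly. This is a minimal sketch, assuming NumPy and SciPy; the sigma value and the toy feature stacking are illustrative assumptions, not VIMAL's actual pipeline.

import numpy as np
from scipy.ndimage import gaussian_laplace

def log_branch(image: np.ndarray, sigma: float = 1.5) -> np.ndarray:
    """Apply a Laplacian-of-Gaussian filter per colour channel, which
    suppresses smooth facial content and emphasises the fine-grained
    residue where generation artifacts tend to live."""
    return np.stack(
        [gaussian_laplace(image[..., c], sigma=sigma) for c in range(image.shape[-1])],
        axis=-1,
    )

rgb = np.random.rand(256, 256, 3)   # stand-in for a face crop
colour_view = rgb                   # branch 1: semantic / colour information
artifact_view = log_branch(rgb)     # branch 2: amplified low-level frequencies

# In the full system, both views feed a deep network trained so that genuine
# faces form a compact cluster in feature space while deepfakes are dispersed.
features = np.concatenate([colour_view, artifact_view], axis=-1)
print(features.shape)  # (256, 256, 6)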
VIMAL's approach showed state-of-the-art performance on the FaceForensics++ and Celeb-DF benchmarks, and on March 16, 2022 (the same day as its release), was used to identify a deepfake of Volodymyr Zelensky out of the box, without any retraining or knowledge of the algorithm with which the deepfake was created.[citation needed] Other techniques suggest that blockchain could be used to verify the source of the media.[197] For instance, a video might have to be verified through the ledger before it is shown on social media platforms.[197] With this technology, only videos from trusted sources would be approved, decreasing the spread of possibly harmful deepfake media.[197] Digital signing of all video and imagery by cameras, including smartphone cameras, has been suggested as a way to fight deepfakes.[198] This allows every photograph or video to be traced back to its original owner, which could, however, also be used to pursue dissidents.[198] One easy way to uncover deepfake video calls is to ask the caller to turn sideways.[199] Henry Ajder, who works for Deeptrace, a company that detects deepfakes, says there are several ways to protect against deepfakes in the workplace: semantic passwords or secret questions can be used when holding important conversations; voice authentication and other biometric security features should be kept up to date; and employees should be educated about deepfakes.[131] Due to the capability of deepfakes to fool viewers and believably mimic a person, research has indicated that the concept of truth through observation cannot be fully relied on.[200] Additionally, literacy of the technology among populations could be called into question due to the relatively recent success of convincing deepfakes.[201] Combined with the increasing ease of access to the technology, this has led to concern amongst some experts that some societies are not prepared to interact with deepfakes organically without potential consequences from sharing misinformation and disinformation.[202] Media literacy has been considered as a potential counter to "prime" a viewer to identify a deepfake when they encounter one organically by engendering critical thinking.[203] While media literacy education has shown mixed results in improving the detection of deepfakes,[204] research has indicated that critical thinking and a skeptical outlook toward a presented piece of media are effective at assisting an individual in determining a deepfake.[205][206] Media literacy frameworks promote critical analysis of media and the motivations behind the presentation of the associated content. Media literacy shows promise as a potential cognitive countermeasure when interacting with malicious deepfakes.[207] In March 2024, Buckingham Palace released a video clip in which Kate Middleton stated that she had cancer and was undergoing chemotherapy. However, the clip fuelled rumours that the woman in the clip was an AI deepfake.[208] UCLA's race director Johnathan Perkins doubted she had cancer, and further speculated that she could be in critical condition or dead.[209] Recently, the use of deepfakes in disinformation campaigns has inspired research on their capabilities and effects.
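The camera-side signing proposal mentioned above can be made concrete with a short sketch: hash the media bytes at capture time and sign the digest with a device key, so any later edit invalidates the signature. This is a minimal illustration, assuming the Python "cryptography" package and an Ed25519 device key; the key handling and function names here are hypothetical, while real provenance standards such as C2PA define far richer signed-manifest formats.

import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

device_key = Ed25519PrivateKey.generate()  # would live in camera hardware
public_key = device_key.public_key()       # published, or registered on a ledger

def sign_media(data: bytes) -> bytes:
    """Sign the SHA-256 digest of the media bytes at capture time."""
    return device_key.sign(hashlib.sha256(data).digest())

def verify_media(data: bytes, signature: bytes) -> bool:
    """Re-hash the received media and check the signature against it."""
    try:
        public_key.verify(signature, hashlib.sha256(data).digest())
        return True
    except InvalidSignature:
        return False

video = b"...raw video bytes..."
sig = sign_media(video)
print(verify_media(video, sig))               # True: untouched original
print(verify_media(video + b"edit", sig))     # False: any edit breaks it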
The capability of deepfakes to deceive has raised concerns, partly due to their potential to circumvent a person's skepticism and influence their views on an issue.[210][179] Due to the continued advancement of technology that improves the deceptive capabilities of deepfakes, some scholars believe that deepfakes could pose a significant threat to democratic societies.[211] Studies have investigated the effects of political deepfakes.[210][211][179] In two separate studies focusing on Dutch participants, it was found that deepfakes have varying effects on an audience. As a tool of disinformation, deepfakes did not necessarily produce stronger reactions or shifts in viewpoints than traditional textual disinformation.[210] However, deepfakes did produce a reassuring effect on individuals who held preconceived notions that aligned with the viewpoint promoted by the deepfake disinformation in the study.[210] Additionally, deepfakes are effective when designed to target a specific demographic segment related to a particular issue.[211] "Microtargeting" involves understanding the nuanced political issues of a specific demographic to create a targeted deepfake. The targeted deepfake is then used to connect with and influence the viewpoint of that demographic. Targeted deepfakes were found to be notably effective by the researchers.[211] Research has also found that the political effects of deepfakes are not necessarily as straightforward or assured. Researchers in the United Kingdom found that deepfake political disinformation does not have a guaranteed effect on populations, beyond indications that it may sow distrust or uncertainty in the source that provides the deepfake.[179] The implications of distrust in sources led researchers to conclude that deepfakes may have an outsized effect in a "low-trust" information environment where public institutions are not trusted by the public.[179] Across the world, there are key instances where deepfakes have been used to misrepresent well-known politicians and other public figures. Twitter (later X) has taken active measures to handle synthetic and manipulated media on its platform. In order to prevent disinformation from spreading, Twitter places a notice on tweets that contain manipulated media and/or deepfakes, signalling to viewers that the media is manipulated.[237] A warning also appears to users who plan on retweeting, liking, or engaging with the tweet.[237] Twitter also works to provide users a link next to the tweet containing manipulated or synthetic media that leads to a Twitter Moment or a credible news article on the related topic, as a debunking action.[237] Twitter also has the ability to remove any tweets containing deepfakes or manipulated media that may pose a harm to users' safety.[237] In order to improve its detection of deepfakes and manipulated media, Twitter asked users interested in partnering with it on deepfake detection solutions to fill out a form.[238] In August 2024, the secretaries of state of Minnesota, Pennsylvania, Washington, Michigan and New Mexico penned an open letter to X owner Elon Musk urging modifications to its AI chatbot Grok's new text-to-video generator, added that month, stating that it had disseminated election misinformation.[239][240][241] Facebook has made efforts to encourage the creation of deepfakes in order to develop state-of-the-art deepfake detection software.
Facebook was the prominent partner in hosting the Deepfake Detection Challenge (DFDC), held in December 2019, which drew 2,114 participants who generated more than 35,000 models.[242] The top-performing models with the highest detection accuracy were analyzed for similarities and differences; these findings are areas of interest in further research to improve and refine deepfake detection models.[242] Facebook has also detailed that the platform will take down media generated with artificial intelligence used to alter an individual's speech.[243] However, media that has been edited to alter the order or context of words in one's message would remain on the site but be labeled as false, since it was not generated by artificial intelligence.[243] On 31 January 2018, Gfycat began removing all deepfakes from its site.[244][245] On Reddit, the r/deepfakes subreddit was banned on 7 February 2018, due to the policy violation of "involuntary pornography".[246][247][248][249][250] In the same month, representatives from Twitter stated that they would suspend accounts suspected of posting non-consensual deepfake content.[251] Chat site Discord has taken action against deepfakes in the past,[252] and has taken a general stance against deepfakes.[245][253] In September 2018, Google added "involuntary synthetic pornographic imagery" to its ban list, allowing anyone to request the blocking of results showing their fake nudes.[254] In February 2018, Pornhub said that it would ban deepfake videos on its website because they are considered "non consensual content" which violates its terms of service.[255] It had also stated previously to Mashable that it would take down content flagged as deepfakes.[256] Writers from Motherboard reported that searching "deepfakes" on Pornhub still returned multiple recent deepfake videos.[255] Facebook had previously stated that it would not remove deepfakes from its platforms.[257] The videos would instead be flagged as fake by third parties and then have a lessened priority in users' feeds.[258] This response was prompted in June 2019 after a deepfake featuring a 2016 video of Mark Zuckerberg circulated on Facebook and Instagram.[257] In May 2022, Google officially changed the terms of service for its Jupyter Notebook colabs, banning the use of the Colab service for the purpose of creating deepfakes.[259] This came a few days after the publication of a VICE article claiming that "most deepfakes are non-consensual porn" and that the main use of the popular deepfake software DeepFaceLab (DFL), "the most important technology powering the vast majority of this generation of deepfakes", often used in combination with Google colabs, was to create non-consensual pornography. The article pointed out that alongside many well-known third-party DFL implementations, such as deepfakes commissioned by The Walt Disney Company, official music videos, and the web series Sassy Justice by the creators of South Park, DFL's GitHub page links to the deepfake porn website Mr.Deepfakes, and that participants in the DFL Discord server also participate on Mr.Deepfakes.[260] In the United States, there have been some responses to the problems posed by deepfakes. In 2018, the Malicious Deep Fake Prohibition Act was introduced to the US Senate;[261] in 2019, the Deepfakes Accountability Act was introduced in the 116th United States Congress by U.S.
representative for New York's 9th congressional district Yvette Clarke.[262] Several states have also introduced legislation regarding deepfakes, including Virginia,[263] Texas, California, and New York;[264] charges as varied as identity theft, cyberstalking, and revenge porn have been pursued, while more comprehensive statutes are urged.[254] Among U.S. legislative efforts, on 3 October 2019, California governor Gavin Newsom signed into law Assembly Bills No. 602 and No. 730.[265][266] Assembly Bill No. 602 provides individuals targeted by sexually explicit deepfake content made without their consent with a cause of action against the content's creator.[265] Assembly Bill No. 730 prohibits the distribution of malicious deepfake audio or visual media targeting a candidate running for public office within 60 days of their election.[266] U.S. representative Yvette Clarke introduced H.R. 5586: Deepfakes Accountability Act into the 118th United States Congress on September 20, 2023, in an effort to protect national security from threats posed by deepfake technology.[267] U.S. representative María Salazar introduced H.R. 6943: No AI Fraud Act into the 118th United States Congress on January 10, 2024, to establish specific property rights over individual physicality, including voice.[268] In November 2019, China announced that deepfakes and other synthetically faked footage should bear a clear notice about their fakeness starting in 2020. Failure to comply could be considered a crime, the Cyberspace Administration of China stated on its website.[269] The Chinese government appears to be reserving the right to prosecute both users and online video platforms failing to abide by the rules.[270] The Cyberspace Administration of China, the Ministry of Industry and Information Technology, and the Ministry of Public Security jointly issued the Provision on the Administration of Deep Synthesis Internet Information Service in November 2022.[271] China's updated Deep Synthesis Provisions (Administrative Provisions on Deep Synthesis in Internet-Based Information Services) went into effect in January 2023.[272] In the United Kingdom, producers of deepfake material could be prosecuted for harassment, but deepfake production was not a specific crime[273] until 2023, when the Online Safety Act was passed, making deepfakes illegal; the UK planned to expand the Act's scope in 2024 to criminalize deepfakes created with the "intention to cause distress".[274][275] In Canada, in 2019, the Communications Security Establishment released a report which said that deepfakes could be used to interfere in Canadian politics, particularly to discredit politicians and influence voters.[276][277] As a result, there are multiple ways for citizens in Canada to deal with deepfakes if they are targeted by them.[278] In February 2024, bill C-63 was tabled in the 44th Canadian Parliament in order to enact the Online Harms Act, which would amend the Criminal Code and other acts. An earlier version of the bill, C-36, was ended by the dissolution of the 43rd Canadian Parliament in September 2021.[279][280] In India, there are no direct laws or regulations on AI or deepfakes, but there are provisions under the Indian Penal Code and the Information Technology Act 2000/2008 which can be applied for legal remedies, and the proposed new Digital India Act will have a chapter on AI and deepfakes in particular, according to Minister of State Rajeev Chandrasekhar.[281] In Europe, the European Union's 2024 Artificial Intelligence Act (AI Act) takes a risk-based approach to regulating AI systems, including deepfakes.
It establishes categories of "unacceptable risk," "high risk," "specific/limited or transparency risk", and "minimal risk" to determine the level of regulatory obligations for AI providers and users. However, the lack of clear definitions for these risk categories in the context of deepfakes creates potential challenges for effective implementation. Legal scholars have raised concerns about the classification of deepfakes intended for political misinformation or the creation of non-consensual intimate imagery. Debate exists over whether such uses should always be considered "high-risk" AI systems, which would lead to stricter regulatory requirements.[282] In August 2024, the Irish Data Protection Commission (DPC) launched court proceedings against X for its unlawful use of the personal data of over 60 million EU/EEA users to train its AI technologies, such as its chatbot Grok.[283] In 2016, the Defense Advanced Research Projects Agency (DARPA) launched the Media Forensics (MediFor) program, which was funded through 2020.[284] MediFor aimed at automatically spotting digital manipulation in images and videos, including deepfakes.[285][286] In the summer of 2018, MediFor held an event where individuals competed to create AI-generated videos, audio, and images as well as automated tools to detect these deepfakes.[287] According to the MediFor program, it established a framework of three tiers of information (digital integrity, physical integrity, and semantic integrity) to generate one integrity score in an effort to enable accurate detection of manipulated media.[288] In 2019, DARPA hosted a "proposers day" for the Semantic Forensics (SemaFor) program, where researchers were directed toward preventing the viral spread of AI-manipulated media.[289] DARPA and the Semantic Forensics program also worked together to detect AI-manipulated media through efforts in training computers to utilize common sense and logical reasoning.[289] Built on MediFor's technologies, SemaFor's attribution algorithms infer whether digital media originates from a particular organization or individual, while characterization algorithms determine whether media was generated or manipulated for malicious purposes.[290] In March 2024, SemaFor published an analytic catalog that offers the public access to open-source resources developed under SemaFor.[291][292] The International Panel on the Information Environment was launched in 2023 as a consortium of over 250 scientists working to develop effective countermeasures to deepfakes and other problems created by perverse incentives in organizations disseminating information via the Internet.[293]
https://en.wikipedia.org/wiki/Deepfakes
Many notable artificial intelligence artists have created a wide variety of artificial intelligence art from the 1960s to today.
https://en.wikipedia.org/wiki/List_of_artificial_intelligence_artists
Music and artificial intelligence (music and AI) is the development of music software programs which use AI to generate music.[1] As with applications in other fields, AI in music also simulates mental tasks. A prominent feature is the capability of an AI algorithm to learn based on past data, such as in computer accompaniment technology, wherein the AI is capable of listening to a human performer and performing accompaniment.[2] Artificial intelligence also drives interactive composition technology, wherein a computer composes music in response to a live performance. There are other AI applications in music that cover not only music composition, production, and performance but also how music is marketed and consumed. Several music player programs have also been developed to use voice recognition and natural language processing technology for music voice control. Current research includes the application of AI in music composition, performance, theory and digital sound processing. Composers and artists like Jennifer Walshe or Holly Herndon have been exploring aspects of music AI for years in their performances and musical works. Another original approach, of humans "imitating AI", can be found in the 43-hour sound installation String Quartet(s) by Georges Lentz. The 20th-century art historian Erwin Panofsky proposed that in all art there existed three levels of meaning: primary meaning, or the natural subject; secondary meaning, or the conventional subject; and tertiary meaning, the intrinsic content of the subject.[3][4] AI music explores the foremost of these, creating music without the "intention" which is usually behind it, leaving composers who listen to machine-generated pieces feeling unsettled by the lack of apparent meaning.[5] In the 1950s and the 1960s, music made by artificial intelligence was not fully original but generated from templates that people had already defined and given to the AI, an approach known as rule-based systems. As time passed, computers became more powerful, allowing machine learning and artificial neural networks to aid the music industry: instead of predefined templates, AI could be given large amounts of data from which to learn how music is made. By the early 2000s, further advances in artificial intelligence had been made, with generative adversarial networks (GANs) and deep learning being used to help AI compose music that is more original, complex and varied than was possible before. Notable AI-driven projects, such as OpenAI's MuseNet and Google's Magenta, have demonstrated AI's ability to generate compositions that mimic various musical styles.[6] Artificial intelligence finds its beginnings in music with the transcription problem: accurately recording a performance into musical notation as it is played. Père Engramelle's schematic of a "piano roll", a mode of automatically recording note timing and duration in a way which could be easily transcribed to proper musical notation by hand, was first implemented by German engineers J.F. Unger and J. Hohlfield in 1752.[7] In 1957, the ILLIAC I (Illinois Automatic Computer) produced the "Illiac Suite for String Quartet", a completely computer-generated piece of music. The computer was programmed to accomplish this by composer Lejaren Hiller and mathematician Leonard Isaacson.[5]: v–vii In 1960, Russian researcher Rudolf Zaripov published the world's first paper on algorithmic music composing, using the Ural-1 computer.[8] In 1965, inventor Ray Kurzweil developed software capable of recognizing musical patterns and synthesizing new compositions from them.
The computer first appeared on the quiz show I've Got a Secret that same year.[9] By 1983, Yamaha Corporation's Kansei Music System had gained momentum, and a paper was published on its development in 1989. The software utilized music information processing and artificial intelligence techniques to essentially solve the transcription problem for simpler melodies, although higher-level melodies and musical complexities are regarded even today as difficult deep-learning tasks, and near-perfect transcription is still a subject of research.[7][10] In 1997, an artificial intelligence program named Experiments in Musical Intelligence (EMI) appeared to outperform a human composer at the task of composing a piece of music imitating the style of Bach.[11] EMI would later become the basis for a more sophisticated algorithm called Emily Howell, named for its creator. In 2002, the music research team at the Sony Computer Science Laboratory Paris, led by French composer and scientist François Pachet, designed the Continuator, an algorithm uniquely capable of resuming a composition after a live musician stopped.[12] Emily Howell would continue to make advancements in musical artificial intelligence, publishing its first album From Darkness, Light in 2009.[13] Since then, many more pieces by artificial intelligence and various groups have been published. In 2010, Iamus became the first AI to produce a fragment of original contemporary classical music in its own style: "Iamus' Opus 1". Located at the Universidad de Málaga (University of Malaga) in Spain, the computer can generate a fully original piece in a variety of musical styles.[14][5]: 468–481 In August 2019, a large dataset consisting of 12,197 MIDI songs, each with lyrics and melody,[15] was created to investigate the feasibility of neural melody generation from lyrics using a deep conditional LSTM-GAN method. With progress in generative AI, models capable of creating complete musical compositions (including lyrics) from a simple text description have begun to emerge. Two notable web applications in this field are Suno AI, launched in December 2023, and Udio, which followed in April 2024.[16] Developed at Princeton University by Ge Wang and Perry Cook, ChucK is a text-based, cross-platform language.[17] By extracting and classifying the theoretical techniques it finds in musical pieces, the software is able to synthesize entirely new pieces from the techniques it has learned.[18] The technology is used by SLOrk (Stanford Laptop Orchestra)[19] and PLOrk (Princeton Laptop Orchestra). Jukedeck was a website that let people use artificial intelligence to generate original, royalty-free music for use in videos.[20][21] The team started building the music generation technology in 2010,[22] formed a company around it in 2012,[23] and launched the website publicly in 2015.[21] The technology used was originally a rule-based algorithmic composition system,[24] which was later replaced with artificial neural networks.[20] The website was used to create over 1 million pieces of music, and brands that used it included Coca-Cola, Google, UKTV, and the Natural History Museum, London.[25] In 2019, the company was acquired by ByteDance.[26][27][28] MorpheuS[29] is a research project by Dorien Herremans and Elaine Chew at Queen Mary University of London, funded by a Marie Skłodowska-Curie EU project.
The system uses an optimization approach based on a variable neighborhood search algorithm to morph existing template pieces into novel pieces with a set level of tonal tension that changes dynamically throughout the piece. This optimization approach allows for the integration of a pattern detection technique in order to enforce long-term structure and recurring themes in the generated music. Pieces composed by MorpheuS have been performed at concerts in both Stanford and London. Created in February 2016 in Luxembourg, AIVA is a program that produces soundtracks for any type of media. The algorithms behind AIVA are based on deep learning architectures.[30] AIVA has also been used to compose a rock track called On the Edge,[31] as well as a pop tune, Love Sick,[32] in collaboration with singer Taryn Southern,[33] for the creation of her 2018 album I am AI. Google's Magenta team has published several AI music applications and technical papers since its launch in 2016.[34] In 2017, the team released the NSynth algorithm and dataset,[35] and an open-source hardware musical instrument designed to make it easier for musicians to use the algorithm.[36] The instrument was used by notable artists such as Grimes and YACHT in their albums.[37][38] In 2018, they released a piano improvisation app called Piano Genie. This was later followed by Magenta Studio, a suite of five MIDI plugins that allow music producers to elaborate on existing music in their DAW.[39] In 2023, their machine learning team published a technical paper on GitHub describing MusicLM, a private text-to-music generator they had developed.[40][41] Riffusion is a neural network, designed by Seth Forsgren and Hayk Martiros, that generates music using images of sound rather than audio.[42] The resulting music has been described as "de otro mundo" (otherworldly),[43] although unlikely to replace man-made music.[43] The model was made available on December 15, 2022, with the code also freely available on GitHub.[44] The first version of Riffusion was created as a fine-tuning of Stable Diffusion, an existing open-source model for generating images from text prompts, on spectrograms,[42] resulting in a model which used text prompts to generate image files which could then be put through an inverse Fourier transform and converted into audio files (a sketch of this inversion step follows below).[44] While these files were only several seconds long, the model could also use the latent space between outputs to interpolate different files together[42][45] (using the img2img capabilities of SD).[46] It was one of many models derived from Stable Diffusion.[46] In December 2022, Mubert[47] similarly used Stable Diffusion to turn descriptive text into music loops. In January 2023, Google published a paper on its own text-to-music generator called MusicLM.[48][49] Spike AI is an AI-based audio plug-in, developed by Spike Stent in collaboration with his son Joshua Stent and friend Henry Ramsey, that analyzes tracks and provides suggestions to increase clarity and other aspects during mixing. Communication is done using a chatbot trained on Spike Stent's personal data. The plug-in integrates into digital audio workstations.[52][53] Artificial intelligence can potentially impact how producers create music by generating reiterations of a track that follow a prompt given by the creator.
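Riffusion's image-to-audio step mentioned above can be approximated in a few lines. This is a minimal sketch, assuming librosa; the decibel scaling and mel parameters below are illustrative guesses, not Riffusion's published settings.

import numpy as np
import librosa

sr = 22050
# Stand-in for a generated 512x512 greyscale spectrogram image in [0, 1].
spec_image = np.random.rand(512, 512)

# Map pixel intensities back to (assumed) decibel magnitudes, then to power.
power = librosa.db_to_power(spec_image * 80.0 - 80.0)

# Invert the mel-scaled power spectrogram to a waveform; librosa estimates
# the missing phase internally with the Griffin-Lim algorithm.
audio = librosa.feature.inverse.mel_to_audio(
    power, sr=sr, n_fft=2048, hop_length=512, n_iter=32
)
print(audio.shape)  # one channel of reconstructed samples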
Such creator-supplied prompts allow the AI to follow a particular style that the artist is aiming for.[5] AI has also been used in musical analysis, where it has been applied to feature extraction, pattern recognition, and musical recommendation.[54] New tools powered by artificial intelligence, such as AIVA (Artificial Intelligence Virtual Artist) and Udio, have been made to aid in generating original music compositions. This is done by giving an AI model data of already-existing music and having it analyze the data using deep learning techniques to generate music in many different genres, such as classical or electronic music.[55] Several musicians such as Dua Lipa, Elton John, Nick Cave, Paul McCartney and Sting have criticized the use of AI in music and are encouraging the UK government to act on the matter.[56][57][58][59][60] Some artists, such as Grimes, have encouraged the use of AI in music.[61] While helpful in generating new music, many issues have arisen since artificial intelligence began making music. Some major concerns include how the economy will be impacted as AI takes over music production, who truly owns music generated by AI, and lower demand for human-made musical compositions. Some critics argue that AI diminishes the value of human creativity, while proponents see it as an augmentative tool that expands artistic possibilities rather than replacing human musicians.[62] Additionally, concerns have been raised about AI's potential to homogenize music. AI-driven models often generate compositions based on existing trends, which some fear could limit musical diversity. Addressing this concern, researchers are working on AI systems that incorporate more nuanced creative elements, allowing for greater stylistic variation.[55] Another major concern about artificial intelligence in music is copyright law. Many questions have been asked about who owns AI-generated music and productions, as today's copyright laws require a work to be human-authored in order to be granted copyright protection. One proposed solution is to create hybrid laws that recognize both the artificial intelligence that generated the creation and the humans that contributed to it. In the United States, the current legal framework tends to apply traditional copyright laws to AI, despite its differences from the human creative process.[63] However, music outputs solely generated by AI are not granted copyright protection. In the Compendium of U.S. Copyright Office Practices, the Copyright Office has stated that it would not grant copyrights to "works that lack human authorship" and that "the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author."[64] In February 2022, the Copyright Review Board rejected an application to copyright AI-generated artwork on the basis that it "lacked the required human authorship necessary to sustain a claim in copyright."[65] The usage of copyrighted music in training AI has also been a topic of contention.
One instance of this was seen when SACEM, a professional organization of songwriters, composers, and music publishers, demanded that PozaLabs, an AI music generation startup, refrain from using any music affiliated with the organization to train its models.[66]

The situation in the European Union (EU) is similar to that in the US, because its legal framework also emphasizes the role of human involvement in a copyright-protected work.[67] According to the European Union Intellectual Property Office and the recent jurisprudence of the Court of Justice of the European Union, the originality criterion requires a work to be the author's own intellectual creation, reflecting the personality of the author as evidenced by the creative choices made during its production, which in turn requires a distinct level of human involvement.[67] The reCreating Europe project, funded by the European Union's Horizon 2020 research and innovation programme, examines the challenges posed by AI-generated content, including music, and advocates legal certainty and balanced protection that encourages innovation while respecting copyright norms.[67] The recognition of AIVA marks a significant departure from traditional views on authorship and copyright in music composition, allowing an AI artist to release music and earn royalties. This acceptance makes AIVA a pioneering instance of an AI being formally acknowledged within music production.[68]

The recent advancements in artificial intelligence made by groups such as Stability AI, OpenAI, and Google have drawn an enormous number of copyright claims against generative technology, including AI music. Should these lawsuits succeed, the machine learning models behind these technologies would have their datasets restricted to the public domain.[69] Strides toward addressing ethical issues have been made as well, such as the collaboration between Sound Ethics (a company promoting ethical AI usage in the music industry) and UC Irvine, focusing on ethical frameworks and the responsible use of AI.[70]

A more nascent development of AI in music is the application of audio deepfakes to cast the lyrics or musical style of a pre-existing song onto the voice or style of another artist. This has raised many concerns regarding the legality of the technology, as well as the ethics of employing it, particularly in the context of artistic identity.[71] Furthermore, it has raised the question of who holds authorship of these works.
As AI cannot hold authorship of its own, current speculation suggests that there will be no clear answer until further rulings are made regarding machine learning technologies as a whole.[72] Preventative measures have recently begun to be developed by Google and Universal Music Group, which have turned to royalties and credit attribution to allow producers to replicate the voices and styles of artists.[73]

In 2023, an artist known as ghostwriter977 created a musical deepfake called "Heart on My Sleeve" that cloned the voices of Drake and The Weeknd by feeding an assortment of vocal-only tracks from the respective artists into a deep-learning algorithm, creating an artificial model of each artist's voice; this model could then be mapped onto original reference vocals with original lyrics.[74] The track was submitted for Grammy consideration for Best Rap Song and Song of the Year.[75] It went viral, gained traction on TikTok, and received a positive response from audiences, leading to its official release on Apple Music, Spotify, and YouTube in April 2023.[76] Many believed the track was composed entirely by AI software, but the producer claimed the songwriting, production, and original (pre-conversion) vocals were still his own.[74] The track was later withdrawn from Grammy consideration because it did not follow the guidelines necessary to be considered for an award.[76] It was ultimately removed from all music platforms by Universal Music Group.[76] The song was a watershed moment for AI voice cloning, and models have since been created for hundreds, if not thousands, of popular singers and rappers.

In 2013, country music singer Randy Travis suffered a stroke which left him unable to sing. In the meantime, vocalist James Dupré toured on his behalf, singing Travis's songs for him. Travis and longtime producer Kyle Lehning released a new song in May 2024 titled "Where That Came From", Travis's first new song since his stroke. The recording uses AI technology to re-create Travis's singing voice, composited from over 40 existing vocal recordings alongside those of Dupré.[77][78]

Artificial intelligence music encompasses a number of technical approaches used for music composition, analysis, classification, and recommendation. The techniques used are drawn from deep learning, machine learning, natural language processing, and signal processing. Current systems can compose entire musical pieces, parse affective content, accompany human players in real time, and learn user- and context-dependent preferences.[79][80][81][82]

Symbolic music generation is the generation of music in discrete symbolic forms such as MIDI, where notes and timings are precisely defined. Early systems employed rule-based methods and Markov models, but modern systems rely largely on deep learning. Recurrent neural networks (RNNs), and more precisely long short-term memory (LSTM) networks, have been employed to model the temporal dependencies of musical sequences. They can be used to generate melodies, harmonies, and counterpoint in various musical genres (a minimal sketch of such a model is given below).[83]

Transformer models such as Music Transformer and MuseNet became more popular for symbolic generation due to their ability to model long-range dependencies and their scalability. These models have been employed to generate multi-instrument polyphonic music and stylistic imitations.[84]
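As a concrete illustration of the LSTM approach described above, the following is a minimal sketch of a network that predicts the next MIDI pitch in a melody and samples new notes autoregressively. The architecture and hyperparameters are assumed toy choices, not those of any published system.

```python
# Minimal next-pitch LSTM sketch (assumed toy architecture): predict the next
# MIDI pitch in a melody, then sample new notes autoregressively.
import torch
import torch.nn as nn

VOCAB = 128  # MIDI pitches 0-127

class MelodyLSTM(nn.Module):
    def __init__(self, embed=64, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.out = nn.Linear(hidden, VOCAB)

    def forward(self, x, state=None):
        h, state = self.lstm(self.emb(x), state)
        return self.out(h), state   # logits over the next pitch at each step

def sample(model, seed, length=64, temperature=1.0):
    """Autoregressively extend a seed list of MIDI pitches."""
    model.eval()
    notes = list(seed)
    x = torch.tensor([seed])
    state = None
    with torch.no_grad():
        for _ in range(length):
            logits, state = model(x, state)
            probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
            nxt = torch.multinomial(probs, 1).item()
            notes.append(nxt)
            x = torch.tensor([[nxt]])
    return notes

# Training (not shown) would minimize cross-entropy between the logits at step t
# and the observed pitch at step t+1 over a corpus of melodies.
model = MelodyLSTM()
print(sample(model, seed=[60, 62, 64], length=16))
```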
This method generates music as raw audio waveforms instead of symbolic notation. DeepMind's WaveNet is an early example that uses autoregressive sampling to generate high-fidelity audio. Generative adversarial networks (GANs) and variational autoencoders (VAEs) are increasingly used for audio texture synthesis and for combining the timbres of different instruments.[80] NSynth (Neural Synthesizer), a Google Magenta project, uses a WaveNet-like autoencoder to learn latent audio representations and thereby generate completely novel instrumental sounds.[85]

Music information retrieval (MIR) is the extraction of musically relevant information from audio recordings for use in applications such as genre classification, instrument recognition, mood recognition, beat detection, and similarity estimation. Convolutional neural networks (CNNs) applied to spectrogram features have proven highly accurate at these tasks.[82] Support vector machines (SVMs) and k-nearest neighbors (k-NN) classifiers are also used for classification on features such as Mel-frequency cepstral coefficients (MFCCs); a k-NN example on MFCC features is sketched at the end of this section.

Hybrid systems combine symbolic and audio-based methods to draw on their respective strengths: they can compose high-level symbolic structures and then synthesize them as natural-sounding audio. Real-time interactive systems allow AI to respond instantaneously to human input in support of live performance. Reinforcement learning and rule-based agents are often used to enable human–AI co-creation in improvisation contexts.[81]

Affective computing techniques enable AI systems to classify or generate music according to its emotional content. These models use musical features such as tempo, mode, and timbre to classify or influence listener emotions. Deep learning models have been trained both to classify music by affective content and to create music intended to have specific emotional effects.[86]

Music recommenders employ AI to suggest tracks to users based on listening history, personal taste, and contextual information. Collaborative filtering, content-based filtering, and hybrid approaches are the most widely applied, with deep learning used for refinement. Graph-based and matrix factorization methods are used in commercial systems such as Spotify and YouTube Music to represent complex user–item relationships; a toy factorization example is sketched at the end of this section.[87]

AI is also used to automate audio engineering tasks such as mixing and mastering. Such systems set levels, equalize, pan, and compress audio to produce well-balanced output. Software such as LANDR and iZotope Ozone uses machine learning to emulate the decisions of professional audio engineers.[88]

Natural language generation is applied to songwriting assistance and lyrics generation. Transformer language models such as GPT-3 have been shown to generate stylistically coherent lyrics from input prompts, themes, or moods, and AI programs exist that assist with rhyme schemes, syllable counts, and poetic form (a brief example follows below).[89]

Recent developments include multimodal AI systems that integrate music with other media such as dance, video, and text. These can generate background scores synchronized with video sequences or produce dance choreography from audio input. Cross-modal retrieval systems allow users to search for music using images, text, or gestures.[90]
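The k-NN-on-MFCC approach mentioned above can be demonstrated in a few lines. In this sketch the file names and genre labels are hypothetical placeholders; the feature summary (mean MFCC vector over time) is one common but simplistic choice.

```python
# A minimal MIR sketch (not a production system): classify clips by genre
# using MFCC features (librosa) and a k-NN classifier (scikit-learn).
# File paths and labels below are hypothetical placeholders.
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def mfcc_features(path, n_mfcc=20):
    """Summarize a clip as its mean MFCC vector over time."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape (n_mfcc, frames)
    return mfcc.mean(axis=1)                                # shape (n_mfcc,)

train_files = ["jazz1.wav", "jazz2.wav", "rock1.wav", "rock2.wav"]  # hypothetical
train_labels = ["jazz", "jazz", "rock", "rock"]

X = np.stack([mfcc_features(f) for f in train_files])
clf = KNeighborsClassifier(n_neighbors=3).fit(X, train_labels)

print(clf.predict([mfcc_features("unknown.wav")]))  # predicted genre label
```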
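Similarly, the matrix factorization technique used in recommenders can be illustrated on a toy ratings matrix. This is an assumed minimal example, not any commercial system's implementation: it learns latent user and track vectors by gradient descent on the observed entries and uses the reconstructed matrix to score unrated tracks.

```python
# Toy matrix factorization for music recommendation (assumed data): learn
# latent user and track factors from a small explicit ratings matrix.
import numpy as np

rng = np.random.default_rng(0)
R = np.array([[5, 3, 0, 1],        # rows: users, cols: tracks; 0 = unrated
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)
mask = R > 0                        # fit only the observed entries
k, lr, reg = 2, 0.01, 0.1
U = rng.normal(scale=0.1, size=(R.shape[0], k))   # user factors
V = rng.normal(scale=0.1, size=(R.shape[1], k))   # track factors

for _ in range(5000):
    E = (R - U @ V.T) * mask        # reconstruction error on observed entries
    U += lr * (E @ V - reg * U)     # gradient step with L2 regularization
    V += lr * (E.T @ U - reg * V)

# High predicted scores on unrated (zero) entries suggest recommendations.
print(np.round(U @ V.T, 2))
```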
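Lyric generation with a transformer language model can be sketched with an off-the-shelf pipeline. GPT-2 is used here only as a freely available stand-in for larger models such as GPT-3, and the prompt is illustrative.

```python
# A brief lyric-generation sketch using the Hugging Face transformers pipeline.
# GPT-2 stands in for larger models; the prompt and settings are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Verse 1:\nUnder neon skies we wander,"
out = generator(prompt, max_length=60, num_return_sequences=1, do_sample=True)
print(out[0]["generated_text"])
```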
The advent of AI music has prompted heated cultural debates, especially over its impact on creativity, ethics, and audiences. While the democratization of music production has been praised, fears have been raised about its effects on producers, listeners, and society in general. The most contentious application of AI music creation has been its misuse to produce offensive work. Music AI platforms have been used in several instances to produce songs with racist, antisemitic, or violent lyrics, testing moderation and accountability on generative AI platforms.[91] These cases have renewed debate about the responsibility of users and developers for ensuring ethical outputs from generative models.

Beyond this, several producers and artists have denounced the use of AI music as a threat to originality, craftsmanship, and cultural authenticity. According to its critics, music created by AI lacks the emotional intelligence and lived experience on which human work relies. The concern has grown in an era when steadily more AI-made songs appear on platforms, which some regard as devaluing human artistry.[92]

Notably, while professional musicians have generally been more dismissive of using AI in music production, general consumers and listeners have been receptive or neutral toward the idea. Surveys have found that in a commercial context, the average consumer often does not know, or even care, whether the music they hear was made by human beings or by AI, and a high percentage say that it does not affect their enjoyment.[92] The contrast between artist sentiment and consumer sentiment may hold far-reaching consequences for the future economics of the music industry and the value assigned to human creativity.

The cultural value placed on AI music is related to broader popular perceptions of generative AI. How generative AI-produced work, whether music or writing, is received in human terms has been found to depend on factors such as emotional meaning and authenticity.[93] As long as AI output proves persuasive and engaging, audiences may in some cases be willing to accept music whose author is not a human being, with the potential to reshape conventions regarding creators and creativity.

The field of music and artificial intelligence is still evolving. Key directions for future work include improvements in generative models, changes in how humans and AI collaborate musically, and the development of legal and ethical frameworks to address the technology's impact.

Future research and development is expected to move beyond established techniques such as generative adversarial networks (GANs) and variational autoencoders (VAEs). More recent architectures such as diffusion models and transformer-based networks[94] show promise for generating more complex, nuanced, and stylistically coherent music. These models may lead to higher-quality audio generation and better long-term structure in music compositions.

Beyond generation itself, a significant future direction involves deepening the collaboration between human musicians and AI. Development is increasingly focused on understanding how these collaborations can occur and how they can be facilitated in an ethically sound way.[95] This involves studying musicians' perceptions of and experiences with AI tools to inform the design of future systems. Research actively explores these collaborative models in different domains. For instance, studies investigate how AI can be co-designed with professionals such as music therapists to act as a supportive partner in complex creative and therapeutic processes,[96] showing a trend toward developing AI not just as an output tool, but as an integrated component designed to augment human skills.
As AI-generated music becomes more capable and widespread, legal and ethical frameworks worldwide are expected to continue adapting. Current policy discussions focus on copyright ownership, the use of AI to mimic artists (deepfakes), and fair compensation for artists.[97] Recent legislative efforts and debates, such as those concerning AI safety and regulation in places like California, show the challenges involved in balancing innovation with potential risks and societal impacts.[98] Tracking these developments is crucial for understanding the future of AI in the music industry.[99]