Lorentz transformations are used to transform the coordinates of an event from one frame to another in special relativity.
The Lorentz factor appears in the Lorentz transformations:
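In standard configuration, with the primed frame moving at speed v along the shared x-axis and with Lorentz factor γ = 1/√(1 − v²/c²), these read
$$
t' = \gamma\left(t - \frac{vx}{c^2}\right), \qquad x' = \gamma\,(x - vt), \qquad y' = y, \qquad z' = z .
$$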
The inverse Lorentz transformations are:
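Swapping primed and unprimed variables and replacing v with −v gives
$$
t = \gamma\left(t' + \frac{vx'}{c^2}\right), \qquad x = \gamma\,(x' + vt'), \qquad y = y', \qquad z = z' .
$$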
When v ≪ c and x is small enough, the v²/c² and vx/c² terms approach zero, and the Lorentz transformations approximate to the Galilean transformations.
Note that x, t, etc. most often really mean Δx, Δt, etc. Although for brevity the Lorentz transformation equations are written without deltas, x means Δx, and so on. We are, in general, always concerned with the space and time differences between events.
Calling one set of transformations the normal Lorentz transformations and the other the inverse transformations is misleading, since there is no intrinsic difference between the frames. Different authors call one or the other set of transformations the "inverse" set. The forward and inverse transformations are trivially related to each other, since the S frame can only be moving forward or in reverse with respect to S′. So inverting the equations simply entails switching the primed and unprimed variables and replacing v with −v.
Example: Terence and Stella are at an Earth-to-Mars space race. Terence is an official at the starting line, while Stella is a participant. At time t = 0, Stella's spaceship accelerates instantaneously to a speed of 0.5 c. The distance from Earth to Mars is 300 light-seconds (about 90 million km). Terence observes Stella crossing the finish-line clock at t = 600.00 s. But Stella observes the time on her ship chronometer to be 519.62 s as she passes the finish line, and she calculates the distance between the starting and finish lines, as measured in her frame, to be 259.81 light-seconds (about 78 million km).
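A minimal numerical check of these figures, in units where c = 1 and with distances in light-seconds:

```python
import math

v = 0.5          # Stella's speed as a fraction of c
L = 300.0        # Earth-to-Mars distance in Terence's frame, in light-seconds

gamma = 1.0 / math.sqrt(1.0 - v**2)   # Lorentz factor, about 1.1547

t_terence = L / v                     # arrival time on Terence's clock: 600.00 s
t_stella = t_terence / gamma          # Stella's proper time: about 519.62 s
L_stella = L / gamma                  # length-contracted course: about 259.81 light-seconds

print(gamma, t_terence, t_stella, L_stella)
```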
#### Deriving the Lorentz transformations
There have been many dozens of derivations of the Lorentz transformations since Einstein's original work in 1905, each with its particular focus. Although Einstein's derivation was based on the invariance of the speed of light, there are other physical principles that may serve as starting points. Ultimately, these alternative starting points can be considered different expressions of the underlying principle of locality, which states that the influence that one particle exerts on another can not be transmitted instantaneously.
The derivation given here and illustrated in Fig. 3-5 is based on one presented by Bais and makes use of previous results from the Relativistic Composition of Velocities, Time Dilation, and Length Contraction sections. Event P has coordinates (w, x) in the black "rest system" and coordinates (w′, x′) in the red frame that is moving with velocity parameter β = v/c.
To determine w′ and x′ in terms of w and x (or the other way around), it is easier at first to derive the inverse Lorentz transformation.
1. There can be no such thing as length expansion/contraction in the transverse directions. y′ must equal y and z′ must equal z, otherwise whether a fast-moving 1 m ball could fit through a 1 m circular hole would depend on the observer. The first postulate of relativity states that all inertial frames are equivalent, and transverse expansion/contraction would violate this law.
2. From the drawing, w = a + b and x = r + s.
3. From previous results using similar triangles, we know that s/a = b/r = v/c = β.
4. Because of time dilation, a = γw′.
5. Substituting equation (4) into s/a = β yields s = βγw′.
6. Length contraction and similar triangles give us r = γx′ and b = βr = βγx′.
7. Substituting the expressions for s, a, r, and b into the equations in Step 2 immediately yields:
   - w = γw′ + βγx′
   - x = γx′ + βγw′
The above equations are alternate expressions for the t and x equations of the inverse Lorentz transformation, as can be seen by substituting ct for w, ct′ for w′, and v/c for β. From the inverse transformation, the equations of the forwards transformation can be derived by solving for t′ and x′.
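Making these substitutions and rearranging:
$$
ct = \gamma(ct' + \beta x'), \quad x = \gamma(x' + \beta ct')
\qquad\Longleftrightarrow\qquad
ct' = \gamma(ct - \beta x), \quad x' = \gamma(x - \beta ct).
$$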
#### Linearity of the Lorentz transformations
The Lorentz transformations have a mathematical property called linearity, since x′ and t′ are obtained as linear combinations of x and t, with no higher powers involved. The linearity of the transformation reflects a fundamental property of spacetime that was tacitly assumed in the derivation, namely, that the properties of inertial frames of reference are independent of location and time. In the absence of gravity, spacetime looks the same everywhere. All inertial observers will agree on what constitutes accelerating and non-accelerating motion. Any one observer can use her own measurements of space and time, but there is nothing absolute about them. Another observer's conventions will do just as well.
A result of linearity is that if two Lorentz transformations are applied sequentially, the result is also a Lorentz transformation.
Example: Terence observes Stella speeding away from him at 0.500 c, and he can use the Lorentz transformations with β = 0.500 to relate Stella's measurements to his own. Stella, in her frame, observes Ursula traveling away from her at 0.250 c, and she can use the Lorentz transformations with β = 0.250 to relate Ursula's measurements with her own. Because of the linearity of the transformations and the relativistic composition of velocities, Terence can use the Lorentz transformations with β = 0.666 to relate Ursula's measurements with his own.
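The composite velocity parameter follows from the relativistic composition law β = (β₁ + β₂)/(1 + β₁β₂); a minimal sketch (the helper name is illustrative):

```python
def compose_velocities(beta1: float, beta2: float) -> float:
    """Relativistic composition of two collinear velocity parameters (v/c)."""
    return (beta1 + beta2) / (1.0 + beta1 * beta2)

print(compose_velocities(0.500, 0.250))   # 0.666..., Ursula's speed as seen by Terence
```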
### Doppler effect
The Doppler effect is the change in frequency or wavelength of a wave for a receiver and source in relative motion. For simplicity, we consider here two basic scenarios: (1) the motions of the source and/or receiver are exactly along the line connecting them (longitudinal Doppler effect), and (2) the motions are at right angles to that line (transverse Doppler effect). We are ignoring scenarios where they move along intermediate angles.
#### Longitudinal Doppler effect
The classical Doppler analysis deals with waves that are propagating in a medium, such as sound waves or water ripples, and which are transmitted between sources and receivers that are moving towards or away from each other. The analysis of such waves depends on whether the source, the receiver, or both are moving relative to the medium. Given the scenario where the receiver is stationary with respect to the medium, and the source is moving directly away from the receiver at a speed of vs for a velocity parameter of βs, the wavelength is increased, and the observed frequency f is given by
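(writing f₀ for the frequency measured at the source, the standard classical result for a receding source)
$$
f = \frac{1}{1+\beta_s}\,f_0 .
$$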
On the other hand, given the scenario where the source is stationary, and the receiver is moving directly away from the source at a speed of vr for a velocity parameter of βr, the wavelength is not changed, but the transmission velocity of the waves relative to the receiver is decreased, and the observed frequency f is given by
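(again with f₀ the frequency measured at the source)
$$
f = (1-\beta_r)\,f_0 .
$$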
Light, unlike sound or water ripples, does not propagate through a medium, and there is no distinction between a source moving away from the receiver or a receiver moving away from the source.
Fig. 3-6 illustrates a relativistic spacetime diagram showing a source separating from the receiver with a velocity parameter β, so that the separation between source and receiver at time w is βw. Because of time dilation, w = γw′. Since the slope of the green light ray is −1, the interval between received wave crests is stretched by a factor of γ(1 + β) relative to the interval between their emission in the source frame. Hence, the relativistic Doppler effect is given by
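(with f₀ the frequency in the source frame and β > 0 for recession)
$$
f = \frac{1}{\gamma\,(1+\beta)}\,f_0 = \sqrt{\frac{1-\beta}{1+\beta}}\;f_0 .
$$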
#### Transverse Doppler effect
Suppose that a source and a receiver, both approaching each other in uniform inertial motion along non-intersecting lines, are at their closest approach to each other. It would appear that the classical analysis predicts that the receiver detects no Doppler shift.
Due to subtleties in the analysis, that expectation is not necessarily true. Nevertheless, when appropriately defined, transverse Doppler shift is a relativistic effect that has no classical analog. The subtleties, corresponding to the four scenarios of Fig. 3-7, are these:
- (a) The receiver measures the light that arrives at the moment when it is at its point of closest approach to the source.
- (b) The receiver measures the light that was emitted when the source was at its point of closest approach.
- (c) The receiver moves in a circle around the source.
- (d) The source moves in a circle around the receiver.
In scenario (a), the point of closest approach is frame-independent and represents the moment where there is no change in distance versus time (i.e. dr/dt = 0 where r is the distance between receiver and source) and hence no longitudinal Doppler shift. The source observes the receiver as being illuminated by light of frequency f′, but also observes the receiver as having a time-dilated clock. In frame S, the receiver is therefore illuminated by blueshifted light of frequency f = γf′.
In scenario (b) the illustration shows the receiver being illuminated by light from when the source was closest to the receiver, even though the source has moved on.
Because the source's clocks are time dilated as measured in frame S, and since dr/dt was equal to zero at this point, the light from the source, emitted from this closest point, is redshifted with frequency f = f′/γ.
Scenarios (c) and (d) can be analyzed by simple time dilation arguments. In (c), the receiver observes light from the source as being blueshifted by a factor of γ, and in (d), the light is redshifted. The only seeming complication is that the orbiting objects are in accelerated motion. However, if an inertial observer looks at an accelerating clock, only the clock's instantaneous speed is important when computing time dilation. (The converse, however, is not true.) Most reports of transverse Doppler shift refer to the effect as a redshift and analyze the effect in terms of scenarios (b) or (d). Not all experiments characterize the effect in terms of a redshift.
For example, the Kündig experiment measures transverse blueshift using a Mössbauer source setup at the center of a centrifuge rotor and an absorber at the rim.
### Energy and momentum
#### Extending momentum to four dimensions
In classical mechanics, the state of motion of a particle is characterized by its mass and its velocity. Linear momentum, the product of a particle's mass and velocity, is a vector quantity, possessing the same direction as the velocity: p = mv. It is a conserved quantity, meaning that if a closed system is not affected by external forces, its total linear momentum cannot change.
In relativistic mechanics, the momentum vector is extended to four dimensions. Added to the momentum vector is a time component that allows the spacetime momentum vector to transform like the spacetime position vector (ct, x, y, z). In exploring the properties of the spacetime momentum, we start, in Fig. 3-8a, by examining what a particle looks like at rest.
In the rest frame, the spatial component of the momentum is zero, i.e. p = 0, but the time component equals mc.
We can obtain the transformed components of this vector in the moving frame by using the Lorentz transformations, or we can read it directly from the figure because we know that E/c = γmc and p = βγmc, since the red axes are rescaled by gamma. Fig. 3-8b illustrates the situation as it appears in the moving frame. It is apparent that the space and time components of the four-momentum go to infinity as the velocity of the moving frame approaches c.
We will use this information shortly to obtain an expression for the four-momentum.
#### Momentum of light
Light particles, or photons, travel at the speed of c, the constant that is conventionally known as the speed of light. This statement is not a tautology, since many modern formulations of relativity do not start with constant speed of light as a postulate. Photons therefore propagate along a lightlike world line and, in appropriate units, have equal space and time components for every observer.
A consequence of Maxwell's theory of electromagnetism is that light carries energy and momentum, and that their ratio is a constant: E/p = c. Rearranging, E/c = p, and since for photons the space and time components are equal, E/c must therefore be equated with the time component of the spacetime momentum vector.
Photons travel at the speed of light, yet have finite momentum and energy. For this to be so, the mass term in γmc must be zero, meaning that photons are massless particles. Infinity times zero is an ill-defined quantity, but E/c is well-defined.
By this analysis, if the energy of a photon equals E in the rest frame, it equals E′ = (1 − β)γE in a moving frame. This result can be derived by inspection of Fig. 3-9 or by application of the Lorentz transformations, and is consistent with the analysis of Doppler effect given previously.
#### Mass–energy relationship
Consideration of the interrelationships between the various components of the relativistic momentum vector led Einstein to several important conclusions.
- In the low speed limit as β = v/c approaches zero, γ approaches 1, so the spatial component of the relativistic momentum, γmv, approaches mv, the classical term for momentum. Following this perspective, γm can be interpreted as a relativistic generalization of m. Einstein proposed that the relativistic mass of an object increases with velocity according to the formula mrel = γm.
- Likewise, comparing the time component of the relativistic momentum with that of the photon, γmc = mrelc = E/c, so that Einstein arrived at the relationship E = mrelc². Simplified to the case of zero velocity, this is Einstein's equation relating energy and mass, E = mc².
Another way of looking at the relationship between mass and energy is to consider a series expansion of γmc² at low velocity:
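To second order in v/c, and writing E = γmc² for the total energy:
$$
E = \gamma m c^2 = \frac{m c^2}{\sqrt{1 - v^2/c^2}} \approx m c^2 + \tfrac{1}{2} m v^2 .
$$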
The second term is just an expression for the kinetic energy of the particle. Mass indeed appears to be another form of energy.
The concept of relativistic mass that Einstein introduced in 1905, mrel, although amply validated every day in particle accelerators around the globe (or indeed in any instrumentation whose use depends on high velocity particles, such as electron microscopes, old-fashioned color television sets, etc.), has nevertheless not proven to be a fruitful concept in physics in the sense that it is not a concept that has served as a basis for other theoretical development. Relativistic mass, for instance, plays no role in general relativity.
For this reason, as well as for pedagogical concerns, most physicists currently prefer a different terminology when referring to the relationship between mass and energy. "Relativistic mass" is a deprecated term. The term "mass" by itself refers to the rest mass or invariant mass, and is equal to the invariant length of the relativistic momentum vector. Expressed as a formula,
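with E the total energy and p the magnitude of the spatial momentum, this is the standard energy–momentum relation
$$
E^2 - (pc)^2 = \left(m_{\mathrm{rest}}\,c^2\right)^2 .
$$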
This formula applies to all particles, massless as well as massive. For photons, where mrest equals zero, it yields E = pc.
#### Four-momentum
Because of the close relationship between mass and energy, the four-momentum (also called 4-momentum) is also called the energy–momentum 4-vector.
Using an uppercase P to represent the four-momentum and a lowercase p to denote the spatial momentum, the four-momentum may be written as
$$
P \equiv (E/c, \vec{p}) = (E/c, p_x, p_y, p_z)
$$
or alternatively,
$$
P \equiv (E, \vec{p}) = (E, p_x, p_y, p_z)
$$
using the convention that
$$
c = 1 .
$$
### Conservation laws
In physics, conservation laws state that certain particular measurable properties of an isolated physical system do not change as the system evolves over time. In 1915, Emmy Noether discovered that underlying each conservation law is a fundamental symmetry of nature. The fact that physical processes do not care where in space they take place (space translation symmetry) yields conservation of momentum, the fact that such processes do not care when they take place (time translation symmetry) yields conservation of energy, and so on.
In this section, we examine the Newtonian views of conservation of mass, momentum and energy from a relativistic perspective.
#### Total momentum
To understand how the Newtonian view of conservation of momentum needs to be modified in a relativistic context, we examine the problem of two colliding bodies limited to a single dimension.
In Newtonian mechanics, two extreme cases of this problem may be distinguished yielding mathematics of minimum complexity:
(1) The two bodies rebound from each other in a completely elastic collision.
(2) The two bodies stick together and continue moving as a single particle. This second case is the case of completely inelastic collision.
For both cases (1) and (2), momentum, mass, and total energy are conserved. However, kinetic energy is not conserved in cases of inelastic collision. A certain fraction of the initial kinetic energy is converted to heat.
In case (2), two masses with momenta p₁ = m₁v₁ and p₂ = m₂v₂ collide to produce a single particle of conserved mass m = m₁ + m₂ traveling at the center-of-mass velocity of the original system,
$$
\boldsymbol{v_{c m}}=\left(m_{1} \boldsymbol{v_1}+m_{2} \boldsymbol{v_2}\right) /\left(m_{1}+m_{2}\right)
$$
. The total momentum is conserved.
Fig. 3-10 illustrates the inelastic collision of two particles from a relativistic perspective. The time components E₁/c and E₂/c add up to the total E/c of the resultant vector, meaning that energy is conserved. Likewise, the space components p₁ and p₂ add up to form the p of the resultant vector. The four-momentum is, as expected, a conserved quantity. However, the invariant mass of the fused particle, given by the point where the invariant hyperbola of the total momentum intersects the energy axis, is not equal to the sum of the invariant masses of the individual particles that collided. Indeed, it is larger than the sum of the individual masses: M > m₁ + m₂.
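A small numerical sketch of this point, in units where c = 1 (the masses and speed are illustrative, not taken from the figure):

```python
import math

def four_momentum(m, v):
    """Return (E, p) for rest mass m moving at speed v, in units where c = 1."""
    gamma = 1.0 / math.sqrt(1.0 - v**2)
    return gamma * m, gamma * m * v

# Two equal rest masses approaching head-on at 0.6 c.
E1, p1 = four_momentum(1.0, +0.6)
E2, p2 = four_momentum(1.0, -0.6)

E_tot, p_tot = E1 + E2, p1 + p2        # energy and momentum are conserved
M = math.sqrt(E_tot**2 - p_tot**2)     # invariant mass of the fused particle

print(M)   # 2.5 -- larger than 2.0, the sum of the individual rest masses
```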
Looking at the events of this scenario in reverse sequence, we see that non-conservation of mass is a common occurrence: when an unstable elementary particle spontaneously decays into two lighter particles, total energy is conserved, but the mass is not. Part of the mass is converted into kinetic energy.
#### Choice of reference frames
The freedom to choose any frame in which to perform an analysis allows us to pick one which may be particularly convenient. For analysis of momentum and energy problems, the most convenient frame is usually the "center-of-momentum frame" (also called the zero-momentum frame, or COM frame). This is the frame in which the space component of the system's total momentum is zero. Fig. 3-11 illustrates the breakup of a high speed particle into two daughter particles. In the lab frame, the daughter particles are preferentially emitted in a direction oriented along the original particle's trajectory.
In the COM frame, however, the two daughter particles are emitted in opposite directions, although their masses and the magnitude of their velocities are generally not the same.
#### Energy and momentum conservation
In a Newtonian analysis of interacting particles, transformation between frames is simple because all that is necessary is to apply the Galilean transformation to all velocities. Since v′ = v − u, the momentum p′ = m(v − u) = p − mu. If the total momentum of an interacting system of particles is observed to be conserved in one frame, it will likewise be observed to be conserved in any other frame.
Conservation of momentum in the COM frame amounts to the requirement that p = 0 both before and after collision. In the Newtonian analysis, conservation of mass dictates that m = m₁ + m₂. In the simplified, one-dimensional scenarios that we have been considering, only one additional constraint is necessary before the outgoing momenta of the particles can be determined—an energy condition. In the one-dimensional case of a completely elastic collision with no loss of kinetic energy, the outgoing velocities of the rebounding particles in the COM frame will be precisely equal and opposite to their incoming velocities.
In the case of a completely inelastic collision with total loss of kinetic energy, the outgoing velocities of the rebounding particles will be zero.
Newtonian momenta, calculated as p = mv, fail to behave properly under Lorentzian transformation. The linear transformation of velocities v′ = v − u is replaced by the highly nonlinear relativistic velocity composition,
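which for collinear motion with relative frame velocity u reads
$$
v' = \frac{v - u}{1 - uv/c^2},
$$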
so that a calculation demonstrating conservation of momentum in one frame will be invalid in other frames. Einstein was faced with either having to give up conservation of momentum, or to change the definition of momentum. This second option was what he chose.
The relativistic conservation law for energy and momentum replaces the three classical conservation laws for energy, momentum and mass. Mass is no longer conserved independently, because it has been subsumed into the total relativistic energy. This makes the relativistic conservation of energy a simpler concept than in nonrelativistic mechanics, because the total energy is conserved without any qualifications.
Kinetic energy converted into heat or internal potential energy shows up as an increase in mass.
## Introduction to curved spacetime
## Technical topics
### Is spacetime really curved?
In Poincaré's conventionalist views, the essential criteria according to which one should select a Euclidean versus non-Euclidean geometry would be economy and simplicity. A realist would say that Einstein discovered spacetime to be non-Euclidean. A conventionalist would say that Einstein merely found it more convenient to use non-Euclidean geometry. The conventionalist would maintain that Einstein's analysis said nothing about what the geometry of spacetime really is.
Such being said,
Is it possible to represent general relativity in terms of flat spacetime?
Are there any situations where a flat spacetime interpretation of general relativity may be more convenient than the usual curved spacetime interpretation?
In response to the first question, a number of authors including Deser, Grishchuk, Rosen, Weinberg, etc. have provided various formulations of gravitation as a field in a flat manifold.
Those theories are variously called "bimetric gravity", the "field-theoretical approach to general relativity", and so forth. Kip Thorne has provided a popular review of these theories.
The flat spacetime paradigm posits that matter creates a gravitational field that causes rulers to shrink when they are turned from circumferential orientation to radial, and that causes the ticking rates of clocks to dilate. The flat spacetime paradigm is fully equivalent to the curved spacetime paradigm in that they both represent the same physical phenomena. However, their mathematical formulations are entirely different. Working physicists routinely switch between using curved and flat spacetime techniques depending on the requirements of the problem. The flat spacetime paradigm is convenient when performing approximate calculations in weak fields. Hence, flat spacetime techniques tend to be used when solving gravitational wave problems, while curved spacetime techniques tend to be used in the analysis of black holes.
### Asymptotic symmetries
The spacetime symmetry group for Special Relativity is the Poincaré group, which is a ten-dimensional group of three Lorentz boosts, three rotations, and four spacetime translations. It is logical to ask what symmetries if any might apply in General Relativity. A tractable case might be to consider the symmetries of spacetime as seen by observers located far away from all sources of the gravitational field. The naive expectation for asymptotically flat spacetime symmetries might be simply to extend and reproduce the symmetries of flat spacetime of special relativity, viz., the Poincaré group.
In 1962 Hermann Bondi, M. G. van der Burg, A. W. Metzner and Rainer K. Sachs addressed this asymptotic symmetry problem in order to investigate the flow of energy at infinity due to propagating gravitational waves. Their first step was to decide on some physically sensible boundary conditions to place on the gravitational field at lightlike infinity to characterize what it means to say a metric is asymptotically flat, making no a priori assumptions about the nature of the asymptotic symmetry group—not even the assumption that such a group exists.
Then after designing what they considered to be the most sensible boundary conditions, they investigated the nature of the resulting asymptotic symmetry transformations that leave invariant the form of the boundary conditions appropriate for asymptotically flat gravitational fields.
What they found was that the asymptotic symmetry transformations actually do form a group and the structure of this group does not depend on the particular gravitational field that happens to be present. This means that, as expected, one can separate the kinematics of spacetime from the dynamics of the gravitational field at least at spatial infinity.
The puzzling surprise in 1962 was their discovery of a rich infinite-dimensional group (the so-called BMS group) as the asymptotic symmetry group, instead of the finite-dimensional Poincaré group, which is a subgroup of the BMS group. Not only are the Lorentz transformations asymptotic symmetry transformations, there are also additional transformations that are not Lorentz transformations but are asymptotic symmetry transformations. In fact, they found an additional infinity of transformation generators known as supertranslations. This implies the conclusion that General Relativity (GR) does not reduce to special relativity in the case of weak fields at long distances.
### Riemannian geometry
### Curved manifolds
For physical reasons, a spacetime continuum is mathematically defined as a four-dimensional, smooth, connected Lorentzian manifold
$$
(M, g)
$$
. This means the smooth Lorentz metric
$$
g
$$
has signature
$$
(3,1)
$$
.
The metric determines the geometry of spacetime, as well as determining the geodesics of particles and light beams. About each point (event) on this manifold, coordinate charts are used to represent observers in reference frames. Usually, Cartesian coordinates
$$
(x, y, z, t)
$$
are used. Moreover, for simplicity's sake, units of measurement are usually chosen such that the speed of light
$$
c
$$
is equal to 1.
A reference frame (observer) can be identified with one of these coordinate charts; any such observer can describe any event
$$
p
$$
. Another reference frame may be identified by a second coordinate chart about
$$
p
$$
. Two observers (one in each reference frame) may describe the same event
$$
p
$$
but obtain different descriptions.
Usually, many overlapping coordinate charts are needed to cover a manifold.
Given two coordinate charts, one containing
$$
p
$$
(representing an observer) and another containing
$$
q
$$
(representing another observer), the intersection of the charts represents the region of spacetime in which both observers can measure physical quantities and hence compare results. The relation between the two sets of measurements is given by a non-singular coordinate transformation on this intersection. The idea of coordinate charts as local observers who can perform measurements in their vicinity also makes good physical sense, as this is how one actually collects physical data—locally.
For example, two observers, one of whom is on Earth, but the other one who is on a fast rocket to Jupiter, may observe a comet crashing into Jupiter (this is the event
$$
p
$$
). In general, they will disagree about the exact location and timing of this impact, i.e., they will have different 4-tuples
$$
(x, y, z, t)
$$
(as they are using different coordinate systems). Although their kinematic descriptions will differ, dynamical (physical) laws, such as momentum conservation and the first law of thermodynamics, will still hold.
In fact, relativity theory requires more than this in the sense that it stipulates these (and all other physical) laws must take the same form in all coordinate systems. This introduces tensors into relativity, by which all physical quantities are represented.
Geodesics are said to be timelike, null, or spacelike if the tangent vector to one point of the geodesic is of this nature. Paths of particles and light beams in spacetime are represented by timelike and null (lightlike) geodesics, respectively.
### Privileged character of 3+1 spacetime
Data are a collection of discrete or continuous values that convey information, describing the quantity, quality, fact, statistics, other basic units of meaning, or simply sequences of symbols that may be further interpreted formally. A datum is an individual value in a collection of data. Data are usually organized into structures such as tables that provide additional context and meaning, and may themselves be used as data in larger structures. Data may be used as variables in a computational process. Data may represent abstract ideas or concrete measurements.
Data are commonly used in scientific research, economics, and virtually every other form of human organizational activity. Examples of data sets include price indices (such as the consumer price index), unemployment rates, literacy rates, and census data. In this context, data represent the raw facts and figures from which useful information can be extracted.
Data are collected using techniques such as measurement, observation, query, or analysis, and are typically represented as numbers or characters that may be further processed. Field data are data that are collected in an uncontrolled, in-situ environment. Experimental data are data that are generated in the course of a controlled scientific experiment. Data are analyzed using techniques such as calculation, reasoning, discussion, presentation, visualization, or other forms of post-analysis.
Prior to analysis, raw data (or unprocessed data) is typically cleaned: Outliers are removed, and obvious instrument or data entry errors are corrected.
Data can be seen as the smallest units of factual information that can be used as a basis for calculation, reasoning, or discussion. Data can range from abstract ideas to concrete measurements, including, but not limited to, statistics. Thematically connected data presented in some relevant context can be viewed as information. Contextually connected pieces of information can then be described as data insights or intelligence. The stock of insights and intelligence that accumulates over time from the synthesis of data into information can then be described as knowledge. Data has been described as "the new oil of the digital economy". Data, as a general concept, refers to the fact that some existing information or knowledge is represented or coded in some form suitable for better usage or processing.
Advances in computing technologies have led to the advent of big data, which usually refers to very large quantities of data, usually at the petabyte scale. Using traditional data analysis methods and computing, working with such large (and growing) datasets is difficult, even impossible.
(Theoretically speaking, infinite data would yield infinite information, which would render extracting insights or intelligence impossible.) In response, the relatively new field of data science uses machine learning (and other artificial intelligence) methods that allow for efficient applications of analytic methods to big data.
## Etymology and terminology
The Latin word data is the plural of datum, "(thing) given", and datum is the neuter past participle of dare, "to give". The first English use of the word "data" is from the 1640s. The word "data" was first used to mean "transmissible and storable computer information" in 1946. The expression "data processing" was first used in 1954.
When "data" is used more generally as a synonym for "information", it is treated as a mass noun in singular form. This usage is common in everyday language and in technical and scientific fields such as software development and computer science. One example of this usage is the term "big data". When used more specifically to refer to the processing and analysis of sets of data, the term retains its plural form.
This usage is common in the natural sciences, life sciences, social sciences, software development and computer science, and grew in popularity in the 20th and 21st centuries. Some style guides do not recognize the different meanings of the term and simply recommend the form that best suits the target audience of the guide. For example, APA style as of the 7th edition requires "data" to be treated as a plural form.
## Meaning
Data, information, knowledge, and wisdom are closely related concepts, but each has its role concerning the other, and each term has its meaning. According to a common view, data is collected and analyzed; data only becomes information suitable for making decisions once it has been analyzed in some fashion. One can say that the extent to which a set of data is informative to someone depends on the extent to which it is unexpected by that person. The amount of information contained in a data stream may be characterized by its Shannon entropy.
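As a minimal illustration of that last point, a sketch of how Shannon entropy can be computed for a stream of symbols (the helper below is illustrative):

```python
import math
from collections import Counter

def shannon_entropy(stream: str) -> float:
    """Shannon entropy of a symbol stream, in bits per symbol."""
    counts = Counter(stream)
    total = len(stream)
    return sum(-(n / total) * math.log2(n / total) for n in counts.values())

print(shannon_entropy("aaaaaaaa"))   # 0.0 -- perfectly predictable, carries no information
print(shannon_entropy("abcabcab"))   # ~1.56 bits per symbol
```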
Knowledge is the awareness of its environment that some entity possesses, whereas data merely communicates that knowledge. For example, the entry in a database specifying the height of Mount Everest is a datum that communicates a precisely measured value.
This measurement may be included in a book along with other data on Mount Everest to describe the mountain in a manner useful for those who wish to decide on the best method to climb it. Awareness of the characteristics represented by this data is knowledge.
Data are often assumed to be the least abstract concept, information the next least, and knowledge the most abstract. In this view, data becomes information by interpretation; e.g., the height of Mount Everest is generally considered "data", a book on Mount Everest geological characteristics may be considered "information", and a climber's guidebook containing practical information on the best way to reach Mount Everest's peak may be considered "knowledge". "Information" bears a diversity of meanings that range from everyday usage to technical use. This view, however, has also been argued to reverse how data emerges from information, and information from knowledge. Generally speaking, the concept of information is closely related to notions of constraint, communication, control, data, form, instruction, knowledge, meaning, mental stimulus, pattern, perception, and representation.
Beynon-Davies uses the concept of a sign to differentiate between data and information; data is a series of symbols, while information occurs when the symbols are used to refer to something.
Before the development of computing devices and machines, people had to manually collect data and impose patterns on it. With the development of computing devices and machines, these devices can also collect data. In the 2010s, computers were widely used in many fields to collect data and sort or process it, in disciplines ranging from marketing, analysis of social service usage by citizens to scientific research. These patterns in the data are seen as information that can be used to enhance knowledge. These patterns may be interpreted as "truth" (though "truth" can be a subjective concept) and may be authorized as aesthetic and ethical criteria in some disciplines or cultures. Events that leave behind perceivable physical or virtual remains can be traced back through data. Marks are no longer considered data once the link between the mark and observation is broken.
Mechanical computing devices are classified according to how they represent data.
Mechanical computing devices are classified according to how they represent data. An analog computer represents a datum as a voltage, distance, position, or other physical quantity. A digital computer represents a piece of data as a sequence of symbols drawn from a fixed alphabet. The most common digital computers use a binary alphabet, that is, an alphabet of two characters typically denoted "0" and "1". More familiar representations, such as numbers or letters, are then constructed from the binary alphabet. Some special forms of data are distinguished. A computer program is a collection of data, that can be interpreted as instructions. Most computer languages make a distinction between programs and the other data on which programs operate, but in some languages, notably Lisp and similar languages, programs are essentially indistinguishable from other data. It is also useful to distinguish metadata, that is, a description of other data. A similar yet earlier term for metadata is "ancillary data." The prototypical example of metadata is the library catalog, which is a description of the contents of books.
## Data sources
With respect to ownership of data collected in the course of marketing or other corporate collection, data has been characterized according to "party" depending on how close the data is to the source or if it has been generated through additional processing.
"Zero-party data" refers to data that customers "intentionally and proactively shares". This kind of data can come from a variety of sources, including: subscriptions, preference centers, quizzes, surveys, pop-up forms, and interactive digital experiences. "First-party data" may be collected by a company directly from its customers. The secure exchange of first-party data among companies can be done using data clean rooms. "Second-party data" refers to data obtained from other organizations or partners, through purchase or other means and has been described as "another organization's first-party data". "Third-party data" is data collected by other organizations and subsequently aggregated from different sources, websites, and platforms.
Summary of data sources:

| Data source | Owned by | Accuracy | Use case | Privacy risk |
|---|---|---|---|---|
| First-party | The business | High | Personalization, retargeting | Low |
| Second-party | Partner | Moderate | Partnership campaigns | Moderate |
| Third-party | External entity | Low | Broad targeting | High |
"No-party" data can sometimes refer to synthetic data that is generated based on patterns from original data.
## Data documents
Whenever data needs to be registered, data exists in the form of a data document. Kinds of data documents include:
- data repository
- data study
- data set
- software
- data paper
- database
- data handbook
- data journal
Some of these data documents (data repositories, data studies, data sets, and software) are indexed in Data Citation Indexes, while data papers are indexed in traditional bibliographic databases, e.g., Science Citation Index.
### Data collection
Gathering data can be accomplished through a primary source (the researcher is the first person to obtain the data) or a secondary source (the researcher obtains the data that has already been collected by other sources, such as data disseminated in a scientific journal). Data analysis methodologies vary and include data triangulation and data percolation. The latter offers an articulate method of collecting, classifying, and analyzing data using five possible angles of analysis (at least three) to maximize the research's objectivity and permit an understanding of the phenomena under investigation as complete as possible: qualitative and quantitative methods, literature reviews (including scholarly articles), interviews with experts, and computer simulation. The data is thereafter "percolated" using a series of pre-determined steps so as to extract the most relevant information.
## Data longevity and accessibility
An important field in computer science, technology, and library science is the longevity of data.
Scientific research generates huge amounts of data, especially in genomics and astronomy, but also in the medical sciences, e.g. in medical imaging. In the past, scientific data has been published in papers and books, stored in libraries, but more recently practically all data is stored on hard drives or optical discs. However, in contrast to paper, these storage devices may become unreadable after a few decades. Scientific publishers and libraries have been struggling with this problem for a few decades, and there is still no satisfactory solution for the long-term storage of data over centuries or even for eternity.
Data accessibility. Another problem is that much scientific data is never published or deposited in data repositories such as databases. In a recent survey, data was requested from 516 studies that were published between 2 and 22 years earlier, but less than one out of five of these studies were able or willing to provide the requested data. Overall, the likelihood of retrieving data dropped by 17% each year after publication. Similarly, a survey of 100 datasets in Dryad found that more than half lacked the details to reproduce the research results from these studies.
This shows the dire situation of access to scientific data that is not published or does not have enough details to be reproduced.
A solution to the problem of reproducibility is the attempt to require FAIR data, that is, data that is Findable, Accessible, Interoperable, and Reusable. Data that fulfills these requirements can be used in subsequent research and thus advances science and technology.
## In other fields
Although data is also increasingly used in other fields, it has been suggested that their highly interpretive nature might be at odds with the ethos of data as "given". Peter Checkland introduced the term capta (from the Latin capere, "to take") to distinguish between an immense number of possible data and a sub-set of them, to which attention is oriented. Johanna Drucker has argued that since the humanities affirm knowledge production as "situated, partial, and constitutive," using data may introduce assumptions that are counterproductive, for example, that phenomena are discrete or are observer-independent. The term capta, which emphasizes the act of observation as constitutive, is offered as an alternative to data for visual representations in the humanities.
The term data-driven is a neologism applied to an activity which is primarily compelled by data over all other factors. Data-driven applications include data-driven programming and data-driven journalism.
The wave equation is a second-order linear partial differential equation for the description of waves or standing wave fields such as mechanical waves (e.g. water waves, sound waves and seismic waves) or electromagnetic waves (including light waves). It arises in fields like acoustics, electromagnetism, and fluid dynamics.
This article focuses on waves in classical physics. Quantum physics uses an operator-based wave equation often as a relativistic wave equation.
## Introduction
The wave equation is a hyperbolic partial differential equation describing waves, including traveling and standing waves; the latter can be considered as linear superpositions of waves traveling in opposite directions. This article mostly focuses on the scalar wave equation describing waves in scalars by scalar functions of a time variable (a variable representing time) and one or more spatial variables (variables representing a position in a space under discussion). At the same time, there are vector wave equations describing waves in vectors such as waves for an electrical field, magnetic field, and magnetic vector potential and elastic waves.
By comparison with vector wave equations, the scalar wave equation can be seen as a special case of the vector wave equations; in the Cartesian coordinate system, the scalar wave equation is the equation to be satisfied by each component (for each coordinate axis, such as the x component for the x axis) of a vector wave without sources of waves in the considered domain (i.e., space and time). For example, in the Cartesian coordinate system, for
$$
(E_x, E_y, E_z)
$$
as the representation of an electric vector field wave
$$
\vec{E}
$$
in the absence of wave sources, each coordinate axis component
$$
E_i
$$
(i = x, y, z) must satisfy the scalar wave equation.
|
https://en.wikipedia.org/wiki/Wave_equation
|
By comparison with vector wave equations, the scalar wave equation can be seen as a special case of the vector wave equations; in the Cartesian coordinate system, the scalar wave equation is the equation to be satisfied by each component (for each coordinate axis, such as the x component for the x axis) of a vector wave without sources of waves in the considered domain (i.e., space and time). For example, in the Cartesian coordinate system, for
$$
(E_x, E_y, E_z)
$$
as the representation of an electric vector field wave
$$
\vec{E}
$$
in the absence of wave sources, each coordinate axis component
$$
E_i
$$
(i = x, y, z) must satisfy the scalar wave equation. Other scalar wave equation solutions are for physical quantities in scalars such as pressure in a liquid or gas, or the displacement along some specific direction of particles of a vibrating solid away from their resting (equilibrium) positions.
The scalar wave equation is
$$
\frac{\partial^2 u}{\partial t^2} = c^2 \left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2}\right),
$$
where
- c is a fixed non-negative real coefficient representing the propagation speed of the wave
- u is a scalar field representing the displacement or, more generally, the conserved quantity (e.g. pressure or density)
- x, y, and z are the three spatial coordinates, and t is the time coordinate.
|
https://en.wikipedia.org/wiki/Wave_equation
|
Other scalar wave equation solutions are for physical quantities in scalars such as pressure in a liquid or gas, or the displacement along some specific direction of particles of a vibrating solid away from their resting (equilibrium) positions.
The scalar wave equation is
$$
\frac{\partial^2 u}{\partial t^2} = c^2 \left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2}\right),
$$
where
- c is a fixed non-negative real coefficient representing the propagation speed of the wave
- u is a scalar field representing the displacement or, more generally, the conserved quantity (e.g. pressure or density)
- x, y, and z are the three spatial coordinates, and t is the time coordinate.
The equation states that, at any given point, the second derivative of
$$
u
$$
with respect to time is proportional to the sum of the second derivatives of
$$
u
$$
with respect to space, with the constant of proportionality being the square of the speed of the wave.
|
https://en.wikipedia.org/wiki/Wave_equation
|
The scalar wave equation is
$$
\frac{\partial^2 u}{\partial t^2} = c^2 \left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2}\right),
$$
where
- c is a fixed non-negative real coefficient representing the propagation speed of the wave
- u is a scalar field representing the displacement or, more generally, the conserved quantity (e.g. pressure or density)
- x, y, and z are the three spatial coordinates, and t is the time coordinate.
The equation states that, at any given point, the second derivative of
$$
u
$$
with respect to time is proportional to the sum of the second derivatives of
$$
u
$$
with respect to space, with the constant of proportionality being the square of the speed of the wave.
Using notations from vector calculus, the wave equation can be written compactly as
$$
u_{tt} = c^2 \Delta u,
$$
or
$$
\Box u = 0,
$$
where the double subscript denotes the second-order partial derivative with respect to time,
$$
\Delta
$$
is the Laplace operator and
$$
\Box
$$
the d'Alembert operator, defined as:
$$
u_{tt} = \frac{\partial^2 u}{\partial t^2}, \qquad \Delta = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}, \qquad \Box = \frac{1}{c^2} \frac{\partial^2}{\partial t^2} - \Delta.
$$
A solution to this (two-way) wave equation can be quite complicated.
|
https://en.wikipedia.org/wiki/Wave_equation
|
The equation states that, at any given point, the second derivative of
$$
u
$$
with respect to time is proportional to the sum of the second derivatives of
$$
u
$$
with respect to space, with the constant of proportionality being the square of the speed of the wave.
Using notations from vector calculus, the wave equation can be written compactly as
$$
u_{tt} = c^2 \Delta u,
$$
or
$$
\Box u = 0,
$$
where the double subscript denotes the second-order partial derivative with respect to time,
$$
\Delta
$$
is the Laplace operator and
$$
\Box
$$
the d'Alembert operator, defined as:
$$
u_{tt} = \frac{\partial^2 u}{\partial t^2}, \qquad \Delta = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}, \qquad \Box = \frac{1}{c^2} \frac{\partial^2}{\partial t^2} - \Delta.
$$
A solution to this (two-way) wave equation can be quite complicated. Still, it can be analyzed as a linear combination of simple solutions that are sinusoidal plane waves with various directions of propagation and wavelengths but all with the same propagation speed .
|
https://en.wikipedia.org/wiki/Wave_equation
|
Using notations from vector calculus, the wave equation can be written compactly as
$$
u_{tt} = c^2 \Delta u,
$$
or
$$
\Box u = 0,
$$
where the double subscript denotes the second-order partial derivative with respect to time,
$$
\Delta
$$
is the Laplace operator and
$$
\Box
$$
the d'Alembert operator, defined as:
$$
u_{tt} = \frac{\partial^2 u}{\partial t^2}, \qquad \Delta = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}, \qquad \Box = \frac{1}{c^2} \frac{\partial^2}{\partial t^2} - \Delta.
$$
A solution to this (two-way) wave equation can be quite complicated. Still, it can be analyzed as a linear combination of simple solutions that are sinusoidal plane waves with various directions of propagation and wavelengths but all with the same propagation speed c. This analysis is possible because the wave equation is linear and homogeneous, so that any multiple of a solution is also a solution, and the sum of any two solutions is again a solution.
|
https://en.wikipedia.org/wiki/Wave_equation
|
Still, it can be analyzed as a linear combination of simple solutions that are sinusoidal plane waves with various directions of propagation and wavelengths but all with the same propagation speed c. This analysis is possible because the wave equation is linear and homogeneous, so that any multiple of a solution is also a solution, and the sum of any two solutions is again a solution. This property is called the superposition principle in physics.
The wave equation alone does not specify a physical solution; a unique solution is usually obtained by setting a problem with further conditions, such as initial conditions, which prescribe the amplitude and phase of the wave. Another important class of problems occurs in enclosed spaces specified by boundary conditions, for which the solutions represent standing waves, or harmonics, analogous to the harmonics of musical instruments.
## Wave equation in one space dimension
The wave equation in one spatial dimension can be written as follows:
$$
\frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2}.
$$
This equation is typically described as having only one spatial dimension x, because the only other independent variable is the time t.
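For readers who want to experiment numerically, the following is a minimal sketch (not from the article) of a standard second-order leapfrog discretization of this one-dimensional equation; the grid, time step and Gaussian initial profile are illustrative choices.
```python
# Minimal sketch: explicit leapfrog scheme for u_tt = c^2 u_xx on [0, 1] with
# fixed (Dirichlet) ends. Grid size, time step and initial data are assumptions.
import numpy as np

c, nx, nt = 1.0, 201, 400
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.5 * dx / c                      # Courant number 0.5 keeps the scheme stable
r2 = (c * dt / dx) ** 2

u_prev = np.exp(-((x - 0.5) / 0.05) ** 2)   # initial displacement, zero initial velocity
u = u_prev.copy()
for _ in range(nt):
    u_next = np.zeros_like(u)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next              # endpoints stay at 0 (Dirichlet boundary)

print("max |u| after propagation:", np.abs(u).max())
```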
### Derivation
The wave equation in one space dimension can be derived in a variety of different physical settings.
|
https://en.wikipedia.org/wiki/Wave_equation
|
## Wave equation in one space dimension
The wave equation in one spatial dimension can be written as follows:
$$
\frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2}.
$$
This equation is typically described as having only one spatial dimension x, because the only other independent variable is the time t.
### Derivation
The wave equation in one space dimension can be derived in a variety of different physical settings. Most famously, it can be derived for the case of a string vibrating in a two-dimensional plane, with each of its elements being pulled in opposite directions by the force of tension.
Another physical setting for derivation of the wave equation in one space dimension uses Hooke's law. In the theory of elasticity, Hooke's law is an approximation for certain materials, stating that the amount by which a material body is deformed (the strain) is linearly related to the force causing the deformation (the stress).
#### Hooke's law
The wave equation in the one-dimensional case can be derived from Hooke's law in the following way: imagine an array of little weights of mass m interconnected with massless springs of length h.
|
https://en.wikipedia.org/wiki/Wave_equation
|
In the theory of elasticity, Hooke's law is an approximation for certain materials, stating that the amount by which a material body is deformed (the strain) is linearly related to the force causing the deformation (the stress).
The wave equation in the one-dimensional case can be derived from Hooke's law in the following way: imagine an array of little weights of mass m interconnected with massless springs of length h. The springs have a spring constant of k.
Here the dependent variable u(x) measures the distance from the equilibrium of the mass situated at x, so that u(x, t) essentially measures the magnitude of a disturbance (i.e. strain) that is traveling in an elastic material.
|
https://en.wikipedia.org/wiki/Wave_equation
|
The resulting force exerted on the mass at the location x + h is:
$$
\begin{align}
F_\text{Hooke} &= F_{x+2h} - F_x = k [u(x + 2h, t) - u(x + h, t)] - k[u(x + h,t) - u(x, t)].
\end{align}
$$
By equating the latter equation with
$$
\begin{align}
F_\text{Newton} &= m \, a(t) = m \, \frac{\partial^2}{\partial t^2} u(x + h, t),
\end{align}
$$
the equation of motion for the weight at the location x + h is obtained:
$$
\frac{\partial^2}{\partial t^2} u(x + h, t) = \frac{k}{m} [u(x + 2h, t) - u(x + h, t) - u(x + h, t) + u(x, t)].
$$
If the array of weights consists of N weights spaced evenly over the length L = Nh of total mass M = Nm, and the total spring constant of the array is K = k/N, we can write the above equation as
$$
\frac{\partial^2}{\partial t^2} u(x + h, t) = \frac{KL^2}{M} \frac{[u(x + 2h, t) - 2u(x + h, t) + u(x, t)]}{h^2}.
$$
Taking the limit N → ∞, h → 0 and assuming smoothness, one gets
$$
\frac{\partial^2 u(x, t)}{\partial t^2} = \frac{KL^2}{M} \frac{\partial^2 u(x, t)}{\partial x^2},
$$
which follows from the definition of the second derivative.
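The limiting step above can be illustrated numerically: the sketch below (an illustrative check, not part of the source) shows the second difference quotient used in the derivation approaching the second derivative as h shrinks, with sin(x) as an assumed test function.
```python
# Minimal numeric sketch: the centered second difference
# [u(x + 2h) - 2u(x + h) + u(x)] / h^2 approaches the second derivative as h -> 0.
import numpy as np

x0 = 0.7
for h in (1e-1, 1e-2, 1e-3):
    second_diff = (np.sin(x0 + 2*h) - 2*np.sin(x0 + h) + np.sin(x0)) / h**2
    # the stencil is centered at x0 + h, where the exact second derivative is -sin(x0 + h)
    print(h, second_diff, -np.sin(x0 + h))
```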
|
https://en.wikipedia.org/wiki/Wave_equation
|
Here KL²/M is the square of the propagation speed in this particular case.
#### Stress pulse in a bar
In the case of a stress pulse propagating longitudinally through a bar, the bar acts much like an infinite number of springs in series and can be taken as an extension of the equation derived for Hooke's law. A uniform bar, i.e. of constant cross-section, made from a linear elastic material has a stiffness given by
$$
K = \frac{EA}{L},
$$
where A is the cross-sectional area, and E is the Young's modulus of the material. The wave equation becomes
$$
\frac{\partial^2 u(x, t)}{\partial t^2} = \frac{EAL}{M} \frac{\partial^2 u(x, t)}{\partial x^2}.
$$
AL is equal to the volume of the bar, and therefore
$$
\frac{AL}{M} = \frac{1}{\rho},
$$
where ρ is the density of the material.
|
https://en.wikipedia.org/wiki/Wave_equation
|
A uniform bar, i.e. of constant cross-section, made from a linear elastic material has a stiffness given by
$$
K = \frac{EA}{L},
$$
where A is the cross-sectional area, and E is the Young's modulus of the material. The wave equation becomes
$$
\frac{\partial^2 u(x, t)}{\partial t^2} = \frac{EAL}{M} \frac{\partial^2 u(x, t)}{\partial x^2}.
$$
AL is equal to the volume of the bar, and therefore
$$
\frac{AL}{M} = \frac{1}{\rho},
$$
where ρ is the density of the material. The wave equation reduces to
$$
\frac{\partial^2 u(x, t)}{\partial t^2} = \frac{E}{\rho} \frac{\partial^2 u(x, t)}{\partial x^2}.
$$
The speed of a stress wave in a bar is therefore
$$
\sqrt{E/\rho}
$$
.
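As a rough numerical illustration (the material constants are approximate textbook values for steel, assumed here rather than taken from the article), the stress-wave speed √(E/ρ) can be evaluated directly:
```python
# Minimal numeric sketch: longitudinal stress-wave speed sqrt(E/rho) for a
# steel-like bar; the material constants are rough assumed values.
E = 200e9      # Young's modulus in Pa (approximate, steel)
rho = 7850.0   # density in kg/m^3 (approximate, steel)
c = (E / rho) ** 0.5
print(f"stress-wave speed ~ {c:.0f} m/s")   # on the order of 5 km/s
```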
### General solution
#### Algebraic approach
For the one-dimensional wave equation a relatively simple general solution may be found.
|
https://en.wikipedia.org/wiki/Wave_equation
|
### General solution
#### Algebraic approach
For the one-dimensional wave equation a relatively simple general solution may be found. Defining new variables
$$
\begin{align}
\xi &= x - c t, \\
\eta &= x + c t
\end{align}
$$
changes the wave equation into
$$
\frac{\partial^2 u}{\partial \xi \partial \eta}(x, t) = 0,
$$
which leads to the general solution
$$
u(x, t) = F(\xi) + G(\eta) = F(x - c t) + G(x + c t).
$$
In other words, the solution is the sum of a right-traveling function F and a left-traveling function G. "Traveling" means that the shape of these individual arbitrary functions with respect to x stays constant; however, the functions are translated left and right with time at the speed c. This was derived by Jean le Rond d'Alembert.
Another way to arrive at this result is to factor the wave equation using two first-order differential operators:
$$
\left[\frac{\partial}{\partial t} - c\frac{\partial}{\partial x}\right] \left[\frac{\partial}{\partial t} + c\frac{\partial}{\partial x}\right] u = 0.
$$
|
https://en.wikipedia.org/wiki/Wave_equation
|
"Traveling" means that the shape of these individual arbitrary functions with respect to stays constant, however, the functions are translated left and right with time at the speed . This was derived by Jean le Rond d'Alembert.
Another way to arrive at this result is to factor the wave equation using two first-order differential operators:
$$
\left[\frac{\partial}{\partial t} - c\frac{\partial}{\partial x}\right] \left[\frac{\partial}{\partial t} + c\frac{\partial}{\partial x}\right] u = 0.
$$
Then, for our original equation, we can define
$$
v \equiv \frac{\partial u}{\partial t} + c\frac{\partial u}{\partial x},
$$
and find that we must have
$$
\frac{\partial v}{\partial t} - c\frac{\partial v}{\partial x} = 0.
$$
This advection equation can be solved by interpreting it as telling us that the directional derivative of v in the (1, −c) direction is 0. This means that the value of v is constant on characteristic lines of the form x + ct = constant, and thus that v must depend only on x + ct, that is, have the form H(x + ct). Then, to solve the first (inhomogeneous) equation relating v to u, we can note that its homogeneous solution must be a function of the form F(x − ct), by logic similar to the above.
|
https://en.wikipedia.org/wiki/Wave_equation
|
This means that the value of v is constant on characteristic lines of the form x + ct = constant, and thus that v must depend only on x + ct, that is, have the form H(x + ct). Then, to solve the first (inhomogeneous) equation relating v to u, we can note that its homogeneous solution must be a function of the form F(x − ct), by logic similar to the above. Guessing a particular solution of the form G(x + ct), we find that
$$
\left[\frac{\partial}{\partial t} + c\frac{\partial}{\partial x}\right] G(x + ct) = H(x + ct).
$$
Expanding out the left side, rearranging terms, then using the change of variables simplifies the equation to
$$
G'(s) = \frac{H(s)}{2c}.
$$
This means we can find a particular solution of the desired form by integration. Thus, we have again shown that u obeys u(x, t) = F(x − ct) + G(x + ct).
|
https://en.wikipedia.org/wiki/Wave_equation
|
Guessing a particular solution of the form G(x + ct), we find that
$$
\left[\frac{\partial}{\partial t} + c\frac{\partial}{\partial x}\right] G(x + ct) = H(x + ct).
$$
Expanding out the left side, rearranging terms, then using the change of variables simplifies the equation to
$$
G'(s) = \frac{H(s)}{2c}.
$$
This means we can find a particular solution of the desired form by integration. Thus, we have again shown that u obeys u(x, t) = F(x − ct) + G(x + ct).
For an initial-value problem, the arbitrary functions F and G can be determined to satisfy initial conditions:
$$
u(x, 0) = f(x),
$$
$$
u_t(x, 0) = g(x).
$$
The result is d'Alembert's formula:
$$
u(x, t) = \frac{f(x - ct) + f(x + ct)}{2} + \frac{1}{2c} \int_{x-ct}^{x+ct} g(s) \, ds.
$$
In the classical sense, if f(x) ∈ C^k and g(x) ∈ C^(k−1), then u(t, x) ∈ C^k. However, the waveforms F and G may also be generalized functions, such as the delta-function.
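A direct numerical sketch of d'Alembert's formula is given below; the quadrature rule and the sample initial data f and g are illustrative assumptions, not part of the source.
```python
# Minimal sketch of d'Alembert's formula:
# u(x, t) = [f(x - ct) + f(x + ct)]/2 + (1/2c) * integral_{x-ct}^{x+ct} g(s) ds.
import numpy as np

def dalembert(f, g, x, t, c=1.0, n=2001):
    s = np.linspace(x - c * t, x + c * t, n)          # integration nodes
    integral = np.trapz(g(s), s)                      # trapezoidal quadrature
    return 0.5 * (f(x - c * t) + f(x + c * t)) + integral / (2.0 * c)

f = lambda x: np.exp(-x**2)                               # initial displacement
g = lambda x: np.zeros_like(np.asarray(x, dtype=float))   # initial velocity
print(dalembert(f, g, x=0.3, t=1.0))   # two half-height pulses moving apart
```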
|
https://en.wikipedia.org/wiki/Wave_equation
|
For an initial-value problem, the arbitrary functions F and G can be determined to satisfy initial conditions:
$$
u(x, 0) = f(x),
$$
$$
u_t(x, 0) = g(x).
$$
The result is d'Alembert's formula:
$$
u(x, t) = \frac{f(x - ct) + f(x + ct)}{2} + \frac{1}{2c} \int_{x-ct}^{x+ct} g(s) \, ds.
$$
In the classical sense, if f(x) ∈ C^k and g(x) ∈ C^(k−1), then u(t, x) ∈ C^k. However, the waveforms F and G may also be generalized functions, such as the delta-function. In that case, the solution may be interpreted as an impulse that travels to the right or the left.
The basic wave equation is a linear differential equation, and so it will adhere to the superposition principle. This means that the net displacement caused by two or more waves is the sum of the displacements which would have been caused by each wave individually. In addition, the behavior of a wave can be analyzed by breaking up the wave into components, e.g. the Fourier transform breaks up a wave into sinusoidal components.
|
https://en.wikipedia.org/wiki/Wave_equation
|
This means that the net displacement caused by two or more waves is the sum of the displacements which would have been caused by each wave individually. In addition, the behavior of a wave can be analyzed by breaking up the wave into components, e.g. the Fourier transform breaks up a wave into sinusoidal components.
#### Plane-wave eigenmodes
Another way to solve the one-dimensional wave equation is to first analyze its frequency eigenmodes.
|
https://en.wikipedia.org/wiki/Wave_equation
|
In addition, the behavior of a wave can be analyzed by breaking up the wave into components, e.g. the Fourier transform breaks up a wave into sinusoidal components.
#### Plane-wave eigenmodes
Another way to solve the one-dimensional wave equation is to first analyze its frequency eigenmodes. A so-called eigenmode is a solution that oscillates in time with a well-defined constant angular frequency , so that the temporal part of the wave function takes the form , and the amplitude is a function of the spatial variable , giving a separation of variables for the wave function:
$$
u_\omega(x, t) = e^{-i\omega t} f(x).
$$
This produces an ordinary differential equation for the spatial part :
$$
\frac{\partial^2 u_\omega }{\partial t^2} = \frac{\partial^2}{\partial t^2} \left(e^{-i\omega t} f(x)\right) = -\omega^2 e^{-i\omega t} f(x) = c^2 \frac{\partial^2}{\partial x^2} \left(e^{-i\omega t} f(x)\right).
$$
Therefore,
$$
\frac{d^2}{dx^2}f(x) = -\left(\frac{\omega}{c}\right)^2 f(x),
$$
which is precisely an eigenvalue equation for f(x), hence the name eigenmode.
|
https://en.wikipedia.org/wiki/Wave_equation
|
#### Plane-wave eigenmodes
Another way to solve the one-dimensional wave equation is to first analyze its frequency eigenmodes. A so-called eigenmode is a solution that oscillates in time with a well-defined constant angular frequency , so that the temporal part of the wave function takes the form , and the amplitude is a function of the spatial variable , giving a separation of variables for the wave function:
$$
u_\omega(x, t) = e^{-i\omega t} f(x).
$$
This produces an ordinary differential equation for the spatial part :
$$
\frac{\partial^2 u_\omega }{\partial t^2} = \frac{\partial^2}{\partial t^2} \left(e^{-i\omega t} f(x)\right) = -\omega^2 e^{-i\omega t} f(x) = c^2 \frac{\partial^2}{\partial x^2} \left(e^{-i\omega t} f(x)\right).
$$
Therefore,
$$
\frac{d^2}{dx^2}f(x) = -\left(\frac{\omega}{c}\right)^2 f(x),
$$
which is precisely an eigenvalue equation for f(x), hence the name eigenmode. Known as the Helmholtz equation, it has the well-known plane-wave solutions
$$
f(x) = A e^{\pm ikx},
$$
with wave number k = ω/c.
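The eigenmode statements above can be verified symbolically; in this minimal SymPy sketch (the amplitude symbol A is an illustrative assumption) the plane wave f(x) = A e^(ikx) with k = ω/c satisfies the Helmholtz equation, and e^(−iωt) f(x) satisfies the wave equation.
```python
# Minimal sketch: the plane-wave eigenmode solves the Helmholtz equation and,
# combined with its temporal factor, the one-dimensional wave equation.
import sympy as sp

x, t, A = sp.symbols('x t A')
omega, c = sp.symbols('omega c', positive=True)
k = omega / c

f = A * sp.exp(sp.I * k * x)
assert sp.simplify(sp.diff(f, x, 2) + (omega / c)**2 * f) == 0   # Helmholtz equation

u = sp.exp(-sp.I * omega * t) * f
assert sp.simplify(sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)) == 0  # wave equation
print("eigenmode checks pass")
```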
|
https://en.wikipedia.org/wiki/Wave_equation
|
A so-called eigenmode is a solution that oscillates in time with a well-defined constant angular frequency ω, so that the temporal part of the wave function takes the form e^(−iωt), and the amplitude is a function of the spatial variable x, giving a separation of variables for the wave function:
$$
u_\omega(x, t) = e^{-i\omega t} f(x).
$$
This produces an ordinary differential equation for the spatial part :
$$
\frac{\partial^2 u_\omega }{\partial t^2} = \frac{\partial^2}{\partial t^2} \left(e^{-i\omega t} f(x)\right) = -\omega^2 e^{-i\omega t} f(x) = c^2 \frac{\partial^2}{\partial x^2} \left(e^{-i\omega t} f(x)\right).
$$
Therefore,
$$
\frac{d^2}{dx^2}f(x) = -\left(\frac{\omega}{c}\right)^2 f(x),
$$
which is precisely an eigenvalue equation for f(x), hence the name eigenmode. Known as the Helmholtz equation, it has the well-known plane-wave solutions
$$
f(x) = A e^{\pm ikx},
$$
with wave number k = ω/c.
The total wave function for this eigenmode is then the linear combination
$$
u_\omega(x, t) = e^{-i\omega t} \left(A e^{-ikx} + B e^{ikx}\right) = A e^{-i (kx + \omega t)} + B e^{i (kx - \omega t)},
$$
where the complex numbers A, B depend in general on any initial and boundary conditions of the problem.
|
https://en.wikipedia.org/wiki/Wave_equation
|
Eigenmodes are useful in constructing a full solution to the wave equation, because each of them evolves in time trivially with the phase factor
$$
e^{-i\omega t},
$$
so that a full solution can be decomposed into an eigenmode expansion:
$$
u(x, t) = \int_{-\infty}^\infty s(\omega) u_\omega(x, t) \, d\omega,
$$
or in terms of the plane waves,
$$
\begin{align}
u(x, t) &= \int_{-\infty}^\infty s_+(\omega) e^{-i(kx+\omega t)} \, d\omega + \int_{-\infty}^\infty s_-(\omega) e^{i(kx-\omega t)} \, d\omega \\
&= \int_{-\infty}^\infty s_+(\omega) e^{-ik(x+ct)} \, d\omega + \int_{-\infty}^\infty s_-(\omega) e^{ik (x-ct)} \, d\omega \\
&= F(x - ct) + G(x + ct),
\end{align}
$$
which is exactly in the same form as in the algebraic approach.
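The eigenmode expansion can be carried out numerically with an FFT. The sketch below is illustrative, not from the article: it assumes a periodic domain, zero initial velocity (so each mode is advanced by cos(ωt) with ω = c|k|), and a Gaussian packet, and compares the result with the d'Alembert form F(x − ct) + G(x + ct).
```python
# Minimal sketch of an eigenmode (Fourier) expansion: each spatial mode e^{ikx}
# is advanced in time and the result is compared with the traveling-wave form.
import numpy as np

c, n, L = 1.0, 512, 20.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)          # angular wavenumbers

u0 = np.exp(-x**2)                                  # initial displacement, zero velocity
t = 3.0

# zero initial velocity splits the packet evenly into left- and right-movers,
# so each Fourier mode is advanced with cos(omega t), omega = c*|k|
u_hat = np.fft.fft(u0) * np.cos(c * np.abs(k) * t)
u_spectral = np.real(np.fft.ifft(u_hat))

u_dalembert = 0.5 * (np.exp(-(x - c * t)**2) + np.exp(-(x + c * t)**2))
print("max difference:", np.max(np.abs(u_spectral - u_dalembert)))  # near machine precision
```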
|
https://en.wikipedia.org/wiki/Wave_equation
|
The functions s_±(ω) are known as the Fourier components and are determined by initial and boundary conditions. This is a so-called frequency-domain method, an alternative to direct time-domain propagations, such as the FDTD method, of the wave packet u(x, t), which is complete for representing waves in the absence of time dilations. Completeness of the Fourier expansion for representing waves in the presence of time dilations has been challenged by chirp wave solutions allowing for time variation of ω. The chirp wave solutions seem particularly implied by very large but previously inexplicable radar residuals in the flyby anomaly and differ from the sinusoidal solutions in being receivable at any distance only at proportionally shifted frequencies and time dilations, corresponding to past chirp states of the source.
## Vectorial wave equation in three space dimensions
The vectorial wave equation (from which the scalar wave equation can be directly derived) can be obtained by applying a force equilibrium to an infinitesimal volume element.
|
https://en.wikipedia.org/wiki/Wave_equation
|
The chirp wave solutions seem particularly implied by very large but previously inexplicable radar residuals in the flyby anomaly and differ from the sinusoidal solutions in being receivable at any distance only at proportionally shifted frequencies and time dilations, corresponding to past chirp states of the source.
## Vectorial wave equation in three space dimensions
The vectorial wave equation (from which the scalar wave equation can be directly derived) can be obtained by applying a force equilibrium to an infinitesimal volume element. If the medium has a modulus of elasticity
$$
E
$$
that is homogeneous (i.e. independent of
$$
\mathbf{x}
$$
) within the volume element, then its stress tensor is given by
$$
\mathbf{T} = E \nabla \mathbf{u}
$$
, for a vectorial elastic deflection
$$
\mathbf{u}(\mathbf{x}, t)
$$
. The local equilibrium of:
1. the tension force
$$
\operatorname{div} \mathbf{T} = \nabla\cdot(E \nabla \mathbf{u}) = E \Delta\mathbf{u}
$$
due to deflection
$$
\mathbf{u}
$$
, and
1.
|
https://en.wikipedia.org/wiki/Wave_equation
|
If the medium has a modulus of elasticity
$$
E
$$
that is homogeneous (i.e. independent of
$$
\mathbf{x}
$$
) within the volume element, then its stress tensor is given by
$$
\mathbf{T} = E \nabla \mathbf{u}
$$
, for a vectorial elastic deflection
$$
\mathbf{u}(\mathbf{x}, t)
$$
. The local equilibrium of:
1. the tension force
$$
\operatorname{div} \mathbf{T} = \nabla\cdot(E \nabla \mathbf{u}) = E \Delta\mathbf{u}
$$
due to deflection
$$
\mathbf{u}
$$
, and
1. the inertial force
$$
\rho \partial^2\mathbf{u}/\partial t^2
$$
caused by the local acceleration
$$
\partial^2\mathbf{u} / \partial t^2
$$
can be written as
$$
\rho \frac{\partial^2 \mathbf{u}}{\partial t^2} - E \Delta \mathbf{u} = \mathbf{0}.
$$
By merging density
$$
\rho
$$
and elasticity modulus
$$
E,
$$
|
https://en.wikipedia.org/wiki/Wave_equation
|
The local equilibrium of:
1. the tension force
$$
\operatorname{div} \mathbf{T} = \nabla\cdot(E \nabla \mathbf{u}) = E \Delta\mathbf{u}
$$
due to deflection
$$
\mathbf{u}
$$
, and
1. the inertial force
$$
\rho \partial^2\mathbf{u}/\partial t^2
$$
caused by the local acceleration
$$
\partial^2\mathbf{u} / \partial t^2
$$
can be written as
$$
\rho \frac{\partial^2 \mathbf{u}}{\partial t^2} - E \Delta \mathbf{u} = \mathbf{0}.
$$
By merging density
$$
\rho
$$
and elasticity modulus
$$
E,
$$
the sound velocity
$$
c = \sqrt{E/\rho}
$$
results (the material law).
|
https://en.wikipedia.org/wiki/Wave_equation
|
the inertial force
$$
\rho \partial^2\mathbf{u}/\partial t^2
$$
caused by the local acceleration
$$
\partial^2\mathbf{u} / \partial t^2
$$
can be written as
$$
\rho \frac{\partial^2 \mathbf{u}}{\partial t^2} - E \Delta \mathbf{u} = \mathbf{0}.
$$
By merging density
$$
\rho
$$
and elasticity modulus
$$
E,
$$
the sound velocity
$$
c = \sqrt{E/\rho}
$$
results (the material law). After insertion, the well-known governing wave equation for a homogeneous medium follows:
$$
\frac{\partial^2 \mathbf{u}}{\partial t^2} - c^2 \Delta \mathbf{u} = \boldsymbol{0}.
$$
(Note: Instead of vectorial
$$
\mathbf{u}(\mathbf{x}, t),
$$
only scalar
$$
u(x, t)
$$
can be used when waves are travelling only along the
$$
x
$$
axis, and the scalar wave equation follows as
$$
\frac{\partial^2 u}{\partial t^2} - c^2 \frac{\partial^2 u}{\partial x^2} = 0
$$
.)
|
https://en.wikipedia.org/wiki/Wave_equation
|
the sound velocity
$$
c = \sqrt{E/\rho}
$$
results (the material law). After insertion, the well-known governing wave equation for a homogeneous medium follows:
$$
\frac{\partial^2 \mathbf{u}}{\partial t^2} - c^2 \Delta \mathbf{u} = \boldsymbol{0}.
$$
(Note: Instead of vectorial
$$
\mathbf{u}(\mathbf{x}, t),
$$
only scalar
$$
u(x, t)
$$
can be used when waves are travelling only along the
$$
x
$$
axis, and the scalar wave equation follows as
$$
\frac{\partial^2 u}{\partial t^2} - c^2 \frac{\partial^2 u}{\partial x^2} = 0
$$
.)
The above second-order vectorial partial differential equation delivers two mutually independent solutions. From the quadratic velocity term
$$
c^2 = (+c)^2 = (-c)^2
$$
it can be seen that two waves travelling in the opposite directions
$$
+c
$$
and
$$
-c
$$
are possible, hence the designation “two-way wave equation”.
|
https://en.wikipedia.org/wiki/Wave_equation
|
The above second-order vectorial partial differential equation delivers two mutually independent solutions. From the quadratic velocity term
$$
c^2 = (+c)^2 = (-c)^2
$$
it can be seen that two waves travelling in the opposite directions
$$
+c
$$
and
$$
-c
$$
are possible, hence the designation “two-way wave equation”.
It can be shown for plane longitudinal wave propagation that the synthesis of two one-way wave equations leads to a general two-way wave equation.
|
https://en.wikipedia.org/wiki/Wave_equation
|
For
$$
\nabla\mathbf{c} = \mathbf{0},
$$
the special two-way wave equation with the d'Alembert operator results:
$$
\left(\frac{\partial}{\partial t} - \mathbf{c} \cdot \nabla\right)\left(\frac{\partial}{\partial t} + \mathbf{c} \cdot \nabla \right) \mathbf{u} =
\left(\frac{\partial^2}{\partial t^2} - (\mathbf{c} \cdot \nabla)(\mathbf{c} \cdot \nabla)\right) \mathbf{u} =
\left(\frac{\partial^2}{\partial t^2} - (\mathbf{c} \cdot \nabla)^2\right) \mathbf{u} = \mathbf{0}.
$$
For
$$
\nabla \mathbf{c} = \mathbf{0},
$$
this simplifies to
$$
\left(\frac{\partial^2}{\partial t^2} - c^2\Delta\right) \mathbf{u} = \mathbf{0}.
$$
Therefore, the vectorial 1st-order one-way wave equation with waves travelling in a pre-defined propagation direction
$$
\mathbf{c}
$$
results as
$$
\frac{\partial \mathbf{u}}{\partial t} - \mathbf{c} \cdot \nabla \mathbf{u} = \mathbf{0}.
$$
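A minimal numerical sketch of a one-way wave equation in one dimension follows. Sign conventions vary in the literature; the form used here, u_t + c u_x = 0, carries the profile only in the +x direction, and the first-order upwind scheme and initial pulse are illustrative assumptions rather than anything prescribed by the article.
```python
# Minimal sketch: first-order upwind scheme for the 1D one-way wave equation
# u_t + c u_x = 0, which only transports the profile to the right.
import numpy as np

c, nx, nt = 1.0, 400, 300
x = np.linspace(0.0, 4.0, nx)
dx = x[1] - x[0]
dt = 0.8 * dx / c                       # Courant number 0.8

u = np.exp(-((x - 1.0) / 0.1) ** 2)     # initial pulse centered at x = 1
for _ in range(nt):
    u[1:] = u[1:] - c * dt / dx * (u[1:] - u[:-1])   # upwind difference
    u[0] = 0.0                                       # inflow boundary

print("pulse peak is now near x =", x[np.argmax(u)])  # moved right by roughly c * nt * dt
```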
|
https://en.wikipedia.org/wiki/Wave_equation
|
## Scalar wave equation in three space dimensions
A solution of the initial-value problem for the wave equation in three space dimensions can be obtained from the corresponding solution for a spherical wave. The result can then be also used to obtain the same solution in two space dimensions.
### Spherical waves
To obtain a solution with constant frequencies, apply the Fourier transform
$$
\Psi(\mathbf{r}, t) = \int_{-\infty}^\infty \Psi(\mathbf{r}, \omega) e^{-i\omega t} \, d\omega,
$$
which transforms the wave equation into an elliptic partial differential equation of the form:
$$
\left(\nabla^2 + \frac{\omega^2}{c^2}\right) \Psi(\mathbf{r}, \omega) = 0.
$$
This is the Helmholtz equation and can be solved using separation of variables.
|
https://en.wikipedia.org/wiki/Wave_equation
|
The result can then be also used to obtain the same solution in two space dimensions.
### Spherical waves
To obtain a solution with constant frequencies, apply the Fourier transform
$$
\Psi(\mathbf{r}, t) = \int_{-\infty}^\infty \Psi(\mathbf{r}, \omega) e^{-i\omega t} \, d\omega,
$$
which transforms the wave equation into an elliptic partial differential equation of the form:
$$
\left(\nabla^2 + \frac{\omega^2}{c^2}\right) \Psi(\mathbf{r}, \omega) = 0.
$$
This is the Helmholtz equation and can be solved using separation of variables. In spherical coordinates this leads to a separation of the radial and angular variables, writing the solution as:
$$
\Psi(\mathbf{r}, \omega) = \sum_{l,m} f_{lm}(r) Y_{lm}(\theta, \phi).
$$
The angular part of the solution takes the form of spherical harmonics and the radial function satisfies:
$$
\left[\frac{d^2}{dr^2} + \frac{2}{r} \frac{d}{dr} + k^2 - \frac{l(l + 1)}{r^2}\right] f_l(r) = 0.
$$
independent of
$$
m
$$
, with
$$
k^2=\omega^2 / c^2
$$
.
|
https://en.wikipedia.org/wiki/Wave_equation
|
### Spherical waves
To obtain a solution with constant frequencies, apply the Fourier transform
$$
\Psi(\mathbf{r}, t) = \int_{-\infty}^\infty \Psi(\mathbf{r}, \omega) e^{-i\omega t} \, d\omega,
$$
which transforms the wave equation into an elliptic partial differential equation of the form:
$$
\left(\nabla^2 + \frac{\omega^2}{c^2}\right) \Psi(\mathbf{r}, \omega) = 0.
$$
This is the Helmholtz equation and can be solved using separation of variables. In spherical coordinates this leads to a separation of the radial and angular variables, writing the solution as:
$$
\Psi(\mathbf{r}, \omega) = \sum_{l,m} f_{lm}(r) Y_{lm}(\theta, \phi).
$$
The angular part of the solution takes the form of spherical harmonics and the radial function satisfies:
$$
\left[\frac{d^2}{dr^2} + \frac{2}{r} \frac{d}{dr} + k^2 - \frac{l(l + 1)}{r^2}\right] f_l(r) = 0.
$$
independent of
$$
m
$$
, with
$$
k^2=\omega^2 / c^2
$$
. Substituting
$$
f_{l}(r)=\frac{1}{\sqrt{r}}u_{l}(r),
$$
transforms the equation into
$$
\left[\frac{d^2}{dr^2} + \frac{1}{r} \frac{d}{dr} + k^2 - \frac{(l + \frac{1}{2})^2}{r^2}\right] u_l(r) = 0,
$$
which is the Bessel equation.
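The substitution can be checked symbolically; the following SymPy sketch (illustrative, not from the article) confirms that f_l(r) = u_l(r)/√r converts the radial equation into the Bessel-type equation quoted above.
```python
# Minimal sketch: with f = u/sqrt(r), the radial equation
# f'' + (2/r) f' + (k^2 - l(l+1)/r^2) f = 0 becomes
# u'' + (1/r) u' + (k^2 - (l + 1/2)^2 / r^2) u = 0.
import sympy as sp

r, k, l = sp.symbols('r k l', positive=True)
u = sp.Function('u')

f = u(r) / sp.sqrt(r)
radial_eq = sp.diff(f, r, 2) + 2/r*sp.diff(f, r) + (k**2 - l*(l + 1)/r**2)*f
bessel_eq = sp.diff(u(r), r, 2) + sp.diff(u(r), r)/r + (k**2 - (l + sp.Rational(1, 2))**2/r**2)*u(r)

# the two expressions agree up to the overall factor 1/sqrt(r)
assert sp.simplify(sp.expand(radial_eq * sp.sqrt(r) - bessel_eq)) == 0
print("substitution reproduces the Bessel-type equation")
```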
|
https://en.wikipedia.org/wiki/Wave_equation
|
In spherical coordinates this leads to a separation of the radial and angular variables, writing the solution as:
$$
\Psi(\mathbf{r}, \omega) = \sum_{l,m} f_{lm}(r) Y_{lm}(\theta, \phi).
$$
The angular part of the solution takes the form of spherical harmonics and the radial function satisfies:
$$
\left[\frac{d^2}{dr^2} + \frac{2}{r} \frac{d}{dr} + k^2 - \frac{l(l + 1)}{r^2}\right] f_l(r) = 0.
$$
independent of
$$
m
$$
, with
$$
k^2=\omega^2 / c^2
$$
. Substituting
$$
f_{l}(r)=\frac{1}{\sqrt{r}}u_{l}(r),
$$
transforms the equation into
$$
\left[\frac{d^2}{dr^2} + \frac{1}{r} \frac{d}{dr} + k^2 - \frac{(l + \frac{1}{2})^2}{r^2}\right] u_l(r) = 0,
$$
which is the Bessel equation.
#### Example
Consider the case l = 0.
|
https://en.wikipedia.org/wiki/Wave_equation
|
Substituting
$$
f_{l}(r)=\frac{1}{\sqrt{r}}u_{l}(r),
$$
transforms the equation into
$$
\left[\frac{d^2}{dr^2} + \frac{1}{r} \frac{d}{dr} + k^2 - \frac{(l + \frac{1}{2})^2}{r^2}\right] u_l(r) = 0,
$$
which is the Bessel equation.
#### Example
Consider the case l = 0. Then there is no angular dependence and the amplitude depends only on the radial distance, i.e., Ψ(r, t) → u(r, t).
|
https://en.wikipedia.org/wiki/Wave_equation
|
#### Example
Consider the case l = 0. Then there is no angular dependence and the amplitude depends only on the radial distance, i.e., Ψ(r, t) → u(r, t). In this case, the wave equation reduces to
$$
\left(\nabla^2 - \frac{1}{c^2} \frac{\partial^2 }{\partial t^2}\right) \Psi(\mathbf{r}, t) = 0,
$$
or
$$
\left(\frac{\partial^2}{\partial r^2} + \frac{2}{r} \frac{\partial}{\partial r} - \frac{1}{c^2} \frac{\partial^2}{\partial t^2}\right) u(r, t) = 0.
$$
This equation can be rewritten as
$$
\frac{\partial^2(ru)}{\partial t^2} - c^2 \frac{\partial^2(ru)}{\partial r^2} = 0,
$$
where the quantity ru satisfies the one-dimensional wave equation.
|
https://en.wikipedia.org/wiki/Wave_equation
|
Then there is no angular dependence and the amplitude depends only on the radial distance, i.e., Ψ(r, t) → u(r, t). In this case, the wave equation reduces to
$$
\left(\nabla^2 - \frac{1}{c^2} \frac{\partial^2 }{\partial t^2}\right) \Psi(\mathbf{r}, t) = 0,
$$
or
$$
\left(\frac{\partial^2}{\partial r^2} + \frac{2}{r} \frac{\partial}{\partial r} - \frac{1}{c^2} \frac{\partial^2}{\partial t^2}\right) u(r, t) = 0.
$$
This equation can be rewritten as
$$
\frac{\partial^2(ru)}{\partial t^2} - c^2 \frac{\partial^2(ru)}{\partial r^2} = 0,
$$
where the quantity ru satisfies the one-dimensional wave equation. Therefore, there are solutions in the form
$$
u(r, t) = \frac{1}{r} F(r - ct) + \frac{1}{r} G(r + ct),
$$
where F and G are general solutions to the one-dimensional wave equation and can be interpreted as an outgoing and an incoming spherical wave, respectively.
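The outgoing solution can be verified symbolically. In the SymPy sketch below (illustrative, not from the article), F stands in for an arbitrary twice-differentiable profile, and u = F(r − ct)/r is shown to satisfy the radial wave equation written above.
```python
# Minimal sketch: u = F(r - c t)/r satisfies the radial wave equation
# u_rr + (2/r) u_r - (1/c^2) u_tt = 0 for an arbitrary smooth profile F.
import sympy as sp

r, t, c = sp.symbols('r t c', positive=True)
F = sp.Function('F')

u = F(r - c*t) / r
residual = sp.diff(u, r, 2) + 2/r*sp.diff(u, r) - sp.diff(u, t, 2)/c**2
assert sp.simplify(residual) == 0
print("outgoing spherical wave F(r - ct)/r solves the radial wave equation")
```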
|
https://en.wikipedia.org/wiki/Wave_equation
|
In this case, the wave equation reduces to
$$
\left(\nabla^2 - \frac{1}{c^2} \frac{\partial^2 }{\partial t^2}\right) \Psi(\mathbf{r}, t) = 0,
$$
or
$$
\left(\frac{\partial^2}{\partial r^2} + \frac{2}{r} \frac{\partial}{\partial r} - \frac{1}{c^2} \frac{\partial^2}{\partial t^2}\right) u(r, t) = 0.
$$
This equation can be rewritten as
$$
\frac{\partial^2(ru)}{\partial t^2} - c^2 \frac{\partial^2(ru)}{\partial r^2} = 0,
$$
where the quantity ru satisfies the one-dimensional wave equation. Therefore, there are solutions in the form
$$
u(r, t) = \frac{1}{r} F(r - ct) + \frac{1}{r} G(r + ct),
$$
where F and G are general solutions to the one-dimensional wave equation and can be interpreted as an outgoing and an incoming spherical wave, respectively. The outgoing wave can be generated by a point source, and it makes possible sharp signals whose form is altered only by a decrease in amplitude as r increases.
|
https://en.wikipedia.org/wiki/Wave_equation
|
Therefore, there are solutions in the form
$$
u(r, t) = \frac{1}{r} F(r - ct) + \frac{1}{r} G(r + ct),
$$
where F and G are general solutions to the one-dimensional wave equation and can be interpreted as an outgoing and an incoming spherical wave, respectively. The outgoing wave can be generated by a point source, and it makes possible sharp signals whose form is altered only by a decrease in amplitude as r increases. Such waves exist only in spaces with an odd number of dimensions.
For physical examples of solutions to the 3D wave equation that possess angular dependence, see dipole radiation.
#### Monochromatic spherical wave
Although the word "monochromatic" is not exactly accurate, since it refers to light or electromagnetic radiation with well-defined frequency, the spirit is to discover the eigenmode of the wave equation in three dimensions.
|
https://en.wikipedia.org/wiki/Wave_equation
|
#### Monochromatic spherical wave
Although the word "monochromatic" is not exactly accurate, since it refers to light or electromagnetic radiation with well-defined frequency, the spirit is to discover the eigenmode of the wave equation in three dimensions. Following the derivation in the previous section on plane-wave eigenmodes, if we again restrict our solutions to spherical waves that oscillate in time with well-defined constant angular frequency , then the transformed function has simply plane-wave solutions:
$$
r u(r, t) = Ae^{i(\omega t \pm kr)},
$$
or
$$
u(r, t) = \frac{A}{r} e^{i(\omega t \pm kr)}.
$$
From this we can observe that the peak intensity of the spherical-wave oscillation, characterized as the squared wave amplitude
$$
I = |u(r, t)|^2 = \frac{|A|^2}{r^2},
$$
drops at a rate proportional to 1/r², an example of the inverse-square law.
### Solution of a general initial-value problem
The wave equation is linear in and is left unaltered by translations in space and time.
|
https://en.wikipedia.org/wiki/Wave_equation
|
Following the derivation in the previous section on plane-wave eigenmodes, if we again restrict our solutions to spherical waves that oscillate in time with well-defined constant angular frequency ω, then the transformed function ru(r, t) has simply plane-wave solutions:
$$
r u(r, t) = Ae^{i(\omega t \pm kr)},
$$
or
$$
u(r, t) = \frac{A}{r} e^{i(\omega t \pm kr)}.
$$
From this we can observe that the peak intensity of the spherical-wave oscillation, characterized as the squared wave amplitude
$$
I = |u(r, t)|^2 = \frac{|A|^2}{r^2},
$$
drops at a rate proportional to 1/r², an example of the inverse-square law.
### Solution of a general initial-value problem
The wave equation is linear in and is left unaltered by translations in space and time. Therefore, we can generate a great variety of solutions by translating and summing spherical waves. Let be an arbitrary function of three independent variables, and let the spherical wave form be a delta function. Let a family of spherical waves have center at , and let be the radial distance from that point.
|
https://en.wikipedia.org/wiki/Wave_equation
|
Let φ(ξ, η, ζ) be an arbitrary function of three independent variables, and let the spherical wave form F be a delta function. Let a family of spherical waves have center at (ξ, η, ζ), and let r be the radial distance from that point. Thus
$$
r^2 = (x - \xi)^2 + (y - \eta)^2 + (z - \zeta)^2.
$$
If u is a superposition of such waves with weighting function φ, then
$$
u(t, x, y, z) = \frac{1}{4\pi c} \iiint \varphi(\xi, \eta, \zeta) \frac{\delta(r - ct)}{r} \, d\xi \, d\eta \, d\zeta;
$$
the denominator 4πc is a convenience.
From the definition of the delta function, u may also be written as
$$
u(t, x, y, z) = \frac{t}{4\pi} \iint_S \varphi(x + ct\alpha, y + ct\beta, z + ct\gamma) \, d\omega,
$$
where α, β, and γ are coordinates on the unit sphere S, and dω is the area element on S.
|
https://en.wikipedia.org/wiki/Wave_equation
|
Thus
$$
r^2 = (x - \xi)^2 + (y - \eta)^2 + (z - \zeta)^2.
$$
If u is a superposition of such waves with weighting function φ, then
$$
u(t, x, y, z) = \frac{1}{4\pi c} \iiint \varphi(\xi, \eta, \zeta) \frac{\delta(r - ct)}{r} \, d\xi \, d\eta \, d\zeta;
$$
the denominator 4πc is a convenience.
From the definition of the delta function, u may also be written as
$$
u(t, x, y, z) = \frac{t}{4\pi} \iint_S \varphi(x + ct\alpha, y + ct\beta, z + ct\gamma) \, d\omega,
$$
where α, β, and γ are coordinates on the unit sphere S, and dω is the area element on S. This result has the interpretation that u(t, x, y, z) is t times the mean value of φ on a sphere of radius ct centered at (x, y, z):
$$
u(t, x, y, z) = t M_{ct}[\varphi].
$$
It follows that
$$
u(0, x, y, z) = 0, \quad u_t(0, x, y, z) = \varphi(x, y, z).
$$
The mean value is an even function of t, and hence if
$$
v(t, x, y, z) = \frac{\partial}{\partial t} \big(t M_{ct}[\varphi]\big),
$$
then
$$
v(0, x, y, z) = \varphi(x, y, z), \quad v_t(0, x, y, z) = 0.
$$
These formulas provide the solution for the initial-value problem for the wave equation.
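A minimal numerical sketch of the spherical-mean formula u(t, x) = t M_{ct}[φ] follows; the Monte Carlo estimate of the mean over the sphere, the wave speed, the test data φ and the sample size are illustrative assumptions, not part of the source.
```python
# Minimal sketch: estimate u(t, x) = t * (mean of phi over the sphere of radius c*t
# centered at x) by Monte Carlo sampling of the unit sphere.
import numpy as np

rng = np.random.default_rng(0)

def spherical_mean_solution(phi, t, center, c=1.0, n_samples=200_000):
    v = rng.normal(size=(n_samples, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)        # uniform points on the unit sphere
    pts = np.asarray(center) + c * t * v                 # sphere of radius c*t around the center
    return t * phi(pts).mean()                           # u(t, x) = t * mean of phi over the sphere

phi = lambda p: np.exp(-np.sum(p**2, axis=1))            # initial velocity data u_t(0, .) = phi
for t in (1e-3, 0.5, 1.0):
    print(t, spherical_mean_solution(phi, t, center=[0.2, 0.0, 0.0]))
# for small t the value is close to t * phi(center), consistent with u_t(0, x) = phi(x)
```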
|
https://en.wikipedia.org/wiki/Wave_equation
|
From the definition of the delta function, u may also be written as
$$
u(t, x, y, z) = \frac{t}{4\pi} \iint_S \varphi(x + ct\alpha, y + ct\beta, z + ct\gamma) \, d\omega,
$$
where α, β, and γ are coordinates on the unit sphere S, and dω is the area element on S. This result has the interpretation that u(t, x, y, z) is t times the mean value of φ on a sphere of radius ct centered at (x, y, z):
$$
u(t, x, y, z) = t M_{ct}[\varphi].
$$
It follows that
$$
u(0, x, y, z) = 0, \quad u_t(0, x, y, z) = \varphi(x, y, z).
$$
The mean value is an even function of t, and hence if
$$
v(t, x, y, z) = \frac{\partial}{\partial t} \big(t M_{ct}[\varphi]\big),
$$
then
$$
v(0, x, y, z) = \varphi(x, y, z), \quad v_t(0, x, y, z) = 0.
$$
These formulas provide the solution for the initial-value problem for the wave equation. They show that the solution at a given point P, given (t, x, y, z), depends only on the data on the sphere of radius ct that is intersected by the light cone drawn backwards from P.
|
https://en.wikipedia.org/wiki/Wave_equation
|
This result has the interpretation that u(t, x, y, z) is t times the mean value of φ on a sphere of radius ct centered at (x, y, z):
$$
u(t, x, y, z) = t M_{ct}[\varphi].
$$
It follows that
$$
u(0, x, y, z) = 0, \quad u_t(0, x, y, z) = \varphi(x, y, z).
$$
The mean value is an even function of t, and hence if
$$
v(t, x, y, z) = \frac{\partial}{\partial t} \big(t M_{ct}[\varphi]\big),
$$
then
$$
v(0, x, y, z) = \varphi(x, y, z), \quad v_t(0, x, y, z) = 0.
$$
These formulas provide the solution for the initial-value problem for the wave equation. They show that the solution at a given point P, given (t, x, y, z), depends only on the data on the sphere of radius ct that is intersected by the light cone drawn backwards from P. It does not depend upon data on the interior of this sphere. Thus the interior of the sphere is a lacuna for the solution. This phenomenon is called Huygens' principle.
|
https://en.wikipedia.org/wiki/Wave_equation
|
Thus the interior of the sphere is a lacuna for the solution. This phenomenon is called Huygens' principle. It is only true for odd numbers of space dimension, where for one dimension the integration is performed over the boundary of an interval with respect to the Dirac measure.
## Scalar wave equation in two space dimensions
In two space dimensions, the wave equation is
$$
u_{tt} = c^2 \left( u_{xx} + u_{yy} \right).
$$
We can use the three-dimensional theory to solve this problem if we regard u as a function in three dimensions that is independent of the third dimension.
$$
u(0,x,y)=0, \quad u_t(0,x,y) = \phi(x,y),
$$
then the three-dimensional solution formula becomes
$$
u(t,x,y) = tM_{ct}[\phi] = \frac{t}{4\pi} \iint_S \phi(x + ct\alpha,\, y + ct\beta) \, d\omega,
$$
where α and β are the first two coordinates on the unit sphere, and dω is the area element on the sphere.
|
https://en.wikipedia.org/wiki/Wave_equation
|
## Scalar wave equation in two space dimensions
In two space dimensions, the wave equation is
$$
u_{tt} = c^2 \left( u_{xx} + u_{yy} \right).
$$
We can use the three-dimensional theory to solve this problem if we regard u as a function in three dimensions that is independent of the third dimension. If
$$
u(0,x,y)=0, \quad u_t(0,x,y) = \phi(x,y),
$$
then the three-dimensional solution formula becomes
$$
u(t,x,y) = tM_{ct}[\phi] = \frac{t}{4\pi} \iint_S \phi(x + ct\alpha,\, y + ct\beta) \, d\omega,
$$
where α and β are the first two coordinates on the unit sphere, and dω is the area element on the sphere. This integral may be rewritten as a double integral over the disc D with center (x, y) and radius ct:
$$
u(t,x,y) = \frac{1}{2\pi c} \iint_D \frac{\phi(x+\xi, y +\eta)}{\sqrt{(ct)^2 - \xi^2 - \eta^2}} d\xi \, d\eta.
$$
It is apparent that the solution at (t, x, y) depends not only on the data on the light cone where
$$
(x -\xi)^2 + (y - \eta)^2 = c^2 t^2 ,
$$
but also on data that are interior to that cone.
|
https://en.wikipedia.org/wiki/Wave_equation
|
If
$$
u(0,x,y)=0, \quad u_t(0,x,y) = \phi(x,y),
$$
then the three-dimensional solution formula becomes
$$
u(t,x,y) = tM_{ct}[\phi] = \frac{t}{4\pi} \iint_S \phi(x + ct\alpha,\, y + ct\beta) \, d\omega,
$$
where α and β are the first two coordinates on the unit sphere, and dω is the area element on the sphere. This integral may be rewritten as a double integral over the disc D with center (x, y) and radius ct:
$$
u(t,x,y) = \frac{1}{2\pi c} \iint_D \frac{\phi(x+\xi, y +\eta)}{\sqrt{(ct)^2 - \xi^2 - \eta^2}} d\xi \, d\eta.
$$
It is apparent that the solution at (t, x, y) depends not only on the data on the light cone where
$$
(x -\xi)^2 + (y - \eta)^2 = c^2 t^2 ,
$$
but also on data that are interior to that cone.
## Scalar wave equation in general dimension and Kirchhoff's formulae
We want to find solutions to u_tt − Δu = 0 for u defined on R^n × (0, ∞), with u(x, 0) = g(x) and u_t(x, 0) = h(x).
|
https://en.wikipedia.org/wiki/Wave_equation
|
## Scalar wave equation in general dimension and Kirchhoff's formulae
We want to find solutions to u_tt − Δu = 0 for u defined on R^n × (0, ∞), with u(x, 0) = g(x) and u_t(x, 0) = h(x).
### Odd dimensions
Assume n ≥ 3 is an odd integer and that the initial data g and h are sufficiently smooth.
|
https://en.wikipedia.org/wiki/Wave_equation
|
Let γ_n = 1 · 3 · 5 ⋯ (n − 2) and let
$$
u(x, t)
= \frac{1}{\gamma_n} \left[\partial_t \left(\frac{1}{t} \partial_t \right)^{\frac{n-3}{2}} \left(t^{n-2} \frac{1}{|\partial B_t(x)|} \int_{\partial B_t(x)} g \, dS \right) + \left(\frac{1}{t} \partial_t \right)^{\frac{n-3}{2}} \left(t^{n-2} \frac{1}{|\partial B_t(x)|} \int_{\partial B_t(x)} h \, dS \right) \right]
$$
Then
-
$$
u \in C^2\big(\mathbf{R}^n \times [0, \infty)\big)
$$
,
-
$$
u_{tt} - \Delta u = 0
$$
in
$$
\mathbf{R}^n \times (0, \infty)
$$
,
-
$$
\lim_{(x,t) \to (x^0,0)} u(x,t) = g(x^0)
$$
,
-
$$
\lim_{(x,t) \to (x^0,0)} u_t(x,t) = h(x^0)
$$
.
|
https://en.wikipedia.org/wiki/Wave_equation
|
### Even dimensions
Assume n is an even integer and that the initial data g and h are sufficiently smooth. Let γ_n = 2 · 4 ⋯ (n − 2) · n and let
$$
u(x,t) = \frac{1}{\gamma_n} \left [\partial_t \left (\frac{1}{t} \partial_t \right )^{\frac{n-2}{2}} \left (t^n \frac{1}{|B_t(x)|}\int_{B_t(x)} \frac{g}{(t^2 - |y - x|^2)^{\frac{1}{2}}} dy \right ) + \left (\frac{1}{t} \partial_t \right )^{\frac{n-2}{2}} \left (t^n \frac{1}{|B_t(x)|}\int_{B_t(x)} \frac{h}{(t^2 - |y-x|^2)^{\frac{1}{2}}} dy \right ) \right ]
$$
then
-
$$
u \in C^2\big(\mathbf{R}^n \times [0, \infty)\big)
$$
-
$$
u_{tt} - \Delta u = 0
$$
in
$$
\mathbf{R}^n \times (0, \infty)
$$
-
$$
\lim_{(x,t)\to (x^0,0)} u(x,t) = g(x^0)
$$
-
$$
\lim_{(x,t)\to (x^0,0)} u_t(x,t) = h(x^0)
$$
|
https://en.wikipedia.org/wiki/Wave_equation
|
## Green's function
Consider the inhomogeneous wave equation in
$$
1+D
$$
dimensions
$$
(\partial_{tt} - c^2\nabla^2) u = s(t, x)
$$
By rescaling time, we can set wave speed
$$
c = 1
$$
.
Since the wave equation
$$
(\partial_{tt} - \nabla^2) u = s(t, x)
$$
has order 2 in time, there are two impulse responses: an acceleration impulse and a velocity impulse. The effect of inflicting an acceleration impulse is to suddenly change the wave velocity
$$
\partial_t u
$$
. The effect of inflicting a velocity impulse is to suddenly change the wave displacement
$$
u
$$
.
For acceleration impulse,
$$
s(t,x) = \delta^{D+1}(t,x)
$$
where
$$
\delta
$$
is the Dirac delta function. The solution to this case is called the Green's function
$$
G
$$
for the wave equation.
|
https://en.wikipedia.org/wiki/Wave_equation
|
For acceleration impulse,
$$
s(t,x) = \delta^{D+1}(t,x)
$$
where
$$
\delta
$$
is the Dirac delta function. The solution to this case is called the Green's function
$$
G
$$
for the wave equation.
For velocity impulse,
$$
s(t, x) = \partial_t \delta^{D+1}(t,x)
$$
, so if we solve the Green function
$$
G
$$
, the solution for this case is just
$$
\partial_t G
$$
.
### Duhamel's principle
The main use of Green's functions is to solve initial value problems by Duhamel's principle, both for the homogeneous and the inhomogeneous case.
Given the Green function
$$
G
$$
, and initial conditions
$$
u(0,x), \partial_t u(0,x)
$$
, the solution to the homogeneous wave equation is
$$
u(t, \cdot) = (\partial_t G)(t, \cdot) * u(0, \cdot) + G(t, \cdot) * (\partial_t u)(0, \cdot),
$$
where the asterisk is convolution in space.
|
https://en.wikipedia.org/wiki/Wave_equation
|
### Duhamel's principle
The main use of Green's functions is to solve initial value problems by Duhamel's principle, both for the homogeneous and the inhomogeneous case.
Given the Green function
$$
G
$$
, and initial conditions
$$
u(0,x), \partial_t u(0,x)
$$
, the solution to the homogeneous wave equation is
$$
u(t, \cdot) = (\partial_t G)(t, \cdot) * u(0, \cdot) + G(t, \cdot) * (\partial_t u)(0, \cdot),
$$
where the asterisk is convolution in space. More explicitly,
$$
u(t, x) = \int (\partial_t G)(t, x-x') u(0, x') dx' + \int G(t, x-x') (\partial_t u)(0, x') dx'.
$$
For the inhomogeneous case, the solution has one additional term by convolution over spacetime:
$$
\iint_{t' < t} G(t-t', x-x') s(t', x')dt' dx'.
$$
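To make the homogeneous-case formula concrete in one spatial dimension (with c = 1), the sketch below uses the standard explicit form of the 1D Green's function, G(t, x) = 1/2 for |x| < t and 0 otherwise, which the article does not spell out and is assumed here; the initial data are likewise illustrative. The convolution of G with the initial velocity should reproduce the velocity term of d'Alembert's formula.
```python
# Minimal 1D sketch of Duhamel's principle (c = 1, u(0,.) = 0, u_t(0,.) = g):
# u(t,.) = G(t,.) * g, with the assumed 1D Green's function G(t,x) = 1/2 for |x| < t.
import numpy as np

n, L, t = 2001, 20.0, 2.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

g = np.exp(-x**2)                                  # initial velocity data
G = 0.5 * (np.abs(x) < t).astype(float)            # 1D Green's function at time t

u_green = np.convolve(g, G, mode="same") * dx      # discrete spatial convolution

# reference: d'Alembert's velocity term, (1/2) * integral_{x-t}^{x+t} g(s) ds
u_ref = np.array([0.5 * np.trapz(g[(x >= xi - t) & (x <= xi + t)],
                                 x[(x >= xi - t) & (x <= xi + t)]) for xi in x])
print("max difference:", np.max(np.abs(u_green - u_ref)))   # small (discretization error)
```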
|
https://en.wikipedia.org/wiki/Wave_equation
|
Given the Green function
$$
G
$$
, and initial conditions
$$
u(0,x), \partial_t u(0,x)
$$
, the solution to the homogeneous wave equation is
$$
u(t, \cdot) = (\partial_t G)(t, \cdot) * u(0, \cdot) + G(t, \cdot) * (\partial_t u)(0, \cdot),
$$
where the asterisk is convolution in space. More explicitly,
$$
u(t, x) = \int (\partial_t G)(t, x-x') u(0, x') dx' + \int G(t, x-x') (\partial_t u)(0, x') dx'.
$$
For the inhomogeneous case, the solution has one additional term by convolution over spacetime:
$$
\iint_{t' < t} G(t-t', x-x') s(t', x')dt' dx'.
$$
### Solution by Fourier transform
By a Fourier transform,
$$
\hat G (\omega)= \frac{1}{-\omega_0^2 + \omega_1^2 + \cdots + \omega_D^2},
\quad G(t, x) = \frac{1}{(2\pi)^{D+1}} \int \hat G(\omega) e^{+i \omega_0 t + i \vec \omega \cdot \vec x}d\omega_0 d\vec\omega.
$$
The
$$
\omega_0
$$
term can be integrated by the residue theorem.
|
https://en.wikipedia.org/wiki/Wave_equation
|
More explicitly,
$$
u(t, x) = \int (\partial_t G)(t, x-x') u(0, x') dx' + \int G(t, x-x') (\partial_t u)(0, x') dx'.
$$
For the inhomogeneous case, the solution has one additional term by convolution over spacetime:
$$
\iint_{t' < t} G(t-t', x-x') s(t', x')dt' dx'.
$$
### Solution by Fourier transform
By a Fourier transform,
$$
\hat G (\omega)= \frac{1}{-\omega_0^2 + \omega_1^2 + \cdots + \omega_D^2},
\quad G(t, x) = \frac{1}{(2\pi)^{D+1}} \int \hat G(\omega) e^{+i \omega_0 t + i \vec \omega \cdot \vec x}d\omega_0 d\vec\omega.
$$
The
$$
\omega_0
$$
term can be integrated by the residue theorem. It would require us to perturb the integral slightly either by
$$
+i\epsilon
$$
or by
$$
-i\epsilon
$$
, because it is an improper integral.
|
https://en.wikipedia.org/wiki/Wave_equation
|
### Solution by Fourier transform
By a Fourier transform,
$$
\hat G (\omega)= \frac{1}{-\omega_0^2 + \omega_1^2 + \cdots + \omega_D^2},
\quad G(t, x) = \frac{1}{(2\pi)^{D+1}} \int \hat G(\omega) e^{+i \omega_0 t + i \vec \omega \cdot \vec x}d\omega_0 d\vec\omega.
$$
The
$$
\omega_0
$$
term can be integrated by the residue theorem. It would require us to perturb the integral slightly either by
$$
+i\epsilon
$$
or by
$$
-i\epsilon
$$
, because it is an improper integral. One perturbation gives the forward solution, and the other the backward solution.
|
https://en.wikipedia.org/wiki/Wave_equation
|
It would require us to perturb the integral slightly either by
$$
+i\epsilon
$$
or by
$$
-i\epsilon
$$
, because it is an improper integral. One perturbation gives the forward solution, and the other the backward solution. The forward solution gives
$$
G(t,x) = \frac{1}{(2\pi)^D} \int \frac{\sin (\|\vec \omega\| t)}{\|\vec \omega\|} e^{i \vec \omega \cdot \vec x}d\vec \omega,
\quad
\partial_t G(t, x) = \frac{1}{(2\pi)^D} \int \cos(\|\vec \omega\| t) e^{i \vec \omega \cdot \vec x}d\vec \omega.
$$
The integral can be solved by analytically continuing the Poisson kernel, giving
$$
G(t, x) = \lim _{\epsilon \rightarrow 0^{+}} \frac{C_D}{D-1}
\operatorname{Im}\left[\|x\|^2-(t-i \epsilon)^2\right]^{-(D-1) / 2}
$$
where
$$
C_D=\pi^{-(D+1) / 2} \Gamma((D+1) / 2)
$$
is half the surface area of a
$$
(D + 1)
$$
-dimensional hypersphere.
|
https://en.wikipedia.org/wiki/Wave_equation
|