One perturbation gives the forward solution, and the other the backward solution. The forward solution gives
$$
G(t,x) = \frac{1}{(2\pi)^D} \int \frac{\sin (\|\vec \omega\| t)}{\|\vec \omega\|} e^{i \vec \omega \cdot \vec x}d\vec \omega,
\quad
\partial_t G(t, x) = \frac{1}{(2\pi)^D} \int \cos(\|\vec \omega\| t) e^{i \vec \omega \cdot \vec x}d\vec \omega.
$$
The integral can be solved by analytically continuing the Poisson kernel, giving
$$
G(t, x) = \lim _{\epsilon \rightarrow 0^{+}} \frac{C_D}{D-1}
\operatorname{Im}\left[\|x\|^2-(t-i \epsilon)^2\right]^{-(D-1) / 2}
$$
where
$$
C_D=\pi^{-(D+1) / 2} \Gamma((D+1) / 2)
$$
is the reciprocal of half the surface area of a unit
$$
(D + 1)
$$
-dimensional hypersphere.
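As a quick numerical check of this constant (a minimal sketch, not part of the original text; the half-areas $\pi$, $2\pi$, $\pi^2$ used below for $D = 1, 2, 3$ are standard values):
```python
from math import pi, gamma

# C_D = pi**(-(D+1)/2) * Gamma((D+1)/2); check that 1/C_D equals half the
# surface area of the unit hypersphere sitting in (D+1)-dimensional space.
def C(D):
    return pi ** (-(D + 1) / 2) * gamma((D + 1) / 2)

half_areas = {1: pi, 2: 2 * pi, 3: pi ** 2}   # half of 2*pi, 4*pi, 2*pi**2
for D, half_area in half_areas.items():
    print(D, 1 / C(D), half_area)             # the last two columns agree
```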
### Solutions in particular dimensions
We can relate the Green's function in
$$
D
$$
dimensions to the Green's function in
$$
D+n
$$
dimensions.
#### Lowering dimensions
Given a function
$$
s(t, x)
$$
and a solution
$$
u(t, x)
$$
of a differential equation in
$$
(1+D)
$$
dimensions, we can trivially extend it to
$$
(1+D+n)
$$
dimensions by setting the additional
$$
n
$$
dimensions to be constant:
$$
s(t, x_{1:D}, x_{D+1:D+n}) = s(t, x_{1:D}), \quad u(t, x_{1:D}, x_{D+1:D+n}) = u(t, x_{1:D}).
$$
Since the Green's function is constructed from
$$
s
$$
and
$$
u
$$
, the Green's function in
$$
(1+D+n)
$$
dimensions integrates to the Green's function in
$$
(1+D)
$$
dimensions:
$$
G_D(t, x_{1:D}) = \int_{\R^n} G_{D+n}(t, x_{1:D}, x_{D+1:D+n}) d^n x_{D+1:D+n}.
$$
#### Raising dimensions
The Green's function in
$$
D
$$
dimensions can be related to the Green's function in
$$
D+2
$$
dimensions. By spherical symmetry,
$$
G_D(t, r) = \int_{\R^2} G_{D+2}(t, \sqrt{r^2 + y^2 + z^2}) dydz.
$$
Integrating in polar coordinates,
$$
G_D(t, r) = 2\pi \int_0^\infty G_{D+2}(t, \sqrt{r^2 + q^2}) qdq = 2\pi \int_r^\infty G_{D+2}(t, q') q'dq',
$$
where in the last equality we made the change of variables
$$
q' = \sqrt{r^2 + q^2}
$$
.
. Differentiating both sides with respect to $r$, we thus obtain the recurrence relation
$$
G_{D+2}(t, r) = -\frac{1}{2\pi r} \partial_r G_D(t, r).
$$
### Solutions in D = 1, 2, 3
When
$$
D=1
$$
, the integrand in the Fourier transform is the sinc function
$$
\begin{aligned}
G_1(t, x) &= \frac{1}{2\pi} \int_\R \frac{\sin(|\omega| t)}{|\omega|} e^{i\omega x}d\omega \\
&= \frac{1}{2\pi} \int \operatorname{sinc}(\omega) e^{i \omega \frac xt} d\omega \\
&= \frac{\sgn(t-x) + \sgn(t+x)}{4} \\
&= \begin{cases}
\frac 12 \theta(t-|x|) \quad t > 0 \\
-\frac 12 \theta(-t-|x|) \quad t < 0
\end{cases}
\end{aligned}
$$
where
$$
\sgn
$$
is the sign function and
$$
\theta
$$
is the unit step function.
One solution is the forward solution, the other is the backward solution.
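As a minimal numerical sketch (not part of the original derivation), the Fourier integral defining $G_1$ can be evaluated directly; the Gaussian regulator and grid parameters below are arbitrary choices added for convergence:
```python
import numpy as np

def G1(t, x, eps=1e-3, wmax=4000.0, n=400000):
    """Numerically evaluate (1/2pi) * integral of sin(|w|t)/|w| * exp(i w x) dw,
    with a Gaussian regulator exp(-(eps*w)**2) added for convergence."""
    w = np.linspace(-wmax, wmax, n)            # even n keeps w = 0 off the grid
    f = np.sin(np.abs(w) * t) / np.abs(w) * np.exp(1j * w * x) * np.exp(-(eps * w) ** 2)
    return np.sum(f.real) * (w[1] - w[0]) / (2 * np.pi)

print(G1(2.0, 1.0))   # inside the light cone |x| < t: close to 0.5
print(G1(2.0, 3.0))   # outside the light cone:        close to 0.0
```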
The dimension can be raised to give the
$$
D=3
$$
case
$$
G_3(t, r) = \frac{\delta(t-r)}{4\pi r}
$$
and similarly for the backward solution.
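This step can be reproduced symbolically; the sketch below (an illustration, with $r = |x|$) applies the recurrence relation above to the $D = 1$ forward solution:
```python
import sympy as sp

t, r = sp.symbols('t r', positive=True)
G1 = sp.Rational(1, 2) * sp.Heaviside(t - r)   # forward solution for D = 1, written in r = |x|
G3 = -sp.diff(G1, r) / (2 * sp.pi * r)         # recurrence G_{D+2} = -(1/(2*pi*r)) d/dr G_D
print(sp.simplify(G3))                         # DiracDelta(t - r)/(4*pi*r)
```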
This can be integrated down by one dimension to give the
$$
D=2
$$
case
$$
G_2(t, r) = \int_\R \frac{\delta(t - \sqrt{r^2 + z^2})}{4\pi \sqrt{r^2 + z^2}} dz
= \frac{\theta(t - r)}{2\pi \sqrt{t^2 - r^2}}
$$
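The $z$-integral follows from the standard identity $\delta(h(z)) = \sum_k \delta(z - z_k)/|h'(z_k)|$, summed over the roots $z_k$ of $h$; a short worked version (added here for clarity) reads
$$
\int_\R \frac{\delta(t - \sqrt{r^2 + z^2})}{4\pi \sqrt{r^2 + z^2}}\, dz
= \sum_{z_k = \pm\sqrt{t^2 - r^2}} \frac{1}{4\pi t} \cdot \frac{\sqrt{r^2 + z_k^2}}{|z_k|}
= \frac{2}{4\pi t} \cdot \frac{t}{\sqrt{t^2 - r^2}}
= \frac{1}{2\pi \sqrt{t^2 - r^2}}, \qquad t > r,
$$
while the integral vanishes for $t < r$, which accounts for the factor $\theta(t - r)$.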
### Wavefronts and wakes
In the
$$
D=1
$$
case, the Green's function solution is the sum of two wavefronts
$$
\frac{\sgn(t-x)}{4} +
\frac{\sgn(t+x)}{4}
$$
moving in opposite directions.
In odd dimensions, the forward solution is nonzero only at
$$
t = r
$$
. As the dimension increases, the shape of the wavefront becomes increasingly complex, involving higher derivatives of the Dirac delta function in the variable $\tau = t - r$ (with the wave speed $c$ restored).
In even dimensions, the forward solution is nonzero in
$$
r \leq t
$$
, the entire region behind the wavefront; this region is called a wake. The wavefront itself also involves increasingly higher derivatives of the Dirac delta function.
This means that a general Huygens' principle – the wave displacement at a point
$$
(t, x)
$$
in spacetime depends only on the state at points on characteristic rays passing through
$$
(t, x)
$$
– only holds in odd dimensions. A physical interpretation is that signals transmitted by waves remain undistorted in odd dimensions, but become distorted in even dimensions.
Hadamard's conjecture states that this generalized Huygens' principle still holds in all odd dimensions even when the coefficients in the wave equation are no longer constant. It is not strictly correct, but it is correct for certain families of coefficients.
## Problems with boundaries
### One space dimension
#### Reflection and transmission at the boundary of two media
For an incident wave traveling from one medium (where the wave speed is $c_1$) to another medium (where the wave speed is $c_2$), one part of the wave will transmit into the second medium, while another part reflects back into the other direction and stays in the first medium. The amplitude of the transmitted wave and the reflected wave can be calculated by using the continuity condition at the boundary.
Consider the component of the incident wave with angular frequency $\omega$, which has the waveform
$$
u^\text{inc}(x, t) = Ae^{i(k_1 x - \omega t)},\quad A \in \C.
$$
The incident wave reaches the boundary between the two media at $x = 0$.
Therefore, the corresponding reflected wave and the transmitted wave will have the waveforms
$$
u^\text{refl}(x, t) = Be^{i(-k_1 x - \omega t)}, \quad
u^\text{trans}(x, t) = Ce^{i(k_2 x - \omega t)}, \quad
B, C \in \C.
$$
The continuity condition at the boundary is
$$
u^\text{inc}(0, t) + u^\text{refl}(0, t) = u^\text{trans}(0, t), \quad
u_x^\text{inc}(0, t) + u_x^\text{refl}(0, t) = u_x^\text{trans}(0, t).
$$
This gives the equations
$$
A + B = C, \quad
A - B = \frac{k_2}{k_1} C = \frac{c_1}{c_2} C,
$$
and we have the reflectivity and transmissivity
$$
\frac{B}{A} = \frac{c_2 - c_1}{c_2 + c_1}, \quad
\frac{C}{A} = \frac{2c_2}{c_2 + c_1}.
$$
When $c_2 < c_1$, the reflected wave has a reflection phase change of 180°, since $B/A < 0$.
The energy conservation can be verified by
$$
\frac{B^2}{c_1} + \frac{C^2}{c_2} = \frac{A^2}{c_1}.
$$
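A minimal numerical sketch of these relations (the wave speeds and incident amplitude below are arbitrary choices):
```python
# Check continuity at the boundary and energy conservation for the
# reflection/transmission coefficients derived above.
c1, c2, A = 1.0, 3.0, 1.0                  # arbitrary wave speeds and incident amplitude
B = A * (c2 - c1) / (c2 + c1)              # reflected amplitude
C = A * 2 * c2 / (c2 + c1)                 # transmitted amplitude

print(abs((A + B) - C) < 1e-12)                        # u continuous at x = 0
print(abs((A - B) - (c1 / c2) * C) < 1e-12)            # u_x continuous at x = 0
print(abs(B**2 / c1 + C**2 / c2 - A**2 / c1) < 1e-12)  # energy conservation
```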
The above discussion holds true for any component, regardless of its angular frequency $\omega$.
The limiting case of $c_2 = 0$ corresponds to a "fixed end" that does not move, whereas the limiting case of $c_2 \to \infty$ corresponds to a "free end".
#### The Sturm–Liouville formulation
A flexible string that is stretched between two points $x = 0$ and $x = L$ satisfies the wave equation for $t > 0$ and $0 < x < L$. On the boundary points, $u$ may satisfy a variety of boundary conditions. A general form that is appropriate for applications is
$$
\begin{align}
-u_x(t, 0) + a u(t, 0) &= 0, \\
u_x(t, L) + b u(t, L) &= 0,
\end{align}
$$
where $a$ and $b$ are non-negative. The case where $u$ is required to vanish at an endpoint (i.e. a "fixed end") is the limit of this condition when the respective $a$ or $b$ approaches infinity.
The method of separation of variables consists in looking for solutions of this problem in the special form
$$
u(t, x) = T(t) v(x).
$$
A consequence is that
$$
\frac{T''}{c^2 T} = \frac{v''}{v} = -\lambda.
$$
The eigenvalue $\lambda$ must be determined so that there is a non-trivial solution of the boundary-value problem
$$
\begin{align}
v'' + \lambda v &= 0, \\
-v'(0) + a v(0) &= 0, \\
v'(L) + b v(L) &= 0.
\end{align}
$$
This is a special case of the general problem of Sturm–Liouville theory. If $a$ and $b$ are positive, the eigenvalues are all positive, and the solutions are trigonometric functions.
A solution that satisfies square-integrable initial conditions for $u$ and $u_t$ can be obtained from expansion of these functions in the appropriate trigonometric series.
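A minimal numerical sketch of this eigenvalue problem, using a finite-difference discretization with the Robin boundary conditions handled by ghost points (the interval length, the coefficients $a$, $b$, and the grid size are arbitrary choices):
```python
import numpy as np

L, a, b, N = 1.0, 2.0, 3.0, 400          # arbitrary interval, Robin coefficients, grid size
h = L / N
M = np.zeros((N + 1, N + 1))             # discretization of -v'' with Robin boundary conditions

for j in range(1, N):
    M[j, j - 1:j + 2] = [-1 / h**2, 2 / h**2, -1 / h**2]

# Ghost-point treatment of -v'(0) + a v(0) = 0 and v'(L) + b v(L) = 0
M[0, 0], M[0, 1] = (2 + 2 * h * a) / h**2, -2 / h**2
M[N, N], M[N, N - 1] = (2 + 2 * h * b) / h**2, -2 / h**2

eigs = np.sort(np.linalg.eigvals(M).real)
print(eigs[:4])                          # all positive, approximating the smallest eigenvalues
```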
### Several space dimensions
The one-dimensional initial-boundary value theory may be extended to an arbitrary number of space dimensions. Consider a domain $D$ in $m$-dimensional space, with boundary $B$. Then the wave equation is to be satisfied if $x$ is in $D$ and $t > 0$.
On the boundary of $D$, the solution $u$ shall satisfy
$$
\frac{\partial u}{\partial n} + a u = 0,
$$
where $n$ is the unit outward normal to $B$, and $a$ is a non-negative function defined on $B$. The case where $u$ vanishes on $B$ is a limiting case for $a$ approaching infinity. The initial conditions are
$$
u(0, x) = f(x), \quad u_t(0, x) = g(x),
$$
where and are defined in . This problem may be solved by expanding and in the eigenfunctions of the Laplacian in , which satisfy the boundary conditions. Thus the eigenfunction satisfies
$$
\nabla \cdot \nabla v + \lambda v = 0
$$
in $D$, and
$$
\frac{\partial v}{\partial n} + a v = 0
$$
on $B$.
In the case of two space dimensions, the eigenfunctions may be interpreted as the modes of vibration of a drumhead stretched over the boundary $B$.
If $B$ is a circle, then these eigenfunctions have an angular component that is a trigonometric function of the polar angle $\theta$, multiplied by a Bessel function (of integer order) of the radial component. Further details are in Helmholtz equation.
If the boundary is a sphere in three space dimensions, the angular components of the eigenfunctions are spherical harmonics, and the radial components are Bessel functions of half-integer order.
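For a circular drumhead with a fixed edge (the limiting case of $a$ approaching infinity above), the eigenvalues are the squared zeros of the integer-order Bessel functions; a small sketch using SciPy for the unit disk (the mode counts chosen are arbitrary):
```python
from scipy.special import jn_zeros

# Fixed-edge (Dirichlet) eigenvalues of the Laplacian on the unit disk:
# lambda = j_{n,k}**2, where j_{n,k} is the k-th positive zero of the Bessel function J_n.
for n in range(3):                      # angular order of the trigonometric component
    zeros = jn_zeros(n, 3)              # first three radial zeros for this angular order
    print(n, (zeros ** 2).round(3))     # corresponding eigenvalues
```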
## Inhomogeneous wave equation in one dimension
The inhomogeneous wave equation in one dimension is
$$
u_{t t}(x, t) - c^2 u_{xx}(x, t) = s(x, t)
$$
with initial conditions
$$
u(x, 0) = f(x),
$$
$$
u_t(x, 0) = g(x).
$$
The function $s(x, t)$ is often called the source function because in practice it describes the effects of the sources of waves on the medium carrying them. Physical examples of source functions include the force driving a wave on a string, or the charge or current density in the Lorenz gauge of electromagnetism.
One method to solve the initial-value problem (with the initial values as posed above) is to take advantage of a special property of the wave equation in an odd number of space dimensions, namely that its solutions respect causality. That is, for any point $(x_i, t_i)$, the value of $u(x_i, t_i)$ depends only on the values of $f(x_i + c t_i)$ and $f(x_i - c t_i)$ and the values of the function $g(x)$ between $x_i - c t_i$ and $x_i + c t_i$.
This can be seen in d'Alembert's formula, stated above, where these quantities are the only ones that show up in it. Physically, if the maximum propagation speed is $c$, then no part of the wave that cannot propagate to a given point by a given time can affect the amplitude at the same point and time.
In terms of finding a solution, this causality property means that for any given point on the line being considered, the only area that needs to be considered is the area encompassing all the points that could causally affect the point being considered. Denote the area that causally affects point $(x_i, t_i)$ as $R_C$.
Suppose we integrate the inhomogeneous wave equation over this region:
$$
\iint_{R_C} \big(c^2 u_{xx}(x, t) - u_{tt}(x, t)\big) \, dx \, dt = \iint_{R_C} s(x, t) \, dx \, dt.
$$
To simplify this greatly, we can use Green's theorem to simplify the left side to get the following:
$$
\int_{L_0 + L_1 + L_2} \big({-}c^2 u_x(x, t) \, dt - u_t(x, t) \, dx\big) = \iint_{R_C} s(x, t) \, dx \, dt.
$$
The left side is now the sum of three line integrals along the bounds of the causality region.
These turn out to be fairly easy to compute:
$$
\int^{x_i + c t_i}_{x_i - c t_i} -u_t(x, 0) \, dx = -\int^{x_i + c t_i}_{x_i - c t_i} g(x) \, dx.
$$
In the above, the term to be integrated with respect to time disappears because the time interval involved is zero, thus $dt = 0$.
For the other two sides of the region, it is worth noting that $x \pm ct$ is a constant, namely $x_i \pm c t_i$, where the sign is chosen appropriately.
Using this, we can get the relation $dx \pm c\,dt = 0$, again choosing the right sign:
$$
\begin{align}
\int_{L_1} \big({-}c^2 u_x(x, t) \, dt - u_t(x, t) \, dx\big) &= \int_{L_1} \big(c u_x(x, t) \, dx + c u_t(x, t) \, dt \big) \\
&= c \int_{L_1} \, du(x, t) \\
&= c u(x_i, t_i) - c f(x_i + c t_i).
\end{align}
$$
And similarly for the final boundary segment:
$$
\begin{align}
\int_{L_2} \big({-}c^2 u_x(x, t) \, dt - u_t(x, t) \, dx\big) &= -\int_{L_2} \big(c u_x(x, t) \, dx + c u_t(x, t) \, dt \big) \\
&= -c \int_{L_2} \, du(x, t) \\
&= c u(x_i, t_i) - c f(x_i - c t_i).
\end{align}
$$
Adding the three results together and putting them back in the original integral gives
$$
\begin{align}
\iint_{R_C} s(x, t) \, dx \, dt &= - \int^{x_i + c t_i}_{x_i - c t_i} g(x) \, dx + c u(x_i, t_i) - c f(x_i + c t_i) + c u(x_i,t_i) - c f(x_i - c t_i) \\
&= 2 c u(x_i, t_i) - c f(x_i + c t_i) - c f(x_i - c t_i) - \int^{x_i + c t_i}_{x_i - c t_i} g(x) \, dx.
\end{align}
$$
Solving for $u(x_i, t_i)$, we arrive at
$$
u(x_i, t_i) = \frac{f(x_i + c t_i) + f(x_i - c t_i)}{2} + \frac{1}{2c} \int^{x_i + c t_i}_{x_i - c t_i} g(x) \, dx + \frac{1}{2c} \int^{t_i}_0 \int^{x_i + c (t_i - t)}_{x_i - c (t_i - t)} s(x, t) \, dx \, dt.
$$
In the last equation of the sequence, the bounds of the integral over the source function have been made explicit.
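As a numerical sanity check of this formula (a sketch, not from the original text): with $c = 1$, zero initial data, and the manufactured source $s(x, t) = (2 + t^2)\sin x$, the exact solution is $u(x, t) = t^2 \sin x$, and the double integral over the causality triangle reproduces it:
```python
import numpy as np
from scipy.integrate import dblquad

c = 1.0
s = lambda x, t: (2 + t**2) * np.sin(x)     # chosen so that u(x, t) = t**2 * sin(x), with f = g = 0

def u(xi, ti):
    # (1/2c) times the double integral of s over the causality region R_C
    val, _ = dblquad(lambda x, t: s(x, t),  # inner variable x, outer variable t
                     0, ti,                 # t runs from 0 to t_i
                     lambda t: xi - c * (ti - t),
                     lambda t: xi + c * (ti - t))
    return val / (2 * c)

xi, ti = 0.7, 1.3
print(u(xi, ti), ti**2 * np.sin(xi))        # the two values agree
```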
Looking at this solution, which is valid for all choices of $(x_i, t_i)$ compatible with the wave equation, it is clear that the first two terms are simply d'Alembert's formula, as stated above as the solution of the homogeneous wave equation in one dimension. The difference is in the third term, the integral over the source.
## Further generalizations
### Elastic waves
The elastic wave equation (also known as the Navier–Cauchy equation) in three dimensions describes the propagation of waves in an isotropic homogeneous elastic medium. Most solid materials are elastic, so this equation describes such phenomena as seismic waves in the Earth and ultrasonic waves used to detect flaws in materials. While linear, this equation has a more complex form than the equations given above, as it must account for both longitudinal and transverse motion:
$$
\rho \ddot{\mathbf{u}} = \mathbf{f} + (\lambda + 2\mu) \nabla(\nabla \cdot \mathbf{u}) - \mu\nabla \times (\nabla \times \mathbf{u}),
$$
where:
- $\lambda$ and $\mu$ are the so-called Lamé parameters describing the elastic properties of the medium,
- $\rho$ is the density,
- $\mathbf{f}$ is the source function (driving force),
- $\mathbf{u}$ is the displacement vector.
By using the identity $\nabla \times (\nabla \times \mathbf{u}) = \nabla(\nabla \cdot \mathbf{u}) - \Delta \mathbf{u}$, the elastic wave equation can be rewritten into the more common form of the Navier–Cauchy equation.
Note that in the elastic wave equation, both force and displacement are vector quantities. Thus, this equation is sometimes known as the vector wave equation.
As an aid to understanding, the reader will observe that if $\mathbf{f}$ and $\nabla \cdot \mathbf{u}$ are set to zero, this becomes (effectively) Maxwell's equation for the propagation of the electric field $\mathbf{E}$, which has only transverse waves.
### Dispersion relation
In dispersive wave phenomena, the speed of wave propagation varies with the wavelength of the wave, which is reflected by a dispersion relation
$$
\omega = \omega(\mathbf{k}),
$$
where $\omega$ is the angular frequency, and $\mathbf{k}$ is the wavevector describing plane-wave solutions. For light waves, the dispersion relation is $\omega = \pm c|\mathbf{k}|$, but in general, the constant speed $c$ gets replaced by a variable phase velocity:
$$
v_\text{p} = \frac{\omega(k)}{k}.
$$
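A small illustrative sketch (the dispersive relation $\omega = \sqrt{c^2 k^2 + m^2}$ used below is an assumed example, not taken from the text):
```python
import numpy as np

c, m = 1.0, 2.0
k = np.linspace(0.5, 5.0, 5)

v_p_light = (c * k) / k                        # omega = c k: the phase velocity is the constant c
v_p_disp = np.sqrt(c**2 * k**2 + m**2) / k     # assumed dispersive relation: v_p depends on k

print(v_p_light)    # [1. 1. 1. 1. 1.]
print(v_p_disp)     # decreases toward c as k grows
```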
## Partial differential equations
In mathematics, a partial differential equation (PDE) is an equation which involves a multivariable function and one or more of its partial derivatives.
The function is often thought of as an "unknown" that solves the equation, similar to how $x$ is thought of as an unknown number solving, e.g., an algebraic equation like $x^2 - 3x + 2 = 0$. However, it is usually impossible to write down explicit formulae for solutions of partial differential equations. There is correspondingly a vast amount of modern mathematical and scientific research on methods to numerically approximate solutions of certain partial differential equations using computers. Partial differential equations also occupy a large sector of pure mathematical research, in which the usual questions are, broadly speaking, on the identification of general qualitative features of solutions of various partial differential equations, such as existence, uniqueness, regularity and stability. Among the many open questions are the existence and smoothness of solutions to the Navier–Stokes equations, named as one of the Millennium Prize Problems in 2000.
Partial differential equations are ubiquitous in mathematically oriented scientific fields, such as physics and engineering.
For instance, they are foundational in the modern scientific understanding of sound, heat, diffusion, electrostatics, electrodynamics, thermodynamics, fluid dynamics, elasticity, general relativity, and quantum mechanics (Schrödinger equation, Pauli equation etc.). They also arise from many purely mathematical considerations, such as differential geometry and the calculus of variations; among other notable applications, they are the fundamental tool in the proof of the Poincaré conjecture from geometric topology.
Partly due to this variety of sources, there is a wide spectrum of different types of partial differential equations, where the meaning of a solution depends on the context of the problem, and methods have been developed for dealing with many of the individual equations which arise. As such, it is usually acknowledged that there is no "universal theory" of partial differential equations, with specialist knowledge being somewhat divided between several essentially distinct subfields.
Ordinary differential equations can be viewed as a subclass of partial differential equations, corresponding to functions of a single variable.
Stochastic partial differential equations and nonlocal equations are, as of 2020, particularly widely studied extensions of the "PDE" notion. More classical topics, on which there is still much active research, include elliptic and parabolic partial differential equations, fluid mechanics, Boltzmann equations, and dispersive partial differential equations.
## Introduction
A function $u(x, y, z)$ of three variables is "harmonic" or "a solution of the Laplace equation" if it satisfies the condition
$$
\frac{\partial^2u}{\partial x^2}+\frac{\partial^2u}{\partial y^2}+\frac{\partial^2u}{\partial z^2}=0.
$$
Such functions were widely studied in the 19th century due to their relevance for classical mechanics, for example the equilibrium temperature distribution of a homogeneous solid is a harmonic function. If explicitly given a function, it is usually a matter of straightforward computation to check whether or not it is harmonic. For instance
$$
u(x,y,z) = \frac{1}{\sqrt{x^2 - 2x + y^2 + z^2 + 1}}
$$
and
$$
u(x,y,z) = 2x^2 - y^2 - z^2
$$
are both harmonic while
$$
u(x,y,z)=\sin(xy)+z
$$
is not.
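Such a check is easy to carry out symbolically; a minimal sketch for the three functions above:
```python
import sympy as sp

x, y, z = sp.symbols('x y z')
laplacian = lambda u: sp.diff(u, x, 2) + sp.diff(u, y, 2) + sp.diff(u, z, 2)

for u in (1 / sp.sqrt(x**2 - 2*x + y**2 + z**2 + 1),
          2*x**2 - y**2 - z**2,
          sp.sin(x*y) + z):
    print(sp.simplify(laplacian(u)))   # 0, 0, and a nonzero expression, respectively
```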
It may be surprising that the two examples of harmonic functions are of such strikingly different form. This is a reflection of the fact that they are not, in any immediate way, special cases of a "general solution formula" of the Laplace equation. This is in striking contrast to the case of ordinary differential equations (ODEs) roughly similar to the Laplace equation, with the aim of many introductory textbooks being to find algorithms leading to general solution formulas. For the Laplace equation, as for a large number of partial differential equations, such solution formulas fail to exist.
The nature of this failure can be seen more concretely in the case of the following PDE: for a function of two variables, consider the equation
$$
\frac{\partial^2v}{\partial x\partial y}=0.
$$
It can be directly checked that any function of the form $v(x, y) = f(x) + g(y)$, for any single-variable functions $f$ and $g$ whatsoever, will satisfy this condition. This is far beyond the choices available in ODE solution formulas, which typically allow the free choice of some numbers. In the study of PDEs, one generally has the free choice of functions.
The nature of this choice varies from PDE to PDE. To understand it for any given equation, existence and uniqueness theorems are usually important organizational principles.
In many introductory textbooks, the role of existence and uniqueness theorems for ODE can be somewhat opaque; the existence half is usually unnecessary, since one can directly check any proposed solution formula, while the uniqueness half is often only present in the background in order to ensure that a proposed solution formula is as general as possible. By contrast, for PDE, existence and uniqueness theorems are often the only means by which one can navigate through the plethora of different solutions at hand. For this reason, they are also fundamental when carrying out a purely numerical simulation, as one must have an understanding of what data is to be prescribed by the user and what is to be left to the computer to calculate.
To discuss such existence and uniqueness theorems, it is necessary to be precise about the domain of the "unknown function". Otherwise, speaking only in terms such as "a function of two variables", it is impossible to meaningfully formulate the results. That is, the domain of the unknown function must be regarded as part of the structure of the PDE itself.
The following provides two classic examples of such existence and uniqueness theorems.
Even though the two PDE in question are so similar, there is a striking difference in behavior: for the first PDE, one has the free prescription of a single function, while for the second PDE, one has the free prescription of two functions.
- For any continuous function $f$ on the unit circle, there is exactly one function $u$ on the unit-radius disk around the origin in the plane such that
$$
\frac{\partial^2u}{\partial x^2} + \frac{\partial^2u}{\partial y^2} = 0
$$
and whose restriction to the unit circle is given by $f$.
- For any functions $f$ and $g$ on the real line $\mathbb{R}$, there is exactly one function $u(x, y)$ such that
$$
\frac{\partial^2u}{\partial x^2} - \frac{\partial^2u}{\partial y^2} = 0
$$
and with $u(x, 0) = f(x)$ and $\frac{\partial u}{\partial y}(x, 0) = g(x)$ for all values of $x$.
Even more phenomena are possible. For instance, the following PDE, arising naturally in the field of differential geometry, illustrates an example where there is a simple and completely explicit solution formula, but with the free choice of only three numbers and not even one function.
- If $u$ is a function on $\mathbb{R}^2$ with
$$
\frac{\partial}{\partial x} \frac{\frac{\partial u}{\partial x}}{\sqrt{1 + \left(\frac{\partial u}{\partial x}\right)^2 + \left(\frac{\partial u}{\partial y}\right)^2}} +
\frac{\partial}{\partial y} \frac{\frac{\partial u}{\partial y}}{\sqrt{1 + \left(\frac{\partial u}{\partial x}\right)^2 + \left(\frac{\partial u}{\partial y}\right)^2}}=0,
$$
then there are numbers $a$, $b$, and $c$ with $u(x, y) = ax + by + c$.
In contrast to the earlier examples, this PDE is nonlinear, owing to the square roots and the squares. A linear PDE is one such that, if it is homogeneous, the sum of any two solutions is also a solution, and any constant multiple of any solution is also a solution.
## Definition
A partial differential equation is an equation that involves an unknown function of
$$
n\geq 2
$$
variables and (some of) its partial derivatives. That is, for the unknown function
$$
u : U \rightarrow \mathbb{R},
$$
of variables
$$
x = (x_1,\dots,x_n)
$$
belonging to the open subset
$$
U
$$
of
$$
\mathbb{R}^n
$$
, the
$$
k^{th}
$$
-order partial differential equation is defined as
$$
F[D^{k} u, D^{k-1}u,\dots, D u, u, x]=0,
$$
where
$$
F: \mathbb{R}^{n^{k}}\times \mathbb{R}^{n^{k-1}}\dots \times \mathbb{R}^{n} \times \mathbb{R} \times U \rightarrow \mathbb{R},
$$
and $D$ is the partial derivative operator.
### Notation
When writing PDEs, it is common to denote partial derivatives using subscripts.
For example:
$$
u_x = \frac{\partial u}{\partial x},\quad u_{xx} = \frac{\partial^2 u}{\partial x^2},\quad u_{xy} = \frac{\partial^2 u}{\partial y\, \partial x} = \frac{\partial}{\partial y } \left(\frac{\partial u}{\partial x}\right).
$$
In the general situation that $u$ is a function of $n$ variables, then $u_i$ denotes the first partial derivative relative to the $i$-th input, $u_{ij}$ denotes the second partial derivative relative to the $i$-th and $j$-th inputs, and so on.
The Greek letter $\Delta$ denotes the Laplace operator; if $u$ is a function of $n$ variables, then
$$
\Delta u = u_{11} + u_{22} + \cdots + u_{nn}.
$$
In the physics literature, the Laplace operator is often denoted by $\nabla^2$; in the mathematics literature, $\nabla^2 u$ may also denote the Hessian matrix of $u$.
## Classification
### Linear and nonlinear equations
A PDE is called linear if it is linear in the unknown and its derivatives. For example, for a function $u$ of $x$ and $y$, a second order linear PDE is of the form
$$
a_1(x,y)u_{xx} + a_2(x,y)u_{xy} + a_3(x,y)u_{yx} + a_4(x,y)u_{yy} + a_5(x,y)u_x + a_6(x,y)u_y + a_7(x,y)u = f(x,y)
$$
where $a_i$ and $f$ are functions of the independent variables $x$ and $y$ only. (Often the mixed-partial derivatives $u_{xy}$ and $u_{yx}$ will be equated, but this is not required for the discussion of linearity.)
If the $a_i$ are constants (independent of $x$ and $y$) then the PDE is called linear with constant coefficients.
If $f$ is zero everywhere then the linear PDE is homogeneous, otherwise it is inhomogeneous. (This is separate from asymptotic homogenization, which studies the effects of high-frequency oscillations in the coefficients upon solutions to PDEs.)
Nearest to linear PDEs are semi-linear PDEs, where only the highest order derivatives appear as linear terms, with coefficients that are functions of the independent variables. The lower order derivatives and the unknown function may appear arbitrarily.
For example, a general second order semi-linear PDE in two variables is
$$
a_1(x,y)u_{xx} + a_2(x,y)u_{xy} + a_3(x,y)u_{yx} + a_4(x,y)u_{yy} + f(u_x, u_y, u, x, y) = 0
$$
In a quasilinear PDE the highest order derivatives likewise appear only as linear terms, but with coefficients possibly functions of the unknown and lower-order derivatives:
$$
a_1(u_x, u_y, u, x, y)u_{xx} + a_2(u_x, u_y, u, x, y)u_{xy} + a_3(u_x, u_y, u, x, y)u_{yx} + a_4(u_x, u_y, u, x, y)u_{yy} + f(u_x, u_y, u, x, y) = 0
$$
Many of the fundamental PDEs in physics are quasilinear, such as the Einstein equations of general relativity and the Navier–Stokes equations describing fluid motion.
A PDE without any linearity properties is called fully nonlinear, and possesses nonlinearities on one or more of the highest-order derivatives.
An example is the Monge–Ampère equation, which arises in differential geometry.
### Second order equations
The elliptic/parabolic/hyperbolic classification provides a guide to appropriate initial- and boundary conditions and to the smoothness of the solutions. Assuming $u_{xy} = u_{yx}$, the general linear second-order PDE in two independent variables has the form
$$
Au_{xx} + 2Bu_{xy} + Cu_{yy} + \cdots \mbox{(lower order terms)} = 0,
$$
where the coefficients $A$, $B$, $C$, ... may depend upon $x$ and $y$. If $A^2 + B^2 + C^2 > 0$ over a region of the $xy$-plane, the PDE is second-order in that region. This form is analogous to the equation for a conic section:
$$
Ax^2 + 2Bxy + Cy^2 + \cdots = 0.
$$
More precisely, replacing $\partial_x$ by $X$, and likewise for other variables (formally this is done by a Fourier transform), converts a constant-coefficient PDE into a polynomial of the same degree, with the terms of the highest degree (a homogeneous polynomial, here a quadratic form) being most significant for the classification.
Just as one classifies conic sections and quadratic forms into parabolic, hyperbolic, and elliptic based on the discriminant $B^2 - 4AC$, the same can be done for a second-order PDE at a given point. However, the discriminant in a PDE is given by $B^2 - AC$ due to the convention of the $xy$ term being $2B$ rather than $B$; formally, the discriminant (of the associated quadratic form) is $(2B)^2 - 4AC = 4(B^2 - AC)$, with the factor of 4 dropped for simplicity.
1. $B^2 - AC < 0$ (elliptic partial differential equation): Solutions of elliptic PDEs are as smooth as the coefficients allow, within the interior of the region where the equation and solutions are defined. For example, solutions of Laplace's equation are analytic within the domain where they are defined, but solutions may assume boundary values that are not smooth.
The motion of a fluid at subsonic speeds can be approximated with elliptic PDEs, and the Euler–Tricomi equation is elliptic where $x < 0$. By change of variables, the equation can always be expressed in the form:
$$
u_{xx} + u_{yy} + \cdots = 0 ,
$$
where $x$ and $y$ correspond to changed variables. This justifies the Laplace equation as an example of this type.
1. $B^2 - AC = 0$ (parabolic partial differential equation): Equations that are parabolic at every point can be transformed into a form analogous to the heat equation by a change of independent variables. Solutions smooth out as the transformed time variable increases. The Euler–Tricomi equation has parabolic type on the line where $x = 0$. By change of variables, the equation can always be expressed in the form:
$$
u_{xx} + \cdots = 0,
$$
where $x$ corresponds to changed variables. This justifies the heat equation, which is of the form
$$
u_t - u_{xx} + \cdots = 0
$$
, as an example of this type.
1. $B^2 - AC > 0$ (hyperbolic partial differential equation): Hyperbolic equations retain any discontinuities of functions or derivatives in the initial data. An example is the wave equation. The motion of a fluid at supersonic speeds can be approximated with hyperbolic PDEs, and the Euler–Tricomi equation is hyperbolic where $x > 0$. By change of variables, the equation can always be expressed in the form:
$$
u_{xx} - u_{yy} + \cdots = 0,
$$
where $x$ and $y$ correspond to changed variables. This justifies the wave equation as an example of this type.
If there are $n$ independent variables $x_1, x_2, \ldots, x_n$, a general linear partial differential equation of second order has the form
$$
L u =\sum_{i=1}^n\sum_{j=1}^n a_{i,j} \frac{\partial^2 u}{\partial x_i \partial x_j} \quad+ \text{lower-order terms} = 0.
$$
The classification depends upon the signature of the eigenvalues of the coefficient matrix $a_{i,j}$.
1. Elliptic: the eigenvalues are all positive or all negative.
1. Parabolic: the eigenvalues are all positive or all negative, except one that is zero.
1. Hyperbolic: there is only one negative eigenvalue and all the rest are positive, or there is only one positive eigenvalue and all the rest are negative.
1. Ultrahyperbolic: there is more than one positive eigenvalue and more than one negative eigenvalue, and there are no zero eigenvalues.
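A minimal sketch of this classification by eigenvalue signature, applied to the second-order coefficient matrices of the Laplace, heat, and wave equations in two variables, with $y$ playing the role of time for the latter two (these example matrices are illustrative choices, not taken from the text):
```python
import numpy as np

def classify(A, tol=1e-12):
    """Classify a symmetric coefficient matrix a_{i,j} by the signs of its eigenvalues."""
    ev = np.linalg.eigvalsh(np.asarray(A, dtype=float))
    pos, neg, zero = np.sum(ev > tol), np.sum(ev < -tol), np.sum(abs(ev) <= tol)
    if zero == 0 and (pos == 0 or neg == 0):
        return "elliptic"
    if zero == 1 and (pos == 0 or neg == 0):
        return "parabolic"
    if zero == 0 and min(pos, neg) == 1:
        return "hyperbolic"
    if zero == 0 and pos > 1 and neg > 1:
        return "ultrahyperbolic"
    return "degenerate/other"

print(classify([[1, 0], [0, 1]]))    # Laplace equation u_xx + u_yy = 0 -> elliptic
print(classify([[-1, 0], [0, 0]]))   # heat equation u_y - u_xx = 0     -> parabolic
print(classify([[-1, 0], [0, 1]]))   # wave equation u_yy - u_xx = 0    -> hyperbolic
```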
The theory of elliptic, parabolic, and hyperbolic equations has been studied for centuries, largely centered around or based upon the standard examples of the Laplace equation, the heat equation, and the wave equation.
However, the classification only depends on the linearity of the second-order terms and is therefore applicable to semi- and quasilinear PDEs as well. The basic types also extend to hybrids such as the Euler–Tricomi equation, which varies from elliptic to hyperbolic in different regions of the domain, as well as to higher-order PDEs, but such knowledge is more specialized.
### Systems of first-order equations and characteristic surfaces
The classification of partial differential equations can be extended to systems of first-order equations, where the unknown $u$ is now a vector with $m$ components, and the coefficient matrices $A_\nu$ are $m$ by $m$ matrices for $\nu = 1, 2, \ldots, n$. The partial differential equation takes the form
$$
Lu = \sum_{\nu=1}^{n} A_\nu \frac{\partial u}{\partial x_\nu} + B=0,
$$
where the coefficient matrices $A_\nu$ and the vector $B$ may depend upon $x$ and $u$.
If a hypersurface $S$ is given in the implicit form
$$
\varphi(x_1, x_2, \ldots, x_n)=0,
$$
where $\varphi$ has a non-zero gradient, then $S$ is a characteristic surface for the operator $L$ at a given point if the characteristic form vanishes:
$$
Q\left(\frac{\partial\varphi}{\partial x_1}, \ldots, \frac{\partial\varphi}{\partial x_n}\right) = \det\left[\sum_{\nu=1}^n A_\nu \frac{\partial \varphi}{\partial x_\nu}\right] = 0.
$$
The geometric interpretation of this condition is as follows: if data for $u$ are prescribed on the surface $S$, then it may be possible to determine the normal derivative of $u$ on $S$ from the differential equation. If the data on $S$ and the differential equation determine the normal derivative of $u$ on $S$, then $S$ is non-characteristic. If the data on $S$ and the differential equation do not determine the normal derivative of $u$ on $S$, then the surface is characteristic, and the differential equation restricts the data on $S$: the differential equation is internal to $S$.
1. A first-order system $Lu = 0$ is elliptic if no surface is characteristic for $L$: the values of $u$ on $S$ and the differential equation always determine the normal derivative of $u$ on $S$.
1. A first-order system is hyperbolic at a point if there is a spacelike surface $S$ with normal $\xi$ at that point. This means that, given any non-trivial vector $\eta$ orthogonal to $\xi$, and a scalar multiplier $\lambda$, the equation $Q(\lambda \xi + \eta) = 0$ has $m$ real roots $\lambda_1, \lambda_2, \ldots, \lambda_m$. The system is strictly hyperbolic if these roots are always distinct. The geometrical interpretation of this condition is as follows: the characteristic form $Q(\zeta) = 0$ defines a cone (the normal cone) with homogeneous coordinates $\zeta$. In the hyperbolic case, this cone has $m$ sheets, and the axis $\zeta = \lambda \xi$ runs inside these sheets: it does not intersect any of them. But when displaced from the origin by $\eta$, this axis intersects every sheet. In the elliptic case, the normal cone has no real sheets.
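As a small symbolic illustration (an assumed rewriting, not from the text): the one-dimensional wave equation $u_{tt} = c^2 u_{xx}$ can be written as a first-order system for $(u_t, u_x)$, and its characteristic form factors into two distinct real families, the characteristics $x \pm ct = \text{const}$, as expected for a strictly hyperbolic system:
```python
import sympy as sp

c, pt, px = sp.symbols('c phi_t phi_x', positive=True)

# u_tt = c**2 u_xx as a first-order system for w = (u_t, u_x):
#   w_t + A_x w_x = 0  with  A_t = I  and  A_x = [[0, -c**2], [-1, 0]]
A_t = sp.eye(2)
A_x = sp.Matrix([[0, -c**2], [-1, 0]])

Q = (A_t * pt + A_x * px).det()    # characteristic form in (phi_t, phi_x)
print(sp.factor(Q))                # phi_t**2 - c**2*phi_x**2, i.e. (phi_t - c phi_x)(phi_t + c phi_x)
```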
## Analytical solutions
### Separation of variables
Linear PDEs can be reduced to systems of ordinary differential equations by the important technique of separation of variables. This technique rests on a feature of solutions to differential equations: if one can find any solution that solves the equation and satisfies the boundary conditions, then it is the solution (this also applies to ODEs).
We assume as an ansatz that the dependence of a solution on the parameters space and time can be written as a product of terms that each depend on a single parameter, and then see if this can be made to solve the problem.
In the method of separation of variables, one reduces a PDE to a PDE in fewer variables, which is an ordinary differential equation if it is in one variable; these are in turn easier to solve.
This is possible for simple PDEs, which are called separable partial differential equations, and the domain is generally a rectangle (a product of intervals). Separable PDEs correspond to diagonal matrices – thinking of "the value for fixed $x$" as a coordinate, each coordinate can be understood separately.
This generalizes to the method of characteristics, and is also used in integral transforms.
### Method of characteristics
The characteristic surface in $n = 2$-dimensional space is called a characteristic curve.
In special cases, one can find characteristic curves on which the first-order PDE reduces to an ODE – changing coordinates in the domain to straighten these curves allows separation of variables, and is called the method of characteristics.
More generally, applying the method to first-order PDEs in higher dimensions, one may find characteristic surfaces.
### Integral transform
An integral transform may transform the PDE to a simpler one, in particular, a separable PDE. This corresponds to diagonalizing an operator.
An important example of this is Fourier analysis, which diagonalizes the heat equation using the eigenbasis of sinusoidal waves.
If the domain is finite or periodic, an infinite sum of solutions such as a Fourier series is appropriate, but an integral of solutions such as a Fourier integral is generally required for infinite domains. The solution for a point source for the heat equation given above is an example of the use of a Fourier integral.
### Change of variables
Often a PDE can be reduced to a simpler form with a known solution by a suitable change of variables.
Often a PDE can be reduced to a simpler form with a known solution by a suitable change of variables. For example, the Black–Scholes equation
$$
\frac{\partial V}{\partial t} + \tfrac{1}{2} \sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + rS \frac{\partial V}{\partial S} - rV = 0
$$
is reducible to the heat equation
$$
\frac{\partial u}{\partial \tau} = \frac{\partial^2 u}{\partial x^2}
$$
by the change of variables
$$
\begin{align}
V(S,t) &= v(x,\tau),\\[5px]
x &= \ln\left(S \right),\\[5px]
\tau &= \tfrac{1}{2} \sigma^2 (T - t),\\[5px]
v(x,\tau) &= e^{-\alpha x-\beta\tau} u(x,\tau).
\end{align}
$$
### Fundamental solution
Inhomogeneous equations can often be solved (and, for constant-coefficient PDEs, always solved) by finding the fundamental solution (the solution for a point source
$$
P(D)u=\delta
$$
), then taking the convolution of the fundamental solution with the inhomogeneous term (the source) to obtain a solution.
This is analogous in signal processing to understanding a filter by its impulse response.
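A standard example (stated here only for illustration, with the sign convention matching the weak-solution example later in this article): for the three-dimensional Poisson equation,
$$
-\Delta E = \delta, \qquad E(x)=\frac{1}{4\pi\,|x|}, \qquad
u(x) = (E*f)(x) = \int_{\R^3}\frac{f(y)}{4\pi\,|x-y|}\,dy,
$$
and applying the operator to the convolution gives
$$
-\Delta u = (-\Delta E)*f = \delta * f = f
$$
for suitably decaying f.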
### Superposition principle
The superposition principle applies to any linear system, including linear systems of PDEs. A common visualization of this concept is the interaction of two waves in phase being combined to result in a greater amplitude, for example sin x + sin x = 2 sin x. The same principle can be observed in PDEs where the solutions may be real or complex and additive. If u1 and u2 are solutions of a linear PDE in some function space R, then u = c1u1 + c2u2 with any constants c1 and c2 is also a solution of that PDE in the same function space.
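For instance (the Laplace equation here is only a generic linear example), if two functions satisfy the Laplace equation, then so does any linear combination:
$$
\Delta u_1 = 0, \quad \Delta u_2 = 0
\quad\Longrightarrow\quad
\Delta(c_1 u_1 + c_2 u_2) = c_1\,\Delta u_1 + c_2\,\Delta u_2 = 0 .
$$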
### Methods for non-linear equations
There are no generally applicable analytical methods to solve nonlinear PDEs. Still, existence and uniqueness results (such as the Cauchy–Kowalevski theorem) are often possible, as are proofs of important qualitative and quantitative properties of solutions (getting these results is a major part of analysis).
Nevertheless, some techniques can be used for several types of equations. The h-principle is the most powerful method to solve underdetermined equations. The Riquier–Janet theory is an effective method for obtaining information about many analytic overdetermined systems.
The method of characteristics can be used in some very special cases to solve nonlinear partial differential equations.
In some cases, a PDE can be solved via perturbation analysis in which the solution is considered to be a correction to an equation with a known solution. Alternatives are numerical analysis techniques from simple finite difference schemes to the more mature multigrid and finite element methods. Many interesting problems in science and engineering are solved in this way using computers, sometimes high performance supercomputers.
### Lie group method
From 1870 Sophus Lie's work put the theory of differential equations on a more satisfactory foundation. He showed that the integration theories of the older mathematicians can, by the introduction of what are now called Lie groups, be referred to a common source; and that ordinary differential equations which admit the same infinitesimal transformations present comparable difficulties of integration. He also emphasized the subject of transformations of contact.
A general approach to solving PDEs uses the symmetry property of differential equations, the continuous infinitesimal transformations of solutions to solutions (Lie theory). Continuous group theory, Lie algebras and differential geometry are used to understand the structure of linear and nonlinear partial differential equations for generating integrable equations, to find their Lax pairs, recursion operators, and Bäcklund transforms, and finally to find exact analytic solutions to the PDE.
Symmetry methods have been recognized as a means to study differential equations arising in mathematics, physics, engineering, and many other disciplines.
### Semi-analytical methods
The Adomian decomposition method, the Lyapunov artificial small parameter method, and He's homotopy perturbation method are all special cases of the more general homotopy analysis method. These are series expansion methods, and except for the Lyapunov method, are independent of small physical parameters as compared to the well-known perturbation theory, thus giving these methods greater flexibility and solution generality.
## Numerical solutions
The three most widely used numerical methods to solve PDEs are the finite element method (FEM), finite volume methods (FVM) and finite difference methods (FDM), as well as other kinds of methods called meshfree methods, which were made to solve problems where the aforementioned methods are limited. The FEM has a prominent position among these methods and especially its exceptionally efficient higher-order version hp-FEM. Other hybrid versions of FEM and meshfree methods include the generalized finite element method (GFEM), extended finite element method (XFEM), spectral finite element method (SFEM), meshfree finite element method, discontinuous Galerkin finite element method (DGFEM), element-free Galerkin method (EFGM), interpolating element-free Galerkin method (IEFGM), etc.
### Finite element method
The finite element method (FEM) (its practical application often known as finite element analysis (FEA)) is a numerical technique for finding approximate solutions of partial differential equations (PDE) as well as of integral equations. The solution approach is based either on eliminating the differential equation completely (steady state problems), or rendering the PDE into an approximating system of ordinary differential equations, which are then numerically integrated using standard techniques such as Euler's method, Runge–Kutta, etc.
### Finite difference method
Finite-difference methods are numerical methods for approximating the solutions to differential equations using finite difference equations to approximate derivatives.
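As a minimal sketch rather than a production solver (the grid sizes, diffusion coefficient, and initial condition below are arbitrary illustrative choices), an explicit finite-difference scheme for the one-dimensional heat equation replaces the derivatives with difference quotients on a grid:

```python
import numpy as np

# Explicit (forward-time, centered-space) scheme for u_t = k * u_xx
# on 0 <= x <= 1 with u = 0 at both ends.
k = 1.0                      # diffusion coefficient (illustrative value)
nx, nt = 51, 1000            # number of grid points and time steps
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / k         # respects the stability condition k*dt/dx^2 <= 1/2

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)        # initial condition (illustrative choice)

for _ in range(nt):
    # centered second difference approximates u_xx at the interior points
    u[1:-1] += k * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    u[0] = u[-1] = 0.0       # Dirichlet boundary conditions

# For this initial condition the exact solution is sin(pi*x)*exp(-k*pi^2*t),
# so the numerical result can be checked directly.
print(u.max())
```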
### Finite volume method
Similar to the finite difference method or finite element method, values are calculated at discrete places on a meshed geometry. "Finite volume" refers to the small volume surrounding each node point on a mesh. In the finite volume method, surface integrals in a partial differential equation that contain a divergence term are converted to volume integrals, using the divergence theorem. These terms are then evaluated as fluxes at the surfaces of each finite volume. Because the flux entering a given volume is identical to that leaving the adjacent volume, these methods conserve mass by design.
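A minimal finite-volume sketch (illustrative only; the linear advection equation, upwind flux, and periodic domain below are generic choices, not taken from a specific problem above) makes the conservation property explicit: each cell average is updated by the difference of the fluxes through its two faces, so the total over all cells is unchanged.

```python
import numpy as np

# First-order upwind finite-volume scheme for u_t + c * u_x = 0
# on a periodic domain, written in flux form so that the total
# "mass" sum(u)*dx is conserved up to round-off.
c = 1.0                              # advection speed (illustrative)
nx = 100
dx = 1.0 / nx
dt = 0.5 * dx / c                    # CFL number 0.5

x = (np.arange(nx) + 0.5) * dx       # cell centers
u = np.exp(-100.0 * (x - 0.5) ** 2)  # initial cell averages (a Gaussian bump)

for _ in range(200):
    flux = c * u                                  # upwind flux at the right face of each cell (c > 0)
    u -= dt / dx * (flux - np.roll(flux, 1))      # difference of fluxes across each cell

print(u.sum() * dx)                  # conserved total, matches the initial integral
```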
### Neural networks
Neural networks can also be used to approximate solutions of PDEs, for example by training a network to minimize the residual of the differential equation together with its initial and boundary conditions (as in physics-informed neural networks).
## Weak solutions
Weak solutions are functions that satisfy the PDE in a sense other than the regular (classical) one. The meaning of this term may differ with context, and one of the most commonly used definitions is based on the notion of distributions.
An example for the definition of a weak solution is as follows:
Consider the boundary-value problem given by:
$$
\begin{align}
Lu&=f \quad\text{in }U,\\
u&=0 \quad \text{on }\partial U,
\end{align}
$$
where
$$
Lu=-\sum_{i,j}\partial_j (a^{ij}\partial_i u)+\sum_{i}b^{i}\partial_i u + cu
$$
denotes a second-order partial differential operator in divergence form.
We say that a function u in H_0^1(U) is a weak solution if
$$
\int_{U} [\sum_{i,j}a^{ij}(\partial_{i}u)(\partial_{j}v)+\sum_{i}b^i (\partial_{i}u) v +cuv]dx=\int_{U} fvdx
$$
for every
$$
v\in H_{0}^{1}(U)
$$
, which can be derived by a formal integration by parts.
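The identity above comes from multiplying the equation by a test function v and integrating by parts (a routine computation, written out here for clarity):
$$
\int_U (Lu)\,v\,dx
= \int_U \Big[-\sum_{i,j}\partial_j\big(a^{ij}\partial_i u\big)\,v
+\sum_i b^i(\partial_i u)\,v + c\,u\,v\Big]\,dx
= \int_U \Big[\sum_{i,j}a^{ij}(\partial_i u)(\partial_j v)
+\sum_i b^i(\partial_i u)\,v + c\,u\,v\Big]\,dx ,
$$
where the boundary term produced by the integration by parts vanishes because v = 0 on the boundary of U (in the trace sense for v in H_0^1(U)).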
An example of a weak solution is as follows:
$$
\phi(x)=-\frac{1}{4\pi} \frac{1}{|x|}
$$
is a weak solution satisfying
$$
\nabla^2 \phi=\delta
\text{ in }\R^3
$$
in the distributional sense, since formally
$$
\int_{\R^3}\nabla^2 \phi(x)\,\psi(x)\,dx=\int_{\R^3} \phi(x)\,\nabla^2 \psi(x)\,dx=\psi(0)\quad\text{for }\psi\in C_{c}^{\infty}(\R^3).
$$
## Theoretical studies
As a branch of pure mathematics, the theoretical study of PDEs focuses on the criteria for a solution to exist and on the properties of solutions; finding an explicit formula is often of secondary interest.
### Well-posedness
Well-posedness refers to a common schematic package of information about a PDE. To say that a PDE is well-posed, one must have:
- an existence and uniqueness theorem, asserting that by the prescription of some freely chosen functions, one can single out one specific solution of the PDE
- a continuity (stability) requirement: by continuously changing the free choices, one continuously changes the corresponding solution
This is, by the necessity of being applicable to several different PDE, somewhat vague. The requirement of "continuity", in particular, is ambiguous, since there are usually many inequivalent means by which it can be rigorously defined. It is, however, somewhat unusual to study a PDE without specifying a way in which it is well-posed.
### Regularity
Regularity refers to the integrability and differentiability of weak solutions, which can often be represented by Sobolev spaces.
This problem arises due to the difficulty in searching for classical solutions. Researchers often tend to find weak solutions at first and then find out whether they are smooth enough to qualify as classical solutions.
Results from functional analysis are often used in this field of study.
In statistics and machine learning, the bias–variance tradeoff describes the relationship between a model's complexity, the accuracy of its predictions, and how well it can make predictions on previously unseen data that were not used to train the model. In general, as we increase the number of tunable parameters in a model, it becomes more flexible, and can better fit a training data set. It is said to have lower error, or bias. However, for more flexible models, there will tend to be greater variance to the model fit each time we take a set of samples to create a new training data set. It is said that there is greater variance in the model's estimated parameters.
The bias–variance dilemma or bias–variance problem is the conflict in trying to simultaneously minimize these two sources of error that prevent supervised learning algorithms from generalizing beyond their training set:
- The bias error is an error from erroneous assumptions in the learning algorithm. High bias can cause an algorithm to miss the relevant relations between features and target outputs (underfitting).
- The variance is an error from sensitivity to small fluctuations in the training set. High variance may result from an algorithm modeling the random noise in the training data (overfitting).
The bias–variance decomposition is a way of analyzing a learning algorithm's expected generalization error with respect to a particular problem as a sum of three terms, the bias, variance, and a quantity called the irreducible error, resulting from noise in the problem itself.
## Motivation
The bias–variance tradeoff is a central problem in supervised learning. Ideally, one wants to choose a model that both accurately captures the regularities in its training data, but also generalizes well to unseen data. Unfortunately, it is typically impossible to do both simultaneously. High-variance learning methods may be able to represent their training set well but are at risk of overfitting to noisy or unrepresentative training data. In contrast, algorithms with high bias typically produce simpler models that may fail to capture important regularities (i.e. underfit) in the data.
It is a common fallacy to assume that complex models must have high variance. High-variance models are "complex" in some sense, but the reverse need not be true. In addition, one has to be careful how to define complexity. In particular, the number of parameters used to describe the model is a poor measure of complexity. This is illustrated by the following example: the model
$$
f_{a,b}(x)=a\sin(bx)
$$
has only two parameters (
$$
a,b
$$
) but it can interpolate any number of points by oscillating with a high enough frequency, resulting in both a high bias and high variance.
An analogy can be made to the relationship between accuracy and precision. Accuracy is one way of quantifying bias and can intuitively be improved by selecting from only local information. Consequently, a sample will appear accurate (i.e. have low bias) under the aforementioned selection conditions, but may result in underfitting. In other words, test data may not agree as closely with training data, which would indicate imprecision and therefore inflated variance. A graphical example would be a straight line fit to data exhibiting quadratic behavior overall. Precision is a description of variance and generally can only be improved by selecting information from a comparatively larger space. The option to select many data points over a broad sample space is the ideal condition for any analysis.
However, intrinsic constraints (whether physical, theoretical, computational, etc.) will always play a limiting role. The limiting case where only a finite number of data points are selected over a broad sample space may result in improved precision and lower variance overall, but may also result in an overreliance on the training data (overfitting). This means that test data would also not agree as closely with the training data, but in this case the reason is inaccuracy or high bias. To borrow from the previous example, the graphical representation would appear as a high-order polynomial fit to the same data exhibiting quadratic behavior. Note that error in each case is measured the same way, but the reason ascribed to the error is different depending on the balance between bias and variance. To mitigate how much information is used from neighboring observations, a model can be smoothed via explicit regularization, such as shrinkage.
## Bias–variance decomposition of mean squared error
Suppose that we have a training set consisting of a set of points
$$
x_1, \dots, x_n
$$
and real-valued labels
$$
y_i
$$
associated with the points
$$
x_i
$$
. We assume that the data is generated by a function
$$
f(x)
$$
such as
$$
y = f(x) + \varepsilon
$$
, where the noise,
$$
\varepsilon
$$
, has zero mean and variance
$$
\sigma^2
$$
. That is,
$$
y_i = f(x_i) + \varepsilon_i
$$
, where
$$
\varepsilon_i
$$
is a noise sample.
We want to find a function
$$
\hat{f}(x;D)
$$
, that approximates the true function
$$
f(x)
$$
as well as possible, by means of some learning algorithm based on a training dataset (sample)
$$
D=\{(x_1,y_1) \dots, (x_n, y_n)\}
$$
. We make "as well as possible" precise by measuring the mean squared error between
$$
y
$$
and
$$
\hat{f}(x;D)
$$
: we want
$$
(y - \hat{f}(x;D))^2
$$
to be minimal, both for
$$
x_1, \dots, x_n
$$
and for points outside of our sample. Of course, we cannot hope to do so perfectly, since the
$$
y_i
$$
contain noise
$$
\varepsilon
$$
; this means we must be prepared to accept an irreducible error in any function we come up with.
Finding an
$$
\hat{f}
$$
that generalizes to points outside of the training set can be done with any of the countless algorithms used for supervised learning.
It turns out that whichever function
$$
\hat{f}
$$
we select, we can decompose its expected error on an unseen sample
$$
x
$$
(i.e. conditional to x) as follows:
$$
\mathbb{E}_{D, \varepsilon} \Big[\big(y - \hat{f}(x;D)\big)^2\Big]
= \Big(\operatorname{Bias}_D\big[\hat{f}(x;D)\big] \Big) ^2 + \operatorname{Var}_D\big[\hat{f}(x;D)\big] + \sigma^2
$$
where
$$
\begin{align}
\operatorname{Bias}_D\big[\hat{f}(x;D)\big] &\triangleq \mathbb{E}_D\big[\hat{f}(x;D)- f(x)\big]\\
&= \mathbb{E}_D\big[\hat{f}(x;D)\big] \, - \, f(x)\\
&= \mathbb{E}_D\big[\hat{f}(x;D)\big] \, - \, \mathbb{E}_{y|x}\big[y(x)\big]
\end{align}
$$
and
$$
\operatorname{Var}_D\big[\hat{f}(x;D)\big] \triangleq \mathbb{E}_D \Big[ \big( \mathbb{E}_D[\hat{f}(x;D)] - \hat{f}(x;D) \big)^2 \Big]
$$
and
$$
\sigma^2 = \operatorname{E}_y \Big[ \big( y - \underbrace{f(x)}_{E_{y|x}[y]} \big)^2 \Big]
$$
The expectation ranges over different choices of the training set
$$
D=\{(x_1,y_1) \dots, (x_n, y_n)\}
$$
, all sampled from the same joint distribution
$$
P(x,y)
$$
which can for example be done via bootstrapping.
The three terms represent:
- the square of the bias of the learning method, which can be thought of as the error caused by the simplifying assumptions built into the method. E.g., when approximating a non-linear function
$$
f(x)
$$
using a learning method for linear models, there will be error in the estimates
$$
\hat{f}(x)
$$
due to this assumption;
- the variance of the learning method, or, intuitively, how much the learning method
$$
\hat{f}(x)
$$
will move around its mean;
- the irreducible error
$$
\sigma^2
$$
.
Since all three terms are non-negative, the irreducible error forms a lower bound on the expected error on unseen samples.
The more complex the model
$$
\hat{f}(x)
$$
is, the more data points it will capture, and the lower the bias will be. However, complexity will make the model "move" more to capture the data points, and hence its variance will be larger.
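The decomposition can also be estimated empirically. The sketch below is only an illustration under simple assumptions (a known true function, Gaussian noise, and polynomial least-squares fits, none of which come from the text above): it repeatedly draws training sets, fits a rigid and a flexible model, and compares the estimated squared bias and variance at a single test point.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):                      # "true" function (assumed known for the experiment)
    return np.sin(2.0 * np.pi * x)

sigma = 0.3                    # noise standard deviation
x0 = 0.3                       # test point at which bias and variance are estimated
n_train, n_sets = 30, 2000     # training-set size and number of resampled training sets

for degree in (1, 9):          # a rigid model vs. a flexible one
    preds = np.empty(n_sets)
    for s in range(n_sets):
        x = rng.uniform(0.0, 1.0, n_train)
        y = f(x) + sigma * rng.normal(size=n_train)
        coeffs = np.polyfit(x, y, degree)          # least-squares polynomial fit
        preds[s] = np.polyval(coeffs, x0)          # prediction \hat{f}(x0; D) for this training set D
    bias2 = (preds.mean() - f(x0)) ** 2            # (E_D[\hat f] - f)^2
    variance = preds.var()                         # Var_D[\hat f]
    print(f"degree {degree}: bias^2 = {bias2:.4f}, variance = {variance:.4f}")
```

Typically the degree-1 fit shows a large squared bias and small variance, while the degree-9 fit shows the opposite, matching the qualitative description above.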
### Derivation
The derivation of the bias–variance decomposition for squared error proceeds as follows. For convenience, we drop the
$$
D
$$
subscript in the following lines, such that
$$
\hat{f}(x;D) = \hat{f}(x)
$$
.
Let us write the mean-squared error of our model:
$$
\begin{align}
\text{MSE} &\triangleq \mathbb{E}\Big[\big(y - \hat{f}(x)\big)^2\Big]\\
&= \mathbb{E}\Big[\big(f(x) + \varepsilon - \hat{f}(x)\big)^2\Big] && \text{since } y \triangleq f(x) + \varepsilon\\
&= \mathbb{E}\Big[\big(f(x) - \hat{f}(x)\big)^2\Big] \, + \, 2 \ \mathbb{E}\Big[ \big(f(x) - \hat{f}(x)\big) \varepsilon \Big] \, + \, \mathbb{E}[\varepsilon^2]
\end{align}
$$
We can show that the second term of this equation is null:
$$
\begin{align}
\mathbb{E}\Big[ \big(f(x) - \hat{f}(x)\big) \varepsilon \Big] &= \mathbb{E} \big[ f(x) - \hat{f}(x) \big] \ \mathbb{E} \big[ \varepsilon \big] && \text{since } \varepsilon \text{ is independent from } x\\
&= 0 && \text{since } \mathbb{E} \big[ \varepsilon \big] = 0
\end{align}
$$
Moreover, the third term of this equation is nothing but
$$
\sigma^2
$$
, the variance of
$$
\varepsilon
$$
.
Let us now expand the remaining term:
$$
\begin{align}
\mathbb{E}\Big[\big(f(x) - \hat{f}(x)\big)^2\Big] &= \mathbb{E}\Big[\big(f(x) - \mathbb{E} \big[ \hat{f}(x) \big] + \mathbb{E} \big[ \hat{f}(x) \big] - \hat{f}(x)\big)^2\Big]\\
& = {\color{Blue} \mathbb{E}\Big[ \big( f(x) - \mathbb{E} \big[ \hat{f}(x) \big] \big)^2 \Big]}
\, + \, 2 \ {\color{PineGreen} \mathbb{E} \Big[ \big( f(x) - \mathbb{E} \big[ \hat{f}(x) \big] \big) \big( \mathbb{E} \big[ \hat{f}(x) \big] - \hat{f}(x) \big) \Big]}
\, + \, \mathbb{E} \Big[ \big( \mathbb{E} \big[ \hat{f}(x) \big] - \hat{f}(x) \big)^2 \Big]
\end{align}
$$
We show that:
$$
\begin{align}
{\color{Blue} \mathbb{E}\Big[ \big( f(x) - \mathbb{E} \big[ \hat{f}(x) \big] \big)^2 \Big]} &= \mathbb{E} \big[ f(x) ^2 \big] \, - \, 2 \ \mathbb{E} \Big[ f(x) \ \mathbb{E} \big[ \hat{f}(x) \big] \Big] \, + \, \mathbb{E} \Big[ \mathbb{E} \big[ \hat{f}(x) \big]^2 \Big]\\
&= f(x)^2 \, - \, 2 \ f(x) \ \mathbb{E} \big[ \hat{f}(x) \big] \, + \, \mathbb{E} \big[ \hat{f}(x) \big]^2\\
&= \Big( f(x) - \mathbb{E} \big[ \hat{f}(x) \big] \Big)^2
\end{align}
$$
This last series of equalities comes from the fact that
$$
f(x)
$$
is not a random variable, but a fixed, deterministic function of
$$
x
$$
. Therefore,
$$
\mathbb{E} \big[ f(x) \big] = f(x)
$$
. Similarly
$$
\mathbb{E} \big[ f(x)^2 \big] = f(x)^2
$$
, and
$$
\mathbb{E} \Big[ f(x) \ \mathbb{E} \big[ \hat{f}(x) \big] \Big] = f(x) \ \mathbb{E} \Big[ \ \mathbb{E} \big[ \hat{f}(x) \big] \Big] = f(x) \ \mathbb{E} \big[ \hat{f}(x) \big]
$$
.
Using the same reasoning, we can expand the second term and show that it is null:
$$
\begin{align}
{\color{PineGreen} \mathbb{E} \Big[ \big( f(x) - \mathbb{E} \big[ \hat{f}(x) \big] \big) \big( \mathbb{E} \big[ \hat{f}(x) \big] - \hat{f}(x) \big) \Big]} &= \mathbb{E} \Big[ f(x) \ \mathbb{E} \big[ \hat{f}(x) \big] \, - \, f(x) \hat{f}(x) \, - \, \mathbb{E} \big[ \hat{f}(x) \big]^2 + \mathbb{E} \big[ \hat{f}(x) \big] \ \hat{f}(x) \Big]\\
&= f(x) \ \mathbb{E} \big[ \hat{f}(x) \big] \, - \, f(x) \ \mathbb{E} \big[ \hat{f}(x) \big] \, - \, \mathbb{E} \big[ \hat{f}(x) \big]^2 \, + \, \mathbb{E} \big[ \hat{f}(x) \big]^2\\
&= 0
\end{align}
$$
Eventually, we plug our derivations back into the original equation, and identify each term:
$$
\begin{align}
\text{MSE} &= \Big( f(x) - \mathbb{E} \big[ \hat{f}(x) \big] \Big)^2 + \mathbb{E} \Big[ \big( \mathbb{E} \big[ \hat{f}(x) \big] - \hat{f}(x) \big)^2 \Big] + \sigma^2\\
&= \operatorname{Bias} \big( \hat{f}(x) \big)^2 \, + \, \operatorname{Var} \big[ \hat{f}(x) \big] \, + \, \sigma^2
\end{align}
$$
Finally, the MSE loss function (or negative log-likelihood) is obtained by taking the expectation value over
$$
x\sim P
$$
:
$$
\text{MSE} = \mathbb{E}_x\bigg\{\operatorname{Bias}_D[\hat{f}(x;D)]^2+\operatorname{Var}_D\big[\hat{f}(x;D)\big]\bigg\} + \sigma^2.
$$
## Approaches
Dimensionality reduction and feature selection can decrease variance by simplifying models. Similarly, a larger training set tends to decrease variance. Adding features (predictors) tends to decrease bias, at the expense of introducing additional variance. Learning algorithms typically have some tunable parameters that control bias and variance; for example,
- linear and Generalized linear models can be regularized to decrease their variance at the cost of increasing their bias.
- In artificial neural networks, the variance increases and the bias decreases as the number of hidden units increase, although this classical assumption has been the subject of recent debate. Like in GLMs, regularization is typically applied.
- In k-nearest neighbor models, a high value of k leads to high bias and low variance (see below).
- In instance-based learning, regularization can be achieved by varying the mixture of prototypes and exemplars.
- In decision trees, the depth of the tree determines the variance. Decision trees are commonly pruned to control variance.
One way of resolving the trade-off is to use mixture models and ensemble learning. For example, boosting combines many "weak" (high bias) models in an ensemble that has lower bias than the individual models, while bagging combines "strong" learners in a way that reduces their variance.
Model validation methods such as cross-validation can be used to tune models so as to optimize the trade-off.
### k-nearest neighbors
In the case of k-nearest neighbors regression, when the expectation is taken over the possible labeling of a fixed training set, a closed-form expression exists that relates the bias–variance decomposition to the parameter k:
$$
\mathbb{E}\left[(y - \hat{f}(x))^2\mid X=x\right] = \left( f(x) - \frac{1}{k}\sum_{i=1}^k f(N_i(x)) \right)^2 + \frac{\sigma^2}{k} + \sigma^2
$$
where N_1(x), ..., N_k(x) are the k nearest neighbors of x in the training set. The bias (first term) is a monotone rising function of k, while the variance (second term) drops off as k is increased. In fact, under "reasonable assumptions" the bias of the first-nearest neighbor (1-NN) estimator vanishes entirely as the size of the training set approaches infinity.
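The effect of k can be checked numerically. The sketch below makes purely illustrative assumptions (a one-dimensional input, a known smooth target, Gaussian noise, and a hand-rolled k-NN average): it holds the training inputs fixed, resamples the labels, and estimates the two terms of the expression above at a single query point.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):                          # true regression function (assumed for the experiment)
    return np.sin(2.0 * np.pi * x)

sigma = 0.3
n, n_repeats = 50, 5000
X = np.sort(rng.uniform(0.0, 1.0, n))    # fixed training inputs
x0 = 0.5                                 # query point
neighbors = np.argsort(np.abs(X - x0))   # indices of X ordered by distance to x0

for k in (1, 5, 25):
    idx = neighbors[:k]
    preds = np.empty(n_repeats)
    for r in range(n_repeats):
        y = f(X) + sigma * rng.normal(size=n)     # resample labels, inputs stay fixed
        preds[r] = y[idx].mean()                  # k-NN regression estimate at x0
    bias2 = (f(X[idx]).mean() - f(x0)) ** 2       # first term of the formula above
    variance = preds.var()                        # approximately sigma^2 / k
    print(f"k={k:2d}: bias^2 = {bias2:.4f}, variance = {variance:.4f}")
```

The estimated variance should be close to sigma^2 / k, while the squared bias grows as k pulls in neighbors farther from the query point.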
## Applications
### In regression
The bias–variance decomposition forms the conceptual basis for regression regularization methods such as LASSO and ridge regression. Regularization methods introduce bias into the regression solution that can reduce variance considerably relative to the ordinary least squares (OLS) solution. Although the OLS solution provides non-biased regression estimates, the lower variance solutions produced by regularization techniques provide superior MSE performance.
### In classification
The bias–variance decomposition was originally formulated for least-squares regression. For the case of classification under the 0-1 loss (misclassification rate), it is possible to find a similar decomposition, with the caveat that the variance term becomes dependent on the target label. Alternatively, if the classification problem can be phrased as probabilistic classification, then the expected cross-entropy can instead be decomposed to give bias and variance terms with the same semantics but taking a different form.
It has been argued that as training data increases, the variance of learned models will tend to decrease, and hence that as training data quantity increases, error is minimised by methods that learn models with lesser bias, and that conversely, for smaller training data quantities it is ever more important to minimise variance.
### In reinforcement learning
Even though the bias–variance decomposition does not directly apply in reinforcement learning, a similar tradeoff can also characterize generalization. When an agent has limited information on its environment, the suboptimality of an RL algorithm can be decomposed into the sum of two terms: a term related to an asymptotic bias and a term due to overfitting. The asymptotic bias is directly related to the learning algorithm (independently of the quantity of data) while the overfitting term comes from the fact that the amount of data is limited.
### In Monte Carlo methods
While in traditional Monte Carlo methods the bias is typically zero, modern approaches, such as Markov chain Monte Carlo, are only asymptotically unbiased, at best. Convergence diagnostics can be used to control bias via burn-in removal, but due to a limited computational budget, a bias–variance trade-off arises, leading to a wide range of approaches in which a controlled bias is accepted if this allows the variance, and hence the overall estimation error, to be dramatically reduced.
### In human learning
While widely discussed in the context of machine learning, the bias–variance dilemma has been examined in the context of human cognition, most notably by Gerd Gigerenzer and co-workers in the context of learned heuristics. They have argued (see references below) that the human brain resolves the dilemma in the case of the typically sparse, poorly-characterized training-sets provided by experience by adopting high-bias/low variance heuristics. This reflects the fact that a zero-bias approach has poor generalizability to new situations, and also unreasonably presumes precise knowledge of the true state of the world. The resulting heuristics are relatively simple, but produce better inferences in a wider variety of situations.
Geman et al. argue that the bias–variance dilemma implies that abilities such as generic object recognition cannot be learned from scratch, but require a certain degree of "hard wiring" that is later tuned by experience. This is because model-free approaches to inference require impractically large training sets if they are to avoid high variance.
Categorization is a type of cognition involving conceptual differentiation between characteristics of conscious experience, such as objects, events, or ideas. It involves the abstraction and differentiation of aspects of experience by sorting and distinguishing between groupings, through classification or typification on the basis of traits, features, similarities or other criteria that are universal to the group. Categorization is considered one of the most fundamental cognitive abilities, and it is studied particularly by psychology and cognitive linguistics.
Categorization is sometimes considered synonymous with classification (cf., Classification synonyms). Categorization and classification allow humans to organize things, objects, and ideas that exist around them and simplify their understanding of the world. Categorization is something that humans and other organisms do: "doing the right thing with the right kind of thing." The activity of categorizing things can be nonverbal or verbal. For humans, both concrete objects and abstract ideas are recognized, differentiated, and understood through categorization. Objects are usually categorized for some adaptive or pragmatic purposes.
Categorization is grounded in the features that distinguish the category's members from nonmembers. Categorization is important in learning, prediction, inference, decision making, language, and many forms of organisms' interaction with their environments.
## Overview
Categories are distinct collections of concrete or abstract instances (category members) that are considered equivalent by the cognitive system. Using category knowledge requires one to access mental representations that define the core features of category members (cognitive psychologists refer to these category-specific mental representations as concepts).
To categorization theorists, the categorization of objects is often considered using taxonomies with three hierarchical levels of abstraction. For example, a plant could be identified at a high level of abstraction by simply labeling it a flower, a medium level of abstraction by specifying that the flower is a rose, or a low level of abstraction by further specifying this particular rose as a dog rose. Categories in a taxonomy are related to one another via class inclusion, with the highest level of abstraction being the most inclusive and the lowest level of abstraction being the least inclusive. The three levels of abstraction are as follows:
- Superordinate level, Genus (e.g., Flower) - The highest and most inclusive level of abstraction. Exhibits the highest degree of generality and the lowest degree of within-category similarity.
- Basic Level, Species (e.g., Rose) - The middle level of abstraction. Rosch and colleagues (1976) suggest the basic level to be the most cognitively efficient. Basic level categories exhibit high within-category similarities and high between-category dissimilarities. Furthermore, the basic level is the most inclusive level at which category exemplars share a generalized identifiable shape. Adults most-often use basic level object names, and children learn basic object names first.
- Subordinate level (e.g., Dog Rose) - The lowest level of abstraction. Exhibits the highest degree of specificity and within-category similarity.
## Beginning of categorization
The essential issue in studying categorization is how conceptual differentiation between characteristics of conscious experience begins in young, inexperienced organisms. Growing experimental data show evidence of differentiation between characteristics of objects and events in newborns and even in foetuses during the prenatal period. This development succeeds in organisms that only demonstrate simple reflexes (see articles on the binding problem, cognition, cognitive development, infant cognitive development, multisensory integration, and perception). For their nervous systems, the environment is a cacophony of sensory stimuli: electromagnetic waves, chemical interactions, and pressure fluctuations.
Categorization thought involves the abstraction and differentiation of aspects of experience that rely upon such power of mind as intentionality and perception. The problem is that these young organisms should already grasp the abilities of intentionality and perception to categorize the environment. Intentionality and perception already require their ability to recognise objects (or events), i.e., to identify objects by the sensory system. This is a vicious circle: categorization needs intentionality and perception, which only appear in the categorized environment. So, the young, inexperienced organism does not have abstract thinking and cannot independently accomplish conceptual differentiation between characteristics of conscious experience if it solves the categorization problem alone.
Studying the origins of social cognition in child development, developmental psychologist Michael Tomasello developed the notion of Shared intentionality to account for unaware processes during social learning after birth to explain processes in shaping intentionality. Further, Latvian professor Igor Val Danilov expanded this concept to the intrauterine period by introducing a Mother-Fetus Neurocognitive model: a hypothesis of neurophysiological processes occurring during Shared intentionality. The hypothesis attempts to explain the beginning of cognitive development in organisms at different levels of bio-system complexity, from interpersonal dynamics to neuronal interactions. Evidence in neuroscience supports the hypothesis: hyperscanning studies observed inter-brain activity in pairs of subjects solving a shared cognitive problem without communication, and they registered increased inter-brain activity in contrast to the condition in which subjects solved a similar problem alone (Painter, D.R., Kim, J.J., Renton, A.I., Mattingley, J.B. (2021). "Joint control of visually guided actions involves concordant increases in behavioural and neural coupling." Commun Biol. 2021; 4: 816; Fishburn, F.A., Murty, V.P., Hlutkowsky, C.O., MacGillivray, C.E., Bemis, L.M., Murphy, M.E., et al. (2018). "Putting our heads together: Interpersonal neural synchronization as a biological mechanism for shared intentionality." Soc Cogn Affect Neurosci. 2018; 13: 841-849; Astolfi, L., Toppi, J., De Vico Fallani, F., Vecchiato, G., Salinari, S., Mattia, D., et al. (2010). "Neuroelectrical hyperscanning measures simultaneous brain activity in humans." Brain Topogr. 2010; 23: 243-256). These data show that collaborative interaction without sensory cues can emerge in mother-child dyads, providing Shared intentionality, and they point to a mode of cognition available at a stage without communication and abstract thinking.
The significance of this knowledge is that it can reveal a new direction for studying consciousness, since the latter refers to awareness of internal and external existence, relying on intentionality, perception, and categorization of the environment.
## Theories
### Classical view
The classical theory of categorization is a term used in cognitive linguistics to denote the approach to categorization that appears in Plato and Aristotle and that has been highly influential and dominant in Western culture, particularly in philosophy, linguistics and psychology. Aristotle's categorical method of analysis was transmitted to the scholastic medieval university through Porphyry's Isagoge. The classical view of categories can be summarized into three assumptions: a category can be described as a list of necessary and sufficient features that its membership must have, categories are discrete in that they have clearly defined boundaries (either an element belongs to one or not, with no possibilities in between), and all the members of a category have the same status (there are no members of the category which belong more than others). In the classical view, categories need to be clearly defined, mutually exclusive and collectively exhaustive; this way, any entity in the given classification universe belongs unequivocally to one, and only one, of the proposed categories.
The classical view of categories first appeared in the context of Western Philosophy in the work of Plato, who, in his Statesman dialogue, introduces the approach of grouping objects based on their similar properties. This approach was further explored and systematized by Aristotle in his Categories treatise, where he analyzes the differences between classes and objects. Aristotle also applied intensively the classical categorization scheme in his approach to the classification of living beings (which uses the technique of applying successive narrowing questions such as "Is it an animal or vegetable?", "How many feet does it have?", "Does it have fur or feathers? ", "Can it fly?"...), establishing this way the basis for natural taxonomy.
Examples of the use of the classical view of categories can be found in the western philosophical works of Descartes, Blaise Pascal, Spinoza and John Locke, and in the 20th century in Bertrand Russell, G.E. Moore, the logical positivists. It has been a cornerstone of analytic philosophy and its conceptual analysis, with more recent formulations proposed in the 1990s by Frank Cameron Jackson and Christopher Peacocke. At the beginning of the 20th century, the question of categories was introduced into the empirical social sciences by Durkheim and Mauss, whose pioneering work has been revisited in contemporary scholarship.
The classical model of categorization has been used at least since the 1960s by linguists of the structural semantics paradigm, by Jerrold Katz and Jerry Fodor in 1963, which in turn influenced its adoption by psychologists like Allan M. Collins and M. Ross Quillian.
Modern versions of classical categorization theory study how the brain learns and represents categories by detecting the features that distinguish members from nonmembers.
### Prototype theory
The pioneering research by psychologist Eleanor Rosch and colleagues since 1973, introduced the prototype theory, according to which categorization can also be viewed as the process of grouping things based on prototypes. This approach has been highly influential, particularly for cognitive linguistics. It was in part based on previous insights, in particular the formulation of a category model based on family resemblance by Wittgenstein (1953), and by Roger Brown's How shall a thing be called? (1958).
Prototype theory has been then adopted by cognitive linguists like George Lakoff. The prototype theory is an example of a similarity-based approach to categorization, in which a stored category representation is used to assess the similarity of candidate category members. Under the prototype theory, this stored representation consists of a summary representation of the category's members. This prototype stimulus can take various forms. It might be a central tendency that represents the category's average member, a modal stimulus representing either the most frequent instance or a stimulus composed of the most common category features, or, lastly, the "ideal" category member, or a caricature that emphasizes the distinct features of the category. An important consideration of this prototype representation is that it does not necessarily reflect the existence of an actual instance of the category in the world. Furthermore, prototypes are highly sensitive to context. For example, while one's prototype for the category of beverages may be soda or seltzer, the context of brunch might lead them to select mimosa as a prototypical beverage.
The prototype theory claims that members of a given category share a family resemblance, and categories are defined by sets of typical features (as opposed to all members possessing necessary and sufficient features).
### Exemplar theory
Another instance of the similarity-based approach to categorization, the exemplar theory likewise compares the similarity of candidate category members to stored memory representations. Under the exemplar theory, all known instances of a category are stored in memory as exemplars. When evaluating an unfamiliar entity's category membership, exemplars from potentially relevant categories are retrieved from memory, and the entity's similarity to those exemplars is summed to formulate a categorization decision. Medin and Schaffer's (1978) Context model employs a nearest neighbor approach which, rather than summing an entity's similarities to relevant exemplars, multiplies them to provide weighted similarities that reflect the entity's proximity to relevant exemplars. This effectively biases categorization decisions towards exemplars most similar to the entity to be categorized. Goldstone, R. L., Kersten, A., & Carvalho, P. F. (2012). Concepts and categorization. Handbook of Psychology, Second Edition, 4.
### Conceptual clustering
Conceptual clustering is a machine learning paradigm for unsupervised classification that was defined by Ryszard S. Michalski in 1980. It is a modern variation of the classical approach of categorization, and derives from attempts to explain how knowledge is represented. In this approach, classes (clusters or entities) are generated by first formulating their conceptual descriptions and then classifying the entities according to the descriptions.
Conceptual clustering developed mainly during the 1980s, as a machine learning paradigm for unsupervised learning. It is distinguished from ordinary data clustering by generating a concept description for each generated category.
Conceptual clustering is closely related to fuzzy set theory, in which objects may belong to one or more groups, in varying degrees of fitness. A cognitive approach accepts that natural categories are graded (they tend to be fuzzy at their boundaries) and inconsistent in the status of their constituent members. The idea of necessary and sufficient conditions is almost never met in categories of naturally occurring things.
## Category learning
While an exhaustive discussion of category learning is beyond the scope of this article, a brief overview of category learning and its associated theories is useful in understanding formal models of categorization.