As a result, if
$$
U
$$
is uniformly distributed on
$$
[0, 1] ,
$$
then the cumulative distribution function of
$$
F^{-1}(U)
$$
is
$$
F .
$$
For example, suppose we want to generate a random variable having an exponential distribution with parameter
$$
\lambda
$$
— that is, with cumulative distribution function
$$
F : x \mapsto 1 - e^{-\lambda x}.
$$
$$
\begin{align}
F(x) = u &\Leftrightarrow 1-e^{-\lambda x} = u \\[2pt]
&\Leftrightarrow e^{-\lambda x } = 1-u \\[2pt]
&\Leftrightarrow -\lambda x = \ln(1-u) \\[2pt]
&\Leftrightarrow x = \frac{-1}{\lambda}\ln(1-u)
\end{align}
$$
so
$$
F^{-1}(u) = \frac{-1}{\lambda}\ln(1-u) ,
$$
and if
$$
U
$$
has a uniform distribution on
$$
[0, 1) ,
$$
then
$$
X = -\tfrac{1}{\lambda}\ln(1-U)
$$
has an exponential distribution with parameter
$$
\lambda.
$$
Although from a theoretical point of view this method always works, in practice the inverse distribution function is unknown and/or cannot be computed efficiently. In this case, other methods (such as the Monte Carlo method) are used.
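As a concrete illustration, here is a minimal NumPy sketch of the inverse transform method for the exponential case derived above; the rate, seed, and sample size are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(seed=0)  # seed chosen only for reproducibility

def sample_exponential(lam, size):
    """Inverse transform sampling: if U ~ Uniform[0, 1), then
    X = -ln(1 - U)/lam has an exponential distribution with parameter lam."""
    u = rng.uniform(0.0, 1.0, size)
    return -np.log(1.0 - u) / lam

samples = sample_exponential(lam=2.0, size=100_000)
print(samples.mean())  # should be close to 1/lam = 0.5
```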
## Common probability distributions and their applications
The concept of the probability distribution and the random variables which they describe underlies the mathematical discipline of probability theory, and the science of statistics. There is spread or variability in almost any value that can be measured in a population (e.g. height of people, durability of a metal, sales growth, traffic flow, etc.); almost all measurements are made with some intrinsic error; in physics, many processes are described probabilistically, from the kinetic properties of gases to the quantum mechanical description of fundamental particles. For these and many other reasons, simple numbers are often inadequate for describing a quantity, while probability distributions are often more appropriate.
The following is a list of some of the most common probability distributions, grouped by the type of process that they are related to.
For a more complete list, see list of probability distributions, which groups by the nature of the outcome being considered (discrete, absolutely continuous, multivariate, etc.).
All of the univariate distributions below are singly peaked; that is, it is assumed that the values cluster around a single point. In practice, actually observed quantities may cluster around multiple values. Such quantities can be modeled using a mixture distribution.
### Linear growth (e.g. errors, offsets)
- Normal distribution (Gaussian distribution), for a single such quantity; the most commonly used absolutely continuous distribution
### Exponential growth (e.g. prices, incomes, populations)
- Log-normal distribution, for a single such quantity whose log is normally distributed
- Pareto distribution, for a single such quantity whose log is exponentially distributed; the prototypical power law distribution
### Uniformly distributed quantities
- Discrete uniform distribution, for a finite set of values (e.g. the outcome of a fair die)
- Continuous uniform distribution, for absolutely continuously distributed values
### Bernoulli trials (yes/no events, with a given probability)
- Basic distributions:
- Bernoulli distribution, for the outcome of a single Bernoulli trial (e.g. success/failure, yes/no)
- Binomial distribution, for the number of "positive occurrences" (e.g. successes, yes votes, etc.) given a fixed total number of independent occurrences
- Negative binomial distribution, for binomial-type observations but where the quantity of interest is the number of failures before a given number of successes occurs
- Geometric distribution, for binomial-type observations but where the quantity of interest is the number of failures before the first success; a special case of the negative binomial distribution
- Related to sampling schemes over a finite population:
- Hypergeometric distribution, for the number of "positive occurrences" (e.g. successes, yes votes, etc.) given a fixed number of total occurrences, using sampling without replacement
- Beta-binomial distribution, for the number of "positive occurrences" (e.g. successes, yes votes, etc.) given a fixed number of total occurrences, sampling using a Pólya urn model (in some sense, the "opposite" of sampling without replacement)
### Categorical outcomes (events with K possible outcomes)
- Categorical distribution, for a single categorical outcome (e.g. yes/no/maybe in a survey); a generalization of the Bernoulli distribution
- Multinomial distribution, for the number of each type of categorical outcome, given a fixed number of total outcomes; a generalization of the binomial distribution
- Multivariate hypergeometric distribution, similar to the multinomial distribution, but using sampling without replacement; a generalization of the hypergeometric distribution
### Poisson process (events that occur independently with a given rate)
- Poisson distribution, for the number of occurrences of a Poisson-type event in a given period of time
- Exponential distribution, for the time before the next Poisson-type event occurs
- Gamma distribution, for the time before the next k Poisson-type events occur
### Absolute values of vectors with normally distributed components
- Rayleigh distribution, for the distribution of vector magnitudes with Gaussian distributed orthogonal components. Rayleigh distributions are found in RF signals with Gaussian real and imaginary components.
- Rice distribution, a generalization of the Rayleigh distributions for the case where there is a stationary background signal component. Found in Rician fading of radio signals due to multipath propagation and in MR images with noise corruption on non-zero NMR signals.
### Normally distributed quantities operated with sum of squares
- Chi-squared distribution, the distribution of a sum of squared standard normal variables; useful e.g. for inference regarding the sample variance of normally distributed samples (see chi-squared test)
- Student's t distribution, the distribution of the ratio of a standard normal variable and the square root of a scaled chi squared variable; useful for inference regarding the mean of normally distributed samples with unknown variance (see Student's t-test)
- F-distribution, the distribution of the ratio of two scaled chi squared variables; useful e.g. for inferences that involve comparing variances or involving R-squared (the squared correlation coefficient)
### As conjugate prior distributions in Bayesian inference
- Beta distribution, for a single probability (real number between 0 and 1); conjugate to the Bernoulli distribution and binomial distribution
- Gamma distribution, for a non-negative scaling parameter; conjugate to the rate parameter of a Poisson distribution or exponential distribution, the precision (inverse variance) of a normal distribution, etc.
- Dirichlet distribution, for a vector of probabilities that must sum to 1; conjugate to the categorical distribution and multinomial distribution; generalization of the beta distribution
- Wishart distribution, for a symmetric non-negative definite matrix; conjugate to the inverse of the covariance matrix of a multivariate normal distribution; generalization of the gamma distribution
### Some specialized applications of probability distributions
- The cache language models and other statistical language models used in natural language processing to assign probabilities to the occurrence of particular words and word sequences do so by means of probability distributions.
- In quantum mechanics, the probability density of finding the particle at a given point is proportional to the square of the magnitude of the particle's wavefunction at that point (see Born rule). Therefore, the probability distribution function of the position of a particle is described by
$$
P_{a\le x\le b} (t) = \int_a^b d x\,|\Psi(x,t)|^2
$$
, the probability that the particle's position will be in the interval
$$
a \le x \le b
$$
in one dimension, with a similar triple integral in three dimensions. This is a key principle of quantum mechanics; a numerical sketch of the one-dimensional integral follows this list.
- Probabilistic load flow in power-flow studies models the uncertainties of the input variables as probability distributions, and provides the power-flow calculation itself in terms of probability distributions.
- Prediction of natural phenomena occurrences based on previous frequency distributions, such as tropical cyclones, hail, time between events, etc.
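As a rough numerical sketch of the Born-rule integral above, the snippet below evaluates the probability on a grid for an illustrative normalized Gaussian wavefunction; the width s and the intervals are arbitrary choices.

```python
import numpy as np

# Illustrative normalized 1-D Gaussian wavefunction
# psi(x) = (pi s^2)^(-1/4) exp(-x^2 / (2 s^2)); by the Born rule,
# P(a <= x <= b) is the integral of |psi|^2 over [a, b].
s = 1.0
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
psi = (np.pi * s**2) ** -0.25 * np.exp(-x**2 / (2 * s**2))

def probability(a, b):
    mask = (x >= a) & (x <= b)
    return np.sum(np.abs(psi[mask]) ** 2) * dx  # Riemann sum for the integral

print(probability(-1.0, 1.0))    # ~0.843 (= erf(1) for this wavefunction)
print(probability(-10.0, 10.0))  # ~1.0, since psi is normalized
```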
## Hamiltonian (quantum mechanics)
In quantum mechanics, the Hamiltonian of a system is an operator corresponding to the total energy of that system, including both kinetic energy and potential energy. Its spectrum, the system's energy spectrum or its set of energy eigenvalues, is the set of possible outcomes obtainable from a measurement of the system's total energy. Due to its close relation to the energy spectrum and time-evolution of a system, it is of fundamental importance in most formulations of quantum theory.
The Hamiltonian is named after William Rowan Hamilton, who developed a revolutionary reformulation of Newtonian mechanics, known as Hamiltonian mechanics, which was historically important to the development of quantum physics. Similar to vector notation, it is typically denoted by
$$
\hat{H}
$$
, where the hat indicates that it is an operator. It can also be written as
$$
H
$$
or
$$
\check{H}
$$
.
## Introduction
The Hamiltonian of a system represents the total energy of the system; that is, the sum of the kinetic and potential energies of all particles associated with the system. The Hamiltonian takes different forms and can be simplified in some cases by taking into account the concrete characteristics of the system under analysis, such as single or several particles in the system, interaction between particles, kind of potential energy, time varying potential or time independent one.
## Schrödinger Hamiltonian
### One particle
By analogy with classical mechanics, the Hamiltonian is commonly expressed as the sum of operators corresponding to the kinetic and potential energies of a system in the form
$$
\hat{H} = \hat{T} + \hat{V},
$$
where
$$
\hat{V} = V = V(\mathbf{r},t) ,
$$
is the potential energy operator and
$$
\hat{T} = \frac{\mathbf{\hat{p}}\cdot\mathbf{\hat{p}}}{2m} = \frac{\hat{p}^2}{2m} = -\frac{\hbar^2}{2m}\nabla^2,
$$
is the kinetic energy operator in which
$$
m
$$
is the mass of the particle, the dot denotes the dot product of vectors, and
$$
\hat{p} = -i\hbar\nabla ,
$$
is the momentum operator, where
$$
\nabla
$$
is the del operator.
The dot product of
$$
\nabla
$$
with itself is the Laplacian
$$
\nabla^2
$$
.
In three dimensions using Cartesian coordinates, the Laplace operator is
$$
\nabla^2 = \frac{\partial^2}{ {\partial x}^2} + \frac{\partial^2}{ {\partial y}^2} + \frac{\partial^2}{ {\partial z}^2}
$$
Although this is not the technical definition of the Hamiltonian in classical mechanics, it is the form it most commonly takes.
Combining these yields the form used in the Schrödinger equation:
$$
\begin{align}
\hat{H} & = \hat{T} + \hat{V} \\[6pt]
& = \frac{\mathbf{\hat{p}}\cdot\mathbf{\hat{p}}}{2m}+ V(\mathbf{r},t) \\[6pt]
& = -\frac{\hbar^2}{2m}\nabla^2+ V(\mathbf{r},t)
\end{align}
$$
which allows one to apply the Hamiltonian to systems described by a wave function
$$
\Psi(\mathbf{r}, t)
$$
.
This is the approach commonly taken in introductory treatments of quantum mechanics, using the formalism of Schrödinger's wave mechanics.
One can also make substitutions to certain variables to fit specific cases, such as some involving electromagnetic fields.
#### Expectation value
It can be shown that the expectation value of the Hamiltonian, which gives the energy expectation value, will always be greater than or equal to the minimum of the potential of the system.
Consider computing the expectation value of kinetic energy:
$$
\begin{align}
T &= -\frac{\hbar^2}{2m} \int_{-\infty}^{+\infty} \psi^* \frac{d^2\psi}{dx^2} \, dx \\[1ex]
&=-\frac{\hbar^2}{2m} \left( {\left[ \psi'(x) \psi^*(x) \right]}_{-\infty}^{+\infty} - \int_{-\infty}^{+\infty} \frac{d\psi}{dx} \frac{d\psi^*}{dx} \, dx \right) \\[1ex]
&= \frac{\hbar^2}{2m} \int_{-\infty}^{+\infty} \left|\frac{d\psi}{dx} \right|^2 \, dx \geq 0
\end{align}
$$
Hence the expectation value of kinetic energy is always non-negative.
This result can be used to calculate the expectation value of the total energy, which for a normalized wavefunction is given by:
$$
E = T + \langle V(x) \rangle = T + \int_{-\infty}^{+\infty} V(x) |\psi(x)|^2 \, dx \geq V_{\text{min}}(x) \int_{-\infty}^{+\infty} |\psi(x)|^2 \, dx \geq V_{\text{min}}(x)
$$
which completes the proof.
Similarly, the condition can be generalized to any higher dimensions using the divergence theorem.
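A quick finite-difference check of this bound in one dimension, with ħ = m = 1 (the grid size and the choice of potential below are illustrative): discretizing the Hamiltonian and evaluating the expectation value for a random normalized state always returns an energy at least min V.

```python
import numpy as np

# Discretize H = -(1/2) d^2/dx^2 + V(x) (hbar = m = 1) and check <H> >= min(V)
# for a random normalized state.
n, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
V = 0.5 * x**2  # harmonic potential, min(V) = 0

# Second derivative via central differences (Dirichlet boundaries)
D2 = (np.diag(np.ones(n - 1), -1) - 2 * np.eye(n) + np.diag(np.ones(n - 1), 1)) / dx**2
H = -0.5 * D2 + np.diag(V)

rng = np.random.default_rng(0)
psi = rng.normal(size=n)
psi /= np.sqrt(np.sum(psi**2) * dx)   # normalize the state
E = np.sum(psi * (H @ psi)) * dx      # <psi|H|psi>
print(E, E >= V.min())                # energy is bounded below by min(V)
```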
### Many particles
The formalism can be extended to
$$
N
$$
particles:
$$
\hat{H} = \sum_{n=1}^N \hat{T}_n + \hat{V}
$$
where
$$
\hat{V} = V(\mathbf{r}_1,\mathbf{r}_2,\ldots, \mathbf{r}_N,t) ,
$$
is the potential energy function, now a function of the spatial configuration of the system and time (a particular set of spatial positions at some instant of time defines a configuration) and
$$
\hat{T}_n = \frac{\mathbf{\hat{p}}_n\cdot\mathbf{\hat{p}}_n}{2m_n} = -\frac{\hbar^2}{2m_n}\nabla_n^2
$$
is the kinetic energy operator of particle
$$
n
$$
,
$$
\nabla_n
$$
is the gradient for particle
$$
n
$$
, and
$$
\nabla_n^2
$$
is the Laplacian for particle
$$
n
$$
:
$$
\nabla_n^2 = \frac{\partial^2}{\partial x_n^2} + \frac{\partial^2}{\partial y_n^2} + \frac{\partial^2}{\partial z_n^2},
$$
Combining these yields the Schrödinger Hamiltonian for the
$$
N
$$
-particle case:
$$
\begin{align}
\hat{H} & = \sum_{n=1}^N \hat{T}_n + \hat{V} \\[6pt]
& = \sum_{n=1}^N \frac{\mathbf{\hat{p}}_n\cdot\mathbf{\hat{p}}_n}{2m_n}+ V(\mathbf{r}_1,\mathbf{r}_2,\ldots,\mathbf{r}_N,t) \\[6pt]
& = -\frac{\hbar^2}{2}\sum_{n=1}^N \frac{1}{m_n}\nabla_n^2 + V(\mathbf{r}_1,\mathbf{r}_2,\ldots,\mathbf{r}_N,t)
\end{align}
$$
However, complications can arise in the many-body problem.
Since the potential energy depends on the spatial arrangement of the particles, the kinetic energy will also depend on the spatial configuration to conserve energy.
The motion due to any one particle will vary due to the motion of all the other particles in the system.
For this reason, cross terms for kinetic energy may appear in the Hamiltonian, mixing the gradients for two particles:
$$
-\frac{\hbar^2}{2M}\nabla_i\cdot\nabla_j
$$
where
$$
M
$$
denotes the mass of the collection of particles resulting in this extra kinetic energy. Terms of this form are known as mass polarization terms, and appear in the Hamiltonian of many-electron atoms (see below).
For
$$
N
$$
interacting particles, i.e. particles which interact mutually and constitute a many-body situation, the potential energy function
$$
V
$$
is not simply a sum of the separate potentials (and certainly not a product, as this is dimensionally incorrect). The potential energy function can only be written as above: a function of all the spatial positions of each particle.
For non-interacting particles, i.e. particles which do not interact mutually and move independently, the potential of the system is the sum of the separate potential energy for each particle, that is
$$
V = \sum_{i=1}^N V(\mathbf{r}_i,t) = V(\mathbf{r}_1,t) + V(\mathbf{r}_2,t) + \cdots + V(\mathbf{r}_N,t)
$$
The general form of the Hamiltonian in this case is:
$$
\begin{align}
\hat{H} & = -\frac{\hbar^2}{2}\sum_{i=1}^N \frac{1}{m_i}\nabla_i^2 + \sum_{i=1}^N V_i \\[6pt]
& = \sum_{i=1}^N \left(-\frac{\hbar^2}{2m_i}\nabla_i^2 + V_i \right) \\[6pt]
& = \sum_{i=1}^N \hat{H}_i
\end{align}
$$
where the sum is taken over all particles and their corresponding potentials; the result is that the Hamiltonian of the system is the sum of the separate Hamiltonians for each particle.
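A small sketch of this additivity for two non-interacting particles, using illustrative diagonal single-particle Hamiltonians: on the tensor-product space the total Hamiltonian is a Kronecker sum, so every total energy is a sum of single-particle energies.

```python
import numpy as np

# Two non-interacting particles: H = H1 (x) I + I (x) H2 on the product space,
# so the spectrum of H is all pairwise sums E1_i + E2_j. Matrices are illustrative.
H1 = np.diag([0.5, 1.5])           # single-particle energies, particle 1
H2 = np.diag([0.25, 0.75, 1.25])   # single-particle energies, particle 2

H = np.kron(H1, np.eye(3)) + np.kron(np.eye(2), H2)
E = np.sort(np.linalg.eigvalsh(H))
expected = np.sort(np.add.outer(np.diag(H1), np.diag(H2)).ravel())
print(np.allclose(E, expected))    # True: total energies are pairwise sums
```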
This is an idealized situation; in practice the particles are almost always influenced by some potential, and there are many-body interactions.
One illustrative example of a two-body interaction where this form would not apply is for electrostatic potentials due to charged particles, because they interact with each other by Coulomb interaction (electrostatic force), as shown below.
## Schrödinger equation
The Hamiltonian generates the time evolution of quantum states. If
$$
\left| \psi (t) \right\rangle
$$
is the state of the system at time
$$
t
$$
, then
$$
H \left| \psi (t) \right\rangle = i \hbar \frac{d}{dt} \left| \psi (t) \right\rangle.
$$
This equation is the Schrödinger equation. It takes the same form as the Hamilton–Jacobi equation, which is one of the reasons
$$
H
$$
is also called the Hamiltonian. Given the state at some initial time (
$$
t = 0
$$
), we can solve it to obtain the state at any subsequent time.
In particular, if
$$
H
$$
is independent of time, then
$$
\left| \psi (t) \right\rangle = e^{-iHt/\hbar} \left| \psi (0) \right\rangle.
$$
The exponential operator on the right hand side of the Schrödinger equation is usually defined by the corresponding power series in
$$
H
$$
. One might notice that taking polynomials or power series of unbounded operators that are not defined everywhere may not make mathematical sense. Rigorously, to take functions of unbounded operators, a functional calculus is required. In the case of the exponential function, the continuous, or just the holomorphic functional calculus suffices. We note again, however, that for common calculations the physicists' formulation is quite sufficient.
By the *-homomorphism property of the functional calculus, the operator
$$
U = e^{-iHt/\hbar}
$$
is a unitary operator. It is the time evolution operator or propagator of a closed quantum system.
If the Hamiltonian is time-independent,
$$
\{U(t)\}
$$
form a one-parameter unitary group (more than a semigroup); this gives rise to the physical principle of detailed balance.
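A minimal sketch of these properties for a time-independent Hamiltonian, using SciPy's matrix exponential on an illustrative two-level Hermitian matrix (with ħ = 1):

```python
import numpy as np
from scipy.linalg import expm

# For time-independent H, the propagator is U(t) = exp(-i H t / hbar), hbar = 1 here.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])  # an illustrative Hermitian matrix

def U(t):
    return expm(-1j * H * t)

t1, t2 = 0.3, 1.1
print(np.allclose(U(t1).conj().T @ U(t1), np.eye(2)))  # unitarity: U†U = I
print(np.allclose(U(t1) @ U(t2), U(t1 + t2)))          # one-parameter group law
```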
## Dirac formalism
However, in the more general formalism of Dirac, the Hamiltonian is typically implemented as an operator on a Hilbert space in the following way:
The eigenkets of
$$
H
$$
, denoted
$$
\left| a \right\rangle
$$
, provide an orthonormal basis for the Hilbert space. The spectrum of allowed energy levels of the system is given by the set of eigenvalues, denoted
$$
\{ E_a \}
$$
, solving the equation:
$$
H \left| a \right\rangle = E_a \left| a \right\rangle.
$$
Since
$$
H
$$
is a Hermitian operator, the energy is always a real number.
From a mathematically rigorous point of view, care must be taken with the above assumptions.
Operators on infinite-dimensional Hilbert spaces need not have eigenvalues (the set of eigenvalues does not necessarily coincide with the spectrum of an operator). However, all routine quantum mechanical calculations can be done using the physical formulation.
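In finite dimensions these statements can be verified directly. The sketch below diagonalizes an illustrative random Hermitian matrix and checks that the eigenvalues are real and the eigenkets form an orthonormal basis.

```python
import numpy as np

# Eigenkets of a Hermitian H form an orthonormal basis with real eigenvalues.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                    # Hermitian by construction

E, kets = np.linalg.eigh(H)                 # columns of `kets` are the eigenkets |a>
print(np.allclose(kets.conj().T @ kets, np.eye(4)))   # orthonormal basis
print(np.allclose(H @ kets, kets @ np.diag(E)))       # H|a> = E_a|a>, E_a real
```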
## Expressions for the Hamiltonian
Following are expressions for the Hamiltonian in a number of situations. Typical ways to classify the expressions are the number of particles, number of dimensions, and the nature of the potential energy function—importantly space and time dependence. Masses are denoted by
$$
m
$$
, and charges by
$$
q
$$
.
### Free particle
The particle is not bound by any potential energy, so the potential is zero and this Hamiltonian is the simplest.
For one dimension:
$$
\hat{H} = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}
$$
and in higher dimensions:
$$
\hat{H} = -\frac{\hbar^2}{2m}\nabla^2
$$
### Constant-potential well
For a particle in a region of constant potential
$$
V = V_0
$$
(no dependence on space or time), in one dimension, the Hamiltonian is:
$$
\hat{H} = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + V_0
$$
and in three dimensions:
$$
\hat{H} = -\frac{\hbar^2}{2m}\nabla^2 + V_0
$$
This applies to the elementary "particle in a box" problem, and step potentials.
### Simple harmonic oscillator
For a simple harmonic oscillator in one dimension, the potential varies with position (but not time), according to:
$$
V = \frac{k}{2}x^2 = \frac{m\omega^2}{2}x^2
$$
where the angular frequency
$$
\omega
$$
, effective spring constant
$$
k
$$
, and mass
$$
m
$$
of the oscillator satisfy:
$$
\omega^2 = \frac{k}{m}
$$
so the Hamiltonian is:
$$
\hat{H} = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + \frac{m\omega^2}{2}x^2
$$
For three dimensions, this becomes
$$
\hat{H} = -\frac{\hbar^2}{2m}\nabla^2 + \frac{m\omega^2}{2} r^2
$$
where the three-dimensional position vector
$$
\mathbf{r}
$$
using Cartesian coordinates is
$$
(x, y, z)
$$
, and the square of its magnitude is
$$
r^2 = \mathbf{r}\cdot\mathbf{r} = |\mathbf{r}|^2 = x^2+y^2+z^2
$$
Writing the Hamiltonian out in full shows it is simply the sum of the one-dimensional Hamiltonians in each direction:
$$
\begin{align}
\hat{H} & = -\frac{\hbar^2}{2m}\left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2} \right) + \frac{m\omega^2}{2} \left(x^2 + y^2 + z^2\right) \\[6pt]
& = \left(-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + \frac{m\omega^2}{2}x^2\right) + \left(-\frac{\hbar^2}{2m} \frac{\partial^2}{\partial y^2} + \frac{m\omega^2}{2}y^2 \right ) + \left(- \frac{\hbar^2}{2m}\frac{\partial^2}{\partial z^2} +\frac{m\omega^2}{2}z^2 \right)
\end{align}
$$
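A finite-difference sketch of the one-dimensional oscillator Hamiltonian above, with ħ = m = ω = 1 (the grid parameters are illustrative); its lowest eigenvalues approximate the exact spectrum E_n = n + 1/2.

```python
import numpy as np

# 1-D harmonic oscillator H = -(1/2) d^2/dx^2 + (1/2) x^2 (hbar = m = omega = 1).
n, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
D2 = (np.diag(np.ones(n - 1), -1) - 2 * np.eye(n) + np.diag(np.ones(n - 1), 1)) / dx**2
H = -0.5 * D2 + np.diag(0.5 * x**2)

print(np.linalg.eigvalsh(H)[:4])  # approximately [0.5, 1.5, 2.5, 3.5]
```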
### Rigid rotor
For a rigid rotor—i.e., a system of particles which can rotate freely about any axes, not bound in any potential (such as free molecules with negligible vibrational degrees of freedom, say due to double or triple chemical bonds), the Hamiltonian is:
$$
\hat{H} = -\frac{\hbar^2}{2I_{xx}}\hat{J}_x^2 -\frac{\hbar^2}{2I_{yy}}\hat{J}_y^2 -\frac{\hbar^2}{2I_{zz}}\hat{J}_z^2
$$
where
$$
I_{xx}
$$
,
$$
I_{yy}
$$
, and
$$
I_{zz}
$$
are the moment of inertia components (technically the diagonal elements of the moment of inertia tensor), and
$$
\hat{J}_x ,
$$
$$
\hat{J}_y ,
$$
and
$$
\hat{J}_z
$$
are the total angular momentum operators (components), about the
$$
x
$$
,
$$
y
$$
, and
$$
z
$$
axes respectively.
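As a sketch, a rotor Hamiltonian of this kind can be built from explicit angular momentum matrices. Below, the standard j = 1 matrices are used with ħ = 1 and the positive-definite form H = Ĵx²/(2Ixx) + Ĵy²/(2Iyy) + Ĵz²/(2Izz); the moments of inertia are illustrative.

```python
import numpy as np

# Angular momentum matrices for j = 1 (hbar = 1), standard |j, m> basis.
s = 1 / np.sqrt(2)
Jx = np.array([[0, s, 0], [s, 0, s], [0, s, 0]], dtype=complex)
Jy = np.array([[0, -1j * s, 0], [1j * s, 0, -1j * s], [0, 1j * s, 0]])
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)

Ixx, Iyy, Izz = 1.0, 2.0, 3.0  # illustrative moments of inertia
H = Jx @ Jx / (2 * Ixx) + Jy @ Jy / (2 * Iyy) + Jz @ Jz / (2 * Izz)
print(np.linalg.eigvalsh(H))   # rotational energy levels for j = 1
```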
### Electrostatic (Coulomb) potential
The Coulomb potential energy for two point charges
$$
q_1
$$
and
$$
q_2
$$
(i.e., those that have no spatial extent independently), in three dimensions, is (in SI units—rather than Gaussian units which are frequently used in electromagnetism):
$$
V = \frac{q_1q_2}{4\pi\varepsilon_0 |\mathbf{r}|}
$$
However, this is only the potential for one point charge due to another. If there are many charged particles, each charge has a potential energy due to every other point charge (except itself).
For
$$
N
$$
charges, the potential energy of charge
$$
q_j
$$
due to all other charges is (see also Electrostatic potential energy stored in a configuration of discrete point charges):
$$
V_j = \frac{1}{2}\sum_{i\neq j} q_i \phi(\mathbf{r}_i)=\frac{1}{8\pi\varepsilon_0}\sum_{i\neq j} \frac{q_iq_j}{|\mathbf{r}_i-\mathbf{r}_j|}
$$
where
$$
\phi(\mathbf{r}_i)
$$
is the electrostatic potential of charge
$$
q_j
$$
at
$$
\mathbf{r}_i
$$
.
The total potential of the system is then the sum over
$$
j
$$
:
$$
V = \frac{1}{8\pi\varepsilon_0}\sum_{j=1}^N\sum_{i\neq j} \frac{q_iq_j}{|\mathbf{r}_i-\mathbf{r}_j|}
$$
so the Hamiltonian is:
$$
\begin{align}
\hat{H} & = -\frac{\hbar^2}{2}\sum_{j=1}^N\frac{1}{m_j}\nabla_j^2 + \frac{1}{8\pi\varepsilon_0}\sum_{j=1}^N\sum_{i\neq j} \frac{q_iq_j}{|\mathbf{r}_i-\mathbf{r}_j|} \\
& = \sum_{j=1}^N \left ( -\frac{\hbar^2}{2m_j}\nabla_j^2 + \frac{1}{8\pi\varepsilon_0}\sum_{i\neq j} \frac{q_iq_j}{|\mathbf{r}_i-\mathbf{r}_j|}\right) \\
\end{align}
$$
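A direct translation of the double sum for V into code; the charges and positions below are illustrative, in SI units.

```python
import numpy as np

# Total Coulomb energy V = (1/(8*pi*eps0)) * sum_j sum_{i != j} q_i q_j / |r_i - r_j|;
# the 1/(8 pi eps0) prefactor (rather than 1/(4 pi eps0)) compensates for each
# pair being counted twice in the double sum.
eps0 = 8.8541878128e-12                             # vacuum permittivity, F/m
q = np.array([1.0, -1.0, 2.0]) * 1.602176634e-19    # illustrative charges, C
r = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]]) * 1e-10             # illustrative positions, m

V = 0.0
for j in range(len(q)):
    for i in range(len(q)):
        if i != j:
            V += q[i] * q[j] / np.linalg.norm(r[i] - r[j])
V /= 8 * np.pi * eps0
print(V, "J")
```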
### Electric dipole in an electric field
For an electric dipole moment
$$
\mathbf{d}
$$
constituting charges of magnitude
$$
q
$$
, in a uniform, electrostatic field (time-independent)
$$
\mathbf{E}
$$
, positioned in one place, the potential is:
$$
V = -\mathbf{\hat{d}}\cdot\mathbf{E}
$$
the dipole moment itself is the operator
$$
\mathbf{\hat{d}} = q\mathbf{\hat{r}}
$$
Since the particle is stationary, there is no translational kinetic energy of the dipole, so the Hamiltonian of the dipole is just the potential energy:
$$
\hat{H} = -\mathbf{\hat{d}}\cdot\mathbf{E} = -q\mathbf{\hat{r}}\cdot\mathbf{E}
$$
### Magnetic dipole in a magnetic field
For a magnetic dipole moment
$$
\boldsymbol{\mu}
$$
in a uniform, magnetostatic field (time-independent)
$$
\mathbf{B}
$$
, positioned in one place, the potential is:
$$
V = -\boldsymbol{\mu}\cdot\mathbf{B}
$$
Since the particle is stationary, there is no translational kinetic energy of the dipole, so the Hamiltonian of the dipole is just the potential energy:
$$
\hat{H} = -\boldsymbol{\mu}\cdot\mathbf{B}
$$
For a spin-1/2 particle, the corresponding spin magnetic moment is:
$$
\boldsymbol{\mu}_S = \frac{g_s e}{2m} \mathbf{S}
$$
where
$$
g_s
$$
is the "spin g-factor" (not to be confused with the gyromagnetic ratio),
$$
e
$$
is the electron charge,
$$
\mathbf{S}
$$
is the spin operator vector, whose components are the Pauli matrices, hence
$$
\hat{H} = \frac{g_s e}{2m} \mathbf{S} \cdot\mathbf{B}
$$
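A short sketch constructing this Hamiltonian from the Pauli matrices, for an electron in an illustrative field along z; its eigenvalues are the two Zeeman levels.

```python
import numpy as np

# H = (g_s e / 2m) S . B with S = (hbar / 2) (sigma_x, sigma_y, sigma_z).
hbar = 1.054571817e-34                       # J s
e, m, g_s = 1.602176634e-19, 9.1093837015e-31, 2.00231930436
sigma = np.array([[[0, 1], [1, 0]],          # sigma_x
                  [[0, -1j], [1j, 0]],       # sigma_y
                  [[1, 0], [0, -1]]])        # sigma_z
S = (hbar / 2) * sigma

B = np.array([0.0, 0.0, 1.0])                # illustrative 1 T field along z
H = (g_s * e / (2 * m)) * np.einsum('i,ijk->jk', B, S)
print(np.linalg.eigvalsh(H) / e)             # Zeeman levels in eV, ~ +/- 5.8e-5
```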
### Charged particle in an electromagnetic field
For a particle with mass
$$
m
$$
and charge
$$
q
$$
in an electromagnetic field, described by the scalar potential
$$
\phi
$$
and vector potential
$$
\mathbf{A}
$$
, there are two parts to the Hamiltonian to substitute for.
The canonical momentum operator
$$
\mathbf{\hat{p}}
$$
, which includes a contribution from the
$$
\mathbf{A}
$$
field and fulfils the canonical commutation relation, must be quantized;
$$
\mathbf{\hat{p}} = m\dot{\mathbf{r}} + q\mathbf{A} ,
$$
where
$$
m\dot{\mathbf{r}}
$$
is the kinetic momentum.
The quantization prescription reads
$$
\mathbf{\hat{p}} = -i\hbar\nabla ,
$$
so the corresponding kinetic energy operator is
$$
\hat{T} = \frac{1}{2} m\dot{\mathbf{r}}\cdot\dot{\mathbf{r}} = \frac{1}{2m} \left ( \mathbf{\hat{p}} - q\mathbf{A} \right)^2
$$
and the potential energy, which is due to the
$$
\phi
$$
field, is given by
$$
\hat{V} = q\phi .
$$
Casting all of these into the Hamiltonian gives
$$
\hat{H} = \frac{1}{2m} \left ( -i\hbar\nabla - q\mathbf{A} \right)^2 + q\phi .
$$
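Note that the square must be expanded with care, since the momentum operator and the vector potential generally do not commute. A small symbolic sketch (using noncommutative symbols in SymPy; a one-dimensional illustration, not tied to any particular system) makes the operator ordering explicit:
```python
import sympy as sp

p, A = sp.symbols("p A", commutative=False)  # operators: momentum and vector potential
q = sp.symbols("q")                          # the charge is an ordinary scalar

# Expanding (p - qA)^2 keeps both cross terms, since p and A need not commute.
print(sp.expand((p - q * A) * (p - q * A)))
# -> p**2 - q*p*A - q*A*p + q**2*A**2   (up to term ordering)
```
In the position representation the two cross terms combine into −q(p·A + A·p), which reduces to −2qA·p only in a gauge where ∇·A = 0.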
## Energy eigenket degeneracy, symmetry, and conservation laws
In many systems, two or more energy eigenstates have the same energy.
A simple example of this is a free particle, whose energy eigenstates have wavefunctions that are propagating plane waves. The energy of each of these plane waves is inversely proportional to the square of its wavelength. A wave propagating in the
$$
x
$$
direction is a different state from one propagating in the
$$
y
$$
direction, but if they have the same wavelength, then their energies will be the same. When this happens, the states are said to be degenerate.
It turns out that degeneracy occurs whenever a nontrivial unitary operator
$$
U
$$
commutes with the Hamiltonian. To see this, suppose that
$$
|a\rang
$$
is an energy eigenket. Then
$$
U|a\rang
$$
is an energy eigenket with the same eigenvalue, since
$$
UH |a\rangle = U E_a|a\rangle = E_a (U|a\rangle) = H \; (U|a\rangle).
$$
Since
$$
U
$$
is nontrivial, at least one pair of
$$
|a\rang
$$
and
$$
U|a\rang
$$
must represent distinct states. Therefore,
$$
H
$$
has at least one pair of degenerate energy eigenkets.
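This argument can be illustrated numerically; a minimal sketch (assuming, as a stand-in for the free particle, a particle hopping on a small periodic 4×4 grid, with the 90-degree grid rotation as the commuting unitary):
```python
import numpy as np

# Free particle surrogate: minus the discrete Laplacian on an N x N periodic grid.
# U rotates the grid by 90 degrees; it is unitary, nontrivial, and commutes with H.
N = 4
idx = lambda i, j: (i % N) * N + (j % N)

H = np.zeros((N * N, N * N))
U = np.zeros((N * N, N * N))
for i in range(N):
    for j in range(N):
        H[idx(i, j), idx(i, j)] = 4.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            H[idx(i, j), idx(i + di, j + dj)] -= 1.0
        U[idx(j, -i), idx(i, j)] = 1.0          # site (i, j) -> (j, -i)

assert np.allclose(U @ H, H @ U)                # [H, U] = 0

# A plane wave along the first axis; U maps it to a wave along the second axis.
kx = np.exp(2j * np.pi * np.arange(N) / N)
a = np.kron(kx, np.ones(N)) / N
Ua = U @ a
E = (a.conj() @ H @ a).real
assert np.allclose(H @ a, E * a) and np.allclose(H @ Ua, E * Ua)
assert abs(a.conj() @ Ua) < 1e-9                # distinct states, same energy
```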
In the case of the free particle, the unitary operator which produces the symmetry is the rotation operator, which rotates the wavefunctions by some angle while otherwise preserving their shape.
The existence of a symmetry operator implies the existence of a conserved observable.
Let
$$
G
$$
be the Hermitian generator of
$$
U
$$
:
$$
U = I - i \varepsilon G + O(\varepsilon^2)
$$
It is straightforward to show that if
$$
U
$$
commutes with
$$
H
$$
, then so does
$$
G
$$
:
$$
[H, G] = 0
$$
Therefore,
$$
\frac{\partial}{\partial t} \langle\psi(t)|G|\psi(t)\rangle
= \frac{1}{i\hbar} \langle\psi(t)|[G,H]|\psi(t)\rangle
= 0.
$$
In obtaining this result, we have used the Schrödinger equation, as well as its dual,
$$
\langle\psi (t)|H = - i \hbar \frac{d}{dt} \langle\psi(t)|.
$$
Thus, the expected value of the observable
$$
G
$$
is conserved for any state of the system.
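This conservation law is easy to check numerically; a minimal sketch (the random Hermitian H, and a G constructed to commute with it by sharing its eigenbasis, are arbitrary assumptions; SciPy's matrix exponential performs the evolution):
```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Random Hermitian H, and a Hermitian G diagonal in the same eigenbasis.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2
_, V = np.linalg.eigh(H)
G = V @ np.diag(rng.normal(size=4)) @ V.conj().T

assert np.allclose(H @ G, G @ H)          # [H, G] = 0

psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
psi0 /= np.linalg.norm(psi0)

hbar = 1.0
for t in (0.0, 0.5, 1.0, 5.0):            # psi(t) = exp(-i H t / hbar) psi0
    psi_t = expm(-1j * H * t / hbar) @ psi0
    print(t, (psi_t.conj() @ G @ psi_t).real)   # <G> is the same at every t
```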
In the case of the free particle, the conserved quantity is the angular momentum.
## Hamilton's equations
Hamilton's equations in classical Hamiltonian mechanics have a direct analogy in quantum mechanics.
Suppose we have a set of basis states
$$
\left\{\left| n \right\rangle\right\}
$$
, which need not necessarily be eigenstates of the energy. For simplicity, we assume that they are discrete, and that they are orthonormal, i.e.,
$$
\langle n' | n \rangle = \delta_{nn'}
$$
Note that these basis states are assumed to be independent of time. We will assume that the Hamiltonian is also independent of time.
The instantaneous state of the system at time
$$
t
$$
,
$$
\left| \psi\left(t\right) \right\rangle
$$
, can be expanded in terms of these basis states:
$$
|\psi (t)\rangle = \sum_{n} a_n(t) |n\rangle
$$
where
$$
a_n(t) = \langle n | \psi(t) \rangle.
$$
The coefficients
$$
a_n(t)
$$
are complex variables.
are complex variables. We can treat them as coordinates which specify the state of the system, like the position and momentum coordinates which specify a classical system. Like classical coordinates, they are generally not constant in time, and their time dependence gives rise to the time dependence of the system as a whole.
The expectation value of the Hamiltonian of this state, which is also the mean energy, is
$$
\langle H(t) \rangle \stackrel{\mathrm{def}}{=} \langle\psi(t)|H|\psi(t)\rangle
= \sum_{nn'} a_{n'}^* a_n \langle n'|H|n \rangle
$$
where the last step was obtained by expanding
$$
\left| \psi\left(t\right) \right\rangle
$$
in terms of the basis states.
Each
$$
a_n(t)
$$
actually corresponds to two independent degrees of freedom, since the variable has a real part and an imaginary part. We now perform the following trick: instead of using the real and imaginary parts as the independent variables, we use
$$
a_n(t)
$$
and its complex conjugate
$$
a_n^*(t)
$$
.
With this choice of independent variables, we can calculate the partial derivative
$$
\frac{\partial \langle H \rangle}{\partial a_{n'}^{*}}
= \sum_{n} a_n \langle n'|H|n \rangle
= \langle n'|H|\psi\rangle
$$
By applying Schrödinger's equation and using the orthonormality of the basis states, this further reduces to
$$
\frac{\partial \langle H \rangle}{\partial a_{n'}^{*}}
= i \hbar \frac{\partial a_{n'}}{\partial t}
$$
Similarly, one can show that
$$
\frac{\partial \langle H \rangle}{\partial a_n}
= - i \hbar \frac{\partial a_{n}^{*}}{\partial t}
$$
If we define "conjugate momentum" variables
$$
\pi_n
$$
by
$$
\pi_{n}(t) = i \hbar a_n^*(t)
$$
then the above equations become
$$
\frac{\partial \langle H \rangle}{\partial \pi_n} = \frac{\partial a_n}{\partial t},\quad \frac{\partial \langle H \rangle}{\partial a_n} = - \frac{\partial \pi_n}{\partial t}
$$
which is precisely the form of Hamilton's equations, with the
$$
a_n
$$
s as the generalized coordinates, the
$$
\pi_n
$$
s as the conjugate momenta, and
$$
\langle H\rangle
$$
taking the place of the classical Hamiltonian.
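These relations are easy to verify numerically at a single instant; a minimal sketch with a random Hermitian matrix standing in for the Hamiltonian (all values are illustrative assumptions):
```python
import numpy as np

rng = np.random.default_rng(1)
hbar = 1.0

# A random Hermitian matrix <n'|H|n> in an orthonormal basis (toy example).
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (A + A.conj().T) / 2

a = rng.normal(size=3) + 1j * rng.normal(size=3)   # coefficients a_n(t) at one instant
a /= np.linalg.norm(a)

# <H> = sum a*_{n'} a_n <n'|H|n>, so the gradient in a* is (H a)_{n'}.
grad_astar = H @ a

# Schrödinger's equation gives i*hbar*da/dt = H a, so d<H>/da* = i*hbar*da/dt.
a_dot = H @ a / (1j * hbar)
assert np.allclose(grad_astar, 1j * hbar * a_dot)

# With pi_n = i*hbar*a*_n, the second equation d<H>/da_n = -dpi_n/dt also holds.
grad_a = (H @ a).conj()            # d<H>/da_n, using Hermiticity of H
pi_dot = 1j * hbar * a_dot.conj()
assert np.allclose(grad_a, -pi_dot)
print("Hamilton's equations verified at this instant.")
```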
Codd's twelve rules are a set of thirteen rules (numbered zero to twelve) proposed by Edgar F. Codd, a pioneer of the relational model for databases, designed to define what is required from a database management system in order for it to be considered relational, i.e., a relational database management system (RDBMS). They are sometimes referred to as "Codd's Twelve Commandments".
## History
Codd originally set out the rules in 1970, and developed them further in a 1974 conference paper. His aim was to prevent the vision of the original relational database from being diluted, as database vendors scrambled in the early 1980s to repackage existing products with a relational veneer. Rule 12 was particularly designed to counter such a positioning.
While in 1999, a textbook stated "Nowadays, most RDBMSs ... pass the test", another in 2007 suggested "no database system complies with all twelve rules." Codd himself, in his book "The Relational Model for Database Management: Version 2", acknowledged that while his original set of 12 rules can be used for coarse distinctions, the 333 features of his Relational Model Version 2 (RM/V2) are needed for distinctions of a finer grain.
## Rules
Rule 0: The foundation rule:
For any system that is advertised as, or claimed to be, a relational data base management system, that system must be able to manage data bases entirely through its relational capabilities.
Rule 1: The information rule:
All information in a relational data base is represented explicitly at the logical level and in exactly one way by values in tables.
Rule 2: The guaranteed access rule:
Each and every datum (atomic value) in a relational data base is guaranteed to be logically accessible by resorting to a combination of table name, primary key value and column name.
Rule 3: Systematic treatment of null values:
Null values (distinct from the empty character string or a string of blank characters and distinct from zero or any other number) are supported in fully relational DBMS for representing missing information and inapplicable information in a systematic way, independent of data type.
Rule 4: Dynamic online catalog based on the relational model:
The data base description is represented at the logical level in the same way as ordinary data, so that authorized users can apply the same relational language to its interrogation as they apply to the regular data.
Rule 5: The comprehensive data sublanguage rule:
A relational system may support several languages and various modes of terminal use (for example, the fill-in-the-blanks mode). However, there must be at least one language whose statements are expressible, per some well-defined syntax, as character strings and that is comprehensive in supporting all of the following items:
- Data definition.
- View definition.
- Data manipulation (interactive and by program).
- Integrity constraints.
- Authorization.
- Transaction boundaries (begin, commit and rollback).
Rule 6: The view updating rule:
All views that are theoretically updatable are also updatable by the system.
Rule 7: Relational Operations Rule / Possible for high-level insert, update, and delete:
The capability of handling a base relation or a derived relation as a single operand applies not only to the retrieval of data but also to the insertion, update and deletion of data.
Rule 8: Physical data independence:
Application programs and terminal activities remain logically unimpaired whenever any changes are made in either storage representations or access methods.
Rule 9: Logical data independence:
Application programs and terminal activities remain logically unimpaired when information-preserving changes of any kind that theoretically permit unimpairment are made to the base tables.
Rule 10: Integrity independence:
Integrity constraints specific to a particular relational data base must be definable in the relational data sublanguage and storable in the catalog, not in the application programs.
Rule 11: Distribution independence:
The end-user must not be able to see that the data is distributed over various locations. Users should always get the impression that the data is located at one site only.
Rule 12: The nonsubversion rule:
If a relational system has a low-level (single-record-at-a-time) language, that low level cannot be used to subvert or bypass the integrity rules and constraints expressed in the higher level relational language (multiple-records-at-a-time).
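For illustration, a small sketch using Python's built-in sqlite3 module shows the flavor of two of the rules; the table, rows, and view here are hypothetical examples, and no claim is made that any particular system is fully Codd-compliant:
```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee (emp_id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")
con.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                [(1, "Ada", "ENG"), (2, "Grace", "ENG"), (3, "Edgar", "RES")])

# Rule 2 (guaranteed access): any single datum is reachable through
# table name + primary key value + column name.
print(con.execute("SELECT name FROM employee WHERE emp_id = 3").fetchone())

# Rule 6 concerns view updating; here we at least query through a simple view.
con.execute("CREATE VIEW eng_staff AS "
            "SELECT emp_id, name FROM employee WHERE dept = 'ENG'")
print(con.execute("SELECT * FROM eng_staff").fetchall())
```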
Itô calculus, named after Kiyosi Itô, extends the methods of calculus to stochastic processes such as Brownian motion (see Wiener process). It has important applications in mathematical finance and stochastic differential equations.
The central concept is the Itô stochastic integral, a stochastic generalization of the Riemann–Stieltjes integral in analysis. The integrands and the integrators are now stochastic processes:
$$
Y_t = \int_0^t H_s\,dX_s,
$$
where H is a locally square-integrable process adapted to the filtration generated by X, which is a Brownian motion or, more generally, a semimartingale. The result of the integration is then another stochastic process. Concretely, the integral from 0 to any particular t is a random variable, defined as a limit of a certain sequence of random variables. The paths of Brownian motion fail to satisfy the requirements to be able to apply the standard techniques of calculus. So with the integrand a stochastic process, the Itô stochastic integral amounts to an integral with respect to a function which is not differentiable at any point and has infinite variation over every time interval.
The main insight is that the integral can be defined as long as the integrand H is adapted, which loosely speaking means that its value at time t can only depend on information available up until this time.
Roughly speaking, one chooses a sequence of partitions of the interval from 0 to t and constructs Riemann sums. Every time we are computing a Riemann sum, we are using a particular instantiation of the integrator. It is crucial which point in each of the small intervals is used to compute the value of the function. The limit then is taken in probability as the mesh of the partition is going to zero. Numerous technical details have to be taken care of to show that this limit exists and is independent of the particular sequence of partitions. Typically, the left end of the interval is used.
Important results of Itô calculus include the integration by parts formula and Itô's lemma, which is a change of variables formula. These differ from the formulas of standard calculus, due to quadratic variation terms. This can be contrasted to the Stratonovich integral as an alternative formulation; it does follow the chain rule, and does not require Itô's lemma.
The two integral forms can be converted to one another. The Stratonovich integral is obtained as the limiting form of a Riemann sum that employs the average of the stochastic variable over each small timestep, whereas the Itô integral considers it only at the beginning.
In mathematical finance, this evaluation strategy of the integral is conceptualized as first deciding what to do, then observing the change in the prices. The integrand is how much stock we hold, the integrator represents the movement of the prices, and the integral is how much money we have in total, including what our stock is worth, at any given moment. The prices of stocks and other traded financial assets can be modeled by stochastic processes such as Brownian motion or, more often, geometric Brownian motion (see Black–Scholes). Then, the Itô stochastic integral represents the payoff of a continuous-time trading strategy consisting of holding an amount Ht of the stock at time t. In this situation, the condition that H is adapted corresponds to the necessary restriction that the trading strategy can only make use of the available information at any time.
This prevents the possibility of unlimited gains through clairvoyance: buying the stock just before each uptick in the market and selling before each downtick. Similarly, the condition that H is adapted implies that the stochastic integral will not diverge when calculated as a limit of Riemann sums.
## Notation
The process defined before as
$$
Y_t = \int_0^t H\,dX\equiv\int_0^t H_s\,dX_s ,
$$
is itself a stochastic process with time parameter t, which is also sometimes written as Y = H · X. Alternatively, the integral is often written in differential form dY = H dX, which is equivalent to Y − Y0 = H · X.
As Itô calculus is concerned with continuous-time stochastic processes, it is assumed that an underlying filtered probability space is given
$$
(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\ge 0},\mathbb{P}) .
$$
The σ-algebra
$$
\mathcal{F}_t
$$
represents the information available up until time t, and a process H is adapted if Ht is
$$
\mathcal{F}_t
$$
-measurable. A Brownian motion B is understood to be an
$$
\mathcal{F}_t
$$
-Brownian motion, which is just a standard Brownian motion with the properties that Bt is
$$
\mathcal{F}_t
$$
-measurable and that Bt+s − Bt is independent of
$$
\mathcal{F}_t
$$
for all s ≥ 0.
## Integration with respect to Brownian motion
The Itô integral can be defined in a manner similar to the Riemann–Stieltjes integral, that is as a limit in probability of Riemann sums; such a limit does not necessarily exist pathwise. Suppose that is a Wiener process (Brownian motion) and that is a right-continuous (càdlàg), adapted and locally bounded process.
If
$$
\{\pi_n\}
$$
is a sequence of partitions of [0, t] with mesh width going to zero, then the Itô integral of H with respect to B up to time t is a random variable
$$
\int_0^t H \,d B =\lim_{n\rightarrow\infty} \sum_{[t_{i-1},t_i]\in\pi_n}H_{t_{i-1}}(B_{t_i}-B_{t_{i-1}}).
$$
It can be shown that this limit converges in probability.
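This limit is easy to approximate by simulation; a minimal sketch for the standard example
$$
\int_0^1 B\,dB,
$$
whose Itô value is (B1² − 1)/2 by Itô's lemma (the path resolution below is an arbitrary choice):
```python
import numpy as np

rng = np.random.default_rng(2)

# One Brownian path on [0, 1], and the left-endpoint Riemann sum for int B dB.
n = 100_000
dt = 1.0 / n
dB = rng.normal(0.0, np.sqrt(dt), size=n)
B = np.concatenate(([0.0], np.cumsum(dB)))

ito_sum = np.sum(B[:-1] * dB)         # H evaluated at the left end of each interval
print(ito_sum, (B[-1]**2 - 1.0) / 2)  # the two values agree closely
```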
For some applications, such as martingale representation theorems and local times, the integral is needed for processes that are not continuous. The predictable processes form the smallest class that is closed under taking limits of sequences and contains all adapted left-continuous processes. If H is any predictable process such that
$$
\int_0^t H^2\,ds<\infty
$$
for every t, then the integral of H with respect to B can be defined, and H is said to be B-integrable.
Any such process can be approximated by a sequence Hn of left-continuous, adapted and locally bounded processes, in the sense that
$$
\int_0^t (H-H_n)^2\,ds\to 0
$$
in probability. Then, the Itô integral is
$$
\int_0^t H\,dB = \lim_{n\to\infty}\int_0^t H_n\,dB
$$
where, again, the limit can be shown to converge in probability. The stochastic integral satisfies the Itô isometry
$$
\mathbb{E}\left[ \left(\int_0^t H_s \, dB_s\right)^2\right] = \mathbb{E} \left[ \int_0^t H_s^2\,ds\right ]
$$
which holds when H is bounded or, more generally, when the integral on the right hand side is finite.
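The isometry can be checked by Monte Carlo simulation; a sketch with the integrand Hs = Bs, for which both sides equal t²/2 (the path and sample counts are arbitrary assumptions):
```python
import numpy as np

rng = np.random.default_rng(3)

paths, n, t = 20_000, 500, 1.0
dt = t / n
dB = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
B = np.cumsum(dB, axis=1) - dB                # left endpoints B_{t_{i-1}}, with B_0 = 0

lhs = np.mean(np.sum(B * dB, axis=1) ** 2)    # E[(int_0^1 B dB)^2]
rhs = np.mean(np.sum(B**2, axis=1) * dt)      # E[int_0^1 B^2 ds]
print(lhs, rhs, t**2 / 2)                     # all approximately 0.5
```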
## Itô processes
An Itô process is defined to be an adapted stochastic process that can be expressed as the sum of an integral with respect to Brownian motion and an integral with respect to time,
$$
X_t=X_0+\int_0^t\sigma_s\,dB_s + \int_0^t\mu_s\,ds.
$$
Here, B is a Brownian motion and it is required that σ is a predictable B-integrable process, and μ is predictable and (Lebesgue) integrable.
That is,
$$
\int_0^t(\sigma_s^2+|\mu_s|)\,ds<\infty
$$
for each t.
The stochastic integral can be extended to such Itô processes,
$$
\int_0^t H\,dX =\int_0^t H_s\sigma_s\,dB_s + \int_0^t H_s\mu_s\,ds.
$$
This is defined for all locally bounded and predictable integrands. More generally, it is required that Hσ be B-integrable and Hμ be Lebesgue integrable, so that
$$
\int_0^t \left(H^2 \sigma^2 + |H\mu| \right) ds < \infty.
$$
Such predictable processes H are called X-integrable.
An important result for the study of Itô processes is Itô's lemma.
In its simplest form, for any twice continuously differentiable function f on the reals and Itô process X as described above, it states that
$$
Y_t=f(X_t)
$$
is itself an Itô process satisfying
$$
d Y_t = f^\prime(X_t) \mu_t\,d t + \tfrac{1}{2} f^{\prime\prime} (X_t) \sigma_t^2 \, d t
+ f^\prime(X_t) \sigma_t \,dB_t .
$$
This is the stochastic calculus version of the change of variables formula and chain rule. It differs from the standard result due to the additional term involving the second derivative of f, which comes from the property that Brownian motion has non-zero quadratic variation.
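A pathwise simulation makes the extra second-derivative term visible; a minimal sketch for the assumed special case f(x) = exp(x) with constant coefficients μ and σ (dropping the σ²/2 term breaks the agreement):
```python
import numpy as np

rng = np.random.default_rng(4)

mu, sigma = 0.1, 0.4                 # constant coefficients (assumption)
n = 200_000
dt = 1.0 / n
dB = rng.normal(0.0, np.sqrt(dt), size=n)
X = np.concatenate(([0.0], np.cumsum(mu * dt + sigma * dB)))

# Integrate dY = (f' mu + 0.5 f'' sigma^2) dt + f' sigma dB with f = f' = f'' = exp.
Y = 1.0 + np.cumsum(np.exp(X[:-1]) * ((mu + 0.5 * sigma**2) * dt + sigma * dB))

print(Y[-1], np.exp(X[-1]))          # the integrated SDE tracks f(X_t) closely
```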
## Semimartingales as integrators
The Itô integral is defined with respect to a semimartingale X.
These are processes which can be decomposed as X = M + A for a local martingale M and finite variation process A. Important examples of such processes include Brownian motion, which is a martingale, and Lévy processes. For a left continuous, locally bounded and adapted process H the integral H · X exists, and can be calculated as a limit of Riemann sums. Let πn be a sequence of partitions of [0, t] with mesh going to zero,
$$
\int_0^t H\,dX = \lim_{n\to\infty} \sum_{t_{i-1},t_i\in\pi_n}H_{t_{i-1}}(X_{t_i}-X_{t_{i-1}}).
$$
This limit converges in probability. The stochastic integral of left-continuous processes is general enough for studying much of stochastic calculus. For example, it is sufficient for applications of Itô's Lemma, changes of measure via Girsanov's theorem, and for the study of stochastic differential equations.
However, it is inadequate for other important topics such as martingale representation theorems and local times.
The integral extends to all predictable and locally bounded integrands, in a unique way, such that the dominated convergence theorem holds. That is, if Hn → H and |Hn| ≤ J for a locally bounded process J, then
$$
\int_0^t H_n \,dX \to \int_0^t H \,dX,
$$
in probability. The uniqueness of the extension from left-continuous to predictable integrands is a result of the monotone class lemma.
In general, the stochastic integral can be defined even in cases where the predictable process H is not locally bounded. If K = 1/(1 + |H|) then K and KH are bounded. Associativity of stochastic integration implies that H is X-integrable, with integral H · X = Y, if and only if Y0 = 0 and K · Y = (KH) · X. The set of X-integrable processes is denoted by L(X).
## Properties
The following properties can be found in standard works on stochastic calculus:
- The stochastic integral is a càdlàg process. Furthermore, it is a semimartingale.
- The discontinuities of the stochastic integral are given by the jumps of the integrator multiplied by the integrand. The jump of a càdlàg process X at a time t is Xt − Xt−, and is often denoted by ΔXt. With this notation, Δ(H · X) = H ΔX. A particular consequence of this is that integrals with respect to a continuous process are always themselves continuous.
- Associativity. Let J, K be predictable processes, and K be X-integrable. Then, J is K · X-integrable if and only if JK is X-integrable, in which case
$$
J\cdot (K\cdot X) = (JK)\cdot X
$$
- Dominated convergence. Suppose that Hn → H and |Hn| ≤ J, where J is an X-integrable process. Then Hn · X → H · X. Convergence is in probability at each time t. In fact, it converges uniformly on compact sets in probability.
- The stochastic integral commutes with the operation of taking quadratic covariations. If X and Y are semimartingales then any X-integrable process will also be [X, Y]-integrable, and [H · X, Y] = H · [X, Y].
A consequence of this is that the quadratic variation process of a stochastic integral is equal to an integral of a quadratic variation process,
$$
[H\cdot X] = H^2\cdot[X]
$$
## Integration by parts
As with ordinary calculus, integration by parts is an important result in stochastic calculus. The integration by parts formula for the Itô integral differs from the standard result due to the inclusion of a quadratic covariation term. This term comes from the fact that Itô calculus deals with processes with non-zero quadratic variation, which only occurs for infinite variation processes (such as Brownian motion). If X and Y are semimartingales then
$$
X_t Y_t = X_0 Y_0 + \int_0^t X_{s-} \, dY_s + \int_0^t Y_{s-} \, dX_s + [X,Y]_t
$$
where [X, Y] is the quadratic covariation process.
The result is similar to the integration by parts theorem for the Riemann–Stieltjes integral but has an additional quadratic variation term.
## Itô's lemma
Itô's lemma is the version of the chain rule or change of variables formula which applies to the Itô integral. It is one of the most powerful and frequently used theorems in stochastic calculus. For a continuous n-dimensional semimartingale X = (X1, ..., Xn) and twice continuously differentiable function f from Rn to R, it states that f(X) is a semimartingale and,
$$
df(X_t)= \sum_{i=1}^n f_{i}(X_t)\,dX^i_t + \frac{1}{2} \sum_{i,j=1}^n f_{i,j}(X_{t}) \, d[X^i,X^j]_t.
$$
This differs from the chain rule used in standard calculus due to the term involving the quadratic covariation [Xi, Xj].
The formula can be generalized to include an explicit time-dependence in
$$
f,
$$
and in other ways (see Itô's lemma).
## Martingale integrators
### Local martingales
An important property of the Itô integral is that it preserves the local martingale property. If M is a local martingale and H is a locally bounded predictable process then H · M is also a local martingale. For integrands which are not locally bounded, there are examples where H · M is not a local martingale. However, this can only occur when M is not continuous.
If M is a continuous local martingale then a predictable process H is M-integrable if and only if
$$
\int_0^t H^2 \, d[M] <\infty,
$$
for each t, and H · M is always a local martingale.
The most general statement for a discontinuous local martingale M is that if (H^2 · [M])^{1/2} is locally integrable then H · M exists and is a local martingale.
### Square integrable martingales
For bounded integrands, the Itô stochastic integral preserves the space of square integrable martingales, which is the set of càdlàg martingales M such that E[Mt^2] is finite for all t. For any such square integrable martingale M, the quadratic variation process [M] is integrable, and the Itô isometry states that
$$
\mathbb{E}\left [(H\cdot M_t)^2\right ]=\mathbb{E}\left [\int_0^t H^2\,d[M]\right ].
$$
This equality holds more generally for any martingale M such that H^2 · [M]t is integrable.
The Itô isometry is often used as an important step in the construction of the stochastic integral, by defining H · M to be the unique extension of this isometry from a certain class of simple integrands to all bounded and predictable processes.
### p-Integrable martingales
For any p > 1, and bounded predictable integrand, the stochastic integral preserves the space of p-integrable martingales. These are càdlàg martingales M such that E(|Mt|^p) is finite for all t. However, this is not always true in the case where p = 1. There are examples of integrals of bounded predictable processes with respect to martingales which are not themselves martingales.
The maximum process of a càdlàg process M is written as M*t = sup{ |Ms| : s ≤ t }. For any p ≥ 1 and bounded predictable integrand, the stochastic integral preserves the space of càdlàg martingales M such that E[(M*t)^p] is finite for all t. If p > 1 then this is the same as the space of p-integrable martingales, by Doob's inequalities.
The Burkholder–Davis–Gundy inequalities state that, for any given p ≥ 1, there exist positive constants c, C that depend on p, but not on M or on t, such that
$$
c\mathbb{E} \left [ [M]_t^{\frac{p}{2}} \right ] \le \mathbb{E}\left [(M^*_t)^p \right ]\le C\mathbb{E}\left [ [M]_t^{\frac{p}{2}} \right ]
$$
for all càdlàg local martingales .
These are used to show that if M*t is integrable and H is a bounded predictable process then
$$
\mathbb{E}\left [ ((H\cdot M)_t^*)^p \right ] \le C\mathbb{E}\left [(H^2\cdot[M]_t)^{\frac{p}{2}} \right ] < \infty
$$
and, consequently, H · M is a p-integrable martingale. More generally, this statement is true whenever (H^2 · [M]t)^{p/2} is integrable.
## Existence of the integral
Proofs that the Itô integral is well defined typically proceed by first looking at very simple integrands, such as piecewise constant, left continuous and adapted processes where the integral can be written explicitly. Such simple predictable processes are linear combinations of terms of the form Ht = 1{t > T}A for stopping times T and FT-measurable random variables A, for which the integral is
$$
H\cdot X_t\equiv \mathbf{1}_{\{t>T\}}A(X_t-X_T).
$$
This is extended to all simple predictable processes by the linearity of H · X in H.
For a Brownian motion B, the property that it has independent increments with zero mean and variance Var(Bt) = t can be used to prove the Itô isometry for simple predictable integrands,
$$
\mathbb{E} \left [ (H\cdot B_t)^2\right ] = \mathbb{E} \left [\int_0^tH_s^2\,ds\right ].
$$
By a continuous linear extension, the integral extends uniquely to all predictable integrands satisfying
$$
\mathbb{E} \left[ \int_0^t H^2 \, ds \right ] < \infty,
$$
in such a way that the Itô isometry still holds.
It can then be extended to all B-integrable processes by localization. This method allows the integral to be defined with respect to any Itô process.
For a general semimartingale X, the decomposition X = M + A into a local martingale M plus a finite variation process A can be used.
Then, the integral can be shown to exist separately with respect to M and A and combined using linearity, H · X = H · M + H · A, to get the integral with respect to X. The standard Lebesgue–Stieltjes integral allows integration to be defined with respect to finite variation processes, so the existence of the Itô integral for semimartingales will follow from any construction for local martingales.
For a càdlàg square integrable martingale M, a generalized form of the Itô isometry can be used. First, the Doob–Meyer decomposition theorem is used to show that a decomposition M^2 = N + ⟨M⟩ exists, where N is a martingale and ⟨M⟩ is a right-continuous, increasing and predictable process starting at zero. This uniquely defines ⟨M⟩, which is referred to as the predictable quadratic variation of M. The Itô isometry for square integrable martingales is then
$$
\mathbb{E} \left [(H\cdot M_t)^2\right ]= \mathbb{E} \left [\int_0^tH^2_s\,d\langle M\rangle_s\right],
$$
which can be proved directly for simple predictable integrands.
As with the case above for Brownian motion, a continuous linear extension can be used to uniquely extend the integral to all predictable integrands satisfying
$$
\mathbb{E}\left[\int_0^t H^2\,d\langle M\rangle\right]<\infty.
$$
This method can be extended to all local square integrable martingales by localization. Finally, the Doob–Meyer decomposition can be used to decompose any local martingale into the sum of a local square integrable martingale and a finite variation process, allowing the Itô integral to be constructed with respect to any semimartingale.
Many other proofs exist which apply similar methods but which avoid the need to use the Doob–Meyer decomposition theorem, such as the use of the quadratic variation [M] in the Itô isometry, the use of the Doléans measure for submartingales, or the use of the Burkholder–Davis–Gundy inequalities instead of the Itô isometry.
The latter applies directly to local martingales without having to first deal with the square integrable martingale case.
Alternative proofs exist that only make use of the fact that X is càdlàg, adapted, and that the set {H · Xt : H is simple previsible, |H| ≤ 1} is bounded in probability for each time t, which is an alternative definition for X to be a semimartingale. A continuous linear extension can be used to construct the integral for all left-continuous and adapted integrands with right limits everywhere (caglad or L-processes). This is general enough to be able to apply techniques such as Itô's lemma. Also, a Khintchine inequality can be used to prove the dominated convergence theorem and extend the integral to general predictable integrands.
## Differentiation in Itô calculus
The Itô calculus is first and foremost defined as an integral calculus as outlined above. However, there are also different notions of "derivative" with respect to Brownian motion:
### Malliavin derivative
Malliavin calculus provides a theory of differentiation for random variables defined over Wiener space, including an integration by parts formula .
### Martingale representation
The following result allows martingales to be expressed as Itô integrals: if M is a square-integrable martingale on a time interval [0, T] with respect to the filtration generated by a Brownian motion B, then there is a unique adapted square integrable process
$$
\alpha
$$
on such that
$$
M_{t} = M_{0} + \int_{0}^{t} \alpha_{s} \, \mathrm{d} B_{s}
$$
almost surely, and for all t ∈ [0, T]. This representation theorem can be interpreted formally as saying that α is the "time derivative" of M with respect to Brownian motion B, since α is precisely the process that must be integrated up to time t to obtain Mt − M0, as in deterministic calculus.
## Itô calculus for physicists
In physics, usually stochastic differential equations (SDEs), such as Langevin equations, are used, rather than stochastic integrals.
Here an Itô stochastic differential equation (SDE) is often formulated via
$$
\dot{x}_k = h_k + g_{kl} \xi_l,
$$
where
$$
\xi_k
$$
is Gaussian white noise with
$$
\langle\xi_k(t_1)\,\xi_l(t_2)\rangle = \delta_{kl}\delta(t_1-t_2)
$$
and Einstein's summation convention is used.
If
$$
y = y(x_k)
$$
is a function of the xk, then Itô's lemma has to be used:
$$
\dot{y}=\frac{\partial y}{\partial x_j}\dot{x}_j+\frac{1}{2}\frac{\partial^2 y}{\partial x_k \, \partial x_l} g_{km}g_{ml}.
$$
An Itô SDE as above also corresponds to a Stratonovich SDE which reads
$$
\dot{x}_k = h_k + g_{kl} \xi_l - \frac{1}{2} \frac{\partial g_{kl}}{\partial {x_m}} g_{ml}.
$$
SDEs frequently occur in physics in Stratonovich form, as limits of stochastic differential equations driven by colored noise if the correlation time of the noise term approaches zero.
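The difference between the two interpretations shows up directly in simulation; a sketch for the assumed toy equation ẋ = x ξ (so h = 0 and g(x) = x), integrated on the same noise with a left-point (Itô) rule and a Heun-type midpoint (Stratonovich) rule:
```python
import numpy as np

rng = np.random.default_rng(5)

n = 100_000
dt = 1.0 / n
dB = rng.normal(0.0, np.sqrt(dt), size=n)

x_ito, x_strat = 1.0, 1.0
for db in dB:
    x_ito += x_ito * db                 # left-point rule: Ito interpretation
    pred = x_strat + x_strat * db       # predictor for the Heun (midpoint) rule
    x_strat += 0.5 * (x_strat + pred) * db   # Stratonovich interpretation

B1 = dB.sum()
print(x_ito, np.exp(B1 - 0.5))          # Ito solution:          exp(B_1 - 1/2)
print(x_strat, np.exp(B1))              # Stratonovich solution: exp(B_1)
```
The gap between the two results is exactly the (1/2) g ∂g/∂x = x/2 drift correction discussed above.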
For a recent treatment of different interpretations of stochastic differential equations see for example .
A tree stack automaton (plural: tree stack automata) is a formalism considered in automata theory. It is a finite-state automaton with the additional ability to manipulate a tree-shaped stack. It is an automaton with storage whose storage roughly resembles the configurations of a thread automaton. A restricted class of tree stack automata recognises exactly the languages generated by multiple context-free grammars (or linear context-free rewriting systems).
## Definition
### Tree stack
For a finite and non-empty set Γ, a tree stack over Γ is a tuple (t, p) where
- t is a partial function from strings of positive integers to the set Γ ∪ {@} with prefix-closed domain (called tree),
- @ (called bottom symbol) is not in Γ and appears exactly at the root of t, and
- p is an element of the domain of t (called stack pointer).
The set of all tree stacks over Γ is denoted by TS(Γ).
The set of predicates on TS(Γ), denoted by P(Γ), contains the following unary predicates:
- true, which is true for any tree stack over Γ,
- bottom, which is true for tree stacks whose stack pointer points to the bottom symbol, and
- equals(γ), which is true for some tree stack (t, p) if t(p) = γ,
for every γ ∈ Γ.
The set of instructions on TS(Γ), denoted by I(Γ), contains the following partial functions (a small code sketch follows the list):
- id, which is the identity function on TS(Γ),
- push(n, γ), which adds for a given tree stack (t, p) a pair (pn, γ) to the tree and sets the stack pointer to pn (i.e. it pushes γ to the n-th child position) if pn is not yet in the domain of t,
- up(n), which replaces the current stack pointer p by pn (i.e. it moves the stack pointer to the n-th child position) if pn is in the domain of t,
- down, which removes the last symbol from the stack pointer (i.e. it moves the stack pointer to the parent position), and
- set(γ), which replaces the symbol currently under the stack pointer by γ,
for every positive integer n and every γ ∈ Γ.
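A small Python sketch of a tree stack supporting these instructions (the class and method names simply mirror the informal descriptions above; an illustration, not a normative definition):
```python
class TreeStack:
    """Tree addressed by tuples of positive integers; () is the root, holding '@'."""

    def __init__(self):
        self.tree = {(): "@"}            # bottom symbol at the root
        self.pointer = ()                # stack pointer starts at the root

    def push(self, n, symbol):           # write symbol at the n-th child, move to it
        child = self.pointer + (n,)
        assert child not in self.tree
        self.tree[child] = symbol
        self.pointer = child

    def up(self, n):                     # move the pointer to an existing n-th child
        child = self.pointer + (n,)
        assert child in self.tree
        self.pointer = child

    def down(self):                      # move the pointer to the parent position
        self.pointer = self.pointer[:-1]

    def set(self, symbol):               # replace the symbol under the pointer
        assert self.pointer != ()        # the bottom symbol is never overwritten
        self.tree[self.pointer] = symbol

ts = TreeStack()
ts.push(1, "a"); ts.push(2, "b"); ts.down(); ts.set("c")
print(ts.tree, ts.pointer)   # {(): '@', (1,): 'c', (1, 2): 'b'} (1,)
```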
### Tree stack automata
A tree stack automaton is a 6-tuple A = (Q, Γ, Σ, qi, δ, Qf) where
- Q, Γ, and Σ are finite sets (whose elements are called states, stack symbols, and input symbols, respectively),
- qi ∈ Q (the initial state),
- δ is a finite subset of Q × Σ* × P(Γ) × I(Γ) × Q (whose elements are called transitions), and
- Qf ⊆ Q (whose elements are called final states).
A configuration of A is a tuple (q, c, w) where
- q is a state (the current state),
- c is a tree stack (the current tree stack), and
- w is a word over Σ (the remaining word to be read).
A transition (q1, u, P, f, q2) is applicable to a configuration (q, c, w) if
- q1 = q,
- P is true on c,
- f is defined for c, and
- u is a prefix of w.
The transition relation of A is the binary relation on configurations of A that is the union of all the relations ⊢τ for a transition τ = (q1, u, P, f, q2) where, whenever τ is applicable to (q, c, w), we have (q, c, w) ⊢τ (q2, f(c), w′) and w′ is obtained from w by removing the prefix u.
The language of A is the set of all words w for which there is some final state q ∈ Qf and some tree stack c such that (qi, c0, w) ⊢* (q, c, ε), where
- ⊢* is the reflexive transitive closure of ⊢ and
- c0 = (t0, ε) such that t0 assigns for ε the symbol @ and is undefined otherwise.
## Related formalisms
Tree stack automata are equivalent to Turing machines.
A tree stack automaton is called k-restricted for some positive natural number k if, during any run of the automaton, any position of the tree stack is accessed at most k times from below.
1-restricted tree stack automata are equivalent to pushdown automata and therefore also to context-free grammars.
k-restricted tree stack automata are equivalent to linear context-free rewriting systems and multiple context-free grammars of fan-out at most k (for every positive integer k).
Information retrieval (IR) in computing and information science is the task of identifying and retrieving information system resources that are relevant to an information need. The information need can be specified in the form of a search query. In the case of document retrieval, queries can be based on full-text or other content-based indexing. Information retrieval is the science of searching for information in a document, searching for documents themselves, and also searching for the metadata that describes data, and for databases of texts, images or sounds.
Automated information retrieval systems are used to reduce what has been called information overload. An IR system is a software system that provides access to books, journals and other documents; it also stores and manages those documents. Web search engines are the most visible IR applications.
## Overview
An information retrieval process begins when a user enters a query into the system. Queries are formal statements of information needs, for example search strings in web search engines. In information retrieval, a query does not uniquely identify a single object in the collection. Instead, several objects may match the query, perhaps with different degrees of relevance.
An object is an entity that is represented by information in a content collection or database. User queries are matched against the database information.
However, as opposed to classical SQL queries of a database, in information retrieval the results returned may or may not match the query, so results are typically ranked. This ranking of results is a key difference of information retrieval searching compared to database searching.
Depending on the application the data objects may be, for example, text documents, images, audio, mind maps or videos. Often the documents themselves are not kept or stored directly in the IR system, but are instead represented in the system by document surrogates or metadata.
Most IR systems compute a numeric score on how well each object in the database matches the query, and rank the objects according to this value. The top ranking objects are then shown to the user. The process may then be iterated if the user wishes to refine the query.
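A toy sketch of such scoring and ranking, using TF-IDF weights and cosine similarity (the corpus, query, and IDF smoothing are illustrative assumptions; production systems use inverted indexes and far richer models):
```python
import math
from collections import Counter

docs = ["the cat sat on the mat", "dogs chase the cat", "stock markets fell"]
query = "cat mat"

# Document frequency of each term, and a smoothed IDF so no weight is zero.
df = Counter(w for d in docs for w in set(d.split()))
idf = lambda w: math.log((1 + len(docs)) / (1 + df[w])) + 1

def vectorize(text):
    tf = Counter(text.split())
    return {w: c * idf(w) for w, c in tf.items()}

def cosine(a, b):
    dot = sum(wt * b.get(w, 0.0) for w, wt in a.items())
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b)) if a and b else 0.0

q = vectorize(query)
for d in sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True):
    print(round(cosine(q, vectorize(d)), 3), d)   # best match first
```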
## History
The idea of using computers to search for relevant pieces of information was popularized in the article As We May Think by Vannevar Bush in 1945. It would appear that Bush was inspired by patents for a 'statistical machine' – filed by Emanuel Goldberg in the 1920s and 1930s – that searched for documents stored on film. The first description of a computer searching for information was given by Holmstrom in 1948, which included an early mention of the Univac computer.