A given optimization program may or may not be a convex program. In general, whether the program is convex affects the difficulty of solving it.
- Stochastic programming studies the case in which some of the constraints or parameters depend on random variables.
- Robust optimization is, like stochastic programming, an attempt to capture uncertainty in the data underlying the optimization problem. Robust optimization aims to find solutions that are valid under all possible realizations of the uncertainties defined by an uncertainty set.
- Combinatorial optimization is concerned with problems where the set of feasible solutions is discrete or can be reduced to a discrete one.
- Stochastic optimization is used with random (noisy) function measurements or random inputs in the search process.
- Infinite-dimensional optimization studies the case when the set of feasible solutions is a subset of an infinite-dimensional space, such as a space of functions.
- Heuristics and metaheuristics make few or no assumptions about the problem being optimized. Usually, heuristics do not guarantee that any optimal solution need be found. On the other hand, heuristics are used to find approximate solutions for many complicated optimization problems.
- Constraint satisfaction studies the case in which the objective function f is constant (this is used in artificial intelligence, particularly in automated reasoning).
- Constraint programming is a programming paradigm wherein relations between variables are stated in the form of constraints.
- Disjunctive programming is used where at least one constraint must be satisfied but not all. It is of particular use in scheduling.
- Space mapping is a concept for modeling and optimization of an engineering system to high-fidelity (fine) model accuracy exploiting a suitable physically meaningful coarse or surrogate model.
In a number of subfields, the techniques are designed primarily for optimization in dynamic contexts (that is, decision making over time):
- Calculus of variations is concerned with finding the best way to achieve some goal, such as finding a surface whose boundary is a specific curve, but with the least possible area.
- Optimal control theory is a generalization of the calculus of variations which introduces control policies.
- Dynamic programming is an approach to optimization, including stochastic problems with randomness or unknown model parameters, in which the strategy is based on splitting the problem into smaller subproblems. The equation that describes the relationship between these subproblems is called the Bellman equation (a minimal sketch follows this list).
- Mathematical programming with equilibrium constraints is where the constraints include variational inequalities or complementarities.
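As a concrete illustration of the Bellman-equation idea in the dynamic programming entry above, the following sketch computes shortest-path costs on a small hypothetical graph by splitting the problem into subproblems. The node names and edge costs are invented for illustration and are not taken from the text.

```python
# Bellman recursion on a hypothetical directed acyclic graph:
# V(node) = min over outgoing edges of (edge cost + V(successor)).
from functools import lru_cache

# edges[u] = list of (successor, cost) pairs; "goal" is the terminal node.
edges = {
    "start": [("a", 2.0), ("b", 5.0)],
    "a": [("b", 1.0), ("goal", 6.0)],
    "b": [("goal", 2.0)],
    "goal": [],
}

@lru_cache(maxsize=None)
def value(node: str) -> float:
    """Optimal cost-to-go from `node`, defined by the Bellman equation."""
    if node == "goal":
        return 0.0
    return min(cost + value(nxt) for nxt, cost in edges[node])

print(value("start"))  # 2 + 1 + 2 = 5.0
```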
### Multi-objective optimization
Adding more than one objective to an optimization problem adds complexity. For example, to optimize a structural design, one would desire a design that is both light and rigid. When two objectives conflict, a trade-off must be created.
There may be one lightest design, one stiffest design, and an infinite number of designs that are some compromise of weight and rigidity. The set of trade-off designs that improve upon one criterion at the expense of another is known as the Pareto set. The curve created by plotting weight against stiffness of the best designs is known as the Pareto frontier.
A design is judged to be "Pareto optimal" (equivalently, "Pareto efficient" or in the Pareto set) if it is not dominated by any other design: If it is worse than another design in some respects and no better in any respect, then it is dominated and is not Pareto optimal.
The choice among "Pareto optimal" solutions to determine the "favorite solution" is delegated to the decision maker. In other words, defining the problem as multi-objective optimization signals that some information is missing: desirable objectives are given but combinations of them are not rated relative to each other. In some cases, the missing information can be derived by interactive sessions with the decision maker.
Multi-objective optimization problems have been generalized further into vector optimization problems where the (partial) ordering is no longer given by the Pareto ordering.
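For a finite list of candidate designs, the Pareto set described above can be computed directly by the dominance test. The sketch below uses hypothetical designs with two objectives to be minimized (for instance weight and compliance, the inverse of rigidity); the data are illustrative, not taken from the text.

```python
# Extract the Pareto set from hypothetical (name, objective1, objective2) candidates,
# where both objectives are to be minimized.
designs = [
    ("A", 1.0, 9.0),
    ("B", 2.0, 4.0),
    ("C", 3.0, 3.5),
    ("D", 2.5, 6.0),   # dominated by B (heavier and less stiff)
    ("E", 5.0, 1.0),
]

def dominates(p, q):
    """p dominates q if p is no worse in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(p[1:], q[1:]))
            and any(a < b for a, b in zip(p[1:], q[1:])))

pareto_set = [p for p in designs if not any(dominates(q, p) for q in designs)]
print([name for name, *_ in pareto_set])  # ['A', 'B', 'C', 'E']
```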
### Multi-modal or global optimization
Optimization problems are often multi-modal; that is, they possess multiple good solutions. They could all be globally good (same cost function value) or there could be a mix of globally good and locally good solutions. Obtaining all (or at least some of) the multiple solutions is the goal of a multi-modal optimizer.
Classical optimization techniques due to their iterative approach do not perform satisfactorily when they are used to obtain multiple solutions, since it is not guaranteed that different solutions will be obtained even with different starting points in multiple runs of the algorithm.
Common approaches to global optimization problems, where multiple local extrema may be present, include evolutionary algorithms, Bayesian optimization, and simulated annealing.
## Classification of critical points and extrema
### Feasibility problem
The satisfiability problem, also called the feasibility problem, is just the problem of finding any feasible solution at all without regard to objective value. This can be regarded as the special case of mathematical optimization where the objective value is the same for every solution, and thus any solution is optimal.
Many optimization algorithms need to start from a feasible point.
One way to obtain such a point is to relax the feasibility conditions using a slack variable; with enough slack, any starting point is feasible. Then minimize that slack variable until the slack is zero or negative.
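A minimal sketch of the slack-variable idea, assuming SciPy is available; the constraint functions below are hypothetical. The slack s is minimized subject to the relaxed constraints g_i(x) ≤ s, and a final slack at or below zero indicates that a genuinely feasible point has been found.

```python
import numpy as np
from scipy.optimize import minimize

def g(x):
    """Hypothetical constraints, feasible when both values are <= 0."""
    return np.array([x[0] ** 2 + x[1] ** 2 - 4.0,   # inside a circle of radius 2
                     1.0 - x[0] - x[1]])            # on or above the line x + y = 1

def objective(z):          # z = (x1, x2, s): minimize only the slack s
    return z[2]

constraints = [{"type": "ineq", "fun": lambda z: z[2] - g(z[:2])}]  # s - g_i(x) >= 0

z0 = np.array([5.0, 5.0, 100.0])   # with enough slack, any starting point is feasible
result = minimize(objective, z0, method="SLSQP", constraints=constraints)
x_feasible, slack = result.x[:2], result.x[2]
print(x_feasible, slack)           # slack <= 0 means x_feasible satisfies the original constraints
```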
### Existence
The extreme value theorem of Karl Weierstrass states that a continuous real-valued function on a compact set attains its maximum and minimum value. More generally, a lower semi-continuous function on a compact set attains its minimum; an upper semi-continuous function on a compact set attains its maximum.
### Necessary conditions for optimality
One of Fermat's theorems states that optima of unconstrained problems are found at stationary points, where the first derivative or the gradient of the objective function is zero (see first derivative test). More generally, they may be found at critical points, where the first derivative or gradient of the objective function is zero or is undefined, or on the boundary of the choice set. An equation (or set of equations) stating that the first derivative(s) equal(s) zero at an interior optimum is called a 'first-order condition' or a set of first-order conditions.
Optima of equality-constrained problems can be found by the Lagrange multiplier method. The optima of problems with equality and/or inequality constraints can be found using the 'Karush–Kuhn–Tucker conditions'.
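As a worked sketch of the Lagrange multiplier method, here is a symbolic solution of a hypothetical equality-constrained problem (assuming SymPy; the objective and constraint are illustrative choices, not from the text).

```python
# Minimize f(x, y) = x**2 + y**2 subject to x + y = 1 via Lagrange multipliers.
import sympy as sp

x, y, lam = sp.symbols("x y lambda", real=True)
f = x**2 + y**2
g = x + y - 1                       # equality constraint g(x, y) = 0

L = f - lam * g                     # Lagrangian
stationarity = [sp.diff(L, v) for v in (x, y, lam)]   # first-order conditions
solutions = sp.solve(stationarity, [x, y, lam], dict=True)
print(solutions)                    # x = 1/2, y = 1/2, lambda = 1
```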
### Sufficient conditions for optimality
While the first derivative test identifies points that might be extrema, this test does not distinguish a point that is a minimum from one that is a maximum or one that is neither. When the objective function is twice differentiable, these cases can be distinguished by checking the second derivative or the matrix of second derivatives (called the Hessian matrix) in unconstrained problems, or the matrix of second derivatives of the objective function and the constraints called the bordered Hessian in constrained problems. The conditions that distinguish maxima, or minima, from other stationary points are called 'second-order conditions' (see 'Second derivative test').
If a candidate solution satisfies the first-order conditions, then the satisfaction of the second-order conditions as well is sufficient to establish at least local optimality.
### Sensitivity and continuity of optima
The envelope theorem describes how the value of an optimal solution changes when an underlying parameter changes. The process of computing this change is called comparative statics.
The maximum theorem of Claude Berge (1963) describes the continuity of an optimal solution as a function of underlying parameters.
### Calculus of optimization
For unconstrained problems with twice-differentiable functions, some critical points can be found by finding the points where the gradient of the objective function is zero (that is, the stationary points). More generally, a zero subgradient certifies that a local minimum has been found for minimization problems with convex functions and other locally Lipschitz functions, such as those that arise when minimizing the loss functions of neural networks.
Positive-negative momentum estimation can help the iterates escape local minima and converge toward the global minimum of the objective function.
Further, critical points can be classified using the definiteness of the Hessian matrix: If the Hessian is positive definite at a critical point, then the point is a local minimum; if the Hessian matrix is negative definite, then the point is a local maximum; finally, if indefinite, then the point is some kind of saddle point.
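A small numerical sketch of this classification, assuming NumPy; the function whose Hessian is examined is a hypothetical saddle-shaped example, not one from the text.

```python
import numpy as np

def hessian(x):
    """Hessian of f(x, y) = x**2 - y**2 (constant for this quadratic example)."""
    return np.array([[2.0, 0.0],
                     [0.0, -2.0]])

eigenvalues = np.linalg.eigvalsh(hessian(np.zeros(2)))  # eigenvalues of the symmetric Hessian
if np.all(eigenvalues > 0):
    print("positive definite: local minimum")
elif np.all(eigenvalues < 0):
    print("negative definite: local maximum")
elif np.any(eigenvalues > 0) and np.any(eigenvalues < 0):
    print("indefinite: saddle point")          # this branch fires here: eigenvalues are -2 and 2
else:
    print("semidefinite: the second-order test is inconclusive")
```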
Constrained problems can often be transformed into unconstrained problems with the help of Lagrange multipliers. Lagrangian relaxation can also provide approximate solutions to difficult constrained problems.
When the objective function is a convex function, then any local minimum will also be a global minimum. There exist efficient numerical techniques for minimizing convex functions, such as interior-point methods.
### Global convergence
More generally, if the objective function is not a quadratic function, then many optimization methods use other techniques to ensure that some subsequence of iterations converges to an optimal solution. The first and still popular method for ensuring convergence relies on line searches, which optimize a function along one dimension. A second and increasingly popular method for ensuring convergence uses trust regions. Both line searches and trust regions are used in modern methods of non-differentiable optimization. Usually, a global optimizer is much slower than advanced local optimizers (such as BFGS), so often an efficient global optimizer can be constructed by starting the local optimizer from different starting points.
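A sketch of the multistart strategy just mentioned, assuming SciPy; the one-dimensional multimodal objective, the search interval, and the number of restarts are illustrative choices.

```python
# Run a local optimizer (BFGS) from several random starting points and keep the best result.
import numpy as np
from scipy.optimize import minimize

def objective(x):
    return np.sin(3.0 * x[0]) + (x[0] - 0.5) ** 2   # has several local minima

rng = np.random.default_rng(0)
best = None
for _ in range(20):
    x0 = rng.uniform(-5.0, 5.0, size=1)             # random starting point
    res = minimize(objective, x0, method="BFGS")    # local optimizer
    if best is None or res.fun < best.fun:
        best = res

print(best.x, best.fun)   # best local minimum found across all starts
```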
## Computational optimization techniques
To solve problems, researchers may use algorithms that terminate in a finite number of steps, or iterative methods that converge to a solution (on some specified class of problems), or heuristics that may provide approximate solutions to some problems (although their iterates need not converge).
### Optimization algorithms
- Simplex algorithm of George Dantzig, designed for linear programming
- Extensions of the simplex algorithm, designed for quadratic programming and for linear-fractional programming
- Variants of the simplex algorithm that are especially suited for network optimization
- Combinatorial algorithms
- Quantum optimization algorithms
### Iterative methods
The iterative methods used to solve problems of nonlinear programming differ according to whether they evaluate Hessians, gradients, or only function values. While evaluating Hessians (H) and gradients (G) improves the rate of convergence, for functions for which these quantities exist and vary sufficiently smoothly, such evaluations increase the computational complexity (or computational cost) of each iteration. In some cases, the computational complexity may be excessively high.
One major criterion for optimizers is just the number of required function evaluations, as this often is already a large computational effort, usually much more effort than within the optimizer itself, which mainly has to operate over the N variables. The derivatives provide detailed information for such optimizers but are even harder to calculate; e.g., approximating the gradient takes at least N+1 function evaluations. For approximations of the 2nd derivatives (collected in the Hessian matrix), the number of function evaluations is in the order of N². Newton's method requires the 2nd-order derivatives, so for each iteration the number of function calls is in the order of N², but for a simpler pure gradient optimizer it is only N. However, gradient optimizers usually need more iterations than Newton's algorithm. Which one is best with respect to the number of function calls depends on the problem itself.
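The N+1 evaluation count for a gradient approximation can be seen in a short sketch (assuming NumPy; the quadratic test function is a hypothetical stand-in).

```python
# Forward-difference gradient: one evaluation at x plus one per coordinate.
import numpy as np

def approx_gradient(f, x, h=1e-6):
    n = len(x)
    f0 = f(x)                                  # 1 evaluation
    grad = np.empty(n)
    for i in range(n):                         # N further evaluations
        step = np.zeros(n)
        step[i] = h
        grad[i] = (f(x + step) - f0) / h
    return grad                                # total: N + 1 evaluations

f = lambda x: np.sum(x ** 2)
print(approx_gradient(f, np.array([1.0, 2.0, 3.0])))   # close to the exact gradient 2*x
```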
- Methods that evaluate Hessians (or approximate Hessians, using finite differences):
- Newton's method
- Sequential quadratic programming: A Newton-based method for small-medium scale constrained problems. Some versions can handle large-dimensional problems.
- Interior point methods: This is a large class of methods for constrained optimization, some of which use only (sub)gradient information and others of which require the evaluation of Hessians.
- Methods that evaluate gradients, or approximate gradients in some way (or even subgradients):
- Coordinate descent methods: Algorithms which update a single coordinate in each iteration
- Conjugate gradient methods: Iterative methods for large problems. (In theory, these methods terminate in a finite number of steps with quadratic objective functions, but this finite termination is not observed in practice on finite–precision computers.)
- Gradient descent (alternatively, "steepest descent" or "steepest ascent"): A (slow) method of historical and theoretical interest, which has had renewed interest for finding approximate solutions of enormous problems (see the sketch after this list).
- Subgradient methods: An iterative method for large locally Lipschitz functions using generalized gradients. Following Boris T. Polyak, subgradient–projection methods are similar to conjugate–gradient methods.
- Bundle method of descent: An iterative method for small–medium-sized problems with locally Lipschitz functions, particularly for convex minimization problems (similar to conjugate gradient methods).
- Ellipsoid method: An iterative method for small problems with quasiconvex objective functions and of great theoretical interest, particularly in establishing the polynomial time complexity of some combinatorial optimization problems. It has similarities with Quasi-Newton methods.
- Conditional gradient method (Frank–Wolfe) for approximate minimization of specially structured problems with linear constraints, especially with traffic networks. For general unconstrained problems, this method reduces to the gradient method, which is regarded as obsolete (for almost all problems).
- Quasi-Newton methods: Iterative methods for medium-large problems (e.g. N<1000).
- Simultaneous perturbation stochastic approximation (SPSA) method for stochastic optimization; uses random (efficient) gradient approximation.
- Methods that evaluate only function values: If a problem is continuously differentiable, then gradients can be approximated using finite differences, in which case a gradient-based method can be used.
- Interpolation methods
- Pattern search methods, which have better convergence properties than the Nelder–Mead heuristic (with simplices), which is listed below.
- Mirror descent
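To make the gradient-only versus Hessian-based distinction in the list above concrete (see the gradient descent and Newton's method entries), here is a minimal sketch assuming NumPy; the objective, starting point, and step size are illustrative choices, not from the text.

```python
import numpy as np

def f(x):
    return x[0] ** 4 + x[1] ** 2

def grad(x):
    return np.array([4.0 * x[0] ** 3, 2.0 * x[1]])

def hess(x):
    return np.array([[12.0 * x[0] ** 2, 0.0],
                     [0.0, 2.0]])

x = np.array([1.0, 1.0])

gd_step = x - 0.1 * grad(x)                          # gradient descent: uses the gradient only
newton_step = x - np.linalg.solve(hess(x), grad(x))  # Newton: uses gradient and Hessian

print(f(x), f(gd_step), f(newton_step))              # both steps reduce the objective here
```

A full implementation would iterate these updates and add a line search or trust region, as discussed under Global convergence above.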
### Heuristics
Besides (finitely terminating) algorithms and (convergent) iterative methods, there are heuristics. A heuristic is any algorithm which is not guaranteed (mathematically) to find the solution, but which is nevertheless useful in certain practical situations. List of some well-known heuristics:
- Differential evolution
- Dynamic relaxation
- Evolutionary algorithms
- Genetic algorithms
- Hill climbing with random restart
- Memetic algorithm
- Nelder–Mead simplicial heuristic: A popular heuristic for approximate minimization (without calling gradients)
- Particle swarm optimization
- Simulated annealing
- Stochastic tunneling
- Tabu search
## Applications
### Mechanics
Problems in rigid body dynamics (in particular articulated rigid body dynamics) often require mathematical programming techniques, since rigid body dynamics can be viewed as attempting to solve an ordinary differential equation on a constraint manifold; the constraints are various nonlinear geometric constraints such as "these two points must always coincide", "this surface must not penetrate any other", or "this point must always lie somewhere on this curve". Also, the problem of computing contact forces can be addressed by solving a linear complementarity problem, which can also be viewed as a QP (quadratic programming) problem.
Many design problems can also be expressed as optimization programs. This application is called design optimization. One subset is engineering optimization, and another recent and growing subset of this field is multidisciplinary design optimization, which, while useful in many problems, has in particular been applied to aerospace engineering problems.
This approach may be applied in cosmology and astrophysics.
### Economics and finance
Economics is closely enough linked to optimization of agents that an influential definition relatedly describes economics qua science as the "study of human behavior as a relationship between ends and scarce means" with alternative uses. Modern optimization theory includes traditional optimization theory but also overlaps with game theory and the study of economic equilibria. The Journal of Economic Literature codes classify mathematical programming, optimization techniques, and related topics under JEL:C61-C63.
In microeconomics, the utility maximization problem and its dual problem, the expenditure minimization problem, are economic optimization problems. Insofar as they behave consistently, consumers are assumed to maximize their utility, while firms are usually assumed to maximize their profit. Also, agents are often modeled as being risk-averse, thereby preferring to avoid risk. Asset prices are also modeled using optimization theory, though the underlying mathematics relies on optimizing stochastic processes rather than on static optimization. International trade theory also uses optimization to explain trade patterns between nations.
The optimization of portfolios is an example of multi-objective optimization in economics.
Since the 1970s, economists have modeled dynamic decisions over time using control theory. For example, dynamic search models are used to study labor-market behavior. A crucial distinction is between deterministic and stochastic models. Macroeconomists build dynamic stochastic general equilibrium (DSGE) models that describe the dynamics of the whole economy as the result of the interdependent optimizing decisions of workers, consumers, investors, and governments.
### Electrical engineering
Some common applications of optimization techniques in electrical engineering include active filter design, stray field reduction in superconducting magnetic energy storage systems, space mapping design of microwave structures, handset antennas (N. Friedrich, “Space mapping outpaces EM optimization in handset-antenna design,” Microwaves & RF, August 30, 2013), and electromagnetics-based design. Electromagnetically validated design optimization of microwave components and antennas has made extensive use of an appropriate physics-based or empirical surrogate model and space mapping methodologies since the discovery of space mapping in 1993. Optimization techniques are also used in power-flow analysis.
### Civil engineering
Optimization has been widely used in civil engineering. Construction management and transportation engineering are among the main branches of civil engineering that heavily rely on optimization. The most common civil engineering problems that are solved by optimization are cut and fill of roads, life-cycle analysis of structures and infrastructures, resource leveling, water resource allocation, traffic management and schedule optimization.
### Operations research
Another field that uses optimization techniques extensively is operations research. Operations research also uses stochastic modeling and simulation to support improved decision-making. Increasingly, operations research uses stochastic programming to model dynamic decisions that adapt to events; such problems can be solved with large-scale optimization and stochastic optimization methods.
### Control engineering
Mathematical optimization is used in much modern controller design. High-level controllers such as model predictive control (MPC) or real-time optimization (RTO) employ mathematical optimization. These algorithms run online and repeatedly determine values for decision variables, such as choke openings in a process plant, by iteratively solving a mathematical optimization problem including constraints and a model of the system to be controlled.
### Geophysics
Optimization techniques are regularly used in geophysical parameter estimation problems.
Given a set of geophysical measurements, e.g. seismic recordings, it is common to solve for the physical properties and geometrical shapes of the underlying rocks and fluids. The majority of problems in geophysics are nonlinear, with both deterministic and stochastic methods being widely used.
### Molecular modeling
Nonlinear optimization methods are widely used in conformational analysis.
### Computational systems biology
Optimization techniques are used in many facets of computational systems biology such as model building, optimal experimental design, metabolic engineering, and synthetic biology. Linear programming has been applied to calculate the maximal possible yields of fermentation products, and to infer gene regulatory networks from multiple microarray datasets as well as transcriptional regulatory networks from high-throughput data. Nonlinear programming has been used to analyze energy metabolism and has been applied to metabolic engineering and parameter estimation in biochemical pathways.
### Machine learning
## Solvers
https://en.wikipedia.org/wiki/Mathematical_optimization%23Computational_optimization_techniques
In mathematics, an n-dimensional differential structure (or differentiable structure) on a set M makes M into an n-dimensional differential manifold, which is a topological manifold with some additional structure that allows for differential calculus on the manifold. If M is already a topological manifold, it is required that the new topology be identical to the existing one.
## Definition
For a natural number n and some k which may be a non-negative integer or infinity, an n-dimensional Ck differential structure is defined using a Ck-atlas, which is a set of bijections called charts between subsets of M (whose union is the whole of M) and open subsets of
$$
\mathbb{R}^{n}
$$
:
$$
\varphi_{i}:M\supset W_{i}\rightarrow U_{i}\subset\mathbb{R}^{n}
$$
which are Ck-compatible (in the sense defined below):
Each chart allows a subset of the manifold to be viewed as an open subset of
$$
\mathbb{R}^{n}
$$
, but the usefulness of this depends on how much the charts agree when their domains overlap.
Consider two charts:
$$
\varphi_{i}:W_{i}\rightarrow U_{i},
$$
$$
\varphi_{j}:W_{j}\rightarrow U_{j}.
$$
The intersection of their domains is
$$
W_{ij}=W_{i}\cap W_{j}
$$
whose images under the two charts are
$$
U_{ij}=\varphi_{i}\left(W_{ij}\right),
$$
$$
U_{ji}=\varphi_{j}\left(W_{ij}\right).
$$
The transition map between the two charts translates between their images on their shared domain:
$$
\varphi_{ij}:U_{ij}\rightarrow U_{ji}
$$
$$
\varphi_{ij}(x)=\varphi_{j}\left(\varphi_{i}^{-1}\left(x\right)\right).
$$
Two charts
$$
\varphi_{i},\,\varphi_{j}
$$
are Ck-compatible if
$$
U_{ij},\, U_{ji}
$$
are open, and the transition maps
$$
\varphi_{ij},\,\varphi_{ji}
$$
have continuous partial derivatives of order k. If k = 0, we only require that the transition maps are continuous; consequently, a C0-atlas is simply another way to define a topological manifold. If k = ∞, derivatives of all orders must be continuous.
A family of Ck-compatible charts covering the whole manifold is a Ck-atlas defining a Ck differential manifold. Two atlases are Ck-equivalent if the union of their sets of charts forms a Ck-atlas. In particular, a Ck-atlas that is C0-compatible with a C0-atlas that defines a topological manifold is said to determine a Ck differential structure on the topological manifold. The Ck equivalence classes of such atlases are the distinct Ck differential structures of the manifold. Each distinct differential structure is determined by a unique maximal atlas, which is simply the union of all atlases in the equivalence class.
For simplification of language, without any loss of precision, one might just call a maximal Ck−atlas on a given set a Ck−manifold. This maximal atlas then uniquely determines both the topology and the underlying set, the latter being the union of the domains of all charts, and the former having the set of all these domains as a basis.
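A standard illustration, not taken from this article: on the real line, the two single-chart atlases
$$
\varphi_{1}(x)=x \qquad\text{and}\qquad \varphi_{2}(x)=x^{3}
$$
are each C∞-atlases, but they are not C1-compatible with each other, because the transition map
$$
\varphi_{1}\circ\varphi_{2}^{-1}(y)=y^{1/3}
$$
is continuous but not differentiable at y = 0. The two maximal atlases they generate are therefore distinct differential structures on the same topological manifold, although the map x ↦ x^{1/3} (the real cube root) is a diffeomorphism between the two resulting smooth manifolds, in line with the uniqueness results below.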
## Existence and uniqueness theorems
For any integer k > 0 and any n−dimensional Ck−manifold, the maximal atlas contains a C∞−atlas on the same underlying set by a theorem due to Hassler Whitney. It has also been shown that any maximal Ck−atlas contains some number of distinct maximal C∞−atlases whenever n > 0, although for any pair of these distinct C∞−atlases there exists a C∞−diffeomorphism identifying the two. It follows that there is only one class of smooth structures (modulo pairwise smooth diffeomorphism) over any topological manifold which admits a differentiable structure, i.e. the C∞−structures in a Ck−manifold. A bit loosely, one might express this by saying that the smooth structure is (essentially) unique. The case for k = 0 is different.
Namely, there exist topological manifolds which admit no C1−structure, a result proved by Kervaire (1960) and later explained in the context of Donaldson's theorem (compare Hilbert's fifth problem).
Smooth structures on an orientable manifold are usually counted modulo orientation-preserving smooth homeomorphisms. There then arises the question whether orientation-reversing diffeomorphisms exist. There is an "essentially unique" smooth structure for any topological manifold of dimension smaller than 4. For compact manifolds of dimension greater than 4, there is a finite number of "smooth types", i.e. equivalence classes of pairwise smoothly diffeomorphic smooth structures. In the case of Rn with n ≠ 4, the number of these types is one, whereas for n = 4, there are uncountably many such types. These are referred to as exotic R4s.
## Differential structures on spheres of dimension 1 to 20
The following table lists the number of smooth types of the topological m−sphere Sm for the values of the dimension m from 1 up to 20. Spheres with a smooth, i.e. C∞−differential structure not smoothly diffeomorphic to the usual one are known as exotic spheres.
| Dimension | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Smooth types | 1 | 1 | 1 | ≥1 | 1 | 1 | 28 | 2 | 8 | 6 | 992 | 1 | 3 | 2 | 16256 | 2 | 16 | 16 | 523264 | 24 |
It is not currently known how many smooth types the topological 4-sphere S4 has, except that there is at least one. There may be one, a finite number, or an infinite number. The claim that there is just one is known as the smooth Poincaré conjecture (see Generalized Poincaré conjecture). Most mathematicians believe that this conjecture is false, i.e. that S4 has more than one smooth type. The problem is connected with the existence of more than one smooth type of the topological 4-disk (or 4-ball).
## Differential structures on topological manifolds
As mentioned above, in dimensions smaller than 4, there is only one differential structure for each topological manifold. That was proved by Tibor Radó for dimension 1 and 2, and by Edwin E. Moise in dimension 3. By using obstruction theory, Robion Kirby and Laurent C. Siebenmann were able to show that the number of PL structures for compact topological manifolds of dimension greater than 4 is finite. John Milnor, Michel Kervaire, and Morris Hirsch proved that the number of smooth structures on a compact PL manifold is finite and agrees with the number of differential structures on the sphere for the same dimension (see the book by Asselmeyer-Maluga and Brans, chapter 7). By combining these results, the number of smooth structures on a compact topological manifold of dimension not equal to 4 is finite.
Dimension 4 is more complicated. For compact manifolds, results depend on the complexity of the manifold as measured by the second Betti number b2. For large Betti numbers b2 > 18 in a simply connected 4-manifold, one can use a surgery along a knot or link to produce a new differential structure. With the help of this procedure one can produce countably infinitely many differential structures. But even for simple spaces such as
$$
S^4, {\mathbb C}P^2,...
$$
it is not known how to construct other differential structures. For non-compact 4-manifolds there are many examples like
$$
{\mathbb R}^4,S^3\times {\mathbb R},M^4\smallsetminus\{*\},...
$$
having uncountably many differential structures.
https://en.wikipedia.org/wiki/Differential_structure
In statistics, a confidence region is a multi-dimensional generalization of a confidence interval. For a bivariate normal distribution, it is an ellipse, also known as the error ellipse. More generally, it is a set of points in an n-dimensional space, often represented as a hyperellipsoid around a point which is an estimated solution to a problem, although other shapes can occur.
## Interpretation
The confidence region is calculated in such a way that if a set of measurements were repeated many times and a confidence region calculated in the same way on each set of measurements, then a certain percentage of the time (e.g. 95%) the confidence region would include the point representing the "true" values of the set of variables being estimated. However, unless certain assumptions about prior probabilities are made, it does not mean, when one confidence region has been calculated, that there is a 95% probability that the "true" values lie inside the region, since we do not assume any particular probability distribution of the "true" values and we may or may not have other information about where they are likely to lie.
## The case of independent, identically normally-distributed errors
Suppose we have found a solution
$$
\boldsymbol{\beta}
$$
to the following overdetermined problem:
$$
\mathbf{Y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon}
$$
where Y is an n-dimensional column vector containing observed values of the dependent variable, X is an n-by-p matrix of observed values of independent variables (which can represent a physical model) which is assumed to be known exactly,
$$
\boldsymbol{\beta}
$$
is a column vector containing the p parameters which are to be estimated, and
$$
\boldsymbol{\varepsilon}
$$
is an n-dimensional column vector of errors which are assumed to be independently distributed with normal distributions with zero mean and each having the same unknown variance
$$
\sigma^2
$$
.
A joint 100(1 − α) % confidence region for the elements of
$$
\boldsymbol{\beta}
$$
is represented by the set of values of the vector b which satisfy the following inequality:
$$
(\boldsymbol{\hat{\beta}} - \mathbf{b})^\operatorname{T} \mathbf{X}^\operatorname{T} \mathbf{X}(\boldsymbol{\hat{\beta}} - \mathbf{b}) \le ps^2 F_{1 - \alpha}(p,\nu) ,
$$
where the variable b represents any point in the confidence region, p is the number of parameters, i.e. the number of elements of the vector
$$
\boldsymbol{\beta}
$$
,
$$
\boldsymbol{\hat{\beta}}
$$
is the vector of estimated parameters, and s2 is the reduced chi-squared, an unbiased estimate of
$$
\sigma^2
$$
equal to
$$
s^2=\frac{\varepsilon^\operatorname{T} \varepsilon}{n - p}.
$$
Further, F is the quantile function of the F-distribution, with p and
$$
\nu = n - p
$$
degrees of freedom,
$$
\alpha
$$
is the statistical significance level, and the symbol
$$
X^\operatorname{T}
$$
means the transpose of
$$
X
$$
.
The expression can be rewritten as:
$$
(\boldsymbol{\hat{\beta}} - \mathbf{b})^\operatorname{T} \mathbf{C}_\mathbf{\beta}^{-1} (\boldsymbol{\hat{\beta}} - \mathbf{b}) \le p F_{1 - \alpha}(p,\nu) ,
$$
where
$$
\mathbf{C}_\mathbf{\beta} = s^2 \left( \mathbf{X}^\operatorname{T} \mathbf{X} \right)^{-1}
$$
is the least-squares scaled covariance matrix of
$$
\boldsymbol{\hat{\beta}}
$$
.
The above inequality defines an ellipsoidal region in the p-dimensional Cartesian parameter space Rp.
The centre of the ellipsoid is at the estimate
$$
\boldsymbol{\hat{\beta}}
$$
. According to Press et al., it is easier to plot the ellipsoid after doing singular value decomposition. The lengths of the axes of the ellipsoid are proportional to the reciprocals of the values on the diagonals of the diagonal matrix, and the directions of these axes are given by the rows of the 3rd matrix of the decomposition.
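A numerical sketch (assuming NumPy and SciPy; the data below are simulated for illustration, not taken from the text) of testing whether a candidate parameter vector b lies inside the joint 100(1 − α)% confidence region defined by the inequality above:

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(1)
n, p = 50, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # design matrix (with intercept)
beta_true = np.array([1.0, 2.0])
Y = X @ beta_true + rng.normal(scale=0.5, size=n)        # simulated observations

beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)         # least-squares estimate
residuals = Y - X @ beta_hat
s2 = residuals @ residuals / (n - p)                     # reduced chi-squared

def in_confidence_region(b, alpha=0.05):
    d = beta_hat - b
    lhs = d @ (X.T @ X) @ d                              # (beta_hat - b)' X'X (beta_hat - b)
    rhs = p * s2 * f_dist.ppf(1.0 - alpha, p, n - p)     # p * s^2 * F_{1-alpha}(p, n-p)
    return lhs <= rhs

print(in_confidence_region(beta_true))                   # usually True at the 95% level
print(in_confidence_region(np.array([5.0, -5.0])))       # a far-away point: False
```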
## Weighted and generalised least squares
Now consider the more general case where some distinct elements of
$$
\boldsymbol{\varepsilon}
$$
have known nonzero covariance (in other words, the errors in the observations are not independently distributed), and/or the standard deviations of the errors are not all equal. Suppose the covariance matrix of
$$
\boldsymbol{\varepsilon}
$$
is
$$
\mathbf{V}\sigma^2
$$
, where V is an n-by-n nonsingular matrix which was equal to
$$
\mathbf{I}
$$
in the more specific case handled in the previous section (where I is the identity matrix), but here is allowed to have nonzero off-diagonal elements representing the covariance of pairs of individual observations, as well as diagonal elements that are not necessarily all equal.
It is possible to find a nonsingular symmetric matrix P such that
$$
\mathbf{P}^\prime\mathbf{P} = \mathbf{P}\mathbf{P} = \mathbf{V}
$$
In effect, P is a square root of the covariance matrix V.
The least-squares problem
$$
\mathbf{Y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon}
$$
can then be transformed by left-multiplying each term by the inverse of P, forming the new problem formulation
$$
\mathbf{Z} = \mathbf{Q}\boldsymbol{\beta} + \mathbf{f} ,
$$
where
$$
\mathbf{Z} = \mathbf{P}^{-1}\mathbf{Y}
$$
$$
\mathbf{Q} = \mathbf{P}^{-1}\mathbf{X}
$$
and
$$
\mathbf{f} = \mathbf{P}^{-1}\boldsymbol{\varepsilon}
$$
A joint confidence region for the parameters, i.e. for the elements of
$$
\boldsymbol{\beta}
$$
, is then bounded by the ellipsoid given by:
$$
(\mathbf{b} - \boldsymbol{\hat{\beta}})^\prime \mathbf{Q}^\prime\mathbf{Q}(\mathbf{b} - \boldsymbol{\hat{\beta}}) = {\frac{p}{n - p}} (\mathbf{Z}^\prime\mathbf{Z}
- \mathbf{b}^\prime\mathbf{Q}^\prime\mathbf{Z})F_{1 - \alpha}(p,n-p).
$$
Here F represents the percentage point of the F-distribution and the quantities p and n-p are the degrees of freedom which are the parameters of this distribution.
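A short sketch of this transformation, assuming NumPy and SciPy; V, X, and Y are small hypothetical inputs. The symmetric square root P is computed with scipy.linalg.sqrtm, and the transformed problem is then solved by ordinary least squares.

```python
import numpy as np
from scipy.linalg import sqrtm

V = np.array([[1.0, 0.5],
              [0.5, 2.0]])           # error covariance, up to the factor sigma^2
X = np.array([[1.0, 0.0],
              [1.0, 1.0]])           # design matrix
Y = np.array([1.0, 3.0])             # observations

P = sqrtm(V).real                    # symmetric matrix with P @ P = V
Z = np.linalg.solve(P, Y)            # Z = P^{-1} Y
Q = np.linalg.solve(P, X)            # Q = P^{-1} X

beta_hat, *_ = np.linalg.lstsq(Q, Z, rcond=None)   # ordinary least squares on (Q, Z)
print(beta_hat)
```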
## Nonlinear problems
Confidence regions can be defined for any probability distribution. The experimenter can choose the significance level and the shape of the region, and then the size of the region is determined by the probability distribution. A natural choice is to use as a boundary a set of points with constant
$$
\chi^2
$$
(chi-squared) values.
One approach is to use a linear approximation to the nonlinear model, which may be a close approximation in the vicinity of the solution, and then apply the analysis for a linear problem to find an approximate confidence region. This may be a reasonable approach if the confidence region is not very large and the second derivatives of the model are also not very large.
Bootstrapping approaches can also be used.
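As one way to realise the linear-approximation idea (an illustration, not a prescription from the text), the sketch below fits an assumed exponential model with SciPy, uses the Jacobian at the solution in place of the linear design matrix, and evaluates an approximate F-based bound for a 95% region; the model, noise level, and confidence level are all assumptions of the example, and the result is only trustworthy when the region is small.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import f as f_dist

rng = np.random.default_rng(2)

# Hypothetical nonlinear model y = a * exp(-b * t) + noise, for illustration.
t = np.linspace(0.0, 4.0, 30)
a_true, b_true = 2.0, 0.8
y = a_true * np.exp(-b_true * t) + 0.05 * rng.normal(size=t.size)

def residuals(theta):
    a, b = theta
    return a * np.exp(-b * t) - y

fit = least_squares(residuals, x0=[1.0, 1.0])
n, p = t.size, fit.x.size
J = fit.jac                      # Jacobian at the solution, playing the role of X
rss = np.sum(fit.fun ** 2)       # residual sum of squares at the solution
s2 = rss / (n - p)

# Approximate 95% region: (theta - theta_hat)' J'J (theta - theta_hat)
# <= p * s2 * F_{0.95}(p, n - p), valid only near the solution.
bound = p * s2 * f_dist.ppf(0.95, p, n - p)
print("theta_hat:", fit.x, " approximate ellipsoid bound:", bound)
```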
|
https://en.wikipedia.org/wiki/Confidence_region
|
In solid-state physics, the electronic band structure (or simply band structure) of a solid describes the range of energy levels that electrons may have within it, as well as the ranges of energy that they may not have (called band gaps or forbidden bands).
Band theory derives these bands and band gaps by examining the allowed quantum mechanical wave functions for an electron in a large, periodic lattice of atoms or molecules. Band theory has been successfully used to explain many physical properties of solids, such as electrical resistivity and optical absorption, and forms the foundation of the understanding of all solid-state devices (transistors, solar cells, etc.).
## Why bands and band gaps occur
The formation of electronic bands and band gaps can be illustrated with two complementary models for electrons in solids. The first one is the nearly free electron model, in which the electrons are assumed to move almost freely within the material. In this model, the electronic states resemble free electron plane waves, and are only slightly perturbed by the crystal lattice. This model explains the origin of the electronic dispersion relation, but the explanation for band gaps is subtle in this model.
The second model starts from the opposite limit, in which the electrons are tightly bound to individual atoms. The electrons of a single, isolated atom occupy atomic orbitals with discrete energy levels.
|
https://en.wikipedia.org/wiki/Electronic_band_structure
|
The second model starts from the opposite limit, in which the electrons are tightly bound to individual atoms. The electrons of a single, isolated atom occupy atomic orbitals with discrete energy levels. If two atoms come close enough so that their atomic orbitals overlap, the electrons can tunnel between the atoms. This tunneling splits (hybridizes) the atomic orbitals into molecular orbitals with different energies.
Similarly, if a large number N of identical atoms come together to form a solid, such as a crystal lattice, the atoms' atomic orbitals overlap with the nearby orbitals. Each discrete energy level splits into N levels, each with a different energy. Since the number of atoms in a macroscopic piece of solid is a very large number (on the order of 10^22), the number of orbitals that hybridize with each other is very large. For this reason, the adjacent levels are very closely spaced in energy (of the order of 10^-22 eV), and can be considered to form a continuum, an energy band.
This formation of bands is mostly a feature of the outermost electrons (valence electrons) in the atom, which are the ones involved in chemical bonding and electrical conductivity. The inner electron orbitals do not overlap to a significant degree, so their bands are very narrow.
|
https://en.wikipedia.org/wiki/Electronic_band_structure
|
This formation of bands is mostly a feature of the outermost electrons (valence electrons) in the atom, which are the ones involved in chemical bonding and electrical conductivity. The inner electron orbitals do not overlap to a significant degree, so their bands are very narrow.
Band gaps are essentially leftover ranges of energy not covered by any band, a result of the finite widths of the energy bands. The bands have different widths, with the widths depending upon the degree of overlap in the atomic orbitals from which they arise. Two adjacent bands may simply not be wide enough to fully cover the range of energy. For example, the bands associated with core orbitals (such as 1s electrons) are extremely narrow due to the small overlap between adjacent atoms. As a result, there tend to be large band gaps between the core bands. Higher bands involve comparatively larger orbitals with more overlap, becoming progressively wider at higher energies so that there are no band gaps at higher energies.
## Basic concepts
### Assumptions and limits of band structure theory
Band theory is only an approximation to the quantum state of a solid, which applies to solids consisting of many identical atoms or molecules bonded together.
|
https://en.wikipedia.org/wiki/Electronic_band_structure
|
## Basic concepts
### Assumptions and limits of band structure theory
Band theory is only an approximation to the quantum state of a solid, which applies to solids consisting of many identical atoms or molecules bonded together. These are the assumptions necessary for band theory to be valid:
- Infinite-size system: For the bands to be continuous, the piece of material must consist of a large number of atoms. Since a macroscopic piece of material contains on the order of 10^22 atoms, this is not a serious restriction; band theory even applies to microscopic-sized transistors in integrated circuits. With modifications, the concept of band structure can also be extended to systems which are only "large" along some dimensions, such as two-dimensional electron systems.
- Homogeneous system: Band structure is an intrinsic property of a material, which assumes that the material is homogeneous. Practically, this means that the chemical makeup of the material must be uniform throughout the piece.
- Non-interactivity: The band structure describes "single electron states". The existence of these states assumes that the electrons travel in a static potential without dynamically interacting with lattice vibrations, other electrons, photons, etc.
|
https://en.wikipedia.org/wiki/Electronic_band_structure
|
Practically, this means that the chemical makeup of the material must be uniform throughout the piece.
- Non-interactivity: The band structure describes "single electron states". The existence of these states assumes that the electrons travel in a static potential without dynamically interacting with lattice vibrations, other electrons, photons, etc.
The above assumptions are broken in a number of important practical situations, and the use of band structure requires one to keep a close check on the limitations of band theory:
- Inhomogeneities and interfaces: Near surfaces, junctions, and other inhomogeneities, the bulk band structure is disrupted. Not only are there local small-scale disruptions (e.g., surface states or dopant states inside the band gap), but also local charge imbalances. These charge imbalances have electrostatic effects that extend deeply into semiconductors, insulators, and the vacuum (see doping, band bending).
- Along the same lines, most electronic effects (capacitance, electrical conductance, electric-field screening) involve the physics of electrons passing through surfaces and/or near interfaces.
|
https://en.wikipedia.org/wiki/Electronic_band_structure
|
These charge imbalances have electrostatic effects that extend deeply into semiconductors, insulators, and the vacuum (see doping, band bending).
- Along the same lines, most electronic effects (capacitance, electrical conductance, electric-field screening) involve the physics of electrons passing through surfaces and/or near interfaces. The full description of these effects, in a band structure picture, requires at least a rudimentary model of electron-electron interactions (see space charge, band bending).
- Small systems: For systems which are small along every dimension (e.g., a small molecule or a quantum dot), there is no continuous band structure. The crossover between small and large dimensions is the realm of mesoscopic physics.
- Strongly correlated materials (for example, Mott insulators) simply cannot be understood in terms of single-electron states. The electronic band structures of these materials are poorly defined (or at least, not uniquely defined) and may not provide useful information about their physical state.
### Crystalline symmetry and wavevectors
Band structure calculations take advantage of the periodic nature of a crystal lattice, exploiting its symmetry.
|
https://en.wikipedia.org/wiki/Electronic_band_structure
|
The electronic band structures of these materials are poorly defined (or at least, not uniquely defined) and may not provide useful information about their physical state.
### Crystalline symmetry and wavevectors
Band structure calculations take advantage of the periodic nature of a crystal lattice, exploiting its symmetry. The single-electron Schrödinger equation is solved for an electron in a lattice-periodic potential, giving Bloch electrons as solutions
$$
\psi_{n\mathbf{k}}(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}} u_{n\mathbf{k}}(\mathbf{r}),
$$
where k is called the wavevector. For each value of k, there are multiple solutions to the Schrödinger equation labelled by n, the band index, which simply numbers the energy bands.
Each of these energy levels evolves smoothly with changes in k, forming a smooth band of states. For each band we can define a function E_n(k), which is the dispersion relation for electrons in that band.
The wavevector takes on any value inside the Brillouin zone, which is a polyhedron in wavevector (reciprocal lattice) space that is related to the crystal's lattice.
|
https://en.wikipedia.org/wiki/Electronic_band_structure
|
For each band we can define a function E_n(k), which is the dispersion relation for electrons in that band.
The wavevector takes on any value inside the Brillouin zone, which is a polyhedron in wavevector (reciprocal lattice) space that is related to the crystal's lattice.
Wavevectors outside the Brillouin zone simply correspond to states that are physically identical to those states within the Brillouin zone.
Special high symmetry points/lines in the Brillouin zone are assigned labels like Γ, Δ, Λ, Σ (see Fig 1).
It is difficult to visualize the shape of a band as a function of wavevector, as it would require a plot in four-dimensional space, E vs. k_x, k_y, k_z. In scientific literature it is common to see band structure plots which show the values of E_n(k) for values of k along straight lines connecting symmetry points, often labelled Δ, Λ, Σ, or [100], [111], and [110], respectively. Another method for visualizing band structure is to plot a constant-energy isosurface in wavevector space, showing all of the states with energy equal to a particular value. The isosurface of states with energy equal to the Fermi level is known as the Fermi surface.
|
https://en.wikipedia.org/wiki/Electronic_band_structure
|
Another method for visualizing band structure is to plot a constant-energy isosurface in wavevector space, showing all of the states with energy equal to a particular value. The isosurface of states with energy equal to the Fermi level is known as the Fermi surface.
Energy band gaps can be classified using the wavevectors of the states surrounding the band gap:
- Direct band gap: the lowest-energy state above the band gap has the same k as the highest-energy state beneath the band gap.
- Indirect band gap: the closest states above and beneath the band gap do not have the same k value.
#### Asymmetry: Band structures in non-crystalline solids
Although electronic band structures are usually associated with crystalline materials, quasi-crystalline and amorphous solids may also exhibit band gaps. These are somewhat more difficult to study theoretically since they lack the simple symmetry of a crystal, and it is not usually possible to determine a precise dispersion relation. As a result, virtually all of the existing theoretical work on the electronic band structure of solids has focused on crystalline materials.
### Density of states
The density of states function g(E) is defined as the number of electronic states per unit volume, per unit energy, for electron energies near E.
The density of states function is important for calculations of effects based on band theory.
|
https://en.wikipedia.org/wiki/Electronic_band_structure
|
### Density of states
The density of states function g(E) is defined as the number of electronic states per unit volume, per unit energy, for electron energies near E.
The density of states function is important for calculations of effects based on band theory.
In Fermi's Golden Rule, a calculation for the rate of optical absorption, it provides both the number of excitable electrons and the number of final states for an electron. It appears in calculations of electrical conductivity where it provides the number of mobile states, and in computing electron scattering rates where it provides the number of final states after scattering.
For energies inside a band gap, g(E) = 0.
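To make the definition concrete, a small numerical sketch (free electrons in three dimensions, reduced units with hbar = m = 1, and spin degeneracy ignored; all of these are assumptions of the example) estimates g(E) by histogramming the energies E(k) = k^2/2 over a uniform k-grid.

```python
import numpy as np

# Numerical estimate of the density of states g(E) for free electrons in 3D,
# in reduced units hbar = m = 1 and ignoring spin degeneracy.
nk = 60
k = np.linspace(-2.0, 2.0, nk)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
E = 0.5 * (kx**2 + ky**2 + kz**2)

dk = k[1] - k[0]
weight = (dk / (2.0 * np.pi)) ** 3              # k-space volume per grid point / (2*pi)^3
counts, edges = np.histogram(E.ravel(), bins=40, range=(0.0, 2.0))
g = counts * weight / (edges[1] - edges[0])     # states per unit volume per unit energy

centres = 0.5 * (edges[:-1] + edges[1:])
print(np.round(centres[:5], 3), np.round(g[:5], 5))   # g(E) grows roughly like sqrt(E)
```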
### Filling of bands
At thermodynamic equilibrium, the likelihood of a state of energy E being filled with an electron is given by the Fermi–Dirac distribution, a thermodynamic distribution that takes into account the Pauli exclusion principle:
$$
f(E) = \frac{1}{1 + e^{{(E-\mu)}/{k_\text{B} T}}}
$$
where:
- $k_\text{B} T$ is the product of the Boltzmann constant and temperature, and
- $\mu$ is the total chemical potential of electrons, or Fermi level (in semiconductor physics, this quantity is more often denoted $E_\text{F}$). The Fermi level of a solid is directly related to the voltage on that solid, as measured with a voltmeter.
|
https://en.wikipedia.org/wiki/Electronic_band_structure
|
### Filling of bands
At thermodynamic equilibrium, the likelihood of a state of energy E being filled with an electron is given by the Fermi–Dirac distribution, a thermodynamic distribution that takes into account the Pauli exclusion principle:
$$
f(E) = \frac{1}{1 + e^{{(E-\mu)}/{k_\text{B} T}}}
$$
where:
- $k_\text{B} T$ is the product of the Boltzmann constant and temperature, and
- $\mu$ is the total chemical potential of electrons, or Fermi level (in semiconductor physics, this quantity is more often denoted $E_\text{F}$). The Fermi level of a solid is directly related to the voltage on that solid, as measured with a voltmeter. Conventionally, in band structure plots the Fermi level is taken to be the zero of energy (an arbitrary choice).
The density of electrons in the material is simply the integral of the Fermi–Dirac distribution times the density of states:
$$
N/V = \int_{-\infty}^{\infty} g(E) f(E)\, dE
$$
Although there are an infinite number of bands and thus an infinite number of states, there are only a finite number of electrons to place in these bands.
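A toy numerical version of this integral, assuming a hypothetical parabolic-band density of states g(E) proportional to sqrt(E) above a band edge at E = 0 and an arbitrarily chosen Fermi level; the point is only to show how the Fermi–Dirac factor weights the density of states.

```python
import numpy as np

k_B = 8.617e-5           # Boltzmann constant in eV/K
T = 300.0                # temperature in K
mu = 0.1                 # assumed Fermi level, 0.1 eV above the band edge

def fermi_dirac(E, mu, T):
    """Occupation probability f(E) from the Fermi-Dirac distribution."""
    return 1.0 / (1.0 + np.exp((E - mu) / (k_B * T)))

# Assumed density of states for a parabolic band starting at E = 0, in
# arbitrary units; only the sqrt(E) shape matters for this illustration.
E = np.linspace(0.0, 1.0, 20001)     # energies in eV
g = np.sqrt(E)
dE = E[1] - E[0]

# N/V = integral of g(E) f(E) dE, evaluated as a simple Riemann sum.
n_density = np.sum(g * fermi_dirac(E, mu, T)) * dE
print("electron density (arbitrary units):", n_density)
```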
|
https://en.wikipedia.org/wiki/Electronic_band_structure
|
Conventionally, in band structure plots the Fermi level is taken to be the zero of energy (an arbitrary choice).
The density of electrons in the material is simply the integral of the Fermi–Dirac distribution times the density of states:
$$
N/V = \int_{-\infty}^{\infty} g(E) f(E)\, dE
$$
Although there are an infinite number of bands and thus an infinite number of states, there are only a finite number of electrons to place in these bands.
The preferred value for the number of electrons is a consequence of electrostatics: even though the surface of a material can be charged, the internal bulk of a material prefers to be charge neutral.
The condition of charge neutrality means that N/V must match the density of protons in the material. For this to occur, the material electrostatically adjusts itself, shifting its band structure up or down in energy (thereby shifting g(E)), until it is at the correct equilibrium with respect to the Fermi level.
#### Names of bands near the Fermi level (conduction band, valence band)
A solid has an infinite number of allowed bands, just as an atom has infinitely many energy levels. However, most of the bands simply have too high energy, and are usually disregarded under ordinary circumstances.
|
https://en.wikipedia.org/wiki/Electronic_band_structure
|
A solid has an infinite number of allowed bands, just as an atom has infinitely many energy levels. However, most of the bands simply have too high energy, and are usually disregarded under ordinary circumstances.
Conversely, there are very low energy bands associated with the core orbitals (such as 1s electrons). These low-energy core bands are also usually disregarded since they remain filled with electrons at all times, and are therefore inert.
Likewise, materials have several band gaps throughout their band structure.
The most important bands and band gaps—those relevant for electronics and optoelectronics—are those with energies near the Fermi level.
The bands and band gaps near the Fermi level are given special names, depending on the material:
- In a semiconductor or band insulator, the Fermi level is surrounded by a band gap, referred to as the band gap (to distinguish it from the other band gaps in the band structure). The closest band above the band gap is called the conduction band, and the closest band beneath the band gap is called the valence band. The name "valence band" was coined by analogy to chemistry, since in semiconductors (and insulators) the valence band is built out of the valence orbitals.
- In a metal or semimetal, the Fermi level is inside of one or more allowed bands.
|
https://en.wikipedia.org/wiki/Electronic_band_structure
|
The name "valence band" was coined by analogy to chemistry, since in semiconductors (and insulators) the valence band is built out of the valence orbitals.
- In a metal or semimetal, the Fermi level is inside of one or more allowed bands. In semimetals the bands are usually referred to as "conduction band" or "valence band" depending on whether the charge transport is more electron-like or hole-like, by analogy to semiconductors. In many metals, however, the bands are neither electron-like nor hole-like, and often just called "valence band" as they are made of valence orbitals. The band gaps in a metal's band structure are not important for low energy physics, since they are too far from the Fermi level.
## Theory in crystals
The ansatz is the special case of electron waves in a periodic crystal lattice using Bloch's theorem as treated generally in the dynamical theory of diffraction. Every crystal is a periodic structure which can be characterized by a Bravais lattice, and for each Bravais lattice we can determine the reciprocal lattice, which encapsulates the periodicity in a set of three reciprocal lattice vectors $(\mathbf{b}_1, \mathbf{b}_2, \mathbf{b}_3)$.
|
https://en.wikipedia.org/wiki/Electronic_band_structure
|
## Theory in crystals
The ansatz is the special case of electron waves in a periodic crystal lattice using Bloch's theorem as treated generally in the dynamical theory of diffraction. Every crystal is a periodic structure which can be characterized by a Bravais lattice, and for each Bravais lattice we can determine the reciprocal lattice, which encapsulates the periodicity in a set of three reciprocal lattice vectors $(\mathbf{b}_1, \mathbf{b}_2, \mathbf{b}_3)$. Now, any periodic potential which shares the same periodicity as the direct lattice can be expanded out as a Fourier series whose only non-vanishing components are those associated with the reciprocal lattice vectors. So the expansion can be written as:
$$
V(\mathbf{r}) = \sum_{\mathbf{K}} {V_{\mathbf{K}} e^{i \mathbf{K}\cdot\mathbf{r}}}
$$
where $\mathbf{K} = m_1\mathbf{b}_1 + m_2\mathbf{b}_2 + m_3\mathbf{b}_3$ for any set of integers $(m_1, m_2, m_3)$.
From this theory, an attempt can be made to predict the band structure of a particular material; however, most ab initio methods for electronic structure calculations fail to predict the observed band gap.
### Nearly free electron approximation
In the nearly free electron approximation, interactions between electrons are completely ignored.
|
https://en.wikipedia.org/wiki/Electronic_band_structure
|
### Nearly free electron approximation
In the nearly free electron approximation, interactions between electrons are completely ignored. This approximation allows use of Bloch's theorem, which states that electrons in a periodic potential have wavefunctions and energies which are periodic in wavevector up to a constant phase shift between neighboring reciprocal lattice vectors. The consequences of periodicity are described mathematically by Bloch's theorem, which states that the eigenstate wavefunctions have the form
$$
\Psi_{n,\mathbf{k}} (\mathbf{r}) = e^{i \mathbf{k}\cdot\mathbf{r}} u_n(\mathbf{r})
$$
where the Bloch function $u_n(\mathbf{r})$ is periodic over the crystal lattice, that is,
$$
u_n(\mathbf{r}) = u_n(\mathbf{r}-\mathbf{R}) .
$$
Here index n refers to the n-th energy band, wavevector k is related to the direction of motion of the electron, r is the position in the crystal, and R is the location of an atomic site.
The NFE model works particularly well in materials like metals where distances between neighbouring atoms are small. In such materials the overlap of atomic orbitals and potentials on neighbouring atoms is relatively large.
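A minimal one-dimensional sketch of this nearly free electron picture (reduced units hbar = m = 1, lattice constant a = 1, and a single assumed Fourier component V1 of the potential; all assumptions of the example): the Hamiltonian is written in the plane-wave basis e^{i(k+G)x} and diagonalised for each k, producing free-electron-like bands with a gap of roughly 2*V1 at the zone boundary.

```python
import numpy as np

# Reduced units: hbar = m = 1, lattice constant a = 1, so reciprocal lattice
# vectors are G = 2*pi*n. The assumed potential V(x) = 2*V1*cos(2*pi*x) has
# Fourier components V_{+-2*pi} = V1.
V1 = 0.5
n_G = 5                                   # plane waves G = 2*pi*(-n_G..n_G)
G = 2.0 * np.pi * np.arange(-n_G, n_G + 1)

def bands(k, n_bands=4):
    """Lowest few eigenvalues E_n(k) of the plane-wave Hamiltonian."""
    H = np.diag(0.5 * (k + G) ** 2)       # kinetic energy, diagonal in G
    for i in range(len(G)):
        for j in range(len(G)):
            if abs(abs(G[i] - G[j]) - 2.0 * np.pi) < 1e-9:
                H[i, j] = V1              # coupling between G and G +- 2*pi
    return np.linalg.eigvalsh(H)[:n_bands]

# Dispersion across the Brillouin zone -pi <= k <= pi.
ks = np.linspace(-np.pi, np.pi, 101)
E = np.array([bands(k) for k in ks])
gap = E[-1, 1] - E[-1, 0]                 # gap between the two lowest bands at k = pi
print("band gap at the zone boundary (reduced units):", gap)
```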
|
https://en.wikipedia.org/wiki/Electronic_band_structure
|
The NFE model works particularly well in materials like metals where distances between neighbouring atoms are small. In such materials the overlap of atomic orbitals and potentials on neighbouring atoms is relatively large. In that case the wave function of the electron can be approximated by a (modified) plane wave. The band structure of a metal like aluminium even gets close to the empty lattice approximation.
### Tight binding model
The opposite extreme to the nearly free electron approximation assumes the electrons in the crystal behave much like an assembly of constituent atoms. This tight binding model assumes the solution to the time-independent single electron Schrödinger equation $\Psi$ is well approximated by a linear combination of atomic orbitals
$$
\psi_n(\mathbf{r})
$$
.
$$
\Psi(\mathbf{r}) = \sum_{n,\mathbf{R}} b_{n,\mathbf{R}} \psi_n(\mathbf{r}-\mathbf{R}),
$$
where the coefficients
$$
b_{n,\mathbf{R}}
$$
are selected to give the best approximate solution of this form. Index n refers to an atomic energy level and R refers to an atomic site.
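As a concrete special case (not taken from the text), the simplest one-dimensional, single-orbital, nearest-neighbour tight-binding chain with assumed on-site energy eps0 and hopping amplitude t: choosing Bloch-phase coefficients b_R = e^{ikR} gives the dispersion E(k) = eps0 - 2*t*cos(ka).

```python
import numpy as np

# Assumed parameters for a 1D single-orbital chain (illustration only).
eps0 = 0.0      # on-site energy
t = 1.0         # nearest-neighbour hopping amplitude
a = 1.0         # lattice constant

def tb_dispersion(k):
    """E(k) for Bloch coefficients b_R = exp(i k R) on a 1D chain."""
    return eps0 - 2.0 * t * np.cos(k * a)

ks = np.linspace(-np.pi / a, np.pi / a, 201)   # first Brillouin zone
E = tb_dispersion(ks)
print("band bottom:", E.min(), " band top:", E.max(), " bandwidth:", E.max() - E.min())
```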
|
https://en.wikipedia.org/wiki/Electronic_band_structure
|
the coefficients
$$
b_{n,\mathbf{R}}
$$
are selected to give the best approximate solution of this form. Index n refers to an atomic energy level and R refers to an atomic site. A more accurate approach using this idea employs Wannier functions, defined by:
$$
a_n(\mathbf{r}-\mathbf{R}) = \frac{V_{C}}{(2\pi)^{3}} \int_\text{BZ} d\mathbf{k} e^{-i\mathbf{k}\cdot(\mathbf{R} -\mathbf{r})}u_{n\mathbf{k}};
$$
in which
$$
u_{n\mathbf{k}}
$$
is the periodic part of Bloch's theorem and the integral is over the Brillouin zone. Here index n refers to the n-th energy band in the crystal. The Wannier functions are localized near atomic sites, like atomic orbitals, but being defined in terms of Bloch functions they are accurately related to solutions based upon the crystal potential. Wannier functions on different atomic sites are orthogonal.
|
https://en.wikipedia.org/wiki/Electronic_band_structure
|
The Wannier functions are localized near atomic sites, like atomic orbitals, but being defined in terms of Bloch functions they are accurately related to solutions based upon the crystal potential. Wannier functions on different atomic sites are orthogonal. The Wannier functions can be used to form the Schrödinger solution for the -th energy band as:
$$
\Psi_{n,\mathbf{k}} (\mathbf{r}) = \sum_{\mathbf{R}} e^{i\mathbf{k}\cdot\mathbf{R}} a_n(\mathbf{r} - \mathbf{R}).
$$
The TB model works well in materials with limited overlap between atomic orbitals and potentials on neighbouring atoms. Band structures of materials like Si, GaAs, SiO2 and diamond for instance are well described by TB-Hamiltonians on the basis of atomic sp3 orbitals. In transition metals a mixed TB-NFE model is used to describe the broad NFE conduction band and the narrow embedded TB d-bands. The radial functions of the atomic orbital part of the Wannier functions are most easily calculated by the use of pseudopotential methods.
|
https://en.wikipedia.org/wiki/Electronic_band_structure
|
In transition metals a mixed TB-NFE model is used to describe the broad NFE conduction band and the narrow embedded TB d-bands. The radial functions of the atomic orbital part of the Wannier functions are most easily calculated by the use of pseudopotential methods. NFE, TB or combined NFE-TB band structure calculations,
sometimes extended with wave function approximations based on pseudopotential methods, are often used as an economic starting point for further calculations.
### KKR model
The KKR method, also called "multiple scattering theory" or Green's function method, finds the stationary values of the inverse transition matrix T rather than the Hamiltonian. A variational implementation was suggested by Korringa, Kohn and Rostoker, and is often referred to as the Korringa–Kohn–Rostoker method. The most important features of the KKR or Green's function formulation are (1) it separates the two aspects of the problem: structure (positions of the atoms) from the scattering (chemical identity of the atoms); and (2) Green's functions provide a natural approach to a localized description of electronic properties that can be adapted to alloys and other disordered systems. The simplest form of this approximation centers non-overlapping spheres (referred to as muffin tins) on the atomic positions.
|
https://en.wikipedia.org/wiki/Electronic_band_structure
|
The most important features of the KKR or Green's function formulation are (1) it separates the two aspects of the problem: structure (positions of the atoms) from the scattering (chemical identity of the atoms); and (2) Green's functions provide a natural approach to a localized description of electronic properties that can be adapted to alloys and other disordered systems. The simplest form of this approximation centers non-overlapping spheres (referred to as muffin tins) on the atomic positions. Within these regions, the potential experienced by an electron is approximated to be spherically symmetric about the given nucleus. In the remaining interstitial region, the screened potential is approximated as a constant. Continuity of the potential between the atom-centered spheres and interstitial region is enforced.
### Density-functional theory
In recent physics literature, a large majority of the electronic structures and band plots are calculated using density-functional theory (DFT), which is not a model but rather a theory, i.e., a microscopic first-principles theory of condensed matter physics that tries to cope with the electron-electron many-body problem via the introduction of an exchange-correlation term in the functional of the electronic density.
|
https://en.wikipedia.org/wiki/Electronic_band_structure
|
Continuity of the potential between the atom-centered spheres and interstitial region is enforced.
### Density-functional theory
In recent physics literature, a large majority of the electronic structures and band plots are calculated using density-functional theory (DFT), which is not a model but rather a theory, i.e., a microscopic first-principles theory of condensed matter physics that tries to cope with the electron-electron many-body problem via the introduction of an exchange-correlation term in the functional of the electronic density. DFT-calculated bands are in many cases found to be in agreement with experimentally measured bands, for example by angle-resolved photoemission spectroscopy (ARPES). In particular, the band shape is typically well reproduced by DFT. But there are also systematic errors in DFT bands when compared to experimental results. In particular, DFT seems to systematically underestimate the band gap in insulators and semiconductors by about 30-40%.
It is commonly believed that DFT is a theory to predict ground state properties of a system only (e.g. the total energy, the atomic structure, etc.), and that excited state properties cannot be determined by DFT. This is a misconception.
|
https://en.wikipedia.org/wiki/Electronic_band_structure
|
It is commonly believed that DFT is a theory to predict ground state properties of a system only (e.g. the total energy, the atomic structure, etc.), and that excited state properties cannot be determined by DFT. This is a misconception. In principle, DFT can determine any property (ground state or excited state) of a system given a functional that maps the ground state density to that property. This is the essence of the Hohenberg–Kohn theorem. In practice, however, no known functional exists that maps the ground state density to excitation energies of electrons within a material. Thus, what in the literature is quoted as a DFT band plot is a representation of the DFT Kohn–Sham energies, i.e., the energies of a fictive non-interacting system, the Kohn–Sham system, which has no physical interpretation at all. The Kohn–Sham electronic structure must not be confused with the real, quasiparticle electronic structure of a system, and there is no Koopmans' theorem holding for Kohn–Sham energies, as there is for Hartree–Fock energies, which can be truly considered as an approximation for quasiparticle energies. Hence, in principle, Kohn–Sham based DFT is not a band theory, i.e., not a theory suitable for calculating bands and band-plots.
|
https://en.wikipedia.org/wiki/Electronic_band_structure
|
The Kohn–Sham electronic structure must not be confused with the real, quasiparticle electronic structure of a system, and there is no Koopmans' theorem holding for Kohn–Sham energies, as there is for Hartree–Fock energies, which can be truly considered as an approximation for quasiparticle energies. Hence, in principle, Kohn–Sham based DFT is not a band theory, i.e., not a theory suitable for calculating bands and band-plots. In principle time-dependent DFT can be used to calculate the true band structure although in practice this is often difficult. A popular approach is the use of hybrid functionals, which incorporate a portion of Hartree–Fock exact exchange; this produces a substantial improvement in predicted bandgaps of semiconductors, but is less reliable for metals and wide-bandgap materials.
### Green's function methods and the ab initio GW approximation
To calculate the bands including electron-electron interaction many-body effects, one can resort to so-called Green's function methods. Indeed, knowledge of the Green's function of a system provides both ground (the total energy) and also excited state observables of the system. The poles of the Green's function are the quasiparticle energies, the bands of a solid. The Green's function can be calculated by solving the Dyson equation once the self-energy of the system is known.
|
https://en.wikipedia.org/wiki/Electronic_band_structure
|
The poles of the Green's function are the quasiparticle energies, the bands of a solid. The Green's function can be calculated by solving the Dyson equation once the self-energy of the system is known. For real systems like solids, the self-energy is a very complex quantity and usually approximations are needed to solve the problem. One such approximation is the GW approximation, so called from the mathematical form the self-energy takes as the product Σ = GW of the Green's function G and the dynamically screened interaction W. This approach is more pertinent when addressing the calculation of band plots (and also quantities beyond, such as the spectral function) and can also be formulated in a completely ab initio way. The GW approximation seems to provide band gaps of insulators and semiconductors in agreement with experiment, and hence to correct the systematic DFT underestimation.
### Dynamical mean-field theory
Although the nearly free electron approximation is able to describe many properties of electron band structures, one consequence of this theory is that it predicts the same number of electrons in each unit cell. If the number of electrons is odd, we would then expect that there is an unpaired electron in each unit cell, and thus that the valence band is not fully occupied, making the material a conductor.
|
https://en.wikipedia.org/wiki/Electronic_band_structure
|
Although the nearly free electron approximation is able to describe many properties of electron band structures, one consequence of this theory is that it predicts the same number of electrons in each unit cell. If the number of electrons is odd, we would then expect that there is an unpaired electron in each unit cell, and thus that the valence band is not fully occupied, making the material a conductor. However, materials such as CoO that have an odd number of electrons per unit cell are insulators, in direct conflict with this result. This kind of material is known as a Mott insulator, and requires inclusion of detailed electron-electron interactions (treated only as an averaged effect on the crystal potential in band theory) to explain the discrepancy. The Hubbard model is an approximate theory that can include these interactions. It can be treated non-perturbatively within the so-called dynamical mean-field theory, which attempts to bridge the gap between the nearly free electron approximation and the atomic limit. Formally, however, the states are not non-interacting in this case and the concept of a band structure is not adequate to describe these cases.
### Others
Calculating band structures is an important topic in theoretical solid state physics.
|
https://en.wikipedia.org/wiki/Electronic_band_structure
|
Formally, however, the states are not non-interacting in this case and the concept of a band structure is not adequate to describe these cases.
### Others
Calculating band structures is an important topic in theoretical solid state physics. In addition to the models mentioned above, other models include the following:
- Empty lattice approximation: the "band structure" of a region of free space that has been divided into a lattice.
- k·p perturbation theory is a technique that allows a band structure to be approximately described in terms of just a few parameters. The technique is commonly used for semiconductors, and the parameters in the model are often determined by experiment.
- The Kronig–Penney model, a one-dimensional rectangular well model useful for illustration of band formation. While simple, it predicts many important phenomena, but is not quantitative.
- Hubbard model
The band structure has been generalised to wavevectors that are complex numbers, resulting in what is called a complex band structure, which is of interest at surfaces and interfaces.
Each model describes some types of solids very well, and others poorly. The nearly free electron model works well for metals, but poorly for non-metals. The tight binding model is extremely accurate for ionic insulators, such as metal halide salts (e.g. NaCl).
|
https://en.wikipedia.org/wiki/Electronic_band_structure
|
The nearly free electron model works well for metals, but poorly for non-metals. The tight binding model is extremely accurate for ionic insulators, such as metal halide salts (e.g. NaCl).
## Band diagrams
To understand how band structure changes relative to the Fermi level in real space, a band structure plot is often first simplified in the form of a band diagram. In a band diagram the vertical axis is energy while the horizontal axis represents real space. Horizontal lines represent energy levels, while blocks represent energy bands. When the horizontal lines in these diagrams are slanted, the energy of the level or band changes with distance. Diagrammatically, this depicts the presence of an electric field within the crystal system. Band diagrams are useful in relating the general band structure properties of different materials to one another when placed in contact with each other.
|
https://en.wikipedia.org/wiki/Electronic_band_structure
|
In quantum mechanics, information theory, and Fourier analysis, the entropic uncertainty or Hirschman uncertainty is defined as the sum of the temporal and spectral Shannon entropies. It turns out that Heisenberg's uncertainty principle can be expressed as a lower bound on the sum of these entropies. This is stronger than the usual statement of the uncertainty principle in terms of the product of standard deviations.
|
https://en.wikipedia.org/wiki/Entropic_uncertainty
|
In 1957, Hirschman considered a function f and its Fourier transform g such that
$$
g(y) \approx \int_{-\infty}^\infty \exp (-2\pi ixy) f(x)\, dx,\qquad f(x) \approx \int_{-\infty}^\infty \exp (2\pi ixy) g(y)\, dy ~,
$$
where the "≈" indicates convergence in $L^2$, and normalized so that (by Plancherel's theorem),
$$
\int_{-\infty}^\infty |f(x)|^2\, dx = \int_{-\infty}^\infty |g(y)|^2 \,dy = 1~.
$$
He showed that for any such functions the sum of the Shannon entropies is non-negative,
$$
H(|f|^2) + H(|g|^2) \equiv - \int_{-\infty}^\infty |f(x)|^2 \log |f(x)|^2\, dx - \int_{-\infty}^\infty |g(y)|^2 \log |g(y)|^2 \,dy \ge 0.
$$
A tighter bound,
$$
H(|f|^2) + H(|g|^2) \ge \log\frac{e}{2},
$$
was conjectured by Hirschman and Everett, proven in 1975 by W. Beckner, and in the same year interpreted as a generalized quantum mechanical uncertainty principle by Białynicki-Birula and Mycielski.
|
https://en.wikipedia.org/wiki/Entropic_uncertainty
|
The equality holds in the case of Gaussian distributions.
Note, however, that the above entropic uncertainty function is distinctly different from the quantum Von Neumann entropy represented in phase space.
## Sketch of proof
The proof of this tight inequality depends on the so-called (q, p)-norm of the Fourier transformation. (Establishing this norm is the most difficult part of the proof.)
From this norm, one is able to establish a lower bound on the sum of the (differential) Rényi entropies, $H_\alpha(|f|^2) + H_\beta(|g|^2)$, where $\tfrac1\alpha + \tfrac1\beta = 2$, which generalize the Shannon entropies. For simplicity, we consider this inequality only in one dimension; the extension to multiple dimensions is straightforward and can be found in the literature cited.
|
https://en.wikipedia.org/wiki/Entropic_uncertainty
|
From this norm, one is able to establish a lower bound on the sum of the (differential) Rényi entropies, $H_\alpha(|f|^2) + H_\beta(|g|^2)$, where $\tfrac1\alpha + \tfrac1\beta = 2$, which generalize the Shannon entropies. For simplicity, we consider this inequality only in one dimension; the extension to multiple dimensions is straightforward and can be found in the literature cited.
### Babenko–Beckner inequality
The (q, p)-norm of the Fourier transform is defined to be
$$
\|\mathcal F\|_{q,p} = \sup_{f\in L^p(\mathbb R)} \frac{\|\mathcal Ff\|_q}{\|f\|_p},
$$
where
$$
1 < p \le 2~,
$$
and
$$
\frac 1 p + \frac 1 q = 1.
$$
In 1961, Babenko found this norm for even integer values of q. Finally, in 1975,
using Hermite functions as eigenfunctions of the Fourier transform, Beckner proved that the value of this norm (in one dimension) for all q ≥ 2 is
$$
\|\mathcal F\|_{q,p} = \sqrt{p^{1/p}/q^{1/q}}.
$$
Thus we have the Babenko–Beckner inequality that
$$
\|\mathcal Ff\|_q \le \left(p^{1/p}/q^{1/q}\right)^{1/2} \|f\|_p.
$$
|
https://en.wikipedia.org/wiki/Entropic_uncertainty
|
For simplicity, we consider this inequality only in one dimension; the extension to multiple dimensions is straightforward and can be found in the literature cited.
### Babenko–Beckner inequality
The (q, p)-norm of the Fourier transform is defined to be
$$
\|\mathcal F\|_{q,p} = \sup_{f\in L^p(\mathbb R)} \frac{\|\mathcal Ff\|_q}{\|f\|_p},
$$
where
$$
1 < p \le 2~,
$$
and
$$
\frac 1 p + \frac 1 q = 1.
$$
In 1961, Babenko found this norm for even integer values of q. Finally, in 1975,
using Hermite functions as eigenfunctions of the Fourier transform, Beckner proved that the value of this norm (in one dimension) for all q ≥ 2 is
$$
\|\mathcal F\|_{q,p} = \sqrt{p^{1/p}/q^{1/q}}.
$$
Thus we have the Babenko–Beckner inequality that
$$
\|\mathcal Ff\|_q \le \left(p^{1/p}/q^{1/q}\right)^{1/2} \|f\|_p.
$$
### Rényi entropy bound
From this inequality, an expression of the uncertainty principle in terms of the Rényi entropy can be derived.
|
https://en.wikipedia.org/wiki/Entropic_uncertainty
|
### Babenko–Beckner inequality
The (q, p)-norm of the Fourier transform is defined to be
$$
\|\mathcal F\|_{q,p} = \sup_{f\in L^p(\mathbb R)} \frac{\|\mathcal Ff\|_q}{\|f\|_p},
$$
where
$$
1 < p \le 2~,
$$
and
$$
\frac 1 p + \frac 1 q = 1.
$$
In 1961, Babenko found this norm for even integer values of q. Finally, in 1975,
using Hermite functions as eigenfunctions of the Fourier transform, Beckner proved that the value of this norm (in one dimension) for all q ≥ 2 is
$$
\|\mathcal F\|_{q,p} = \sqrt{p^{1/p}/q^{1/q}}.
$$
Thus we have the Babenko–Beckner inequality that
$$
\|\mathcal Ff\|_q \le \left(p^{1/p}/q^{1/q}\right)^{1/2} \|f\|_p.
$$
### Rényi entropy bound
From this inequality, an expression of the uncertainty principle in terms of the Rényi entropy can be derived (H.P. Heinig and M. Smith, "Extensions of the Heisenberg–Weyl inequality").
|
https://en.wikipedia.org/wiki/Entropic_uncertainty
|
### Rényi entropy bound
From this inequality, an expression of the uncertainty principle in terms of the Rényi entropy can be derived (H.P. Heinig and M. Smith, "Extensions of the Heisenberg–Weyl inequality", Internat. J. Math. & Math. Sci., Vol. 9, No. 1 (1986), pp. 185–192).
Let
$$
g=\mathcal Ff, \, 2\alpha=p, \, 2\beta=q,
$$
so that
$$
\frac1\alpha+\frac1\beta=2
$$
and
$$
\frac12\le\alpha\le1\le\beta
$$
, we have
$$
\left(\int_{\mathbb R} |g(y)|^{2\beta}\,dy\right)^{1/2\beta}
\le \frac{(2\alpha)^{1/4\alpha}}{(2\beta)^{1/4\beta}} \left(\int_{\mathbb R} |f(x)|^{2\alpha}\,dx\right)^{1/2\alpha}.
$$
Squaring both sides and taking the logarithm, we get
$$
\frac 1\beta \log\left(\int_{\mathbb R} |g(y)|^{2\beta}\,dy\right)
\le \frac12 \log\frac{(2\alpha)^{1/\alpha}}{(2\beta)^{1/\beta}} + \frac 1\alpha \log\left(\int_{\mathbb R} |f(x)|^{2\alpha}\,dx\right).
$$
We can rewrite the condition on
$$
\alpha, \beta
$$
|
https://en.wikipedia.org/wiki/Entropic_uncertainty
|
185–192.
Let
$$
g=\mathcal Ff, \, 2\alpha=p, \, 2\beta=q,
$$
so that
$$
\frac1\alpha+\frac1\beta=2
$$
and
$$
\frac12\le\alpha\le1\le\beta
$$
, we have
$$
\left(\int_{\mathbb R} |g(y)|^{2\beta}\,dy\right)^{1/2\beta}
\le \frac{(2\alpha)^{1/4\alpha}}{(2\beta)^{1/4\beta}} \left(\int_{\mathbb R} |f(x)|^{2\alpha}\,dx\right)^{1/2\alpha}.
$$
Squaring both sides and taking the logarithm, we get
$$
\frac 1\beta \log\left(\int_{\mathbb R} |g(y)|^{2\beta}\,dy\right)
\le \frac12 \log\frac{(2\alpha)^{1/\alpha}}{(2\beta)^{1/\beta}} + \frac 1\alpha \log\left(\int_{\mathbb R} |f(x)|^{2\alpha}\,dx\right).
$$
We can rewrite the condition on
$$
\alpha, \beta
$$
as
$$
\alpha(1-\beta)+\beta(1-\alpha)=0
$$
Assume
$$
\alpha,\beta\ne1
$$
, then multiplying both sides by the negative quantity
$$
\frac{\beta}{1-\beta}=-\frac{\alpha}{1-\alpha}
$$
reverses the sense of the inequality, giving
$$
\frac {1}{1-\beta} \log\left(\int_{\mathbb R} |g(y)|^{2\beta}\,dy\right)
\ge \frac{\alpha}{2(\alpha-1)}\log\frac{(2\alpha)^{1/\alpha}}{(2\beta)^{1/\beta}} - \frac{1}{1-\alpha} \log\left(\int_{\mathbb R} |f(x)|^{2\alpha}\,dx\right).
$$
Rearranging terms yields an inequality in terms of the sum of Rényi entropies,
$$
\frac{1}{1-\alpha} \log \left(\int_{\mathbb R} |f(x)|^{2\alpha}\,dx\right) + \frac{1}{1-\beta} \log\left(\int_{\mathbb R} |g(y)|^{2\beta}\,dy\right)
\ge \frac{\alpha}{2(\alpha-1)}\log\frac{(2\alpha)^{1/\alpha}}{(2\beta)^{1/\beta}};
$$
$$
H_\alpha(|f|^2) + H_\beta(|g|^2) \ge \frac 1 2 \left(\frac{\log\alpha}{\alpha-1}+\frac{\log\beta}{\beta-1}\right) - \log 2
$$
|
https://en.wikipedia.org/wiki/Entropic_uncertainty
|
#### Right-hand side
$$
\frac\alpha{2(\alpha-1)}\log\frac{(2\alpha)^{1/\alpha}}{(2\beta)^{1/\beta}}
$$
$$
=\frac12\left[\frac{\alpha}{\alpha-1}\log(2\alpha)^{1/\alpha} + \frac{\beta}{\beta-1}\log(2\beta)^{1/\beta}\right]
$$
$$
=\frac12\left[\frac{\log2\alpha}{\alpha-1} + \frac{\log2\beta}{\beta-1}\right]
$$
$$
=\frac12\left[\frac{\log\alpha}{\alpha-1} + \frac{\log\beta}{\beta-1}\right] + \frac12\log2\left[\frac{1}{\alpha-1} + \frac{1}{\beta-1}\right]
$$
$$
=\frac12\left[\frac{\log\alpha}{\alpha-1} + \frac{\log\beta}{\beta-1}\right] + \frac12\log2\left[\frac{1}{\alpha-1} + \frac{1}{\beta-1} - \frac{\alpha}{\alpha-1} - \frac{\beta}{\beta-1}\right]
$$
$$
=\frac12\left[\frac{\log\alpha}{\alpha-1} + \frac{\log\beta}{\beta-1}\right] + \frac12\log2\left[-2\right]
$$
$$
=\frac12\left[\frac{\log\alpha}{\alpha-1} + \frac{\log\beta}{\beta-1}\right] - \log2
$$
|
https://en.wikipedia.org/wiki/Entropic_uncertainty
|
### Shannon entropy bound
Taking the limit of this last inequality as
$$
\alpha, \, \beta \to 1
$$
and the substitutions
$$
A = \alpha-1,\ B = \beta-1
$$
yields the less general Shannon entropy inequality,
$$
H(|f|^2) + H(|g|^2) \ge \log\frac e 2,\quad\textrm{where}\quad g(y) \approx \int_{\mathbb R} e^{-2\pi ixy}f(x)\,dx~,
$$
valid for any base of logarithm, as long as we choose an appropriate unit of information, bit, nat, etc.
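A quick numerical sanity check of this bound for a Gaussian test function, using the 2π-in-the-exponent transform convention defined above; the grid, the choice of test function, and the direct quadrature of the transform are all assumptions of the sketch (for a Gaussian the bound should be met with near equality).

```python
import numpy as np

# Grid and an L2-normalised Gaussian test function f (assumptions of this sketch).
N, L = 2001, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
sigma = 0.7
f = (2.0 * np.pi * sigma**2) ** (-0.25) * np.exp(-x**2 / (4.0 * sigma**2))
f /= np.sqrt(np.sum(np.abs(f) ** 2) * dx)        # enforce unit L2 norm on the grid

# Fourier transform with the convention g(y) = integral of exp(-2*pi*i*x*y) f(x) dx,
# evaluated by direct quadrature on a grid of y values.
y = np.linspace(-5.0, 5.0, N)
dy = y[1] - y[0]
g = np.array([np.sum(np.exp(-2j * np.pi * x * yi) * f) * dx for yi in y])

def shannon_entropy(amplitude, step):
    p = np.abs(amplitude) ** 2
    p = np.where(p > 1e-300, p, 1e-300)          # avoid log(0)
    return -np.sum(p * np.log(p)) * step

H_sum = shannon_entropy(f, dx) + shannon_entropy(g, dy)
print("H(|f|^2) + H(|g|^2) =", H_sum, ">= log(e/2) =", np.log(np.e / 2.0))
```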
|
https://en.wikipedia.org/wiki/Entropic_uncertainty
|
The constant will be different, though, for a different normalization of the Fourier transform (such as the one usually used in physics, with normalizations chosen so that ħ = 1), i.e.,
$$
H(|f|^2) + H(|g|^2) \ge \log(\pi e)\quad\textrm{for}\quad g(y) \approx \frac 1{\sqrt{2\pi}}\int_{\mathbb R} e^{-ixy}f(x)\,dx~.
$$
In this case, the dilation of the Fourier transform absolute squared by a factor of 2 simply adds log(2) to its entropy.
## Entropy versus variance bounds
The Gaussian or normal probability distribution plays an important role in the relationship between variance and entropy: it is a problem of the calculus of variations to show that this distribution maximizes entropy for a given variance, and at the same time minimizes the variance for a given entropy.
|
https://en.wikipedia.org/wiki/Entropic_uncertainty
|
The constant will be different, though, for a different normalization of the Fourier transform (such as the one usually used in physics, with normalizations chosen so that ħ = 1), i.e.,
$$
H(|f|^2) + H(|g|^2) \ge \log(\pi e)\quad\textrm{for}\quad g(y) \approx \frac 1{\sqrt{2\pi}}\int_{\mathbb R} e^{-ixy}f(x)\,dx~.
$$
In this case, the dilation of the Fourier transform absolute squared by a factor of 2 simply adds log(2) to its entropy.
## Entropy versus variance bounds
The Gaussian or normal probability distribution plays an important role in the relationship between variance and entropy: it is a problem of the calculus of variations to show that this distribution maximizes entropy for a given variance, and at the same time minimizes the variance for a given entropy. In fact, for any probability density function
$$
\phi
$$
on the real line, Shannon's entropy inequality specifies:
$$
H(\phi) \le \log \sqrt {2\pi eV(\phi)},
$$
where H is the Shannon entropy and V is the variance, an inequality that is saturated only in the case of a normal distribution.
|
https://en.wikipedia.org/wiki/Entropic_uncertainty
|
## Entropy versus variance bounds
The Gaussian or normal probability distribution plays an important role in the relationship between variance and entropy: it is a problem of the calculus of variations to show that this distribution maximizes entropy for a given variance, and at the same time minimizes the variance for a given entropy. In fact, for any probability density function
$$
\phi
$$
on the real line, Shannon's entropy inequality specifies:
$$
H(\phi) \le \log \sqrt {2\pi eV(\phi)},
$$
where H is the Shannon entropy and V is the variance, an inequality that is saturated only in the case of a normal distribution.
Moreover, the Fourier transform of a Gaussian probability amplitude function is also Gaussian—and the absolute squares of both of these are Gaussian, too. This can then be used to derive the usual Robertson variance uncertainty inequality from the above entropic inequality, enabling the latter to be tighter than the former.
|
https://en.wikipedia.org/wiki/Entropic_uncertainty
|
Moreover, the Fourier transform of a Gaussian probability amplitude function is also Gaussian—and the absolute squares of both of these are Gaussian, too. This can then be used to derive the usual Robertson variance uncertainty inequality from the above entropic inequality, enabling the latter to be tighter than the former. That is (for ħ=1), exponentiating the Hirschman inequality and using Shannon's expression above,
$$
1/2 \le \exp (H(|f|^2)+H(|g|^2)) /(2e\pi) \le \sqrt {V(|f|^2)V(|g|^2)}~.
$$
Hirschman explained that entropy—his version of entropy was the negative of Shannon's—is a "measure of the concentration of [a probability distribution] in a set of small measure." Thus a low or large negative Shannon entropy means that a considerable mass of the probability distribution is confined to a set of small measure.
Note that this set of small measure need not be contiguous; a probability distribution can have several concentrations of mass in intervals of small measure, and the entropy may still be low no matter how widely scattered those intervals are.
|
https://en.wikipedia.org/wiki/Entropic_uncertainty
|
Thus a low or large negative Shannon entropy means that a considerable mass of the probability distribution is confined to a set of small measure.
Note that this set of small measure need not be contiguous; a probability distribution can have several concentrations of mass in intervals of small measure, and the entropy may still be low no matter how widely scattered those intervals are. This is not the case with the variance: variance measures the concentration of mass about the mean of the distribution, and a low variance means that a considerable mass of the probability distribution is concentrated in a contiguous interval of small measure.
To formalize this distinction, we say that two probability density functions
$$
\phi_1
$$
and
$$
\phi_2
$$
are equimeasurable if
$$
\forall \delta > 0,\,\mu\{x\in\mathbb R|\phi_1(x)\ge\delta\} = \mu\{x\in\mathbb R|\phi_2(x)\ge\delta\},
$$
where μ is the Lebesgue measure. Any two equimeasurable probability density functions have the same Shannon entropy, and in fact the same Rényi entropy, of any order. The same is not true of variance, however.
|
https://en.wikipedia.org/wiki/Entropic_uncertainty
|
Any two equimeasurable probability density functions have the same Shannon entropy, and in fact the same Rényi entropy, of any order. The same is not true of variance, however. Any probability density function has a radially decreasing equimeasurable "rearrangement" whose variance is (up to translation) less than that of any other rearrangement of the function; and there exist rearrangements of arbitrarily high variance (all having the same entropy).
|
https://en.wikipedia.org/wiki/Entropic_uncertainty
|
Matter waves are a central part of the theory of quantum mechanics, being half of wave–particle duality. At all scales where measurements have been practical, matter exhibits wave-like behavior. For example, a beam of electrons can be diffracted just like a beam of light or a water wave.
The concept that matter behaves like a wave was proposed by French physicist Louis de Broglie in 1924, and so matter waves are also known as de Broglie waves.
The de Broglie wavelength is the wavelength, λ, associated with a particle with momentum p through the Planck constant, h:
$$
\lambda = \frac{h}{p}.
$$
Wave-like behavior of matter has been experimentally demonstrated, first for electrons in 1927 and for other elementary particles, neutral atoms and molecules in the years since.
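For a sense of scale, a small sketch (SI constants; the 100 eV kinetic energy and the non-relativistic momentum formula are assumptions of the example) evaluates λ = h/p for an electron, which comes out comparable to atomic spacings.

```python
import math

h = 6.62607015e-34        # Planck constant, J*s
m_e = 9.1093837015e-31    # electron mass, kg
eV = 1.602176634e-19      # joules per electronvolt

def de_broglie_wavelength(kinetic_energy_eV, mass=m_e):
    """Non-relativistic de Broglie wavelength lambda = h / p, with p = sqrt(2 m E)."""
    E = kinetic_energy_eV * eV
    p = math.sqrt(2.0 * mass * E)
    return h / p

# Example: an assumed 100 eV electron has a wavelength of roughly 1.2e-10 m.
print(de_broglie_wavelength(100.0))
```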
Matter waves have more complex velocity relations than solid objects and they also differ from electromagnetic waves (light).
### Collective matter waves
Collective matter waves are used to model phenomena in solid state physics; standing matter waves are used in molecular chemistry.
Matter wave concepts are widely used in the study of materials where different wavelength and interaction characteristics of electrons, neutrons, and atoms are leveraged for advanced microscopy and diffraction technologies.
## History
|
https://en.wikipedia.org/wiki/Matter_wave
|
Matter wave concepts are widely used in the study of materials where different wavelength and interaction characteristics of electrons, neutrons, and atoms are leveraged for advanced microscopy and diffraction technologies.
## History
### Background
At the end of the 19th century, light was thought to consist of waves of electromagnetic fields which propagated according to Maxwell's equations, while matter was thought to consist of localized particles (see history of wave and particle duality). In 1900, this division was questioned when, investigating the theory of black-body radiation, Max Planck proposed that the thermal energy of oscillating atoms is divided into discrete portions, or quanta. Extending Planck's investigation in several ways, including its connection with the photoelectric effect, Albert Einstein proposed in 1905 that light is also propagated and absorbed in quanta, now called photons. These quanta would have an energy given by the Planck–Einstein relation:
$$
E = h\nu
$$
and a momentum vector
|
https://en.wikipedia.org/wiki/Matter_wave
|
Extending Planck's investigation in several ways, including its connection with the photoelectric effect, Albert Einstein proposed in 1905 that light is also propagated and absorbed in quanta, now called photons. These quanta would have an energy given by the Planck–Einstein relation:
$$
E = h\nu
$$
and a momentum vector
$$
\left|\mathbf{p}\right| = p = \frac{E}{c} = \frac{h}{\lambda} ,
$$
where ν (lowercase Greek letter nu) and λ (lowercase Greek letter lambda) denote the frequency and wavelength of the light, c the speed of light, and h the Planck constant. In the modern convention, frequency is symbolized by f as is done in the rest of this article. Einstein's postulate was verified experimentally by K. T. Compton and O. W. Richardson and by A. L. Hughes in 1912 then more carefully including a measurement of the Planck constant in 1916 by Robert Millikan.
### De Broglie hypothesis
De Broglie, in his 1924 PhD thesis, proposed that just as light has both wave-like and particle-like properties, electrons also have wave-like properties.
|
https://en.wikipedia.org/wiki/Matter_wave
|
Einstein's postulate was verified experimentally by K. T. Compton and O. W. Richardson and by A. L. Hughes in 1912 then more carefully including a measurement of the Planck constant in 1916 by Robert Millikan.
### De Broglie hypothesis
De Broglie, in his 1924 PhD thesis, proposed that just as light has both wave-like and particle-like properties, electrons also have wave-like properties.
His thesis started from the hypothesis, "that to each portion of energy with a proper mass m₀ one may associate a periodic phenomenon of the frequency ν₀, such that one finds: hν₀ = m₀c². The frequency is to be measured, of course, in the rest frame of the energy packet. This hypothesis is the basis of our theory. "MacKinnon, E. (1976). De Broglie's thesis: a critical retrospective, Am. J. Phys. 44: 1047–1055. (This frequency is also known as Compton frequency.)
To find the wavelength equivalent to a moving body, de Broglie set the total energy from special relativity for that body equal to hν:
$$
E = \frac{mc^2}{\sqrt{1 - \frac{v^2}{c^2}}} = h\nu
$$
(Modern physics no longer uses this form of the total energy; the energy–momentum relation has proven more useful.)
|
https://en.wikipedia.org/wiki/Matter_wave
|
To find the wavelength equivalent to a moving body, de Broglie set the total energy from special relativity for that body equal to hν:
$$
E = \frac{mc^2}{\sqrt{1 - \frac{v^2}{c^2}}} = h\nu
$$
(Modern physics no longer uses this form of the total energy; the energy–momentum relation has proven more useful.) De Broglie identified the velocity of the particle, v, with the wave group velocity in free space:
$$
v_\text{g} \equiv \frac{\partial \omega}{\partial k} = \frac{d\nu}{d(1/\lambda)}
$$
(The modern definition of group velocity uses angular frequency ω and wave number k.) By applying the differentials to the energy equation and identifying the relativistic momentum:
$$
p = \frac{mv}{\sqrt{1-\frac{v^2}{c^2}}}
$$
then integrating, de Broglie arrived at his formula for the relationship between the wavelength, λ, associated with an electron and the modulus of its momentum, p, through the Planck constant, h:
$$
\lambda = \frac{h}{p}.
$$
|
https://en.wikipedia.org/wiki/Matter_wave
|
De Broglie identified the velocity of the particle, v, with the wave group velocity in free space:
$$
v_\text{g} \equiv \frac{\partial \omega}{\partial k} = \frac{d\nu}{d(1/\lambda)}
$$
(The modern definition of group velocity uses angular frequency ω and wave number k.) By applying the differentials to the energy equation and identifying the relativistic momentum:
$$
p = \frac{mv}{\sqrt{1-\frac{v^2}{c^2}}}
$$
then integrating, de Broglie arrived at his formula for the relationship between the wavelength, λ, associated with an electron and the modulus of its momentum, p, through the Planck constant, h:
$$
\lambda = \frac{h}{p}.
$$
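A short symbolic check of this identification, using SymPy purely for illustration: with the relativistic E and p written as functions of the particle velocity v, the group velocity dE/dp reduces to v, so the wave group travels with the particle.

```python
import sympy as sp

m, c, v = sp.symbols('m c v', positive=True)

# Relativistic energy and momentum as functions of the particle velocity v.
gamma = 1 / sp.sqrt(1 - v**2 / c**2)
E = m * c**2 * gamma
p = m * v * gamma

# Group velocity v_g = d(omega)/dk = dE/dp = (dE/dv) / (dp/dv).
v_g = sp.simplify(sp.diff(E, v) / sp.diff(p, v))
print(v_g)   # simplifies to v
```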
### Schrödinger's (matter) wave equation
Following up on de Broglie's ideas, physicist Peter Debye made an offhand comment that if particles behaved as waves, they should satisfy some sort of wave equation. Inspired by Debye's remark, Erwin Schrödinger decided to find a proper three-dimensional wave equation for the electron.
|
https://en.wikipedia.org/wiki/Matter_wave
|
### Schrödinger's (matter) wave equation
Following up on de Broglie's ideas, physicist Peter Debye made an offhand comment that if particles behaved as waves, they should satisfy some sort of wave equation. Inspired by Debye's remark, Erwin Schrödinger decided to find a proper three-dimensional wave equation for the electron. He was guided by William Rowan Hamilton's analogy between mechanics and optics (see Hamilton's optico-mechanical analogy), encoded in the observation that the zero-wavelength limit of optics resembles a mechanical system – the trajectories of light rays become sharp tracks that obey Fermat's principle, an analog of the principle of least action.
In 1926, Schrödinger published the wave equation that now bears his name – the matter wave analogue of Maxwell's equations – and used it to derive the energy spectrum of hydrogen. Frequencies of solutions of the non-relativistic Schrödinger equation differ from de Broglie waves by the Compton frequency since the energy corresponding to the rest mass of a particle is not part of the non-relativistic Schrödinger equation. The Schrödinger equation describes the time evolution of a wavefunction, a function that assigns a complex number to each point in space.
Schrödinger tried to interpret the modulus squared of the wavefunction as a charge density. This approach was, however, unsuccessful. Max Born proposed that the modulus squared of the wavefunction is instead a probability density, a successful proposal now known as the Born rule.
The following year, 1927, C. G. Darwin (grandson of the famous biologist) explored Schrödinger's equation in several idealized scenarios. For an unbound electron in free space he worked out the propagation of the wave, assuming an initial Gaussian wave packet.
Darwin showed that at time $t$ later the position $x$ of the packet traveling at velocity $v$ would be
$$
x_0 + vt \pm \sqrt{\sigma^2 + (ht/2\pi\sigma m)^2}
$$
where $\sigma$ is the uncertainty in the initial position. This position uncertainty creates an uncertainty in velocity (the extra second term in the square root) consistent with Heisenberg's uncertainty relation. The wave packet spreads out as shown in the figure.
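As an illustration of this spreading (the 1 nm initial width and the time values below are assumptions chosen for the example, not numbers from Darwin's paper), the formula can be evaluated directly in Python:

```python
import math

h = 6.62607015e-34      # Planck constant, J*s
m_e = 9.1093837015e-31  # electron mass, kg

def packet_half_width(sigma, mass, t):
    """Half-width of Darwin's spreading Gaussian packet: sqrt(sigma^2 + (h t / (2 pi sigma m))^2)."""
    return math.sqrt(sigma ** 2 + (h * t / (2.0 * math.pi * sigma * mass)) ** 2)

sigma0 = 1e-9  # assumed initial position uncertainty: 1 nm
for t in (0.0, 1e-15, 1e-12):  # times in seconds (femtosecond and picosecond scale)
    print(f"t = {t:.0e} s -> half-width = {packet_half_width(sigma0, m_e, t):.2e} m")
```

Even over a picosecond the assumed electron packet spreads to roughly a hundred times its initial width, illustrating how quickly light particles delocalize.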
### Experimental confirmation
In 1927, matter waves were first experimentally confirmed to occur in George Paget Thomson and Alexander Reid's diffraction experiment and the Davisson–Germer experiment, both for electrons.
The de Broglie hypothesis and the existence of matter waves have been confirmed for other elementary particles; neutral atoms and even molecules have been shown to be wave-like.
The first electron wave interference patterns directly demonstrating wave–particle duality used electron biprisms (essentially a wire placed in an electron microscope) and measured single electrons building up the diffraction pattern.
A close copy of the famous double-slit experiment using electrons through physical apertures gave the movie shown.
#### Electrons
In 1927 at Bell Labs, Clinton Davisson and Lester Germer fired slow-moving electrons at a crystalline nickel target. The diffracted electron intensity was measured, and was determined to have a similar angular dependence to diffraction patterns predicted by Bragg for x-rays. At the same time George Paget Thomson and Alexander Reid at the University of Aberdeen were independently firing electrons at thin celluloid foils and later metal films, observing rings which can be similarly interpreted.
(Alexander Reid, who was Thomson's graduate student, performed the first experiments, but he died soon after in a motorcycle accident and is rarely mentioned.) Before the acceptance of the de Broglie hypothesis, diffraction was a property thought to be exhibited only by waves. Therefore, the presence of any diffraction effects by matter demonstrated the wave-like nature of matter. The matter wave interpretation was placed on a solid foundation in 1928 by Hans Bethe, who solved the Schrödinger equation, showing how this could explain the experimental results. His approach is similar to that used in modern electron diffraction analyses.
This was a pivotal result in the development of quantum mechanics. Just as the photoelectric effect demonstrated the particle nature of light, these experiments showed the wave nature of matter.
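For a sense of scale, the non-relativistic estimate $\lambda = h/\sqrt{2m_e E}$ for electrons of roughly 54 eV (the accelerating energy commonly quoted for the Davisson–Germer experiment) gives a wavelength comparable to atomic spacings in a nickel crystal; a minimal Python sketch:

```python
import math

h = 6.62607015e-34      # Planck constant, J*s
m_e = 9.1093837015e-31  # electron mass, kg
eV = 1.602176634e-19    # joules per electronvolt

def electron_wavelength(energy_eV):
    """Non-relativistic de Broglie wavelength: lambda = h / sqrt(2 m E)."""
    return h / math.sqrt(2.0 * m_e * energy_eV * eV)

print(f"{electron_wavelength(54):.2e} m")  # roughly 1.7e-10 m (~0.17 nm)
```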
#### Neutrons
Neutrons, produced in nuclear reactors with kinetic energies in the MeV range, thermalize to around 0.025 eV as they scatter from light atoms.
The resulting de Broglie wavelength (around 0.18 nm) matches interatomic spacing, and neutrons scatter strongly from hydrogen atoms. Consequently, neutron matter waves are used in crystallography, especially for biological materials. Neutrons were discovered in the early 1930s, and their diffraction was observed in 1936. In 1944, Ernest O. Wollan, with a background in X-ray scattering from his PhD work under Arthur Compton, recognized the potential for applying thermal neutrons from the newly operational X-10 nuclear reactor to crystallography. Joined by Clifford G. Shull, they developed neutron diffraction throughout the 1940s.
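The wavelength quoted above can be checked with the same non-relativistic relation, $\lambda = h/\sqrt{2mE}$, applied to a neutron at a thermal energy of about 0.025 eV:

```python
import math

h = 6.62607015e-34       # Planck constant, J*s
m_n = 1.67492749804e-27  # neutron mass, kg
eV = 1.602176634e-19     # joules per electronvolt

E_thermal = 0.025 * eV   # typical thermal neutron energy (~ room-temperature kT)
wavelength = h / math.sqrt(2.0 * m_n * E_thermal)
print(f"{wavelength:.2e} m")  # roughly 1.8e-10 m (~0.18 nm)
```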
In the 1970s, a neutron interferometer demonstrated the action of gravity in relation to wave–particle duality. The double-slit experiment was performed using neutrons in 1988.
#### Atoms
Interference of atom matter waves was first observed by Immanuel Estermann and Otto Stern in 1930, when a Na beam was diffracted off a surface of NaCl.
The short de Broglie wavelength of atoms prevented progress for many years until two technological breakthroughs revived interest: microlithography allowing precise small devices and laser cooling allowing atoms to be slowed, increasing their de Broglie wavelength. The double-slit experiment on atoms was performed in 1991.
Advances in laser cooling allowed cooling of neutral atoms down to nanokelvin temperatures. At these temperatures, the de Broglie wavelengths come into the micrometre range. Using Bragg diffraction of atoms and a Ramsey interferometry technique, the de Broglie wavelength of cold sodium atoms was explicitly measured and found to be consistent with the temperature measured by a different method.
#### Molecules
Recent experiments confirm the relations for molecules and even macromolecules that otherwise might be supposed too large to undergo quantum mechanical effects. In 1999, a research team in Vienna demonstrated diffraction for molecules as large as fullerenes. The researchers calculated a de Broglie wavelength of the most probable C60 velocity as 2.5 pm.
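Taking the commonly cited beam parameters as assumptions (a C60 mass of about 720 u and a most probable speed near 220 m/s, neither of which appears in the text above), $\lambda = h/(mv)$ indeed lands in the picometre range:

```python
h = 6.62607015e-34     # Planck constant, J*s
u = 1.66053906660e-27  # atomic mass unit, kg

m_c60 = 720 * u        # assumed C60 mass: 720 u
v = 220.0              # assumed most probable beam speed, m/s

print(f"{h / (m_c60 * v):.2e} m")  # roughly 2.5e-12 m (~2.5 pm)
```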